In lieu of an abstract, here is a brief excerpt of the content:

  • Building a Crowdsourcing Platform for the Analysis of Film Colors
  • Barbara Flueckiger (bio) and Gaudenz Halter (bio)

Our recent article “A Digital Humanities Approach to Film Colors” reported on one of the most important aims of ERC Advanced Grant FilmColors: the development of an interactive crowdsourcing platform that allows users to apply (semi)automatic and manual tools for their own film color analyses and then compare their results to the analyses of the research team and other users.1

At the heart of this platform is the video annotation system VIAN (Visual Video Annotator), developed by Gaudenz Halter in collaboration with the Visualization and MultiMedia Lab of Renato Pajarola at the University of Zurich. VIAN is stand-alone software that runs offline on users’ computers. It consists of four layers: a presentation mode for the videos, a segmentation annotation mode, a screenshot manager, and a node editor for the (semi)automatic tools that process the videos and, most importantly, the screenshots to produce significant results. As described in the previous article, the video annotation system ELAN (https://tla.mpi.nl/tools/tla-tools/elan/), which the research team used to analyze the corpus of four hundred films, had originally been developed for the analysis of language. Despite its generally high level of sophistication, it had serious shortcomings for aesthetic analyses of films. VIAN, by contrast, is complementary to ELAN, with an emphasis on visual analysis and annotation. It contains graphical annotation tools for marking regions of interest, drawing or writing directly onto the videos, and integrating images. In addition, it enables users to import ELAN projects when they need ELAN’s specific strengths, for instance, extremely fine-grained temporal analysis.

VIAN is a Python-based application for temporal segmentation, visual annotation, and computational analysis of films. In the back end, it uses the well-known libVLC to play the movie and OpenCV, in combination with other libraries, for the analysis; PyQt serves as the graphical front end. Since temporal segmentation is a well-established concept in existing film-analysis software, VIAN implements it in the same fashion, allowing users of other software a seamless transition.
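The article does not show VIAN’s analysis code; as a minimal, illustrative stand-in for the kind of per-frame color statistic such a pipeline computes (in VIAN, with OpenCV rather than pure Python), one might sketch:

```python
from statistics import mean

def mean_color(frame):
    """Average RGB color of a frame, given as rows of (r, g, b) pixels.

    A toy stand-in for the per-frame color statistics a tool like VIAN
    computes; the function name and data layout are illustrative only.
    """
    pixels = [px for row in frame for px in row]
    # zip(*pixels) groups all red, green, and blue values channel-wise
    return tuple(round(mean(channel)) for channel in zip(*pixels))

# A 2x2 toy "frame": two pure-red and two pure-blue pixels
frame = [[(255, 0, 0), (0, 0, 255)],
         [(255, 0, 0), (0, 0, 255)]]
print(mean_color(frame))  # → (128, 0, 128)
```

In practice a real frame would come from a video decoder as a NumPy array, and the statistic would be computed vectorized; the logic, however, is the same.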

When it comes to annotation, the time dimension presents a particular challenge, since the position and size of an annotation may change between different time stamps of a film. To overcome this, VIAN’s visual annotation toolbox not only allows geometric or hand-drawn shapes to be drawn onto a single frame or set of frames and exported as images but also makes it possible to animate these shapes over time, either manually or with computational methods such as object tracking. Since all annotations are based on vector graphics, annotated frames can be exported at any size without distorting the annotations.
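Animating a vector shape between manually set keyframes reduces to interpolating its parameters over time. A minimal sketch of such keyframe interpolation for a rectangular annotation (the representation and function are assumptions, not VIAN’s actual API) could be:

```python
def interpolate(k0, k1, t):
    """Linearly interpolate an annotation rectangle between two keyframes.

    k0, k1: (time, (x, y, width, height)) keyframes; t: query time with
    k0[0] <= t <= k1[0]. Illustrative only; VIAN's internal representation
    is not documented in the article.
    """
    t0, box0 = k0
    t1, box1 = k1
    a = (t - t0) / (t1 - t0)  # normalized position between the keyframes
    return tuple(p + a * (q - p) for p, q in zip(box0, box1))

# A rectangle drifting right between frame 0 and frame 10
print(interpolate((0, (10, 10, 50, 30)), (10, (60, 10, 50, 30)), 5))
# → (35.0, 10.0, 50.0, 30.0)
```

Object tracking would replace the manual second keyframe with boxes estimated per frame (e.g., by one of OpenCV’s trackers), but the playback-time interpolation works the same way.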

VIAN gives users the choice of either running existing analyses implemented in Python or using the “Node Script Editor” to develop their own analyses in a visual scripting language. These analyses can be performed on any entity created in VIAN (segments, annotations) and are connected to their source entity for later queries or meta-analyses. Furthermore, new analyses are easy to implement and integrate into VIAN, allowing users to tailor its functionality to their specific needs.
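The key idea here, attaching each analysis result to the entity it was computed from so it can be queried later, can be sketched in a few lines (class and function names are hypothetical, not VIAN’s actual interfaces):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A temporal segment of a film; 'analyses' stores results keyed by name."""
    segment_id: int
    start: float  # seconds
    end: float    # seconds
    analyses: dict = field(default_factory=dict)

def run_analysis(segment, name, fn):
    """Run an analysis function on a segment and attach the result to it,
    mirroring how VIAN links results to their source entity for later queries."""
    segment.analyses[name] = fn(segment)
    return segment.analyses[name]

seg = Segment(segment_id=1, start=0.0, end=12.5)
duration = run_analysis(seg, "duration", lambda s: s.end - s.start)
print(duration, seg.analyses)  # → 12.5 {'duration': 12.5}
```

A node-based editor would compose such functions graphically, but the stored result-to-entity link is the part that enables later meta-analyses.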


Figure 1.

VIAN annotation layer with segmentation and screenshot manager. ERC Advanced Grant FilmColors. Courtesy of the authors.

More so than ELAN, VIAN provides a complete working platform for all the steps required for a full analysis of color films. In addition to segmenting the movies into sections with consistent color schemes, generating screenshots is at the core of the FilmColors methodology. By manually selecting the most significant compositional arrangements and framings, the analyzer creates a bin that arranges the screenshots by segment. Screenshots can then be exported with a defined set of preferences, including a standardized nomenclature, which enables uploading them to the crowdsourcing platform together with the results of the manual, semiautomatic, and automatic analyses.
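A standardized nomenclature for exported screenshots amounts to a deterministic filename scheme. The article does not specify VIAN’s actual pattern, so the following is purely an illustrative sketch of the idea:

```python
def screenshot_name(film, year, segment, frame, ext="png"):
    """Build a standardized screenshot filename from film metadata.

    The exact scheme VIAN uses is not given in the article; this pattern
    (title slug, year, zero-padded segment and frame numbers) is an
    assumption chosen so that files sort consistently by segment and time.
    """
    slug = film.lower().replace(" ", "_")
    return f"{slug}_{year}_seg{segment:03d}_f{frame:06d}.{ext}"

print(screenshot_name("Jigokumon", 1953, 7, 14210))
# → jigokumon_1953_seg007_f014210.png
```

Zero-padding keeps lexicographic and numeric order identical, which matters when thousands of screenshots from a four-hundred-film corpus are uploaded and sorted on a server.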

The manual analysis, currently...


Additional Information

ISSN
1542-4235
Print ISSN
1532-3978
Pages
pp. 80-83
Launched on MUSE
2019-01-05
Open Access
No
