Summary of the 23rd Brazilian Symposium on Multimedia and the Web

On October 18th and 19th, I participated in the 23rd Brazilian Symposium on Multimedia and the Web (WebMedia 2017) in Gramado, RS, Brazil. As usual, to force myself to pay attention and to share the knowledge with others, I wrote quick summaries of the presentations given during the event.


Day One:

Theses and Dissertations Workshop – Session 1:

As the first activity of the first day, I attended this session as a member of the evaluation committee (selected researchers who give MSc/PhD candidates feedback on their work). The theme of this session was Software Architecture. There were three presentations, as the authors of the work entitled S2D2: Security & Safety Driven Development didn’t show up.

Avaliando o Tratamento de Exceção em um Sistema Web Corporativo: Um Estudo de Caso (Evaluating Exception Handling on a Corporate Web System: a Case Study) was presented by Dêmora Bruna Cunha de Sousa, MSc candidate advised by prof. Windson Viana de Carvalho at UFC. The work consists of a case study on a large web system (6,000+ code artifacts, 40+ developers in the team) to uncover the developers’ perception of exception handling in the system, how this perception impacts its development, and how to define and implement exception handling guidelines that avoid or reduce the occurrence of errors and bad smells. The case study is organized in three packages: (1) questionnaires and interviews with developers to obtain their perception; (2) source code analysis to find out whether there are problems in exception handling; (3) creation of the exception handling guidelines.

Uma Arquitetura para Composição de Serviços com Modelos de Interação Heterogêneos (An Architecture for Service Composition with Heterogeneous Interaction Models) was presented by Alexis Huf, MSc candidate advised by prof. Frank Siqueira at UFSC. The goal of the work is to provide an architecture (and algorithms) that can perform automatic heterogeneous service integration, in particular of REST, SOAP and event-oriented services, guaranteeing conformance to the specified interaction model. After a systematic review of the literature, they proposed an architecture that uses some of the existing integration techniques, complemented by two algorithms they developed themselves. The proposal was evaluated with an existing dataset, comparing it with two of the most prominent related works.

A Software Architecture Supporting Android Applications for Users with Motor Disabilities was presented by Olibário José Machado-Neto, PhD candidate advised by prof. Maria da Graça Campos Pimentel at USP. The work provides an architecture/API on top of the Android API that helps developers create mobile applications for users with motor disabilities. The architecture follows a client-server structure and provides developers with a wrapper over the phone’s sensors (camera, microphone, gyroscope, etc.) and actuators (sound, vibration, etc.), offering helper methods to access these functionalities. Evaluations with novice developers provided positive results.


Theses and Dissertations Workshop – Session 2:

I continued following this workshop as the theme of this session was Semantic Web. There were two presentations.

OntoGenesis: Uma Arquitetura para Enriquecimento Semântico de Web Data Services (OntoGenesis: an Architecture for the Semantic Enrichment of Web Data Services) was presented by Bruno Oliveira, MSc candidate advised by prof. Frank Siqueira at UFSC. The work focuses on the limitations of Semantic Data Services: data stored without semantics, the time and effort necessary for semantic enrichment, and ontology heterogeneity (assuming the existence of a universal ontology is not realistic). Thus, the goal of his work is to propose an architecture that supports building/evolving domain ontologies and facilitates semantic enrichment, using ontology matching techniques and being 100% automatic. A Semantic Adapter intercepts service requests (e.g., JSON) and sends them to an engine that automatically builds/evolves the domain ontology and generates semantic mappings using external sources, returning semantically enriched results (e.g., returning JSON-LD to the client). From the service request, the engine recognizes which elements are owl:Class, owl:ObjectProperty and owl:DatatypeProperty, building an initial OWL schema. Then, it tries to create owl:equivalentProperty relations with external vocabularies.
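To make the schema-derivation step concrete, here is a minimal sketch (my own, not the authors’ implementation) of how a JSON payload could be classified into candidate OWL constructs; all names and the classification logic are illustrative assumptions:

```python
# Hypothetical sketch: given a JSON object, classify nested objects as
# classes reached via object properties and scalars as datatype properties.

def derive_owl_schema(entity_name, payload):
    """Map a flat-ish JSON object to candidate OWL constructs."""
    schema = {
        "classes": {entity_name},
        "object_properties": {},
        "datatype_properties": {},
    }
    for key, value in payload.items():
        if isinstance(value, dict):
            # Nested objects suggest a related class via an owl:ObjectProperty.
            schema["classes"].add(key.capitalize())
            schema["object_properties"][key] = key.capitalize()
        else:
            # Scalar values suggest owl:DatatypeProperty candidates.
            schema["datatype_properties"][key] = type(value).__name__
    return schema

book = {"title": "Dom Casmurro", "year": 1899,
        "author": {"name": "Machado de Assis"}}
schema = derive_owl_schema("Book", book)
```

A real engine would then try to align these candidates with external vocabularies (e.g., via owl:equivalentProperty), which this sketch leaves out.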

Anotações Semânticas em Repositórios Acadêmicos: um estudo de caso com o RI UFBA (Semantic Annotations in Academic Repositories: a case study with UFBA’s Institutional Repository) was presented by Aline Meira Rocha, MSc candidate advised by prof. Lais do Nascimento Salvador at UFBA. The work focuses on semantic annotation, i.e., extracting terms from documents and associating them with concepts from ontologies, in repositories of academic publications. In this scenario, the terms chosen for metadata annotation are sometimes not the most adequate or relevant, so relevant terms could instead be automatically extracted from the contents of the publications and suggested to the user (who could be a librarian that is not an expert on the contents of the publication). The proposed solution would use Natural Language Processing to extract terms and try to match them to two ontologies: the publication repository ontology and an ontology of the domain of the publication (which would have to be chosen).
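As a toy illustration of the extract-and-match idea (not the authors’ pipeline, which would use proper NLP), one could take frequent terms from a text and keep only those matching ontology concept labels:

```python
# Illustrative sketch: extract frequent candidate terms from a text and
# match them against the labels of a (toy) domain ontology.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "in", "to", "is", "for", "on"}

def extract_terms(text, top_n=5):
    """Return the top-n most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

def suggest_annotations(text, ontology_labels):
    """Suggest extracted terms that exactly match an ontology label."""
    labels = {label.lower() for label in ontology_labels}
    return [t for t in extract_terms(text) if t in labels]

abstract = ("Semantic annotation of repositories links repository metadata "
            "to ontology concepts; annotation quality depends on the terms.")
suggested = suggest_annotations(abstract, ["Annotation", "Ontology", "Repository"])
```

A realistic version would need lemmatization (so "repositories" matches "Repository") and fuzzy or synonym-based matching, which is exactly where the NLP effort lies.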


Tools and Applications Workshop – Session 3:

After lunch, I switched to the Tools and Applications Workshop in order to present Silas’ paper, which was accepted at this workshop. They themed the session Content Processing, Authoring and Annotation, so I guess our paper was an outlier in the workshop…

First, Marcello de Amorim (UFES) presented CrowdNote: crowdsourcing environment for complex video annotations. This work began in the context of Computer Science in Education, as they needed to add annotations containing extra material to educational videos. He presented the tool that implements the CrowdNote method, which uses crowdsourcing to annotate videos. The tool’s architecture uses node.js, MongoDB, HTML5 and JSON, and connects to YouTube through its API. Following the crowdsourcing approach, the tool sends video segments to contributors, who perform simple annotation tasks; the results are aggregated once sent back to the server.

Next, it was my turn to present my student’s paper: FrameWeb Editor: Uma Ferramenta CASE para suporte ao Método FrameWeb (FrameWeb Editor: a CASE tool to support the FrameWeb method). If you can read Portuguese, you can read the paper here.

Carolina Coimbra Vieira (UFMG) presented ROPE and STRAW: Automatic Music Playlist Generators. She started by presenting some motivating scenarios for automatic playlist generation, such as pleasing a group of people with different tastes (a road trip, a wedding) or generating different playlists for a person who listens to an hour of music on the way to work. As requirements for the tool, they specified heterogeneity, but with smooth transitions; novelty (non-repetitive playlists); usability (demand very little information from users); and scalability (produce playlists quickly). To accomplish this, they created a data space of music based on metadata collected from a database of songs and provided two algorithms to generate paths in this music space: ROPE (Brownian Path Generator) and STRAW (Steered Random Walker, graph based). As frontend, they used HTML/CSS/JavaScript, integrating with YouTube to play the songs. The backend was implemented in JavaScript, with CSV files for storage. In the future, they intend to integrate the tool with Spotify. The tool is currently available at a website.
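The path-in-feature-space idea can be sketched as follows; this is my own toy version (the actual ROPE and STRAW algorithms are more elaborate, and the feature names here are invented):

```python
# Toy playlist generation: random-walk through a (tempo, energy) metadata
# space, snapping each step to the nearest unused song.
import math
import random

def nearest_song(point, songs, used):
    """Pick the unused song closest to a point in feature space."""
    candidates = [(t, f) for t, f in songs.items() if t not in used]
    return min(candidates, key=lambda s: math.dist(point, s[1]))[0]

def walk_playlist(songs, start, steps=3, step_size=0.2, seed=42):
    """Brownian-style walk producing a smooth sequence of songs."""
    rng = random.Random(seed)
    point, used, playlist = list(start), set(), []
    for _ in range(steps):
        title = nearest_song(point, songs, used)
        used.add(title)
        playlist.append(title)
        # Perturb the current point to keep transitions smooth but novel.
        point = [c + rng.uniform(-step_size, step_size) for c in point]
    return playlist

songs = {"Calm A": (0.2, 0.1), "Mid B": (0.5, 0.5), "Fast C": (0.9, 0.9)}
playlist = walk_playlist(songs, start=(0.2, 0.1))
```

Small steps give smooth transitions; excluding used songs enforces novelty, matching the stated requirements.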

Fábio Lobato (UFOPA) presented Opinion Label: A Gamified Crowdsourcing System for Sentiment Analysis Annotation. The work is about sentiment analysis (automatically mining opinions) of social media data, which has a big challenge: the data has to be annotated to generate the training set. The tool aids in this process by using crowdsourcing and gamification. Currently, the tool allows users to annotate polarity, identify emotion, recognize subjectivity and create new projects (the latter being the biggest difference between the tool and related work). Future work will include irony, keywords and entities, ambiguity and context-dependency. The tool is being developed with PHP, Bootstrap and MySQL, and is also available on the web.



There were four keynotes along the day (the conference has eight (!) keynotes in total).

Before lunch, Wagner Meira Júnior (UFMG) gave the keynote Justiça, Transparência e Responsabilidade em Mineração de Dados da Web e Redes Sociais (Fairness, Transparency and Accountability in Data Mining on the Web and Social Networks). Unfortunately, the session I was attending before ran late and I only got to the keynote in the middle of it. For this reason, I did not take notes.

After lunch, there were two more keynotes. First, Simone C. O. Conceição (University of Wisconsin-Milwaukee) presented Uso de Tecnologias Educacionais na Era do Design Thinking (Use of Educational Technologies in the Era of Design Thinking), but I also didn’t take notes because I had to take care of some other stuff at that time. 🙁

Next, David Frohlich (University of Surrey, UK) presented the keynote From audio paper to next generation paper. He first presented a proposal from about 10 years ago to combine paper (actually, photographs) with audio, which did not take off back then but is picking up now (there are about 20 apps that combine sound and photos using smartphones). He then presented other similar projects, like audio cards or audio newspapers. He finished the presentation talking about next generation paper, which he also called “webpaper”, which intends to connect complementary print and web media through touch. The paper would interact with devices around it, supporting both audio and visual playback of associated web media.

At the end of the day, after the official opening of the conference, came the last keynote: WebMedia: Histórico, Conteúdo e Meios (WebMedia: History, Contents and Means), in which José Valdeni de Lima (UFRGS) talked about the 23 years of the conference. I also couldn’t manage to take notes on it because I had to take a call from my wife and kid.

So I was 1 for 4 with the keynotes. I guess my conference logs are not the same as they used to be. 🙂


Day Two:

Main Conference – Technical Session 6:

On the second day I finally attended a technical session from the main conference. The theme of this session was Content search and retrieval, Linked Data and Semantic Web. There were two full papers, followed by three short papers.

João B. Rocha-Junior (UEFS) presented Efficient Processing of Spatio-Temporal-Textual Queries. There is a lot of data on the Web that contains location and time information (Twitter, YouTube, OpenStreetMap, etc.). Spatio-temporal-textual queries consider items that (a) contain the query keyword, (b) are inside the spatial area of interest, and (c) are inside the time interval of interest. Their work presents new indexes (Adapted Spatial Inverted Index, Spatio-temporal-textual index) and algorithms (Textual Indexed Algorithm, Spatio-textual Indexed Algorithm and Spatio-temporal-textual Indexed Algorithm) to process this type of query efficiently. The proposal was extensively evaluated using real data sets from Twitter, Foursquare and OpenStreetMap, measuring response time and I/O.
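The three query conditions can be made concrete with a naive, index-free baseline (my own sketch, with illustrative field names; the paper’s contribution is precisely the indexes that avoid this linear scan):

```python
# Naive spatio-temporal-textual filtering: check keyword, bounding box
# and time interval for every item (the indexed algorithms avoid this).
def stt_query(items, keyword, bbox, interval):
    """Return items matching all three query conditions."""
    (min_x, min_y, max_x, max_y), (t_start, t_end) = bbox, interval
    return [
        item for item in items
        if keyword in item["text"].lower()                        # (a) textual
        and min_x <= item["x"] <= max_x
        and min_y <= item["y"] <= max_y                           # (b) spatial
        and t_start <= item["t"] <= t_end                         # (c) temporal
    ]

items = [
    {"text": "Pizza place", "x": 1.0, "y": 1.0, "t": 10},
    {"text": "Pizza truck", "x": 9.0, "y": 9.0, "t": 10},  # outside bbox
    {"text": "Sushi bar",   "x": 1.0, "y": 1.0, "t": 10},  # no keyword
]
hits = stt_query(items, "pizza", bbox=(0, 0, 5, 5), interval=(0, 100))
```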

Leandro Amancio (UFSC) presented Towards Recency Ranking in Community Question Answering: a Case Study of Stack Overflow. In Community Question Answering (CQA) sites such as Yahoo! Answers and Stack Overflow, anyone can ask questions and post answers, which can lead to low-quality or irrelevant content, repeated questions, and long waits for answers. CQA sites use voting mechanisms to mitigate this, but they are manual (slow) and subjective. Moreover, recent or unpopular questions may get no votes, so their answers are not ranked. To rank best answers with few or no votes, one should judge the quality of answers considering the text (structure, size, style, readability, similarity with the question), relevance and recency (whether the content is up-to-date). Existing work takes into account the first two criteria, but not recency. They conducted an experiment over Stack Overflow with the ranking algorithms AdaRank, Coordinate Ascent, LambdaMART, Random Forests and Support Vector Machines, reporting the results.
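For illustration only, here is a hand-rolled linear scorer combining textual features with a recency decay; the paper instead trains learning-to-rank models (LambdaMART, Random Forests, etc.) on labeled data, and the features and weights below are my own assumptions:

```python
# Sketch: score an answer by simple text features plus exponential
# recency decay, then rank answers by score.
import math

def answer_score(answer, now, half_life_days=180.0):
    """Combine toy text features with recency (newer scores higher)."""
    length_feature = min(len(answer["text"]) / 500.0, 1.0)  # length, capped
    code_feature = 1.0 if "```" in answer["text"] else 0.0  # has code block
    age_days = (now - answer["created"]) / 86400.0          # timestamps in s
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return 0.4 * length_feature + 0.2 * code_feature + 0.4 * recency

def rank_answers(answers, now):
    return sorted(answers, key=lambda a: answer_score(a, now), reverse=True)

answers = [
    {"id": "old", "text": "Use X. " * 20, "created": 0},
    {"id": "new", "text": "Use X. " * 20, "created": 86_400 * 1000},
]
ranked = rank_answers(answers, now=86_400 * 1000)
```

With identical text, the more recent answer wins, which is the intuition recency ranking adds on top of quality-only ranking.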

Paulo Artur de Sousa Duarte (UFC) presented Generating Context Acquisition Code using the Awareness API. The work enhances an existing method (CRITiCAL — context modeling, code generation) with the Google Awareness API (a context-awareness API that is middleware independent). Future work intends to also provide code generation for the iOS platform.

Vinícius Maran (UFSM) presented Database Ontology-Supported Query for Ubiquitous Environments. The scope of the work is the modeling of context in ubiquitous systems, e.g., a middleware that supplies educational resources based on the context of the student in a MOOC. The proposal uses an ontology of context, a domain ontology and binding rules to intercept SQL queries and add contextual criteria to them. In this work, the ontologies used are operational ontologies, in OWL.
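The interception-and-rewriting idea can be sketched as below; in the actual proposal the extra criteria come from context/domain ontologies and binding rules, whereas this toy version hard-codes the bindings (and, unlike real code, builds SQL by string concatenation):

```python
# Sketch: inject context-derived conditions into an intercepted SQL query.
def inject_context(sql, context):
    """Append context bindings as WHERE conditions to a SELECT."""
    conditions = [f"{column} = '{value}'" for column, value in context.items()]
    if not conditions:
        return sql
    # Extend an existing WHERE clause, or start one.
    joiner = " AND " if " WHERE " in sql.upper() else " WHERE "
    # Note: production code must use parameterized queries instead.
    return sql + joiner + " AND ".join(conditions)

query = "SELECT title FROM resources"
rewritten = inject_context(query, {"language": "pt-BR", "level": "beginner"})
```

In the MOOC example, the context (the student’s language, level, device, etc.) would be resolved from the ontologies at query time rather than passed in as a literal dict.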

Cleilton Lima Rocha (UFC) presented Using Linked Data in the Data Integration for Maternal and Infant Death Risk of the SUS in the GISSA Project. The work presents an approach for the integration of data on maternal and infant death risk spread across many different data sources of the Brazilian Health System (SUS). The approach uses operational ontologies in OWL and linked data mashup techniques: D2RQ-R2RML exposes the data from relational databases as RDF, then R2R, Silk and Sieve are applied to integrate the data. The integrated data is then made available for processing via SPARQL and an RDF API, and presentation to the end user is done via a REST API. The proposal has been integrated into GISSA (the umbrella project) and is being used in hospitals of small cities in Ceará.


Main Conference – Technical Session 9:

I attended this session to present Nilber’s paper, A Model-Driven Approach for Code Generation for Web-based Information Systems Built with Frameworks, which was the first in the session, themed Web Systems and Applications. You can read the paper on my website or in the ACM Digital Library (this time, in English).

Then, Danilo Avilar Silva (IFCE) presented Proposal to Use of the Websocket Protocol for Web Device Control. The motivation for the work is the need for remote connections on the Web that use protocols with low latency and low bandwidth consumption, as the scope is real-time systems. The use of polling over HTTP generates high traffic and network overhead. WebSockets were proposed to solve this problem, so the objective of this work was to validate an initial proposal for the use of WebSockets in control/service Web applications.
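A back-of-the-envelope comparison shows why polling is wasteful: every poll repeats full HTTP headers, while a WebSocket pays a one-time handshake and then only a few bytes of framing per message. The byte counts below are illustrative assumptions, not measurements from the paper:

```python
# Rough per-message overhead comparison between HTTP polling and a
# WebSocket connection (illustrative numbers only).
def polling_overhead(messages, header_bytes=500):
    """Each poll exchanges full request + response headers (~2x headers)."""
    return messages * 2 * header_bytes

def websocket_overhead(messages, frame_bytes=6, handshake_bytes=1000):
    """One HTTP upgrade handshake, then a small frame header per message."""
    return handshake_bytes + messages * frame_bytes

msgs = 1000
poll_bytes = polling_overhead(msgs)    # header overhead for 1000 polls
ws_bytes = websocket_overhead(msgs)    # handshake + 1000 small frames
```

Under these assumptions, 1,000 messages cost about 1 MB of header overhead with polling versus about 7 KB over a WebSocket, which is why the protocol suits real-time device control.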


Main Conference – Technical Session 11:

After lunch, I attended the technical session themed Content Processing and Classification, with three full papers and two short ones.

Ricardo Marcondes Marcassini (UFMS) presented Transductive Event Classification through Heterogeneous Networks. The work is in the context of event analysis (what? when? where? who?) extracted from textual data published on the Web. Traditional classification algorithms assume the data is available in a vector space representation and require a large training set. The proposal is to use transductive classification over heterogeneous networks as the data model.

Renato Dilli (UFPel) presented EXEHDA-RR: Machine Learning and MCDA with Semantic Web in IoT Resources Classification. The work focuses on discovery and ranking (considering quality factors) of resources in the context of the Internet of Things. They propose an MCDA (Multi-Criteria Decision Analysis) algorithm, integrating it with Machine Learning in order to pre-rank new resources. They propose an architecture that uses this algorithm plus Semantic Web technologies (RDF/OWL, SPARQL).

Mayke Arruda (UFG) presented An Ontology-based Representation Service of Context Information for the Internet of Things. Also in the context of the Internet of Things, the work is motivated by the large growth in the number of devices, which requires that information on devices be represented in a way that enables interoperability between machines. (Operational) ontologies are a promising path for this, but present computational challenges. They propose, then, an evolution of a previous work (Hermes Widget) with a focus on IoT: Hermes Widget IoT. The proposal, which uses the LD vocabularies WGS84, QU, SSN and IoT-Lite, is a representation service for context information.

Bruno Kuehne (UNIFEI) presented Gap filling of missing streaming data in a network of Intelligent Surveillance Cameras. The work evaluated different techniques to fill in missing information: LOCF, Spline and SSA, the last of which is the authors’ own proposal. SSA can predict small variations in the video stream, showing improved results with respect to the other methods in initial evaluations.

Ricardo de Azevedo Brandão (IME) presented Clusterização de dados distribuídos no contexto da Internet das Coisas (Distributed data clustering in the context of Internet of Things). Also motivated by the growing number of devices in the Internet of Things (and the problems this leads to), the work proposes clustering the data in order to perform data mining of information on the IoT devices. To form the clusters, the approach partitions the space according to a parameter (number of rows and columns) and then considers only the cells which are highly occupied (minimum number of points also a parameter), reducing network traffic.
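The grid-based scheme can be sketched as follows (my own minimal version of the idea described above: partition the space into rows × cols cells and keep only the cells meeting the density threshold):

```python
# Sketch: grid-based clustering — bucket points into grid cells and keep
# only cells with at least min_points points.
from collections import defaultdict

def dense_cells(points, rows, cols, min_points, width=1.0, height=1.0):
    """Group (x, y) points by grid cell; return sufficiently dense cells."""
    cells = defaultdict(list)
    for x, y in points:
        col = min(int(x / width * cols), cols - 1)
        row = min(int(y / height * rows), rows - 1)
        cells[(row, col)].append((x, y))
    return {cell: pts for cell, pts in cells.items() if len(pts) >= min_points}

points = [(0.1, 0.1), (0.15, 0.12), (0.12, 0.14), (0.9, 0.9)]
clusters = dense_cells(points, rows=2, cols=2, min_points=3)
```

Only summaries of the dense cells would need to cross the network, which is how the approach reduces traffic: the isolated point above is simply dropped.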


Main Conference – Technical Session 12:

Next, I attended the technical session themed RVA (Virtual and Augmented Reality), 3D environment and Mulsemedia, with two papers. By the way, Mulsemedia stands for Multiple Sensorial Media, i.e., media that involves three or more senses (whereas multimedia traditionally involves only vision and hearing).

Marcelo Fernandes (UFPE) presented MulSeMaker: An MDD Tool for MulSeMedia Web Application Development. The work proposes a tool that tackles some of the limitations in the state of the art for mulsemedia languages and tools. The tool extends HTML5 with new mulsemedia tags. To build MulSeMaker, the authors used MDD tools such as EMF, FMP and MOFScript on the Eclipse platform.

Alexandre Santos and Suanny Fabyne (UFPB) presented A Study on the Use of Multiple Avatars in 3D Sign Language Dictionaries. Using Blender and Unity, the authors created animations with an avatar for 3D sign language dictionaries (the VLibras suite, 13,000+ signs). A requirement for such a dictionary in Brazil, however, is the existence of multiple avatars, representing the diversity of the Brazilian people. When a new avatar is created, the movement of an already-animated avatar can be transferred (retargeted) to the new one; however, a few problems can occur. The study analyzed the existing animations in VLibras, detecting which signs had the potential to cause problems when retargeted.


Keynotes of the day:

Just before lunch, Diogo Cortiz da Silva gave the keynote O futuro imediato das tecnologias e seus impactos sociais (The immediate future of technologies and their social impacts). After introducing the organizations he is associated with (including W3C Brasil), Diogo presented the roadmap of the Web, from the CERN report by Tim Berners-Lee to the future of the Web as a convergence medium. He mentioned, for instance, the fact that blocking HTTP communication in a network connected to the Internet also blocks the operation of many apps (e.g., YouTube, Waze, Facebook). He mentioned other technologies converging to the Web, such as Virtual Reality, Augmented Reality, 360-degree video, Brain-Computer Interfaces, Blockchain and personal assistants. He then moved on to the social impacts of all this technology, such as privacy, the impact of virtual/augmented realities on our senses, the proliferation of fake news, the creation of ideological bubbles, information overload, etc. He then talked about some attempts to tackle these issues. For instance, to deal with fake news (which Diogo considers the biggest threat to the Web nowadays), companies are now working with fact-checking agencies, putting highly shared posts under suspicion, etc. Diogo, however, thinks that the solution lies in educating people for digital life.

At the end of the day, Jesus Favela (CICESE, Mexico) gave the keynote Inferring human behavior using mobile and wearable devices. The talk focused on Behavior-aware Computing, presenting some interesting uses, the baseline technologies (e.g., sensors) and research that has already been done in this field. One particularly interesting use is in Medicine, to better support the collection of data in epidemiological studies (e.g., the All of Us / Precision Medicine Initiative in the U.S. or the UK Biobank). In particular, his research group has been researching the behavioral and psychological symptoms of dementia, applying Behavior-aware Computing. He went on to describe some results and ongoing work in this field: analyzing behavior in home care and at a nursing home, a tool for developing apps that perform opportunistic/participatory sensing using smartphones, and detecting anxiety in caregivers. Jesus then moved on to talk about ideas and theories that are yet to be pursued in this field.
