Summary of the Web.br 2016 Conference

On October 13th and 14th, I attended Web.br 2016, in São Paulo, Brazil. As usual, in order to force myself to pay attention and to share the knowledge with others, I wrote quick summaries of the presentations that were given during the event.

Even though most presentations were in Portuguese, I decided to write the summary in English in order to reach a wider audience (probably wishful thinking on my part, so if you’re reading this and you don’t speak Portuguese, please leave a comment so I know this wasn’t for nothing). 🙂

 

Day 1: Thursday, October 13th

The first keynote was given by Dave Raggett, from the W3C, and was about the Web of Things (title in Portuguese: “Internet das Coisas na Web — IoTw”). He covered many aspects, so the best I could do was take notes in topic form, which I present below:

  • About the Internet of Things (IoT):
    • IoT now = Silos of Things, W3C working on it;
    • Before standards, we need to understand communication patterns;
    • Need for open standards in order to avoid vendor lock-in;
    • Importance of Cyber-Physical Systems in this context;
    • Very early stage, companies are making mistakes and that’s fine. We can look at the history of the Internet for lessons learned;
  • About the Web of Things:
    • Abstraction layer above the IoT;
    • Rich description for every “thing”, identified by URI;
    • Standard metadata and APIs;
    • Semantics (e.g. Linked Data semantic models);
    • Separation of concerns between application developers (applications, things — WoT focus) and platform developers (transfer, transport, network — IoT focus).
  • W3C Web of Things Activity status report:
    • Workshop in Berlin in 2014;
    • Creation of a WoT Interest Group: lots of members, big companies (see photo);
    • Working Group will be launched soon, focused on cross-platform, cross-domain vocabularies;
    • Planning on launching a Business Group to get a view on business level requirements across domains;
    • Talked about the pressing need for coordination regarding these developments;
    • The process for standardization needs to be agile;
    • Issue: security, trust, safety, resilience;
    • Issue: estimated 1.6 zettabytes in 2020 coming from sensors.
  • Building the communities:
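
To make the “rich description for every thing, identified by URI” idea from the list above a bit more concrete, here is a hypothetical sketch of what such a machine-readable description could look like. The context URL, device name, property and endpoints are all invented for illustration; the exact W3C Thing Description format was still being drafted at the time:

```json
{
  "@context": "https://example.org/wot-td-context.jsonld",
  "name": "LivingRoomSensor",
  "uris": ["coap://sensor.example.org/"],
  "properties": [
    {
      "name": "temperature",
      "valueType": { "type": "number" },
      "writable": false,
      "hrefs": ["temp"]
    }
  ]
}
```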

     

    The second keynote was “Behaviour Driven Development, e eu com isso?” (Behavior-Driven Development, what does that have to do with me?), presented by Glaucimar Aguiar from Hewlett Packard Enterprise (trivia: not the same as HP Inc. — the former HP was split into two companies), where she has worked with BDD as a Software Architect for the past 8 years. BDD is motivated by the fact that projects run late, exceed their budgets, don’t satisfy client needs, are deployed with many defects (which are expensive to fix in production), deliver code that is hard to maintain, and suffer from communication issues. The idea of BDD is to focus on functionality that provides value to clients, re-prioritizing often, lowering (or at least establishing) the cost of change, adapting to new realities, finding problems as early as possible and, in general, learning as the project progresses.

    BDD was born from the perception of Dan North that unit tests are descriptions of the expected behavior of the system. In 2009, he defined BDD as “a second-generation, outside-in, pull-based, multiple-stakeholder, multiple-scale, high-automation, agile methodology”. Explaining:

    • second-generation = a re-reading of TDD, DDD, Lean…;
    • outside-in = starts from the vision, the business value;
    • pull-based = just enough, nothing in excess;
    • multiple-stakeholder = user-centered and involves the stakeholders (everyone who cares);
    • multiple-scale = multiple levels (application, code);
    • high-automation = automation, regression, TDD;
    • agile methodology = principles and values shared with the Agile Manifesto.

     

    BDD is an adaptation of existing techniques and tools, but with a change of mindset that makes a difference. Its principles are: just enough, no more, no less; deliver value to the business/stakeholders; the only thing that matters is behavior (at all levels). Functionality is described using stories: work units, deliverable in one iteration, that have value and define expected behavior. Stories are further described in scenarios: examples that clarify ideas, clearing up confusion and removing ambiguity. They use a non-technical, ubiquitous language, a concept borrowed from Domain-Driven Design (DDD). The focus is always on the user, considering the different perspectives: business, developer and tester (the “three amigos”). The definition of a story is the result of interactions, conversations and clarifications; it is not the job of a single person. A story is an agreement between the three amigos. A commonly used template for user stories is “As a <Role>; I request a <Feature>; To gain a <Benefit>”; and for its scenarios: “Given <some initial context>; When <an event occurs>; Then <ensure some outcomes>”. This is based on a DSL called Gherkin, which was created to document and automate tests (many available tools work with Gherkin, e.g., JBehave, Cucumber, SpecFlow, etc.).
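
Putting the two templates together, a story and one of its scenarios could look like this in Gherkin (the feature, amounts and discount rule below are invented for illustration):

```gherkin
Feature: Shopping cart discount
  As a returning customer
  I request an automatic discount on my cart
  To gain a lower total price

  Scenario: Discount applied above the minimum amount
    Given a returning customer with a cart totaling $120
    When the customer proceeds to checkout
    Then a 10% discount is applied to the total
```

Tools such as Cucumber then bind each Given/When/Then line to a step definition in code, which is how the same text serves both as documentation and as an automated test.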

    In summary, the big change in mindset is switching the focus from tests to behavior, using a ubiquitous language and focusing on value for the business. Glaucimar presented a final slide with disclaimers: BDD is not a silver bullet; manual testing will continue to be necessary; BDD is not only about the UI, nor about the use of tools; BDD is a way of thinking, which includes management; the focus is on collaboration; and it facilitates communication through common understanding.

    The third keynote was given by prof. Riichiro Mizoguchi and was entitled “Ontology as a sense-making technology”. He started by summarizing what he has been doing, both in theory — theories of roles/functions/objects/processes/events, and the building of the upper ontology YAMATO — and in practice: modeling functional structures of artifacts and their deployment in industry (SOFAST/OntoloGear); a theory-aware authoring tool based on an ontology of learning/instructional theories (SMARTIES); and building a disease ontology and publishing it as linked data with links to existing ontologies.

    Next, prof. Mizoguchi defined ontology and ontology engineering, explaining types of ontology, light-weight vs. heavy-weight, top-down (heavy) vs. bottom-up (light) approaches, the Semantic Web and Linked Data technologies, etc. He concluded this part by showing how ontologies can help make sense of data. He then moved on to discuss light-weight ontologies, presenting examples such as the LinkedJazz LOD, the Linked Open Vocabularies, Dublin Core and Schema.org.

    Finally, he made the case for heavy-weight ontologies as the foundation of the light-weight ontologies used on the Web. He mentioned the effort to interoperate four medical vocabularies — MeSH, SNOMED-CT, PATO and HPO — which had different definitions for the relation between abnormalities and diseases. He showed a series of problems in putting the concepts from these ontologies together, which could be solved by using top-level ontologies. In summary, light-weight ontologies work well for Linked Data in common-sense or daily-life domains, for making sense of data; but in specialized domains such data are not interoperable and require a careful analysis of the underlying ontological assumptions, so making sense of concepts calls for heavy-weight ontologies.

    Unfortunately, I could not participate in the afternoon part of the day, so the report on the first day ends here.

     

    Day 2: Friday, October 14th

    First of all, it’s important to note that, except for the keynotes, there were four sessions in parallel at any given time (at the same auditorium! See, e.g., this photo), so my report below is on my participation in the conference, i.e., the sessions that I have attended.

    Bernadette Lóscio, Caroline Burle and Newton Calegari presented “Boas Práticas para Publicação de Dados na Web” (Best Practices for Data Publishing on the Web). They talked about the “Data on the Web Best Practices” W3C Working Group, part of the Data Activity, which has the following goals: develop the open data ecosystem, provide guidelines for data publishers, and improve the confidence of data developers. The best practices were based on an existing W3C recommendation for data sets called the Data Catalog Vocabulary (DCAT) and target data on the Web in general (not necessarily open, not necessarily linked). They started by collecting use cases for data on the Web, analyzed them and identified 12 challenges, then extracted the requirements for the best practices: metadata (for humans and machines), licensing, provenance and quality, versioning, identification, formats, vocabulary use, access, preservation, feedback, enrichment and re-publication. The result is available as a W3C Candidate Recommendation, which includes 35 best practices.
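
As an illustration of the kind of machine-readable metadata these practices build on, here is a small, hypothetical DCAT description of a dataset in Turtle. The dataset, organization and URLs are invented; the terms themselves (`dcat:Dataset`, `dcat:Distribution`, `dct:title`, etc.) come from DCAT and Dublin Core:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<http://example.org/dataset/bus-stops>
    a dcat:Dataset ;
    dct:title "Bus stops of ExampleCity"@en ;
    dct:license <http://creativecommons.org/licenses/by/4.0/> ;
    dct:publisher <http://example.org/org/example-city-hall> ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/dataset/bus-stops.csv> ;
        dcat:mediaType "text/csv"
    ] .
```

This single record already covers several of the requirements listed above: metadata for humans and machines, licensing, identification (the dataset URI) and formats.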

    Felipe N. Moura presented “Service Workers: A seus serviços” (Service Workers: at your service). Service Workers can be used to manage redirects, intercept requests, handle caching and perform version control. They are written in an open Web technology (JavaScript), run asynchronously and securely, are supported on multiple platforms (and cause no errors in browsers without support), work even when offline, and allow you to “install” your website on the client side. Once installed and activated, a worker runs behind all the pages opened in its context (the base URL; e.g., multiple tabs open on the same website), on a separate thread. Felipe showed a lot of code demonstrating how to use service workers in practice and provided useful tips for developing with them.
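
A minimal sketch of the pattern described above (pre-cache on install, clean up old versions on activate, serve from cache on fetch). The cache name, asset list and the `isCacheable` helper are invented for illustration; the `typeof self` guard simply lets the file load outside a service-worker context:

```javascript
// Hypothetical cache version label and asset list.
const CACHE_NAME = 'my-site-v1';
const PRECACHE = ['/', '/styles.css'];

// Pure helper (invented for this sketch): decide whether a request
// should be answered from the cache.
function isCacheable(url) {
  return url.startsWith('http') && !url.includes('/api/');
}

// Only wire up the listeners when actually running as a service worker.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    // Pre-cache the core assets before the worker is considered installed.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE))
    );
  });

  self.addEventListener('activate', (event) => {
    // Drop caches left over from previous versions of the site.
    event.waitUntil(
      caches.keys().then((keys) =>
        Promise.all(
          keys.filter((k) => k !== CACHE_NAME).map((k) => caches.delete(k))
        )
      )
    );
  });

  self.addEventListener('fetch', (event) => {
    if (!isCacheable(event.request.url)) return;
    // Cache-first strategy: fall back to the network on a cache miss.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

A page would opt in from its own script with something like `navigator.serviceWorker.register('/sw.js')`, after first checking that `'serviceWorker' in navigator` (which is what makes the feature harmless in browsers without support).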

    After the break, Clara Meyer Cabral, Luiz Henrique Volso and Guilherme Minarelli participated in the panel “Dados Abertos na Prática – Painel 1” (Open Data in Practice – Panel 1):

    • Luiz Henrique started, presenting the project Reclamações PROCON, which contains open data made available by PROCON (Brazilian Consumer Rights Government Bureau). He mentioned challenges for publishing useful visualizations of data on the Web: lack of data and inconsistencies in data.
    • Then, Clara presented the “Programa Cidades Sustentáveis” (Sustainable Cities Program) from the “Rede Nossa São Paulo” (Our São Paulo Network). In the context of this program, a partner developed a piece of software called IOTA that collects indicators from signatory cities and displays them on a website in human- and machine-readable versions. Data can be shown/downloaded at three granularity levels: city, macro-region and region.
    • Finally, Guilherme presented “Diário Livre” (Free Diary), an application that makes the data from São Paulo’s Official Diary available in open formats. The platform currently gets 40 thousand accesses per day and provides accessibility features that the PDF version of the diary does not. It was developed with open technologies, which makes the platform replicable in other Brazilian cities at no cost.

     

    Next, Karina Moura, Judson Bandeira and Natália Mazotte participated in the panel “Dados Abertos na Prática – Painel 2” (Open Data in Practice – Panel 2):

     

    After lunch, Gisele Craveiro, Laila Bellix and Maria Alexandra Cunha conducted the panel “Governo Aberto” (Open Government):

    • Laila talked about open government from the point of view of the government, being a public servant for the State of São Paulo. She mentioned that the technology used for open government is still under dispute and should be selected with the goals of promoting citizen participation and transparency in mind. She presented projects under different open government action lines: (1) access to information — e.g., tranSParência (transparency portal); (2) training in open government; (3) digital and in-person participation; (4) creation of labs and experimentation — e.g., MobiLab, LabPRODAM. She closed by mentioning some limitations, such as the generational conflict in government structures/people; the use of costly, proprietary technologies; the purchase model for technology; the management of information and public data; and privacy.
    • Gisele contextualized open government as another branch of the open culture in IT, which started with the Open Source movement. As with other Web 2.0 efforts, for some time now we have acknowledged the need for a new model of government. She talked about how people could participate in this process of co-design, co-implementation and co-production of government actions.
    • Finally, Maria Alexandra discussed the opportunity we now have as governments collect a lot of data from citizens, services, etc. As an example, she mentioned the resilience strategy of the city of Porto Alegre, which was built collaboratively with citizens from each district of the city, in a bottom-up fashion. There is also an opportunity to use technology from the Internet of Things, like sensors, cameras and other urban equipment. We can and should implement an open platform for real citizen participation in the co-production of public policies.

     

    Next, I participated in the panel “IoT e Dados Abertos Conectados” (IoT and Linked Open Data), together with Bernadette Lóscio, Carlos Laufer, Seiji Isotani and Wagner Meira. For obvious reasons, I couldn’t take notes during the panel. 🙂

    After the break, Claudia Melo gave a keynote called The Web of Gendered Innovations. She started by presenting a series of “fictitious” stories showing how the gender bias of the offline world is quite present on the Internet/the Web. Then, if society shapes technology and technology shapes society, what kind of technology do we want to build? And moving to the Internet of Things, it gets even more complicated.

    Claudia then moved on to the topic of gendered innovations. When developing a product (from a shoe to a high-tech product), for instance, one should integrate sex and gender into its planning in order to satisfy the needs of everyone. It’s similar to what people call “inclusion”, but she pointed out the incoherence of this term, given that women are the majority in the world.

    She cited some nice examples of projects that move towards this direction: Liquen, a Wikipedia Bias Detector; The OpEd Project, which monitors women’s voices in the news; textio, which improves job posts by detecting biased language; Cidade 50-50 (50-50 City), which stimulates gender-balanced discussions regarding public policies; and Digital Women’s Archive, which curates appropriate pictures of women.

    She closed the talk with the following sentence: “gender equality is not a woman’s issue, it is a human issue”.

    The last presentation of the conference was also a keynote, delivered by Bert Bos, from the W3C (and co-creator of CSS), called “Livros conectados & CSS” (Connected Books & CSS). Bert started by stating that digital publishing is at a tipping point. There is an Interest Group on digital publishing (DPub) with various requirements documents, including one called Portable Web Publications (PWP). The idea of PWP is to be a self-contained document that could be read in a browser or e-reader/app, online or offline, and have associated services (e.g., errata, readers’ forum, etc.). PWP could serve as the basis for the whole publication process, support different publication formats, contain sufficient metadata (for sales, cataloging, etc.), be archivable, provide access (URLs) to its components, and offer better accessibility than current formats.

    Bert moved on to talk about CSS, its current state (after 20 years!) and things to come. He then mentioned another standard, XSL, which did not have much success on the Web but did succeed in the publishing world. Even in the publishing world, however, some people are switching to CSS and are now asking for features for advanced typography. More than 50 million books have already been made with CSS. In particular, he discussed the issue of rendering speed if such advanced features get added to CSS. The problem, then, is: how do we get both speed and good typography? He presented some possible solutions currently under discussion (a ‘will-change’ property to indicate dynamic behavior, profiles for ‘static’ and ‘dynamic’).

    He mentioned many advantages of using XML/HTML and CSS for publishing, but also a few disadvantages, such as limited layout capability, poor typography and lack of control, although CSS is expected to catch up. He then presented examples of typographic challenges in books/magazines that CSS would have to deal with.

    His slides are available at the W3C website.
