Our productions


OSRD: Open Source Railway Designer

We work with the OSRD team, helping to develop a web-based railway infrastructure editing and management tool. More precisely, we work on web interfaces dedicated to infrastructure editing, based on an advanced cartographic tool. We also contribute to specific visualization tools such as the time-space graph.

While the project currently focuses on the French railway infrastructure, it is meant to be usable in other national contexts. It is supported by the OpenRail Association, which promotes the adoption of open source railway tools across borders.

An Open Source and Open Data project

  • Industry
  • JavaScript
  • MapLibre
  • React
  • TypeScript
The editor allows editing the metadata of many infrastructure elements (track sections, signals, switches...), as well as reviewing and correcting data import errors.

The "warped" view enhances the classic railway space-time diagram with exhaustive infrastructure data mapped on top of OpenStreetMap geographic layers.

The editor reveals anomalies in the infrastructure digital twin that prevent simulations from running correctly, and allows correcting them. The application embeds specific algorithms which can often ease those corrections.

A new GraphCommons version

Since 2021, we have been developing and maintaining the web platform for GraphCommons, which focuses on mapping, analyzing, and sharing network data. This project has leveraged our entire network expertise, from modeling and databases to visualization and web development. We have integrated Neo4j for data management, and sigma.js and graphology for client-side operations. The website is built on Next.js and React. We continue to regularly develop new features while maintaining the application.
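On GraphCommons, graphology and sigma.js handle the client-side graph operations. As a simplified illustration of one such operation, neighbor highlighting, here is a minimal sketch using a plain adjacency index (names and data are invented; the real implementation relies on graphology's graph model):

```javascript
// Illustrative edge list; on GraphCommons this data comes from Neo4j.
const edges = [
  ["alice", "bob"],
  ["alice", "carol"],
  ["bob", "dave"],
];

// Build an adjacency index once, so lookups stay fast while rendering.
const neighbors = new Map();
for (const [source, target] of edges) {
  if (!neighbors.has(source)) neighbors.set(source, new Set());
  if (!neighbors.has(target)) neighbors.set(target, new Set());
  neighbors.get(source).add(target);
  neighbors.get(target).add(source);
}

// When a node is selected, everything outside its neighborhood is dimmed:
// this returns the set of nodes to keep highlighted.
function highlighted(selected) {
  return new Set([selected, ...(neighbors.get(selected) || [])]);
}

console.log([...highlighted("alice")].sort()); // ["alice", "bob", "carol"]
```

Precomputing the index keeps hover and selection interactions cheap even on large graphs, since each lookup is then a single Map access.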

A Consulting and support project

  • JavaScript
  • Graphology
  • React
  • Neo4j
  • Data visualisation
  • DevOps
  • Architecture
In a large graph about 'dataviz', the node 'Benjamin Ooghe-Tabanou' is selected. Its neighbors are highlighted both on the graph and in the right panel, which also displays the node's attributes.

  • The 'Hub' paid feature allow to create many views on one common knowledge graph. On this screen, we create a view by aggregating all paths from universities to tools passing through people.

A view allows isolating part of a larger knowledge graph hosted in a Hub. On this screen, the right panel shows the caption revealing the graph's data model.

HOPPE-Droit

Explore a collection of 19th-20th century French law educational works

The HOPPE-Droit project aims to create and publish a collection of French law educational works from the 19th-20th centuries. We designed and developed an exploration tool which helps study the evolution of French law through educational materials from the 19th century onwards.

The dataset is edited by the CUJAS team in a Heurist database. The data are exported through an API to be indexed into ElasticSearch. We made sure to preserve the data's complexity, for instance by handling date uncertainty and levels of precision. Finally, a web application allows exploring the dataset from different angles: books, authors, publishers, co-publication networks, genealogies...
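One way to preserve date uncertainty when indexing is to store each date as a range whose width reflects its precision, which ElasticSearch range fields support natively. The helper below is a hypothetical sketch of that idea, not the actual HOPPE-Droit data model:

```javascript
// Convert a date plus a precision level into {gte, lte} bounds, suitable
// for an ElasticSearch "date_range" field. Precision labels are invented.
function toDateRange(value, precision) {
  switch (precision) {
    case "year":
      // "1873" becomes the full year interval.
      return { gte: `${value}-01-01`, lte: `${value}-12-31` };
    case "decade": {
      // "1865" becomes the 1860-1869 interval.
      const start = Math.floor(value / 10) * 10;
      return { gte: `${start}-01-01`, lte: `${start + 9}-12-31` };
    }
    case "day":
    default:
      // Exact dates degenerate into a single-day range.
      return { gte: value, lte: value };
  }
}

console.log(toDateRange(1873, "year"));
// { gte: "1873-01-01", lte: "1873-12-31" }
```

Querying against ranges rather than point dates means an imprecise date still matches any period it could plausibly fall into, instead of being silently coerced to January 1st.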

A Custom development project

  • Digital humanities
  • Data visualisation
  • Conception
  • ElasticSearch
  • React
  • Heurist
  • Sigma.js
  • Faceted search on the collection authors

Network of authors and publishers linked by educational works

Publisher's genealogy page summing up associations, acquisitions...

Explore French election candidates' professions of faith since 1958

Since 2013, the Sciences Po Library has managed the publication of the electoral archives created by the Centre de recherches politiques (CEVIPOF) and now preserved by the Library's archives department: a unique collection of candidates' professions of faith for elections (legislative, but also presidential, European, etc.) since 1958.

After publishing it on the Internet Archive, the Sciences Po Library came to us to build a custom exploration tool. They had indeed built a very rich set of metadata describing the candidates' profiles in detail for the entire collection (more than thirty thousand documents). We created a faceted search engine allowing the collection to be filtered by election, electoral division, political group, candidate profiles...

The resulting documents can then be explored through lists and data visualisations, or downloaded as CSV. The original documents are available thanks to the Internet Archive's embedded player, so that the indexing choices made by the librarians and archivists can be checked against the source.

A Data valorization project

  • Digital humanities
  • JavaScript
  • React
  • TypeScript
  • ElasticSearch
  • A faceted search engine on legislative elections' candidates' professions of faith.

Visualisation of the selected documents across time and the French territory.

Candidates' profiles: gender and age, professions, political support...

  • For one document, metadata and original scan can be read side by side.

Configuration management database

Exploring a CMDB through ego-centered networks

One of the largest French industrial groups indexed its whole CMDB in a Neo4j database, and contacted us to develop an interface to explore this dataset.

The web application is composed of a search engine and a dedicated page for each node, displaying its neighborhood and metadata. To make the search engine effective (error tolerance, multi-field search), we indexed the corpus in ElasticSearch.
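Error tolerance and multi-field search map naturally onto ElasticSearch's `multi_match` query with fuzziness enabled. Here is a hedged sketch of such a query builder; the field names and boosts are illustrative, not the actual CMDB schema:

```javascript
// Build an error-tolerant, multi-field ElasticSearch query body.
function buildSearchQuery(input) {
  return {
    query: {
      multi_match: {
        query: input,
        // Hypothetical CMDB fields; "^3" boosts matches on the name.
        fields: ["name^3", "description", "hostname"],
        // "AUTO" tolerates typos, with an edit distance scaled to term length.
        fuzziness: "AUTO",
      },
    },
  };
}

const body = buildSearchQuery("databse server"); // note the typo
console.log(body.query.multi_match.fuzziness); // "AUTO"
```

With `fuzziness: "AUTO"`, a misspelled query like "databse" still matches documents containing "database", which is what makes the search engine forgiving for operators typing host or service names from memory.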

The frontend is developed with Angular and the API runs on Node, all in TypeScript.

A Custom development project

  • Industry
  • Neo4j
  • ElasticSearch
  • Angular
  • Sigma.js
  • Search page, through the different node types

  • Node page, with its ego-centered network, the list of its direct neighbors and its metadata

  • Fullscreen network exploration

Renewing our understanding of urban soundscapes

The LASSO platform is a web application that presents a range of spatio-temporal datasets related to soundscapes. These datasets are the result of collaborative projects between Gustave Eiffel University and the University of Cergy-Pontoise. The platform aims to demonstrate the value of soundscape mapping in comparison to standardized noise mapping, and to provide researchers and data analysts with access to exclusive soundscape datasets.

In addition to serving as a repository of unique datasets, the LASSO platform is committed to advancing our understanding of the role of soundscapes in shaping our environment.

We designed and developed this platform as a serverless React application powering advanced vector tile cartography thanks to MapLibre.
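Vector tile cartography in MapLibre is driven by a style document that declares tile sources and data-driven layers. The sketch below shows the general shape of such a style for a soundscape layer; the tile URL, layer names and variable are illustrative, not the actual LASSO configuration:

```javascript
// Minimal MapLibre GL style using a vector tile source.
const style = {
  version: 8,
  sources: {
    soundscapes: {
      type: "vector",
      // Hypothetical tile endpoint serving the soundscape data points.
      tiles: ["https://example.org/tiles/{z}/{x}/{y}.pbf"],
    },
  },
  layers: [
    {
      id: "pleasantness",
      type: "circle",
      source: "soundscapes",
      "source-layer": "points",
      paint: {
        // Color each point by its "pleasant" variable, from red to green.
        "circle-color": [
          "interpolate", ["linear"], ["get", "pleasant"],
          0, "#d73027",
          1, "#1a9850",
        ],
      },
    },
  ],
};

// In the browser, this style would be passed to:
//   new maplibregl.Map({ container: "map", style })
console.log(style.layers[0].id); // "pleasantness"
```

Because the styling expression reads the variable directly from the tiles, switching the map from one soundscape variable to another only requires swapping the `["get", ...]` property, with no server round-trip.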

A Data valorization project

  • MapLibre
  • TypeScript
  • React
  • JavaScript
The LASSO platform lets users explore the different soundscapes created by the research team

Two maps are synchronised to make comparing variables easier: 'pleasant' on the left, standard noise levels on the right.

Each data point lists the soundscape variables' values: frequency of sound sources (birds, traffic, voices), the sound level, as well as the two emotional variables, pleasant and liveliness.

Bibliograph

Bibliograph is an online tool which we created with and for Tommaso Venturini to support his research on the dynamics of scientific communities. Our mission was to reproduce a co-reference analysis method, already implemented in Python, as a web tool allowing visual exploration of the produced networks. A very tight time constraint led us to develop this project as an intensive on-site workshop with the client. By navigating between ideas and constraints with the help of an agile method, we succeeded in producing a simple yet efficient scientometric tool meeting the scientific needs in a very short time.

A Data valorization project

  • Digital humanities
  • Visual analysis
  • Conception
  • JavaScript
  • React
  • TypeScript
  • Sigma.js
  • Graphology
First step: import the CSV dataset.

After parsing and indexing: filter settings.

Finally, the co-reference network is visualized with its metadata nodes.

The Digitization of Everyday Life During the Corona Crisis

We developed a web application which allows a research team to analyse an ethnographic dataset by navigating and qualifying the collected materials. The dataset was collected in Denmark during the COVID-19 lockdown, between April and June 2020. It includes 222 interviews, 84 online diaries, and 89 field observations.

This study was part of the project "The Grammar of Participation: The Digitization of Everyday Life During the Corona Crisis", carried out in collaboration between researchers from the Centre for Digital Welfare at the IT University of Copenhagen and the Techno-Anthropology Lab at Aalborg University.

This tool is not publicly available, and access to the data is restricted to the research team. The screenshots below were made with fake data.

A Data valorization project

  • Digital humanities
  • JavaScript
  • React
  • TypeScript
  • ElasticSearch
A search engine over interview and field observation segments

Each dataset document has its own web page.

Each document has been segmented. Segments can be referenced by their URL and qualified by entering tags.

Exhibition-test

Data infrastructure specifications of an interactive exhibition

We designed the data infrastructure for an exhibition which observes its visitors: data flow specifications from data capture to video walls projecting visualisations, through analysis, archiving and graphic rendering processes.

The exhibition was canceled due to the COVID-19 epidemic. Those plans have not been realized yet.

A Consulting and support project

  • Digital humanities
  • Realtime data
  • Data visualisation
  • Conception
  • Architecture
  • Data infrastructure schema extract

  • Physical infrastructure schema extract

Hyphe

Web content indexing and automated deployment on OpenStack

Hyphe is a web crawler designed for social scientists, and developed by Sciences-Po médialab.

We added the following features:

  • Automatic textual indexing of web corpora, through multiprocess content extraction and indexing in ElasticSearch
  • Automatic deployment of the Hyphe server on OpenStack-compatible hosting services

An Open Source and Open Data project

  • Digital humanities
  • Python
  • ElasticSearch
  • JavaScript
  • DevOps
  • OpenStack
  • Functional tests of the indexing process

  • Configuration of the Hyphe server to be deployed (Hyphe Browser)

  • Specifications of the cloud server to be deployed (Hyphe Browser)

Contractor for Neo4j

We work on behalf of Neo4j to assist their customers with their graph projects. We provide Neo4j consulting, from data modeling, loading and visualization to prototypes and full web projects based on modern web technologies.

A Consulting and support project

  • Neo4j
  • Data visualisation
  • Conception

E-commerce and online payment

We helped develop the payment process of one of the biggest French e-commerce websites, using Clojure and ClojureScript.

A Consulting and support project

  • Industry
  • Clojure
  • ClojureScript
  • Web performance

RadioPolice

Visual analysis and semantic theme extraction from a tweet dataset

We were contacted to semantically analyse a corpus of French tweets. We set up a topic extraction pipeline based on term co-occurrence analysis and chi² token filtering. We also shared online tools to explore topic communities as term co-occurrence network maps.
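The first step of such a pipeline is counting how often pairs of terms appear together in the same tweet. Here is a simplified sketch of that counting stage (tokenization and the chi² filtering are omitted, and the example tweets are invented):

```javascript
// Count term co-occurrences across documents.
// Each document is given as an array of tokens.
function countCooccurrences(documents) {
  // Keys are "termA|termB" with termA < termB, so each pair is counted once.
  const counts = new Map();
  for (const tokens of documents) {
    const unique = [...new Set(tokens)].sort();
    for (let i = 0; i < unique.length; i++) {
      for (let j = i + 1; j < unique.length; j++) {
        const key = `${unique[i]}|${unique[j]}`;
        counts.set(key, (counts.get(key) || 0) + 1);
      }
    }
  }
  return counts;
}

const counts = countCooccurrences([
  ["police", "violence", "marche"],
  ["police", "violence"],
]);
console.log(counts.get("police|violence")); // 2
```

The resulting pair counts form the weighted edges of the co-occurrence network; a significance filter (chi² in this project) then prunes pairs that co-occur no more often than chance would predict.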

David Dufresne and the French journal Mediapart wanted to publish the corpus. We helped them set up ElasticSearch and Kibana to forge one query per curated topic, and to compute aggregated indicators for the final user interface designed and developed by WeDoData, Etamin Studio and Philippe Rivière / Visions carto.

A Data valorization project

  • Data journalism
  • Python
  • Natural Language Processing
  • Data science
  • Visual analysis
  • ElasticSearch
  • Kibana
  • Co-occurrence network of terms from the "(il)légitimité" theme

  • Neighbors of "palet" in the co-occurrence network of significant terms

  • Building the theme "outrage" as a search query in Kibana

RICardo

RICardo is a research project about international trade in the 19th-20th centuries.

We improved the existing web application:

  • Refactoring of the existing visualizations
  • New visualizations of exchange rates and political statuses
  • Permalinks with visualization parameters, on all pages

Read our blog post "Some new visualizations for the RICardo project" to learn more about this contract.

A Data valorization project

  • Digital humanities
  • Data visualisation
  • Conception
  • JavaScript
  • AngularJS
  • This additional timeline fosters taking the political context into account when analysing historical trade

  • We created a heatmap to compare the relative importance of trade partners

  • Exchange rates exploration through small-multiples

Production monitoring dashboard

Custom Kibana plug-ins development

An industrial actor contacted us to help them distribute dashboards within one of their products. After a brief benchmark, Kibana appeared to be the best solution, despite missing some key features.

We developed a custom plugin providing those missing features: integrating the dashboards inside a custom web page, with custom styles.

A Consulting and support project

  • Industry
  • Kibana
  • ElasticSearch
  • Dashboard

TOFLIT18

TOFLIT18 is an online exploratory analysis tool for 18th century French trade by product. We updated this tool, created by Sciences Po médialab, by optimizing the Neo4j queries, adding permalinks, and adding a data table which lists and filters trade flows.

An Open Source and Open Data project

  • Digital humanities
  • Neo4j
  • JavaScript
  • React
  • Nantes exports trade flows from 1720 to 1780

  • Classification coverage ratio optimization

  • Permalink to the term networks of 18th century Nantes exports

Want to know more about what we do?

Learn more about our services