Mass estimating software

SIREHNA

As a software engineer at SIREHNA, I was responsible for developing mass estimating software that met the specific needs of our clients. Using the Python programming language, I worked on the core of the app, creating a database-centered API that was independent of the graphical user interface. My focus was on implementing robust file reading and writing operations and on handling incorrect or missing data. This included using the pandas library to read Excel files and manage dataframes. I used a relational database to ensure the integrity of the data while also offering efficient computations, which involved writing SQL to query and update the software's storage. Additionally, I implemented features that properly managed the data history, using Git as the versioning tool. To ensure code quality, I followed several good practices, including the use of code formatters and linters (black, pylint, isort...), containerization with Docker, and continuous integration with GitLab CI and code review. Throughout the project, I collaborated closely with the clients to understand their needs and build software that was truly useful to them. The project is still ongoing as I write this.
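A minimal sketch of the kind of robust Excel reading this involved, assuming a hypothetical column layout (the real schema is client-specific):

```python
import pandas as pd

# Hypothetical column layout -- the real schema is client-specific.
REQUIRED_COLUMNS = {"name", "mass_kg", "x", "y", "z"}

def clean_mass_table(df: pd.DataFrame) -> pd.DataFrame:
    """Validate a raw mass-breakdown table and drop unusable rows."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    # Coerce masses to numbers; bad cells become NaN instead of raising.
    df = df.assign(mass_kg=pd.to_numeric(df["mass_kg"], errors="coerce"))
    return df.dropna(subset=["mass_kg"])

def load_mass_items(path: str) -> pd.DataFrame:
    """Read one sheet of an Excel mass report (needs openpyxl for .xlsx)."""
    return clean_mass_table(pd.read_excel(path))
```

Separating the cleaning step from the file reading keeps the validation logic testable without touching the filesystem.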

Python SQL SQLite Docker Git GitLab GitLab CI CI/CD VS Code Pytest Pandas PySide PyQt

PhoneHistory

Android App

Phone History is an Android application that retrieves the usage history of the phone. It gets the phone usage from the Android system and displays it in a list starting with the most recent usage. It also explores the phone storage to display file modifications, such as pictures taken or files downloaded. Finally, it displays statistics about the phone usage during the last 24 hours, including the top 10 user applications in terms of usage with the associated durations, as well as a chronological graph of the phone usage per hour. Technically, the app uses the UsageEvents API to recover the apps' usage; a dedicated algorithm reads data from this API, which only returns events. The Android system only keeps a record of phone usage for a few days, so the app cannot show any older usage on its own. Therefore, PhoneHistory stores the app usage in an internal SQL database. This application is - as I write this - my most popular app on the Google Play Store.

Android Java XML UI Design SQL IntelliJ IDEA Android Studio

E-scooters

Mobile app as a school project

As a school project, I was part of a team tasked with coming up with an idea for a mobile application. We decided to create an app that brings together all the e-scooter brands in Stockholm. The idea was that the user would only need a single app to compare the locations and prices of nearby e-scooters and to access them. First, we created a web app. Using React JS and Tailwind CSS, we designed the front-end. The app could be standalone, as the back-end was provided by the e-scooter brands' APIs. I focused on using the Leaflet library to display the map, the user's location, and the e-scooter pins. The other members of the team built different parts of the app, such as the communication with the e-scooter brands' APIs. In the end, we had a standalone web app that displayed e-scooters' locations, but without the ability to actually rent an e-scooter. In the second part of the course, we had to create a native app. We chose React Native to be cross-platform and to stay somewhat close to what we had done during the first assignment. Once again, I was in charge of the map utilities. However, I did not find an easy-to-use, cross-platform library for React Native that displays a map like Leaflet does. Thus, I implemented the map-tile workflow myself, using OpenStreetMap data, in order to reproduce the behavior of a map component. It ended up working properly, except for the zooming utilities. We reached almost the same point with the native app as with the web app. This experience allowed me to work as part of a team, entirely in English as we came from different countries in Europe, and to develop my skills in web front-end and cross-platform development. I also gained experience in using React and React Native, as well as in working with maps, using Leaflet and OpenStreetMap data. It also helped me understand the challenges of and differences between developing a web app and a native app.
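The heart of any map-tile workflow is the standard OpenStreetMap "slippy map" formula converting coordinates to tile indices. This is a sketch of the idea in Python, not the project's actual React Native code:

```python
import math

def deg_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Convert WGS84 coordinates to OpenStreetMap tile indices."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    # Web Mercator projection for the y axis.
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tile_url(lat: float, lon: float, zoom: int) -> str:
    """Build the URL of the OSM raster tile covering the given point."""
    x, y = deg_to_tile(lat, lon, zoom)
    return f"https://tile.openstreetmap.org/{zoom}/{x}/{y}.png"
```

A map component then fetches the tiles around the user's position and lays them out on a grid; zooming amounts to switching `zoom` level and re-fetching.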

React JS React Native Tailwind CSS Leaflet JavaScript UI Design Web Mobile API VS Code

Simulation and data management environment

Internship at SIREHNA

I worked on a project for SIREHNA, a company that runs computational fluid dynamics (CFD) simulations to evaluate ship performance (amongst other things). My role in this project was to create a simulation and data management environment that would help the SIREHNA team track and retrieve their data, as well as automate low value-added tasks. After working on understanding the users' needs, I worked with the team on defining the environment's architecture. We chose to use a web server as the cornerstone of the architecture. We highlighted the functional needs, such as user authentication, user management and data storage, and then benchmarked technologies. I used the Girder library to instantiate a web server dedicated to managing users, their authentication and their permissions. Using this library saved us time in implementing user authentication and management functionalities. To answer the large data storage need - coming from geometries involved in the CFD simulations that can weigh tens of GB - I set up a MinIO instance. As MinIO follows the Amazon Simple Storage Service (S3) standards, I could easily connect it to the Girder-based web server. Another main part of the environment, which was very time consuming, was automating some of the CFD simulation processes. As a GitLab server was hosted internally, I used its pipeline features (which are mostly designed for CI/CD purposes) to fully automate simulations. The idea was to use GitLab's pipeline scheduler to separate the main steps of a simulation. The actual computation scripts were executed through Makefile targets, while communications with the Girder-based web server were done via Python scripts. In order to execute computations efficiently, we installed a GitLab Runner on a device with large computational power.
The final piece of the puzzle was creating a user-friendly web interface using Girder web components, allowing users to easily start automated computations, monitor their progress, and keep track of their results.
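The scheduled pipeline can be pictured with a sketch like the following; the stage names, script paths and runner tag are illustrative assumptions, not the actual SIREHNA configuration:

```yaml
# Hypothetical sketch of the scheduled simulation pipeline.
stages:
  - prepare
  - compute
  - publish

prepare_case:
  stage: prepare
  script:
    - python scripts/fetch_inputs.py   # pull geometry from the Girder server
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

run_simulation:
  stage: compute
  tags: [hpc]                          # route to the powerful GitLab Runner
  script:
    - make simulate

publish_results:
  stage: publish
  script:
    - python scripts/upload_results.py # push results back to Girder/MinIO
```

Each stage maps to one main step of a simulation, so a failed step can be retried without redoing the whole run.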

Python Girder Git GitLab GitLab CI Docker Docker compose MinIO MongoDB Vue JS JavaScript API VS Code

TakeNews

Web mobile app to stay in touch

TakeNews is a mobile-oriented website built with React, MUI components, and Tailwind CSS. The goal of this project was to create a web app that allows users to stay in touch with their contacts regularly. The app is designed specifically for mobiles, as the main objective was to explore the potential of a web application to take advantage of various mobile device features, similarly to native mobile apps. First, the app uses the browsers' local storage feature, which enables user-specific storage on the client side. As the goal was to create a standalone website (i.e. without a back-end), this kind of storage met every need. I also thought of using the IndexedDB API, but as the data stored was small, I preferred local storage. Then, as the app's goal is to help the user keep in touch with friends and family, the app is able to open the phone dialer app. This is done simply by using the tel: scheme. In the future, I hope to be able to open other external applications, such as Facebook Messenger. Finally, using the experimental Navigator.contacts API, the app is able (on Chrome) to browse the phone's contact list and let the user pick a contact to import into the app. The app's development is still in progress, and soon I would like to enable notifications that remind the user to contact the people they have not talked to in a while. Technically, TakeNews is hosted on GitHub Pages as a standalone web app. It is built with React JS, using MUI components and icons in addition to Tailwind CSS for some styling. ESLint and Prettier are used locally and in continuous integration workflows to enforce the code format. Docker is used to offer a portable container, usable regardless of the operating system.

TypeScript React JS MUI Tailwind CSS GitHub GitHub pages GitHub actions CI/CD Docker Prettier ES Lint VS Code

My portfolio

Static website to showcase my experiences

This project consisted in building my portfolio, using high-level technologies and centering the development on the data. The idea was to first write down my experiences, projects, profile, education, etc., and then build a portfolio website around this. It ended up as a single YAML file, a website and Python scripts. You are probably reading this on the website resulting from this project (https://antoine.mandin.dev). The first - and main - part of this project is the actual data: I condensed my experiences, projects and skills into a single YAML file. The second part of the project is the portfolio website. It is built using Jekyll and hosted on GitHub Pages. I used a pre-existing theme (named Beautiful Jekyll), which I overrode with custom styles inspired by other Jekyll themes. Using a plugin, I managed to generate almost the entire website from the YAML data file. I also implemented a search bar using the MiniSearch library. In the end, I built a static website, hosted on GitHub Pages, with its own styling, search engine and domain name. Finally, the third part of the project was a small Python module dedicated to checking the format of the data. I built a grammar file in YAML representing the expected content of the data (e.g. that a project must have a title, which is a text, and may have an end-date with the format dd-mm-yyyy), including the reference workflow. Indeed, the whole idea was to have a consistent website, where projects may be linked to a job, a job to a company, etc. This Python module was accompanied by formatters and linters, and used in a continuous integration workflow, to ensure that the data stays well formatted throughout its lifetime. This project is still in progress as I write this.
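A much-simplified sketch of what such a grammar check can look like; the field names and rules here are hypothetical, not the portfolio's actual grammar:

```python
import datetime

# Hypothetical grammar for one entry type -- the real file covers many more.
PROJECT_GRAMMAR = {
    "title": {"type": str, "required": True},
    "end-date": {"type": str, "required": False, "format": "%d-%m-%Y"},
    "job": {"type": str, "required": False, "ref": "jobs"},
}

def check_project(project: dict, known_refs: dict) -> list:
    """Return a list of human-readable errors (empty means valid)."""
    errors = []
    for field, rule in PROJECT_GRAMMAR.items():
        value = project.get(field)
        if value is None:
            if rule["required"]:
                errors.append(f"missing required field '{field}'")
            continue
        if not isinstance(value, rule["type"]):
            errors.append(f"'{field}' should be {rule['type'].__name__}")
            continue
        if "format" in rule:
            try:
                datetime.datetime.strptime(value, rule["format"])
            except ValueError:
                errors.append(f"'{field}' is not a dd-mm-yyyy date")
        if "ref" in rule and value not in known_refs.get(rule["ref"], set()):
            errors.append(f"'{field}' references unknown entry '{value}'")
    return errors
```

The `ref` rule is what enforces the consistency idea: a project's job must exist in the set of known jobs, a job's company in the set of known companies, and so on.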

Jekyll GitHub pages GitHub actions GitHub Git JavaScript CSS HTML Web UI Design Python CI/CD VS Code

Data mining and conflicting areas identification

Internship in an IT research laboratory

During my internship at the LIRIS research laboratory, I was tasked with finding computation methods to analyze a dataset prior to training a machine learning model. I ended up with Python modules capable of analyzing the data and highlighting possibly incorrect data or values that would be problematic for a machine learning training. At the very beginning of my internship, I learned the theory behind machine learning methods and technologies, applied in Python. I became familiar with machine learning concepts such as supervised/unsupervised learning and underfitting/overfitting. Following a thesis, the idea of the project was to build scripts capable of finding incorrect data in a large dataset. The overall idea was the following: targeting a classification algorithm, the dataset has multiple input attributes that are used to guess a target attribute. The goal was then to evaluate how well the target attribute can be guessed, following the idea that, if there exist multiple instances in the dataset with the same input attributes but a different target, then any algorithm (regardless of its complexity) will be unable to be 100% successful. Let's illustrate that with a common dataset in machine learning: the Iris dataset. It is made of data about iris flowers and contains 5 attributes: the width and length of the sepals, the width and length of the petals, and the species. A classification algorithm then has to guess the species based on the width and length of the sepals and petals. Applied to this dataset, the idea of the project was to check whether two flowers have the same sepal and petal lengths/widths while being of two different species. When considering perfect equality as the comparison function between the input attributes, the algorithm has a complexity of O(n.log(n)) (using a map as the data structure). However, when considering a tolerance in the comparison function, the naive algorithm has an O(n²) complexity, which is not acceptable for large datasets.
Thus, my goal was to find ways to perform this search within an acceptable computation time. I did a literature search on the problem of comparing each element of a dataset to all the other elements in a reasonable computational time. Following this, I implemented a method called blocking. The idea behind it is the following: Mapping - first, the values are mapped into blocks of similar values; the goal is to avoid comparisons between values that are obviously different. Reducing - the second step is to actually compare the values, using a naive algorithm, block by block; this step can be parallelized, as each block is independent. I added a step to the process to ensure exhaustiveness, taking advantage of the fact that we know the tolerance of the comparison function. This step consists in duplicating the values at the edges of the blocks into the surrounding blocks, so that every value is compared to all of its neighbors, regardless of its absolute position in the dataset. The resulting algorithm returns pairs of contradictory values, i.e. values with similar input attributes but different target attributes. These pairs can be fed into visualizations of problematic/conflicting areas, i.e. areas of the dataset that contain a lot of contradictory values. This kind of visualization can be useful for experts of the data source, in order to explain why contradictory values appeared there or to question the manner of collecting the data. Finally, the contradictory pairs can be post-processed to search for the minimum set of values to remove from the dataset so that no contradictory pair remains. This search is similar to the search for a minimum vertex cover, with the values as vertices and the contradictions as edges. Indeed, if a single value contradicts 10 others, it is easy to conclude that this one should be removed, but with more complex networks, the decision is harder to make.
The search for a minimum vertex cover is proven to be NP-hard. In the end, the developed Python scripts were able to identify in about 30 minutes the pairs of contradictory values in a dataset for which the naive algorithm would have taken about a year of computation. This project allowed me to learn the concepts and problems related to machine learning, as well as to deepen my skills in algorithms and parallel computing.
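The blocking method described above can be sketched as follows, reduced to a single numeric input attribute for readability (the real modules handled multiple attributes and ran the reduce step in parallel):

```python
from collections import defaultdict
from itertools import combinations

def contradictory_pairs(rows, tol):
    """Find pairs whose inputs differ by at most `tol` but whose targets differ.

    `rows` is a list of (input_value, target) tuples. With block width equal
    to `tol`, two values within tolerance land in the same or adjacent blocks,
    so duplicating each value into its neighbouring blocks guarantees
    exhaustiveness.
    """
    # Mapping: assign each value index to its block and to both neighbours.
    blocks = defaultdict(list)
    for i, (x, _) in enumerate(rows):
        b = int(x // tol)
        for shift in (-1, 0, 1):
            blocks[b + shift].append(i)
    # Reducing: naive comparison inside each block (parallelisable).
    pairs = set()
    for members in blocks.values():
        for i, j in combinations(members, 2):
            (xi, ti), (xj, tj) = rows[i], rows[j]
            if abs(xi - xj) <= tol and ti != tj:
                pairs.add((min(i, j), max(i, j)))
    return pairs
```

When the data spreads over many blocks, each block stays small and the total cost approaches O(n), versus O(n²) for the naive all-pairs comparison.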

python algorithm machine learning artificial intelligence parallel computing research sphinx

Skillect

Full stack portfolio website

As a personal project, I aimed to create a back-end based on a relational database to store and serve my experiences and skills. The goal was to use this back-end in a portfolio website, both displaying my skills to anybody and allowing me to edit the data. Starting from a full-stack template using FastAPI, PostgreSQL, and Vue JS, I developed a full-stack website. The back-end was based on FastAPI, which serves a REST API on top of an ASGI HTTP server (for which I used Gunicorn managing Uvicorn workers). Using the SQLAlchemy library, I designed the data model, both for the user authentication and authorization workflows and to store professional skills. Good practices such as linters/formatters (black, pylint and isort here), continuous integration (with GitHub Actions) and containerization (with Docker) were applied. The front-end is based on the React JS, MUI and Tailwind CSS libraries (as opposed to the Vue JS used in the original template). It uses Axios to communicate with the back-end, including for authentication. Here again, good practices such as linters/formatters (Prettier and ESLint), continuous integration and containerization were applied. I also set up a reverse proxy, based on Traefik, in charge of properly redirecting API calls to the back-end. It uses Let's Encrypt to automatically renew an SSL certificate (needed to enable the HTTPS protocol). The architecture is described using Docker Compose, and includes the back-end, the front-end, the reverse proxy, and other services like the PostgreSQL database. The back-end was hosted on Amazon Web Services (AWS), using an Elastic Compute Cloud (EC2) instance. A custom domain name (mandin.dev) was dedicated to this website. Finally, continuous deployment workflows were set up to automatically deploy website updates to production. During this project, which was ultimately abandoned, I learned how to build a real-world full-stack website, from the technical functionalities (e.g. authentication) to the deployment (GitHub Actions, AWS, domain name, SSL certificates, etc.). I also learned how to set up a reverse proxy and how to work with various libraries and technologies like FastAPI, React, and SQLAlchemy.

Docker Docker compose React JS MUI Tailwind CSS TypeScript FastAPI Python PostgreSQL Traefik CI/CD GitHub GitHub actions SQL Alchemy AWS Elastic Compute Cloud EC2 RabbitMQ SSL Certificate VS Code

sphinx-mermaid

Sphinx extension to enable mermaid graphs in documentation

As a personal project, I developed a Python library providing a Sphinx extension that enables Mermaid graphs in generated documentation. This library allows creating interactive and visually appealing graphs within Sphinx documentation, making it more informative and user-friendly. I learned how to create a Sphinx extension and how to deploy a Python package to PyPI (the Python Package Index). Personally, I use Mermaid graphs a lot; they are really easy to use and allow creating nice-looking graphs with ridiculously little boilerplate. Therefore, I wanted a Sphinx extension with an open license (here, MIT) that would allow the use of such graphs in generated documents. This project is open source on GitHub, and I even had my very first contributor on this project, who submitted their own issue and pull request, which was eventually merged. During the development of this extension, a set of best practices was put in place. First, linters and formatters were used (black, pylint and isort) to ensure the format of the code. Secondly, continuous integration was implemented via GitHub Actions, in charge of making sure the format rules were respected. Finally, continuous deployment was also in place, in the form of GitHub Actions pushing the package to PyPI whenever a GitHub release was created. This experience allowed me to improve my skills in Python development and package management, as well as my understanding of the Sphinx documentation system.

Python PyPI Sphinx Mermaid GitHub GitHub actions Git CI/CD VS Code

Ground station UI improvement

At Kissfly

At Kissfly, I was assigned to improve the ground station's user interface. The drones operated as follows: a classic remote control directed the drone, while an on-board microcomputer was in charge of broadcasting the live video from the drone over Wi-Fi to the ground station. The ground station could be any device with a web browser (computer, tablet, phone). Indeed, the microcomputer on board the drone ran a Node JS-based server, serving the ground station's interface as a website and transmitting the video peer-to-peer using the WebRTC technology. During this mission, I made several improvements to the ground station UI. First, I added a 3D representation of the drone's state, which displays the drone's position relative to its start position, computed via a SLAM algorithm (which was out of my mission's scope). It also displayed the trajectory and the drone's actual angles (pitch, roll and yaw). This representation was done using the Three.js library. Secondly, I added control buttons to the UI, which improved the user's ability to control the drone and made the interface more intuitive. This experience allowed me to work with new technologies and to understand how to improve the user experience. I was able to improve my skills in front-end development, user interface design, and problem-solving.

Vue JS ThreeJS JavaScript GitHub Git CSS HTML UI Design

Kanban App

Open source android app

I developed a free (and ad-free) Android app dedicated to managing a Kanban board. I am a huge consumer of to-do lists, so when I discovered the Kanban board concept, I had to use one in my day-to-day life. However, I did not find an application completely matching my needs: I wanted something really simple, without too many settings. I ended up building my own application. The code is available on GitHub and the app on the Google Play Store. With hindsight, I find that the user experience is not that great and could clearly be improved, even if the application fulfills its contract well: I can manage my tasks in a Kanban board, which was all I asked. Developing this application improved my skills in Kotlin, which was new to me at the time. It also challenged me in building a database access layer following the Data Access Object (DAO) pattern with idiomatic Kotlin.

Kotlin Android XML Git SQL GitHub Android Studio IntelliJ IDEA

Prix carburants

A map of fuel prices

During this personal project, I built both Python scripts fetching data about French fuel prices from the government's open data and a static website hosted on GitHub to display the prices. The first part of the project was building Python scripts to fetch the data from the French government's open data about fuel prices and dump it into a JSON file. The Python scripts are accompanied by linters and formatters, run in a continuous integration pipeline (via GitHub Actions). The scripts are used in a scheduled (CRON) pipeline dedicated to regularly fetching new data about French fuel prices. The second part was the static website. I built it using Jekyll and JavaScript, and it is hosted on GitHub Pages. It displays a map of France, with gas station locations and their prices. It also displays the average fuel price per region in France. Map manipulations are done using the Leaflet library. The JavaScript code is also associated with linters and formatters, used during continuous integration to enforce the code format. In the end, this project showcases how to quickly build a static website displaying statistics from open data.
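The fetch-and-dump step can be sketched as follows. The URL is a placeholder (the real source is the French government's open-data portal) and the record layout is a simplifying assumption:

```python
import json
import urllib.request

# Placeholder endpoint -- the real source is the French open-data portal.
DATA_URL = "https://example.org/fuel-prices.json"

def summarize(stations):
    """Reduce raw station records to the per-station data the map needs."""
    return {
        s["id"]: {
            "lat": s["lat"],
            "lon": s["lon"],
            "prices": {f["name"]: f["price"] for f in s.get("fuels", [])},
        }
        for s in stations
    }

def fetch_and_dump(path):
    """Entry point run by the scheduled GitHub Actions workflow."""
    with urllib.request.urlopen(DATA_URL) as resp:
        stations = json.load(resp)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(summarize(stations), f, ensure_ascii=False)
```

The static site then loads the resulting JSON file client-side and feeds it to Leaflet, so no back-end is needed at all.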

Jekyll Leaflet GeoJSON Python JavaScript GitHub GitHub pages GitHub actions CI/CD VS Code

Optimization of drone live video streaming

At Kissfly

At Kissfly, I was in charge of improving the live video streaming system that broadcast the video taken on the drone to the ground station. The system was built with a Node.js Express server running on the drone and a headless Chromium instance running as a client of that server (on the drone as well), offering a peer-to-peer connection to the ground station. This allowed streaming the live video taken on the drone via WebRTC (RTC stands for Real-Time Communication). My mission was to test multiple configurations in different environments in order to improve the maximum streaming distance, as well as the video latency and framerate. I ran many tests tweaking WebRTC parameters, and did a lot of research in the documentation and on the Internet. This taught me a lot about peer-to-peer and video streaming systems. In the end, I managed to improve the maximum streaming distance to the goal we had defined beforehand, while reducing the latency and increasing the framerate. The mission was a success, but I also concluded that these technologies were not well suited to this specific use case and had reached their limits. This experience allowed me to work with new technologies and to understand their limitations and capabilities. I was able to improve my skills in live video streaming, web development, and problem-solving.

WebRTC Video streaming JavaScript VS Code Frontend Node JS Tests

Randos map

Static website displaying hikes in Loire-Atlantique (a French department).

As a personal project, I built a static website displaying some hiking trails in Loire-Atlantique. The website is built using Jekyll and some JavaScript, and is hosted on GitHub as a single-page website. The data about the hikes comes from the French government's open data, as a JSON file. Using the Leaflet library, I displayed the hikes, their difficulty, their description and some other attributes like the length or the estimated duration. Leaflet is also used to display the user's current location. Finally, the user can filter the hikes based on their length, their estimated duration and whether the trail is waymarked. In the end, this small static website showcases how to simply combine open data with libraries like Leaflet to create a usable website.

GitHub GitHub pages Jekyll JavaScript Leaflet Open data

Heads or Tails

Android App

Heads or Tails is the very first Android application that I published on the Google Play Store. I had no ambition for this application, but it was still installed by more than 10K users. The concept is simple: make random draws quickly. The app enables the user to create draws between pre-defined choices or simple numbers. Animating the draw makes the result feel legitimate. At first the draw was instantaneous, but that made it feel illegitimate: I found myself redoing the draw several times and averaging the results before accepting one. This project was one of my very first Android application projects, and as I learned to code better and the app was still used by hundreds of people, I decided to refactor the code base. This taught me the importance of documentation and of maintaining a clean code base.

Android Java XML IntelliJ IDEA Android Studio

SMART VR

As part of a two-week school project for the ANTS (Advanced Neuro-rehabilitation Therapies & Sport) association, I was part of a team of seven engineering students tasked with creating virtual reality workshops to make gym and mobilization exercises more fun for disabled individuals. The final product was a game running on the Oculus Quest, developed using Unity. The player takes flight over a real city and must complete repetitive exercises to move through gates and collect points in the sky. This project taught me about virtual reality concepts and game creation using the Unity engine.

C# Unity Virtual Reality GitHub

My data analyser

Android app analyzing personal data

As a personal project, I developed an Android application that enables users to analyze their personal data from various apps such as Spotify and Facebook Messenger. The app requires users to download their personal data from these apps, then analyzes it to display statistics such as their favorite artists and the amount of time spent listening to them. Indeed, it is based on the fact that apps which collect information about individuals must offer users a way to access it. I found it interesting to dig into the huge amount of personal data that companies like Facebook or Spotify had on me. For Facebook Messenger, the app showed me the conversations with the most messages exchanged and who sent the most messages. For Spotify, it showed the most-listened-to artists and how the listening time evolved throughout the year. In fact, I could display almost anything, as I had full control of the data. Although the app was never published, the source code is available on GitHub for anyone interested in learning more about its development and functionality. This project allowed me to gain experience in working with personal data, and even in some sort of data analysis, as I wanted to display relevant information based on a huge amount of data. It also challenged me in Android development.

Kotlin Android Studio GitHub Data mining Data analysis Data visualization

text-checker

Python library to check text spelling using Reverso

This personal project is a command line tool, built with Python, that allows users to spell-check their text using online tools such as Reverso. Given a text, the tool finds mistakes and suggests corrections. A continuous integration pipeline is set up, which checks the structure of the code using black, pylint and isort. The tool uses the requests library to interact with the Reverso API.
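A sketch of the tool's general shape. Reverso's API is not officially documented, so the endpoint and response layout below are assumptions for illustration; only the correction-splicing logic is meant to be taken literally:

```python
import requests

# Hypothetical endpoint and payload shape -- an assumption, not Reverso's
# documented API.
API_URL = "https://example.org/spellcheck"

def apply_corrections(text, corrections):
    """Apply {start, end, suggestion} corrections from right to left,
    so that earlier offsets stay valid while the string is spliced."""
    for c in sorted(corrections, key=lambda c: c["start"], reverse=True):
        text = text[: c["start"]] + c["suggestion"] + text[c["end"] :]
    return text

def check(text):
    """Send the text to the spelling service and return the corrected text."""
    response = requests.post(API_URL, json={"text": text}, timeout=10)
    response.raise_for_status()
    return apply_corrections(text, response.json()["corrections"])
```

Keeping `apply_corrections` as a pure function makes the tool easy to test without any network access.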

Python GitHub GitHub actions API requests Command line tool VS code

Jobs

Education