The XClanLab Project Progress

The project is developed in an agile fashion, in iterations. Each iteration is framed by the regular meetings of the project partners, which take place twice a year.

The project progress is documented over the course of the iterations. Every meeting of the project members serves both as a review of the past iteration and as a planning meeting for the next one.

Kick-off: Meeting in Wiesbaden

The aims of the project were discussed.

The base for the new app should be the existing app, extended by a new management system for items and substances, internationalization (covering the major European languages) and an iOS version.

Review Iteration 1: Meeting in Helsinki

The old app consists of a searchable image database that helps a first responder identify potentially dangerous substances and devices in a clandestine lab. Identification relies entirely on the manual work of the first responder, who must compare the items found in the lab with the example images provided by the app. Moreover, there is only a simple report function that transmits the identified objects and substances to a given recipient.

The project members decided that it would be very helpful if the app assisted the first responder more profoundly by identifying items and substances automatically.

We agreed to use advanced technologies in order to support first responders in the best possible way.

By using a machine learning (ML) object detection model, images from the camera could be analysed automatically and the objects in them identified with a certain confidence.
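
As an illustration of this idea, here is a minimal sketch (not the project's actual code) that runs a single photo through a TensorFlow Lite object detection model such as SSD MobileNet. The model file name, the label list and the photo name are placeholders, and a quantized (uint8) model is assumed.

    # Minimal sketch: detect objects in one photo with a TFLite SSD model.
    # Model file, labels and photo name are placeholders.
    import numpy as np
    import tensorflow as tf
    from PIL import Image

    LABELS = ["gas bottle", "round-bottom flask", "heating mantle"]  # placeholder label map

    interpreter = tf.lite.Interpreter(model_path="clanlab_detector.tflite")
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()

    # SSD MobileNet TFLite models expect a fixed input size, e.g. 300x300 pixels.
    _, height, width, _ = input_detail["shape"]
    image = Image.open("lab_photo.jpg").convert("RGB").resize((int(width), int(height)))
    interpreter.set_tensor(input_detail["index"],
                           np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0))
    interpreter.invoke()

    # The reference SSD models typically return boxes, classes, scores and count,
    # in that order; the exact layout depends on the exported model.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]

    for cls, score in zip(classes, scores):
        if score > 0.5:  # only report reasonably confident detections
            print(f"{LABELS[int(cls)]}: {score:.2f}")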

Moreover, a text recognition subsystem should scan the lettering on containers and similar items, which is another crucial part of assessing the potential dangers in a clandestine lab.
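
A rough illustration of the idea (not the app's on-device implementation): the open source Tesseract engine, driven from Python via pytesseract, extracts the lettering from a photo of a container and matches it against a small, hypothetical list of substance names. In the app itself an on-device text recognition component would take this role so that no internet connection is needed.

    # Sketch: read the lettering of a container photo with OCR and match it
    # against a hypothetical watch list; the real app would use its own database.
    import pytesseract
    from PIL import Image

    SUBSTANCES_OF_INTEREST = ["acetone", "hydrochloric acid", "red phosphorus"]

    text = pytesseract.image_to_string(Image.open("container_label.jpg")).lower()
    matches = [name for name in SUBSTANCES_OF_INTEREST if name in text]

    print("Recognized lettering:", text.strip())
    print("Possible substances:", matches if matches else "none recognized")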

It is crucial that the app works without an internet connection. Therefore, all analytical work has to be done on the device in the app itself, not on remote servers.

Review Iteration 2: Meeting in Stockholm

We developed a prototype of the new app (iOS and Android) that already supports object detection on frames taken from the video stream of the built-in camera. The result was impressive, but it also showed that interpreting the live video stream of the camera would not run at an acceptable speed on older (Android) devices.

Moreover, we discovered that the top iOS devices (iPhone 11 Pro) outperformed the top Android devices (Google Pixel 4 or Xiaomi Mi 9T Pro) by far. Even older iPhone models such as the iPhone 8 Plus performed better than more recent Android devices. Measured in frames per second, item recognition in the live video stream was up to four times faster under iOS than under Android (https://www.tensorflow.org/lite/models/object_detection/overview).
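
For reference, a frames-per-second figure like this can be estimated by timing repeated inference calls. The actual benchmarks were run on the mobile devices themselves; the following sketch with a placeholder model file, random frames and an assumed quantized (uint8) input only illustrates the measurement.

    # Sketch: estimate the average inference time of a TFLite model.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="clanlab_detector.tflite")
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]

    # Feed random frames of the expected input shape and time the inference calls.
    frame = np.random.randint(0, 256, size=input_detail["shape"], dtype=np.uint8)
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(input_detail["index"], frame)
        interpreter.invoke()
    elapsed = time.perf_counter() - start

    print(f"average inference: {elapsed / runs * 1000:.1f} ms "
          f"(~{runs / elapsed:.1f} frames per second)")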

In order to solve these performance problems on older devices, the project members suggested including the functionality of the old XClanLab app as a fallback. Basically, the old app provided images of and information about the elements of a clandestine lab. The first responder could still identify items with the app, but the identification process would be manual and slower than the machine learning based process.

Because manual identification is error-prone and presupposes a certain knowledge of chemical substances and laboratory equipment, we think we need to avoid manual identification and rely on automatic object detection and the algorithms analysing the images.

Current Technology Stack: Flutter for the cross-platform app, TensorFlow Lite for on-device machine learning, and the SSD MobileNet model for object detection (described in more detail in the review of iteration 3 below).

Review Iteration 3: Meeting in Berlin (postponed due to COVID-19)

During the spring of 2020, we did a lot of research and testing regarding an alternative to analysing the live video stream and regarding the training model for the machine learning based identification of the items.

Regarding the performance issues discovered in the last iteration, we built a proof of concept of the app that analyses photos taken by the first responder instead of analysing the video stream. The photos are analysed image by image and the identified objects are summarized on a result page. This seems to be the way to go: we keep the intelligent assistance of machine-based identification while avoiding the performance issues, even if we have to omit the impressive live analysis of the video stream. Of course, identifying the objects takes longer on older devices, but we did not experience any malfunctions there.
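
A minimal sketch of this photo-by-photo workflow, again with placeholder file names, labels and model: each photo is analysed individually and the detections are aggregated for the result page.

    # Sketch: analyse a set of photos one by one and summarize the detections.
    from collections import Counter

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    LABELS = ["gas bottle", "round-bottom flask", "heating mantle"]  # placeholder label map
    PHOTOS = ["photo_01.jpg", "photo_02.jpg", "photo_03.jpg"]        # photos taken on scene

    interpreter = tf.lite.Interpreter(model_path="clanlab_detector.tflite")
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()
    _, height, width, _ = input_detail["shape"]

    summary = Counter()
    for path in PHOTOS:
        image = Image.open(path).convert("RGB").resize((int(width), int(height)))
        interpreter.set_tensor(input_detail["index"],
                               np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0))
        interpreter.invoke()
        classes = interpreter.get_tensor(output_details[1]["index"])[0]
        scores = interpreter.get_tensor(output_details[2]["index"])[0]
        for cls, score in zip(classes, scores):
            if score > 0.5:
                summary[LABELS[int(cls)]] += 1

    # Raw material for the result page: which items were found and how often.
    for label, count in summary.most_common():
        print(f"{label}: detected {count} time(s)")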

The second part of the work was to get the data needed to build the machine learning training model. We scraped some sites for images and information, but to build a well-performing model we will need more, and especially more diverse, images for training.
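
The following sketch shows how product images can be collected from a shop page; the URL and the page structure are assumptions, and any real scraping must of course respect the site's terms of use and robots.txt.

    # Sketch: download all images referenced on a (hypothetical) product page.
    import os
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    PAGE_URL = "https://labsupply-shop.example/round-bottom-flasks"  # hypothetical shop page
    OUT_DIR = "raw_images/round_bottom_flask"
    os.makedirs(OUT_DIR, exist_ok=True)

    html = requests.get(PAGE_URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    for i, img in enumerate(soup.find_all("img")):
        src = img.get("src")
        if not src:
            continue
        data = requests.get(urljoin(PAGE_URL, src), timeout=10).content
        with open(os.path.join(OUT_DIR, f"image_{i:03d}.jpg"), "wb") as f:
            f.write(data)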

We are using only open source software and libraries for the whole project. For the app development we use Flutter (https://flutter.dev), a free and open source cross-platform technology by Google. The machine learning part is developed with TensorFlow Lite (https://www.tensorflow.org/lite/), an adaptation of TensorFlow for mobile and embedded devices. TensorFlow is an open source industry-standard platform for various machine learning applications.

The object detection is realized by the SSD MobileNet model. Further resources for image and text recognition will be open source as well.
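
To give an impression of how a trained model ends up on the devices: a SavedModel exported from the training pipeline can be converted into the TensorFlow Lite format, roughly as sketched below. The directory and file names are placeholders, and for SSD models the TensorFlow Object Detection API provides dedicated export tooling that may be needed before this conversion step.

    # Sketch: convert an exported SavedModel into a .tflite file for the app.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization

    with open("clanlab_detector.tflite", "wb") as f:
        f.write(converter.convert())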

In order for a trained machine learning model to perform at its best, it is paramount to use a large number of diverse images as raw data. In this context, diverse images are images of the same object taken at different resolutions, aspect ratios, lighting conditions, backgrounds and aperture settings. An exemplary standard model for identifying cats and dogs (http://www.robots.ox.ac.uk/~vgg/data/pets/) needs around 200 images of each species in order to identify the animals correctly. By scraping web shops we are able to collect some images of the relevant items, but we conclude that we will need more diverse images for each item, since web shops tend to present their products in a uniform way, for instance in front of a white background and under optimal lighting.
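
One common way to stretch such material, shown here only as a sketch with placeholder file names, is to generate additional variants of each collected image by varying brightness, contrast, saturation and hue. Geometric changes such as crops or flips would also require adjusting the bounding box annotations and are therefore left out here; synthetic variants cannot fully replace genuinely diverse photos of the items.

    # Sketch: derive several photometric variants from one collected image.
    import tensorflow as tf

    image = tf.image.decode_jpeg(tf.io.read_file("scraped/flask_001.jpg"), channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)  # work in [0, 1] floats

    for i in range(10):
        variant = tf.image.random_brightness(image, max_delta=0.2)
        variant = tf.image.random_contrast(variant, lower=0.7, upper=1.3)
        variant = tf.image.random_saturation(variant, lower=0.8, upper=1.2)
        variant = tf.image.random_hue(variant, max_delta=0.05)
        variant = tf.image.convert_image_dtype(
            tf.clip_by_value(variant, 0.0, 1.0), tf.uint8)
        tf.io.write_file(f"augmented/flask_001_{i:02d}.jpg",
                         tf.io.encode_jpeg(variant))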

Since the process of creating training data for a single item is non-trivial and most likely cannot be automated entirely, it is necessary to know which items should be detected before the generation of training data starts.