Supervisor: Prof. Gudrun Klinker
Advisor: David Plecher
With over 80,000 memorial plaques, the Stolpersteine are the world's largest decentralized memorial. However, no information beyond the names of the Holocaust victims is accessible on the memorial slabs embedded in the ground. To address this shortcoming, a mobile application was developed that displays more detailed information about the victims using augmented reality. To place this information precisely, a method was developed that uses object detection to recognize the plaques and localize them accurately in three-dimensional space. Furthermore, the application is accompanied by a web interface that allows authorized users to enter the information with little effort. Due to the Covid-19 pandemic, it was not possible to formally evaluate the work, but feedback from an informal evaluation suggests that the application is generally well received.
The Stolperstein Memorials
The Stolperstein project commemorates the victims of National Socialism through small bronze plates that are inlaid into the sidewalk in front of the victim's last freely chosen place of residence. The image below shows the Stolperstein installation at Seestraße 8 in Munich. Each memorial plate, the Stolperstein (engl. Stumbling Stone), bears the victim's name, date and place of birth, as well as date and place of death (if known).
While this is, in our eyes, a great way to commemorate the victims as individuals, it is unfortunate that the memorial itself does not let us learn more about these persons. Finding a way to present more information about them using Augmented Reality was therefore the motivation behind our work.
The goal of this thesis was to develop a mobile AR application that recognizes the Stolperstein memorials and presents more detailed information about the persons behind them in AR in an interactive way. Furthermore, we wanted to make it possible for responsible persons, such as researchers or relatives, to easily add information without any technical background in AR or 3D computer graphics. To this end, in addition to the mobile application, a backend with a web interface was created.
An overview of the system architecture implemented to achieve these goals is shown below.
The mobile application was implemented using Unity and the ARFoundation framework. In the following, the components that make it up are briefly explained.
The Backend Interface handles the communication with the backend. Via HTTP requests, it acquires the necessary data as JSON, parses it, and makes it available to the other components.
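As a rough sketch of this flow (written in Python rather than the app's C#, and with an illustrative response schema, since the actual field names are defined by the backend):

```python
import json


def parse_locations(json_text):
    """Parse the backend's JSON response into (id, latitude, longitude) tuples.

    The field names "locations", "id", "lat", and "lon" are illustrative
    stand-ins for the schema actually served by the Django backend.
    """
    data = json.loads(json_text)
    return [(loc["id"], loc["lat"], loc["lon"]) for loc in data["locations"]]


# Example: one known Stolperstein location in Munich
response = '{"locations": [{"id": 1, "lat": 48.16, "lon": 11.56}]}'
locations = parse_locations(response)
```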
The Location Handler uses GPS to determine the current user position and compares it against the list of known Stolperstein locations. When the user enters a radius of 15 meters around a Stolperstein location, it notifies the Stolperstein Area Handler. Using the location information, this component also keeps track of which Stolpersteine are at the current location.
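The proximity check amounts to a great-circle distance comparison; a minimal Python sketch using the standard haversine formula (not necessarily the exact computation used in the app) could look like this:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters


def within_radius(user, stone, radius_m=15.0):
    """Check whether the user's GPS position lies within radius_m meters
    of a Stolperstein location, using the haversine formula.

    Both positions are (latitude, longitude) pairs in degrees.
    """
    lat1, lon1 = map(math.radians, user)
    lat2, lon2 = map(math.radians, stone)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return distance <= radius_m
```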
Stolperstein Area Handler
This component handles the logic when the user is close to a collection of Stolpersteine. It starts the detection of the Stolpersteine and then lets the user select the one he or she wants to learn more about. Finally, the AR content for the selected Stolperstein is generated. The state diagram below visualizes the different states this component can be in.
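The flow just described can be paraphrased as a small state machine; the state names below are our own illustrative labels, not taken verbatim from the diagram:

```python
from enum import Enum, auto


class AreaState(Enum):
    """Illustrative states for the Stolperstein Area Handler."""
    DETECTING = auto()   # object detection is locating the Stolpersteine
    SELECTING = auto()   # the user picks the Stolperstein of interest
    PRESENTING = auto()  # AR content for the selection is shown


# Allowed transitions between states
TRANSITIONS = {
    AreaState.DETECTING: {AreaState.SELECTING},
    AreaState.SELECTING: {AreaState.PRESENTING},
    AreaState.PRESENTING: {AreaState.SELECTING},  # back to pick another stone
}


def can_transition(src, dst):
    """Return True if the handler may move from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```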
In order to detect the Stolpersteine and infer their 3D positions, so that the AR content can be properly placed, a method based on object detection was used. The image below visualizes the approach: the Stolpersteine are detected on the 2D camera image using an object detection algorithm. The center of the found bounding box is projected back into 3D, yielding a ray. The intersection of this ray with the ground plane is the position of the Stolperstein in 3D.
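The final intersection step is a standard ray-plane intersection; a Python sketch (the app itself works with Unity's vector types, and the function name is ours) might look as follows:

```python
def intersect_ray_with_ground(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the back-projected ray through the bounding-box center
    with the detected ground plane.

    All vectors are (x, y, z) tuples. Returns the 3D intersection point,
    or None if the ray is parallel to the plane or points away from it.
    """
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the ground plane
    diff = tuple(p - o for p, o in zip(plane_point, ray_origin))
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    if t < 0:
        return None  # intersection would lie behind the camera
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
```

For example, a camera at 1.5 m height looking down and forward hits a horizontal ground plane (y = 0) at a point on the ground in front of it.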
The ground plane can be detected with a method already contained in the ARFoundation framework; the Stolperstein object detection, however, had to be implemented by us. To this end, we chose YOLOv4-tiny, a very fast and accurate deep neural network for object detection, designed for compute-restricted devices. Over 3700 training images of Stolpersteine from 55 different locations were gathered using a tool developed for this purpose, which allowed us to generate the labels directly while recording the frames. Once trained, the network was integrated into Unity using the Barracuda framework. Following the method described above, the 3D positions of the Stolpersteine are determined, and additional plausibility checks are conducted to validate the predictions. The detection takes around 0.3-0.4 seconds on our test device, which is acceptable since the Stolpersteine only have to be located once. Afterwards, an ARAnchor is used to track the detected positions.
This component programmatically generates the AR content based on the textual and image data that is available for the chosen Stolperstein. If the corresponding data is available, the following types of content (called scenes) are displayed:
Base and Selection Scene:
In the Base Scene, the user is presented with an image of the person this Stolperstein is dedicated to, augmented in AR at head height, with the person's name below it. The Selection Scene consists of 3D icons that are augmented on the ground around the currently active Stolperstein. Touching one of these brings the user to the scene with the respective content. An annotated screenshot of the application featuring these two scenes is given below.
Info and Family Scene:
These scenes, which can be reached by clicking the "i" or the wedding-rings icon, display general information or family-related information, respectively, as 2D text on the screen.
Life Stations Scene:
Here, important stations in the victim's life are marked on a 3D map that is augmented on the ground behind the Stolperstein. Clicking on a marker brings up text on the screen that elaborates on this station. An annotated screenshot is given below.
Related Stolpersteine Scene:
This scene shows Stolpersteine that are related to the currently visited one on a 3D map, as can be seen in the annotated screenshot below. The user can click on the Stolperstein markers to learn what the connection between these Stolpersteine is.
The Backend was created with Python and the Django framework and is intended to run on a server, independently of the mobile application.
Through the web interface, authorized persons can add information regarding the Stolpersteine.
On the main page, they can either select an existing location on a map or create a new location. Either way, in the next step they are presented with an overview of the location, where they can specify the order in which the Stolpersteine are placed there and add or remove Stolpersteine. Adding a Stolperstein is done through a form. The information is entered as text, by uploading images, or by specifying a location through selecting a spot on a map. Below, some screenshots of the web interface pages mentioned are shown.
Logic, API and Database
The logic ensures that only users with the right permissions are able to edit the information. Unless they are administrators, this means a person can only edit the Stolpersteine at a location he or she created, and must have received valid login credentials. The logic also coordinates the interaction with the database: it enters the information coming from the web interface and, when a call to the API is made, retrieves the corresponding data from the database and returns it as a JSON string in an HTTP response.
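The permission rule can be sketched as follows; the attribute names are hypothetical stand-ins, since the actual implementation builds on Django's user and model classes:

```python
def can_edit_location(user, location):
    """A user may edit a location's Stolpersteine if they are an
    administrator or if they created the location themselves.

    `user` and `location` are illustrative stand-ins for the Django
    models; `is_admin`, `username`, and `created_by` are assumed names.
    """
    return user.is_admin or location.created_by == user.username
```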
In this work, a mobile application was developed that enhances the Stolperstein memorials with information presented via Augmented Reality.
Alongside the application, a web interface was also created, through which information can be easily entered into the application.
The realization of the application was achieved by combining GPS data and a method developed by us for the detection and localization of the memorials in 3D space using object detection.
The implementation of this method yields positive and consistent results, which we consider very satisfactory.
However, the subsequent tracking of the localized memorial positions, which relies on out-of-the-box methods, still shows noticeable inaccuracies in some cases.
An evaluation of the application was performed, which, due to the circumstances of the Covid-19 crisis, was only conducted on a small, informal scale.
From the feedback, it appears that the application was well received by users and seems to be fulfilling its purpose of providing information about memorial victims in an interactive and novel way.
User feedback also suggests that implementing Augmented Reality proved to be a good idea, as users with little experience using AR in particular reported increased interest in the content when it was provided through these means.
Accordingly, we have developed an application with great potential, appealing both to users interested in more information about the Stolpersteine and to those who are in a position to provide that information.
There are also already concrete ideas on how the application could be further improved.
The next major step would be to conduct a larger-scale evaluation that would also involve the people who are contributing the information.
In the larger context, the results of our work make us optimistic about the potential of combining Augmented Reality and Object Detection.
Furthermore, we think that the concept of our work can be applied to other memorial sites where there is little space for the placement of additional information (such as street signs or tombstones).
In this way, it can be ensured that not only the name of the person, but also their story is not forgotten.
Example video of the application: