AI has long been used in investigative work, sometimes directly in pilot projects, and sometimes indirectly, when companies search their own content "intelligently". The question is what effect this should have on procedural law, and on the findings that such techniques produce.
The problem has hardly been addressed
As I will explain a little later, the topic is by no means trivial – and above all, it is hardly dealt with at all. My impression is that where it is addressed, people take the usual route: applying analog thinking to digital processes. It is positive that the Federal Data Protection Commissioner has already dealt with the topic. There is also a (short) chapter 12 in "The Legal Guide to Artificial Intelligence and Machine Learning", which devotes no fewer than four pages to it.
The verdict is unanimous, as is evident at first glance: traceability and non-discrimination should be high on the agenda. At the same time, one must concede that, for technical reasons, classical traceability is difficult to achieve when neural networks are used.
But the problem lies elsewhere: digital evidence – including evidence produced by artificial intelligence – is pushing our code of criminal procedure and the applicable case law to their limits.
Risks of unrecognized problems
In criminal proceedings and preliminary investigations in particular, the problem arises that, given the German legal situation, the legally, socially and politically problematic question is lost entirely:
As a very simple, crude "black box" example, consider using artificial intelligence to analyze large volumes of data, say after seizing a server used to distribute illegal goods. The data processing raises suspicion against previously unknown persons who are potential customers, for example of weapons or narcotics (BTM). Customer identification works in such a way that the software independently derives identities from the sparse existing data through fully automated OSINT research. Traceability is virtually non-existent, yet a judge signs search warrants for homes; during these searches no narcotics or weapons are found, but other illegal items are.
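The traceability gap in this scenario can be sketched in a few lines of Python. Everything here is invented for illustration – the function, the data, and the score are placeholders for an opaque pipeline, not a model of any real OSINT or investigative tool; the point is only that the output carries no reasoning a court could review:

```python
# Hypothetical sketch of the "black box" problem described above.
# All names, data and scores are invented for illustration.

def black_box_identify(order_record: dict) -> dict:
    """Opaque stand-in for an AI/OSINT pipeline: it returns a
    suspect identity and a confidence score, but no reasoning."""
    # Imagine layers of models and automated web lookups here;
    # none of their intermediate steps are recorded.
    pseudonym = order_record["buyer_pseudonym"]
    score = 0.97  # the model is "confident", but cannot say why
    return {"identity": f"resolved({pseudonym})", "score": score}

# Invented records from the seized server
seized_orders = [
    {"buyer_pseudonym": "falcon99", "item": "weapon"},
    {"buyer_pseudonym": "mole_23", "item": "narcotics"},
]

warrant_candidates = []
for order in seized_orders:
    result = black_box_identify(order)
    # Only the output is passed on to the judge: there is no
    # audit trail linking the score back to any evidence.
    if result["score"] > 0.9:
        warrant_candidates.append(result["identity"])

print(warrant_candidates)
```

The design point is that the warrant decision depends solely on `score`, a number whose derivation is invisible to everyone downstream – which is exactly the traceability problem the scenario describes.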
The example is very simple and may at the same time sound dystopian, but it shows one thing: the scenario described would indeed be possible. And even in this stark scenario, those affected have a problem: one may loudly complain about the illegality of the procedure, yet the chance discoveries made along the way are, thanks to the BGH, not unusable as evidence. Even if the use of the program was unlawful, or even if one rightly criticizes that the way the AI works is not understood – in the end, once something is found, that question no longer matters.
Viewed this way, the criminal justice system degenerates into an arbitrary system in which the reason for a search depends on chance, while the chance discovery in turn becomes the system. My disquieting conclusion is that the rightly raised question of whether an AI operates comprehensibly and without discrimination ultimately plays no role whatsoever in the criminal justice system as we have it and maintain it.
AI can do incredibly valuable work when processing data – but at the same time it can create uncertainties that spiral out of control. This should be questioned at an early stage!
A new (procedural) law of evidence is needed
The upshot of this sketch – put very briefly – is that the call for an intelligible AI will ultimately achieve nothing if a lack of intelligibility has no practical consequences anyway.
In my opinion, the only way to solve this problem is to recognize that our procedural law – already under increasing criticism – finally needs an overhaul in the wake of digitization. The fact of the matter is that we have extensive rules on the taking of evidence, almost no rules on the use of evidence, and violations of the rules on taking evidence have virtually no consequences, so they can be violated almost at will. If one transfers this way of thinking not only to evidence that already exists digitally, but also to evidence that is primarily generated digitally, what remains for the person concerned is an unverifiable and unintelligible criminal justice system that resembles Kafka more than a modern constitutional state.
Anyone who is serious about using AI in investigative work, and at the same time wants to strengthen the rule of law, must call not only for comprehensible decisions, but also for legal remedies to enforce that comprehensibility. The balancing doctrine applied so far fails here, since AI investigations in particular regularly lead to further investigations and are then discarded, serving merely as an aggregated preliminary stage for the initial investigative successes.
Digital investigative measures that lead to direct infringements of fundamental rights must be open to judicial review – by every person affected in criminal proceedings. The goal should be that suspicion generated in this way cannot subsequently be legitimized by reference to (other) findings made in the course of a home search that was otherwise unjustified.
That means: the initial stage that makes it possible to find evidence of (other) crimes must not remain legally insignificant – especially when the evidence created here arises beyond anyone's control within the state's sphere, so that it is by no means clear who is being searched and under what conditions. Here the classical criminal-procedural way of thinking about evidence must be put to the test.