Understanding situations formed of patterns of interrelated events is a complex problem: the available training data are often sparse, and either noisy or potentially manipulated by other members of the coalition, if not by an adversary. We introduce DeepProbCEP, a hybrid neuro-symbolic architecture that leverages both a neural architecture, to interpret raw data, and logical rules, to express the patterns defining complex events, while allowing for end-to-end learning. Compared to simple neural architectures, DeepProbCEP (i) needs less labelled data thanks to its end-to-end learning capability, (ii) is robust against noise and adversarial attacks in the form of training-data poisoning, and (iii) can classify individual events as a by-product of the end-to-end training. It also suffers from drawbacks, notably: (i) maintainability of the logical rules; (ii) limits to the expressiveness of the chosen logical formalism; and (iii) training- and inference-time performance. We comment on possible solutions: inductive logic programming can help maintain the logical rules, and student-learner neural architectures can distil neural approximations of the logical rules. We also report on the next steps towards leveraging our other research line into an uncertainty-aware hybrid architecture, (i) to ensure robustness also at inference time and (ii) to detect out-of-distribution events.
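To illustrate the hybrid idea described above, the following is a minimal sketch (not the DeepProbCEP implementation; all names and the example rule are hypothetical) of how a complex event can be defined by a logical rule over atomic events, whose probabilities come from the softmax outputs of a neural classifier. Under a probabilistic-logic reading, the rule body's atoms combine multiplicatively (assuming independence), so the complex-event score remains differentiable with respect to the classifier outputs, which is what enables end-to-end learning.

```python
def complex_event_prob(p_event1, p_event2):
    """Score for a hypothetical rule: alarm :- intrusion(T1), breach(T2), T1 < T2.

    p_event1 -- dict of atomic-event probabilities at time T1
                (e.g. the softmax output of a neural classifier)
    p_event2 -- the same at a later time T2
    """
    # The rule body holds when both atoms hold; assuming independence,
    # the probabilities multiply, giving a differentiable score.
    return p_event1["intrusion"] * p_event2["breach"]


# Noisy per-event classifications still yield a graded complex-event score.
p1 = {"intrusion": 0.9, "normal": 0.1}
p2 = {"breach": 0.8, "normal": 0.2}
print(round(complex_event_prob(p1, p2), 2))  # 0.72
```

Because the score is a product of classifier outputs, a training loss on the complex-event label back-propagates to the individual event classifiers, which is why individual-event classification emerges as a by-product of end-to-end training.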