EU AI Act Compliance Ready

Medical AI that
Explains Itself.

Transforming "Black Box" deep learning into transparent, auditable "Glass Boxes".
We map 4D fMRI predictions back to 1D EEG physiology, so every output can be clinically verified.
An Open Science Initiative at Comenius University.

The "Black Box" Liability

State-of-the-art Generative AI models (GANs, Diffusion) are mathematically impressive but clinically opaque. In a "High-Risk" medical setting (EU AI Act), you cannot trust a model you cannot audit.

  • Hallucination Risk: Generative models often invent plausible-looking fMRI structures based on noise artifacts (e.g., muscle movement).
  • Regulatory Blockade: Both the FDA and the EU AI Act now require "human oversight" and interpretability before approval.
  • Clinical Mistrust: Doctors will not rely on a system that cannot explain "why" it made a prediction.
Black Box
Standard AI

Input data goes in, prediction comes out. No reasoning.

Glass Box
NeuroSonic

Full traceability from Voxel to EEG Frequency Band.

The Clinical Audit
Pipeline

We use PyTorch and Captum to wrap our cGANs in a safety layer that validates biological plausibility.

1

EEG Input

Standard High-Density EEG serves as the ground truth physiological signal.

2

cGAN Prediction

The model generates a candidate fMRI volume based on the input EEG and anatomical priors.

3

Audit Layer

Integrated Gradients traces the prediction back to its source. If the dominant driver is muscle noise (Gamma-band artifact), the prediction is rejected.
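The audit step above can be sketched in miniature. This is an illustrative NumPy toy, not the production pipeline: the real system would run Captum's IntegratedGradients against the trained cGAN, whereas here `toy_model`, the band weights, and the 0.5 rejection threshold are all invented for demonstration. The rejection logic — attribute the prediction to EEG frequency bands, then reject if the Gamma band dominates — is the same.

```python
import numpy as np

# Canonical EEG frequency bands; the input is a vector of per-band power.
BANDS = ["delta", "theta", "alpha", "beta", "gamma"]

def toy_model(x):
    # Hypothetical stand-in for the cGAN's scalar audit score
    # (e.g. mean predicted BOLD signal in a region of interest).
    # Deliberately gamma-heavy so the audit fires.
    w = np.array([0.1, 0.2, 0.3, 0.1, 1.5])
    return float(w @ x)

def numerical_gradient(f, x, eps=1e-5):
    # Central finite differences; Captum uses autograd instead.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=64):
    # Riemann-sum approximation of Integrated Gradients:
    # average the gradient along the straight path from baseline to x,
    # then scale by (x - baseline).
    grads = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        grads += numerical_gradient(f, point)
    return (x - baseline) * grads / steps

def audit(attributions, gamma_threshold=0.5):
    # Reject if the Gamma band explains more than half of the
    # total attribution mass (threshold is an assumption here).
    share = abs(attributions[BANDS.index("gamma")]) / np.abs(attributions).sum()
    return ("REJECT" if share > gamma_threshold else "ACCEPT"), share

x = np.array([1.0, 0.8, 0.6, 0.4, 0.9])   # band powers of the input EEG
baseline = np.zeros_like(x)                # "no signal" reference

attr = integrated_gradients(toy_model, x, baseline)
verdict, gamma_share = audit(attr)
print(verdict, round(gamma_share, 3))      # gamma dominates -> REJECT
```

A useful sanity check of any Integrated Gradients implementation is the completeness axiom: the attributions should sum to `f(x) - f(baseline)`, which holds here because the toy model is linear.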

Academic Roots

👨‍⚕️

Founder

Comenius University Bratislava

4th Year Medical Student. Focus: Sleep Medicine, Neurology, and EEG biomarkers.

+

Open Role

Technical Co-Founder

Seeking a Deep Learning Engineer (GANs/Computer Vision) to lead our computational pipeline.