🏠 Working from home
  • Dubna State University

Yuri Trofimov

AI Researcher | Explainable AI (XAI) | Hybrid Neuro-Symbolic Systems | AI for Sustainable Development

Working at the frontier of Artificial Intelligence:

  • Building transparent & interpretable ML
  • Designing hybrid neuro-fuzzy & symbolic architectures
  • Applying AI to medicine, sustainability & decision support systems

💡 Bridging deep technical AI with real-world impact.


🔬 Research & Expertise

  • Explainable & Interpretable AI (XAI)
  • Hybrid Neuro-Fuzzy & Neuro-Symbolic Systems
  • Decision Support Systems (medicine, sustainability, ecological risk)
  • Knowledge Representation & Reasoning
  • GeoAI & Spatial ML for Sustainability

⚙️ Technical Stack

  • Languages: Python, C++, R, MATLAB
  • ML/DL: PyTorch, TensorFlow, Keras, Scikit-learn
  • XAI: SHAP, LIME, Captum, Alibi Explain, InterpretML
  • Data & MLOps: SQL, NoSQL, Neo4j, Docker, DVC, MLflow, Airflow
  • GIS: QGIS, ArcGIS, GeoPandas, Rasterio, Shapely, Remote Sensing
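
As an illustration of the interpretability tooling above, here is a minimal, self-contained sketch of the model-agnostic idea behind explainers such as SHAP and LIME: permutation importance, which scores a feature by how much the model's error grows when that feature is shuffled. The data and "model" are synthetic stand-ins, not taken from any of the listed projects.

```python
import numpy as np

# Synthetic setup: y depends on x0 and x1 but not on x2,
# so x2 should receive near-zero importance.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]

def model(X):
    """Stand-in for a fitted black-box predictor."""
    return 3.0 * X[:, 0] - 2.0 * X[:, 1]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            # Shuffling column j breaks its link to y; the resulting
            # rise in error is that feature's importance.
            Xp[:, j] = rng.permutation(Xp[:, j])
            importances[j] += np.mean((model(Xp) - y) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(model, X, y)
print(imp)  # x0 and x1 score high, x2 near zero
```

The same shuffle-and-remeasure loop works for any predictor with a `predict`-style interface, which is why model-agnostic explainers need no access to model internals.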

📂 GitHub Projects

  • XAI Frameworks – interpretable ML and custom explainers
  • Hybrid Neuro-Fuzzy Systems – DSS prototypes
  • Medical DSS – AI models for preventive healthcare
  • GeoAI for Sustainability – GIS + ML for ecological risk
  • Knowledge Graph + ML Pipelines – hybrid reasoning + ML
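
The "hybrid reasoning + ML" pattern in the last item can be sketched in a few lines: a symbolic rule evaluated over a toy in-memory knowledge graph gates a learned score, yielding a decision with a human-readable justification. Every triple, score, and name below is illustrative, not drawn from the repositories.

```python
# Knowledge graph as a set of (subject, predicate, object) triples.
kg = {
    ("aspirin", "contraindicated_for", "ulcer"),
    ("patient_1", "has_condition", "ulcer"),
}

def ml_score(patient, drug):
    """Stand-in for a learned model's recommendation confidence."""
    return 0.92

def recommend(patient, drug):
    # Symbolic layer: a contraindication found in the graph overrides
    # the statistical score with an explainable veto.
    conditions = {o for (s, p, o) in kg
                  if s == patient and p == "has_condition"}
    for cond in conditions:
        if (drug, "contraindicated_for", cond) in kg:
            return 0.0, f"vetoed: {drug} contraindicated for {cond}"
    return ml_score(patient, drug), "ml score accepted"

score, reason = recommend("patient_1", "aspirin")
print(score, reason)
```

The design point is that the symbolic check is auditable: the veto cites the exact triple that triggered it, which a purely learned score cannot do.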

🎯 Current Focus

  • Next-gen interpretable AI frameworks
  • Neuro-symbolic architectures for trustable AI
  • AI for sustainability & medicine
  • Systemic & theoretical foundations of XAI

📫 Contacts

📌 Pinned Repositories

  1. XAI-2.0-SHAP-regularized-ANFIS

    Implementation of Explainable AI 2.0: SHAP-regularized ANFIS for interpretable and robust machine learning. Combines neuro-fuzzy systems with SHAP values to enhance explainability, feature attribut…

    Python
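
    The repository's exact SHAP-regularization is not reproduced here; the following is only a hedged sketch of the neuro-fuzzy core such a system trains: the forward pass of a first-order Takagi-Sugeno fuzzy model, with all rule parameters invented for illustration.

    ```python
    import numpy as np

    def gaussian_mf(x, c, s):
        """Gaussian membership: degree to which x belongs to a fuzzy set."""
        return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

    def anfis_forward(x, centers, sigmas, consequents):
        """
        x           : (n_inputs,) input vector
        centers     : (n_rules, n_inputs) membership centers
        sigmas      : (n_rules, n_inputs) membership widths
        consequents : (n_rules, n_inputs + 1) linear consequents [a..., b]
        """
        # Layers 1-2: membership degrees, then rule firing strengths
        # via the product T-norm.
        mu = gaussian_mf(x, centers, sigmas)   # (n_rules, n_inputs)
        w = np.prod(mu, axis=1)                # (n_rules,)
        # Layer 3: normalized firing strengths.
        w_norm = w / np.sum(w)
        # Layer 4: first-order rule outputs f_i = a_i . x + b_i.
        f = consequents[:, :-1] @ x + consequents[:, -1]
        # Layer 5: firing-strength-weighted aggregate output.
        return float(np.dot(w_norm, f))

    # Two illustrative rules over two inputs.
    centers = np.array([[0.0, 0.0], [1.0, 1.0]])
    sigmas = np.ones((2, 2))
    consequents = np.array([[1.0, 0.0, 0.0],   # rule 1: f = x0
                            [0.0, 1.0, 1.0]])  # rule 2: f = x1 + 1
    y = anfis_forward(np.array([0.5, 0.5]), centers, sigmas, consequents)
    print(y)
    ```

    ANFIS training then fits the centers, widths, and consequents by gradient descent and least squares; a SHAP-based penalty, as the repository's title suggests, would be added to that training loss.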

  2. Trust-ADE

    Trust-ADE (Trust Assessment through Dynamic Explainability) is a comprehensive protocol for quantitative trust assessment of artificial intelligence systems, based on the scientific research "From …

    Python

  3. XAI-CausalLayered

    A research framework for next-generation Explainable AI (XAI 2.0) based on multi-layer neuro-symbolic architectures. The project advances from post-hoc explanations toward embedded causal interpret…

    Python

  4. CAS_ISIC

    Python

  5. SYNT_ISIC

    A GUI application for synthetic ISIC image generation with DDPM (UNet2D), featuring Time-SHAP (temporal importance attribution across denoising steps) and causal validation via counterfactual interventions (noise/blur/…

    Python