Büyük İstanbul Depremi (The Great Istanbul Earthquake)

This experimental project investigates potential relationships between Istanbul’s seismic history and planetary alignments. The aim is to assess whether astronomical configurations, typically excluded from conventional seismology, may show weak statistical associations with earthquake occurrences.

The methodology combines two primary datasets:

  • Seismic records: Magnitude, location, and timing of earthquakes that have affected the Istanbul region over the last several decades.

  • Astronomical ephemerides: Planetary positions, angular separations, and relative alignments computed for the same time intervals.

The workflow involves:

  1. Data preprocessing – cleaning, structuring, and synchronizing seismic and astronomical datasets on a unified temporal scale.

  2. Feature engineering – encoding planetary angular relations (conjunctions, oppositions, squares, trines, etc.) as variables alongside standard seismic attributes.

  3. Statistical correlation analysis – testing for non-random associations between planetary configurations and earthquake events.

  4. Machine learning exploration – applying regression and classification models to evaluate predictive capacity, while quantifying uncertainty and error bounds.
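Steps 2 and 3 of the workflow above can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the helper names (`angular_separation`, `encode_aspects`) and the toy daily table are assumptions, and real planetary longitudes would come from an ephemeris library rather than random numbers.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical helper: angular separation between two ecliptic longitudes (degrees).
def angular_separation(lon_a, lon_b):
    diff = np.abs(lon_a - lon_b) % 360.0
    return np.minimum(diff, 360.0 - diff)

# Classical aspects, flagged when the separation falls within an orb (tolerance).
ASPECTS = {"conjunction": 0, "sextile": 60, "square": 90, "trine": 120, "opposition": 180}
ORB = 6.0

def encode_aspects(separation):
    return {name: abs(separation - angle) <= ORB for name, angle in ASPECTS.items()}

# Toy daily table standing in for the merged seismic/astronomical dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "uranus_lon": rng.uniform(0, 360, 1000),
    "saturn_lon": rng.uniform(0, 360, 1000),
    "quake": rng.integers(0, 2, 1000),      # 1 = earthquake recorded that day
})
sep = angular_separation(df["uranus_lon"], df["saturn_lon"])
df["uranus_saturn_square"] = np.abs(sep - 90.0) <= ORB

# Step 3: chi-square test of association between the aspect flag and quake days.
table = pd.crosstab(df["uranus_saturn_square"], df["quake"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```

On random data the p-value is uninformative by construction; the point is only the mechanics of turning angular relations into boolean features and testing them for non-random association.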

In preliminary experiments, a weak but noticeable correlation emerged when Uranus occupied specific angular relationships with other planets during certain seismic events. Although the statistical significance remains limited and does not support deterministic forecasting, these findings illustrate how unconventional variables can be integrated into exploratory models.

This project is explicitly educational and experimental. Its purpose is not to replace established geophysical models, but to demonstrate how interdisciplinary data science workflows—spanning geophysics, astronomy, and machine learning—can be applied to novel hypotheses. The emphasis lies on hypothesis generation, reproducibility, and methodological rigor, offering a fresh perspective on earthquake prediction research.

Exploring the Full Spectrum of Machine Learning

Our machine learning projects aim to explore a wide range of algorithms across supervised, unsupervised, and reinforcement learning. From fundamental models such as linear regression, decision trees, and clustering methods to more advanced techniques including ensemble methods, deep learning, and optimization algorithms, we strive to build a comprehensive portfolio that demonstrates the versatility of machine learning.

These projects are not limited to a single approach; instead, they are designed to show how different algorithms can be applied to diverse problem domains, highlighting both their strengths and limitations. By doing so, we aim to create an evolving body of work that reflects the full spectrum of machine learning methods, providing both educational value and a foundation for further research.

ML Model: XGBoost (XGBClassifier)
Dataset: Istanbul Earthquakes 1990–2025
Notebook: Great Istanbul Earthquake Prediction

This notebook demonstrates an experimental approach to earthquake risk forecasting by integrating seismic records with planetary alignment features. Using XGBoost, the model is trained on a synthetic daily time-series derived from Istanbul’s earthquake history (1990–2025) to estimate the probability of major earthquakes (M ≥ 4.0) within a 7-day forecast window.

ML Model: Linear Regression
Dataset: Simple Istanbul House Prices
Notebook: Istanbul Apartment Price Prediction

This notebook demonstrates an educational use of linear regression for predicting apartment prices in Istanbul. A synthetic dataset was generated with features such as size, location, and amenities to simulate the housing market.

The project focuses on showing how linear regression establishes a relationship between independent variables (apartment features) and a dependent variable (price). The model is trained using the ordinary least squares method, and performance is evaluated with standard error metrics such as RMSE and R².
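A compact version of that workflow, using a synthetic dataset like the one described (the feature names, coefficients, and noise level are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic apartments: size (m^2), district score, amenity count.
rng = np.random.default_rng(1)
n = 500
size_m2 = rng.uniform(50, 200, n)
district = rng.integers(1, 6, n).astype(float)    # 1 (outer) .. 5 (central)
amenities = rng.integers(0, 10, n).astype(float)
# Price with additive noise; the coefficients are arbitrary, not market data.
price = 20 * size_m2 + 500 * district + 100 * amenities + rng.normal(0, 300, n)

X = np.column_stack([size_m2, district, amenities])
X_tr, X_te, y_tr, y_te = train_test_split(X, price, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)        # ordinary least squares fit
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE: {rmse:.1f}  R^2: {r2_score(y_te, pred):.3f}")
```

Because the target really is a linear function of the features plus noise, the fitted coefficients land close to the generating ones, which makes the link between independent variables and price easy to inspect.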

This project is intentionally designed as a basic linear regression example, prioritizing clarity over complexity. It demonstrates the method step by step: preparing the dataset, selecting features, fitting the model, and evaluating predictions with simple metrics.

Since the dataset itself is synthetic and created for educational purposes, the emphasis is not on producing highly accurate forecasts, but on helping learners understand the logic behind regression modeling. By working with a straightforward example, it becomes easier to see how independent variables influence the target variable, how the model line is fitted, and how errors are measured.

This makes the notebook a practical starting point for those who are new to machine learning, offering a clear introduction before moving on to more advanced algorithms.