Projects.

A curated collection of digital craftsmanship. Exploring the intersection of design, code, and user experience.

Featured Projects

Web

BinauralStudio

Binaural Studio: Web-Based Neuro-Acoustic Frequency Modulator

Binaural Studio is a professional audio synthesis tool that runs entirely in the browser, built on the human brain's ability to synchronize with specific frequencies (brainwave entrainment). The project aims to optimize cognitive performance by generating pure sine waves with real-time mathematical algorithms, with no need for external audio files.

🧠 What Are Binaural Beats?

Binaural beats are an auditory illusion created by the brain, rather than a sound the ears actually hear.

Scientific Mechanism

When two close but different frequencies (e.g., 200 Hz and 210 Hz) are played to the right and left ears, the superior olivary complex in the brainstem combines the two signals. The brain perceives the difference (10 Hz) as a third tone and entrains its electrical activity to that difference frequency. This is known as the "Frequency Following Response".

Targeted Brainwaves

The application switches between the following bands according to the user's needs:

- Delta (0.5–4 Hz): deep sleep and physical regeneration.
- Theta (4–8 Hz): deep meditation, creativity, and REM sleep.
- Alpha (8–14 Hz): relaxed but alert mind, stress management.
- Beta (14–30 Hz): active focus, problem-solving, and analytical thinking.
- Gamma (30+ Hz): maximum cognitive capacity and information synthesis.

🛠️ Technical Architecture

Unlike the ready-made MP3 recordings on the market, this project generates sounds on the fly with the Web Audio API. This approach avoids any loss in sound quality and allows a seamless, endless loop.

Implemented Engineering Solutions

- Stereo Panner Logic: an independent StereoPannerNode per channel prevents interference between the left and right signals. Stereo headphones are required for a true binaural effect.
- Exponential Ramping: frequency transitions are curved over a few milliseconds to prevent micro-interference and "crackling" (audio pops).
- Waveform Modulation: the texture (timbre) of the sound is customizable via Sine, Triangle, Sawtooth, and Square waveforms.

🚀 Technology Stack

- Language: Vanilla JavaScript (ES6+)
- Sound Engine: Web Audio API (low-level DSP)
- Interface: Tailwind CSS (dark mode & glassmorphism)
- Deployment: Vercel

📂 Usage

When the application starts, click the "START" button to wake up the browser's audio engine. Then choose one of the preset modes for your target mental state (Focus, Sleep, etc.). Important: use stereo headphones, or the binaural effect will not occur.

Developer Note: to keep resource usage minimal, no heavy libraries were used during development.
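The core idea can be illustrated in a few lines: the perceived beat is the difference between the two carrier frequencies, and that difference determines which brainwave band is targeted. This is a minimal sketch of that mapping; the function names (beatFrequency, classifyBand) are my own, not taken from the Binaural Studio codebase:

```javascript
// Hypothetical sketch of the beat-frequency idea described above.
// Given a left/right carrier pair, the perceived beat is their
// difference, and that difference selects a brainwave band.
function beatFrequency(leftHz, rightHz) {
  return Math.abs(leftHz - rightHz);
}

function classifyBand(beatHz) {
  if (beatHz < 0.5) return "sub-delta";
  if (beatHz < 4) return "delta";  // deep sleep, regeneration
  if (beatHz < 8) return "theta";  // meditation, creativity
  if (beatHz < 14) return "alpha"; // relaxed but alert
  if (beatHz < 30) return "beta";  // active focus
  return "gamma";                  // peak cognition
}

// Example: 200 Hz in the left ear, 210 Hz in the right ear.
const beat = beatFrequency(200, 210); // 10
const band = classifyBand(beat);      // "alpha"
```

In the app itself, each carrier would drive an OscillatorNode routed through its own StereoPannerNode (panned fully left and fully right), with AudioParam ramping on frequency changes to avoid the pops mentioned above.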

JavaScript · Tailwind
Web

Gyroscope - 3D Modeling

Turn Your Phone into a Magic Wand: Web-Based 3D Gyroscope Controller

Greetings! I'm Yunus. Today I want to share a new project I've been building with great pleasure, one that connects the physical world and the digital environment in real time: a web-based 3D gyroscope controller.

As you know, I enjoy thinking about system architectures and interactive web experiences. My goal here was simple: rotate a 3D model (a car or a phone) on my computer screen as if it were in my own hands, simply by scanning a QR code with my phone's camera, without installing any application. Sounds simple, right? But once different axis conventions, live socket connections, multi-user sessions, and deployment came into play, bending the laws of physics in the backend turned out to be a little more complex than I expected. Let's take a closer look at how I built the system and the stages I went through.

🛠️ Tech Stack

To keep the project modern, scalable, and type-safe, I used:

- Frontend: Vite, TypeScript, Three.js, TailwindCSS
- Backend: Node.js, Express.js, Socket.IO
- Deployment: Vercel (client), Render (API)
- Design Language: Spatial Computing & Minimalist Industrial (frosted-glass effects, monochrome tones)

🚀 Development Stages and Challenges

This project didn't happen overnight. I had to build the architecture step by step to give the system a solid foundation.

Stage 1: Prototyping and the HTTP Impasse

Initially everything lived in a single folder of simple HTML and JavaScript files. When I brought the server up and connected from my phone, I hit a harsh reality: iOS and modern mobile browsers completely block access to gyroscope (DeviceOrientationEvent) data over plain HTTP for security reasons. To test the system on the local network, I used Vite's @vitejs/plugin-basic-ssl plugin to generate HTTPS certificates automatically in development. That let me stream the first data to the computer.

Stage 2: Restructuring the Architecture and Switching to TypeScript

As the project grew, the old require (CommonJS) structure and global variables started causing headaches: models sometimes spun on their own even without gyroscope data, and port 3000 conflicts crashed the server. I took a deep breath and refactored the project from scratch in TypeScript. I split the system into two isolated worlds, src/client and src/server, and made the Socket.IO events type-safe with interfaces. Now the server no longer crashes; it dynamically finds an available port and restarts.

Stage 3: The Mathematics of the 3D World (Three.js Axes)

This is where I spent the most time (and drank the most coffee). I would tilt the phone forward, and the model on screen would rotate to the right! The reference frame of the Euler angles the phone produces (alpha, beta, gamma) is completely different from the Y-up, right-handed coordinate system Three.js uses. To resolve the mismatch, I built a custom axis mapping on THREE.Euler, converting degrees to radians and applying negative multipliers where needed to prevent mirrored movement, so the model became a faithful reflection of the phone. I also wrote an auto-scaling and centering algorithm that computes the bounding box of each model entering the scene, so that .glb models of wildly different sizes (some gigantic, some tiny) all appear at a standard size on screen.

Stage 4: Deployment Hell

Complete chaos ensued when I pushed the project to Vercel and Render. Vercel froze in an infinite loop during the "transforming..." phase while trying to compile the backend's TypeScript. Render crashed with a Rollup error (@rollup/rollup-linux-x64-gnu not found) because it runs on Linux and I had generated package-lock.json on Windows. The solution was a clean separation of the monolith at deployment time: for Vercel, a vercel.json rule that ignores the backend entirely; for Render, skipping the vite build altogether and starting the server directly with node --import tsx.

Stage 5: Interface and Multi-Session Management (Rooms)

Once the system was online, a new problem arose: if everyone listened on the same channel, my phone would rotate the car on someone else's screen. To solve this, I implemented a room-based session architecture. When the desktop interface opens, it creates a unique UUID and embeds it in a QR code. Scanning the QR code with your phone joins you to that specific room via the URL, so data flows only between your own devices. As a final touch, I redesigned the interface along Spatial Computing trends: frosted-glass effects (glassmorphism), clean thin typography, and large monospace fonts for the data stream. I also added a switch to the mobile interface that offers touchpad control in addition to the gyroscope.

Result

More than simply connecting one device to another, synchronizing hardware data with a 3D graphics engine over web sockets was a tremendous experience. Combining frontend aesthetics with rigid backend logic is genuinely exciting, and it shows how far the boundaries of a web browser can be pushed. If you want to examine the code in detail, or upload and test your own models, check out my GitHub repository. See you in new projects, pushing the boundaries even further! Yunus.
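The axis-mapping problem from Stage 3 boils down to converting the phone's alpha/beta/gamma degrees into a Y-up, right-handed rotation in radians, flipping signs where the motion would otherwise mirror. Here is a minimal sketch of that conversion; the exact axis assignment and sign choices below are illustrative assumptions, not the project's actual mapping:

```javascript
// Hypothetical sketch: mapping DeviceOrientationEvent angles (degrees)
// to a Y-up, right-handed rotation (radians), as discussed in Stage 3.
// The axis assignment and the sign flip on gamma are assumptions.
const DEG2RAD = Math.PI / 180;

function deviceToSceneEuler({ alpha, beta, gamma }) {
  return {
    x: beta * DEG2RAD,   // phone pitch -> scene X
    y: alpha * DEG2RAD,  // compass heading -> scene Y
    z: -gamma * DEG2RAD, // roll, negated to avoid mirrored motion
  };
}

// Tilting the phone 90° forward should pitch the model, not yaw it:
const rot = deviceToSceneEuler({ alpha: 0, beta: 90, gamma: 0 });
// rot.x ≈ π/2, rot.y === 0
```

In the real project this object would feed a THREE.Euler (which also lets you pick the rotation order), but the degree-to-radian conversion and sign handling are the part that fixes the "tilt forward, rotates right" symptom.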

Node.js · Vite · TypeScript +2

All Works

Web

BinauralStudio


JavaScript · Tailwind
Web

Gyroscope - 3D Modeling


Node.js · Vite · TypeScript +2
Web

CloudVault

I Created My Own Cloud Storage: YunusSayginli Cloud

Greetings! Today I want to share a project I've been working on for a long time and that excites me a lot: YunusSayginli Cloud.

As you know, our data storage needs grow every day. Between photos, videos, and documents, we end up paying monthly fees to services like Google Drive, iCloud, or Mega, and on top of that, our data sits on other people's servers. I thought, "Why not turn the free space on my own computer into a cloud I can access from anywhere in the world?" and started coding.

What Is This System? How Does It Work?

The system, which runs at https://cloud.yunussayginli.com, reflects a specific folder on my local computer (or server) into a web interface. The difference from other solutions on the market is that they usually require complex network protocols such as SSH or SMB, port forwarding, or FTP clients for file sharing. My system needs none of that: the user (that is, me, for now) simply opens the website and a familiar file manager appears.

Key Features

- Web-Based Interface: I can access my files through the browser without installing any application.
- Zero Configuration: I can upload and download files directly, with no SSH keys or complex IP settings.
- Complete Control: my data is stored on disks directly under my control, not in a data center I know nothing about.
- Speed and Security: with no third-party provider in between, upload/download limits depend entirely on my internet speed.

When Will It Be Available?

The project is currently in development and I am running closed tests, optimizing the system's stability, security, and speed. But my goals are ambitious! With upcoming updates, I plan to turn it into a true cloud storage service:

- User Registration: anyone will be able to create a personalized account.
- Personalized Spaces: each user will get their own isolated storage space.
- File Sharing: share an uploaded file with your friends in a single click.

Stay tuned to my blog for updates. See you freely in our own cloud!

React · Next.js · AV1