The Vibrational Signature of Alzheimer’s Disease: Visualizing Brain Activity With Laser Light

by rperezel in Workshop > Science


sonificacion_haz colimado.jpg

My name is Rubén Pérez-Elvira, Ph.D., and I am a neuroscientist and professor in the Department of Psychobiology at the Pontifical University of Salamanca, Spain [www.upsa.es].

My research focuses on brain dynamics, biomedical signal processing, and alternative methods for visualizing neural activity. This project brings together my academic research and hands-on experimentation, translating concepts from neuroscience into a physical, workshop-built system that can be explored outside the laboratory.

Brain activity is usually represented through abstract plots, numbers, or mathematical models. Yet the brain is not only information — it is also rhythm, vibration, and dynamic physical structure.

In this project, we present an experimental device that allows human brain activity to be visualized through laser light patterns, generated from EEG signals. By using direct sonification, brain signals are transformed into mechanical vibration, which in turn modulates a laser beam projected onto a surface, creating dynamic and reproducible patterns.

The system combines:

  1. computational neuroscience
  2. biomedical signal processing
  3. physical vibration
  4. optical projection

The result is a physical visualization of brain dynamics that, in experimental studies, reveals clear differences between signals from healthy subjects and those associated with Alzheimer’s disease.

This Instructable shows how to materialize this approach into a reproducible, accessible device, suitable for educational, experimental, and creative exploration beyond the laboratory.


Concept and Scope of the Project

Systems based on vibrating membranes excited by sound and visualized with laser projection have existed for decades and are widely used to study mechanical resonance, Chladni-type patterns, and vibrational phenomena.

The novelty of this project does not lie in the invention of the membrane-based device itself, but in its recontextualization as a tool for visualizing human brain activity.

In this work:

  1. Real EEG signals are used as the excitation source.
  2. Signals are sonified directly, without musical mappings or symbolic interpretations.
  3. The resulting vibration generates optical patterns that reflect the temporal and spectral structure of brain signals.
  4. These patterns can be observed, recorded, and visually analyzed.

This approach has been explored and validated in a scientific context, where it was shown that the resulting vibrational and optical patterns carry meaningful information and can differentiate signals associated with Alzheimer’s disease from those of healthy controls.

The goal of this Instructable is to translate that scientific approach into the maker domain, demonstrating how a known physical system can be transformed into a neurovisual instrument — a device that turns complex biomedical data into directly perceptible physical phenomena.

Important note

This device is not intended as a clinical diagnostic tool. Its purpose is experimental, educational, and exploratory, serving as a demonstration of how data science, physics, and visual design can converge in a single tangible system.

This project is presented as a workshop-built experimental instrument, emphasizing hands-on construction, iterative tuning, and reproducibility. All components can be assembled using accessible tools, making the system suitable for workshops, laboratories, and educational environments.


Research Background

This project is based on my own peer-reviewed scientific research, published as:

The Vibrational Signature of Alzheimer’s Disease: A Computational Approach Based on Sonification, Laser Projection, and Visual Analysis (Biomimetics, MDPI, open access: https://www.mdpi.com/2313-7673/10/12/792)

The scientific study explores how EEG signals associated with Alzheimer’s disease can be transformed into sound, vibration, and laser-projected patterns, and demonstrates that these patterns contain meaningful structural information.

This Instructable presents a workshop-built implementation of the core concepts described in that research, adapted for educational, experimental, and exploratory purposes.

Intellectual property notice

While the underlying physical principles are well known, the specific application, system integration, and procedural workflow for visualizing brain activity described here are part of an active line of research, and certain aspects of the device and/or methodology may be subject to intellectual property protection for potential commercial exploitation.

At the same time, the system may be freely used for teaching, demonstration, and non-commercial educational purposes.

Supplies

20W Vibration Speaker (Exciter Transducer)

A compact vibration speaker used to drive the membrane directly from the audio signal.

https://www.amazon.es/dp/B08FCDFHLB

A plastic tube or container

A rigid plastic container, approximately 7 cm in diameter

https://www.tintasytonercompatibles.es/bote-lava-pinceles-p-66678639.html

Latex balloons and a rubber band

https://amzn.eu/d/bEG0GFi

Small self-adhesive mirror

https://amzn.eu/d/7R31fKI

Expanded polystyrene foam (poliespán) or similar

Low-power laser pointer

https://amzn.eu/d/fZuiR8j

PC and Python software

The Vibration Speaker

For this project I used a vibration speaker (exciter) instead of a conventional loudspeaker. This type of speaker is designed to transmit vibrations directly to solid surfaces or membranes, which makes it ideal for this application.

  1. Power: 20 W
  2. Impedance: 4 Ω

This speaker will be the element that converts the audio signal (later derived from EEG data) into mechanical vibration.

https://www.amazon.es/dp/B08FCDFHLB
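As a sanity check on these ratings, the maximum RMS drive voltage implied by P = V²/Z can be computed directly (illustrative arithmetic, not a measurement of this particular unit):

```python
import math

# Rated power and nominal impedance of the exciter (from the listing above)
power_w = 20.0       # W (rated)
impedance_ohm = 4.0  # ohms (nominal)

# RMS drive voltage implied by P = V^2 / Z
v_rms = math.sqrt(power_w * impedance_ohm)
print(f"Max RMS drive voltage: {v_rms:.2f} V")  # ~8.94 V
```

In practice the amplifier should be kept well below this level; for this project only moderate volumes are needed.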

Building the Speaker Housing (Plastic Tube)

tubo altavoz 1.jpeg
tubo altavoz 2.jpeg

To create a simple and effective housing, I used a plastic tube or container with approximately the same diameter as the vibration speaker (7 cm).

  1. I cut the plastic tube so that its length matched the depth of the speaker.
  2. The speaker was press-fitted into one end of the tube.
  3. No glue was required — the tight fit was enough to keep the speaker firmly in place.

This tube acts as a mechanical support and resonance chamber, while keeping the system lightweight and easy to modify.

Any rigid plastic container with a similar diameter can be used.

https://www.tintasytonercompatibles.es/bote-lava-pinceles-p-66678639.html

Making the Membrane

globos.jpeg
colocacion membrana.jpeg

The vibrating membrane is a key component of the system.

For this build, I used a white balloon, which is:

  1. inexpensive
  2. easy to tension
  3. surprisingly effective as a flexible membrane

Steps:

  1. Cut the balloon to obtain a flat latex sheet.
  2. Stretch the latex over the open end of the plastic tube.
  3. Adjust the tension manually.
  4. Fix the membrane in place using a rubber band.

The membrane should be:

  1. evenly tensioned
  2. free of wrinkles
  3. firmly attached to avoid slipping during vibration

https://amzn.eu/d/bEG0GFi

(Optional) Adjusting Membrane Tension

In my experimental setup, I used a mathematical relationship between the sound produced by the membrane and its tension to estimate and fine-tune the membrane tension. You can find the formulas in the paper I referenced above.

This step is optional

It is not required for educational or learning purposes, and the system works well with manual adjustment by ear and observation.

For most users:

  1. stretch the membrane gradually
  2. test with simple audio signals
  3. adjust until stable and clear vibration patterns are observed
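For readers who do want a rough quantitative starting point, the textbook relation for an ideal circular membrane links the fundamental frequency to the tension. This is an idealized approximation, not the exact procedure from the paper: it ignores the mass of the mirror and air loading, and the material values below are illustrative guesses, not measurements of this build.

```python
import math

# Ideal circular membrane: f1 = (2.405 / (2*pi*a)) * sqrt(T / sigma)
# Solving for tension: T = sigma * (2*pi*a*f1 / 2.405) ** 2
def tension_from_fundamental(f1_hz: float, radius_m: float,
                             areal_density_kg_m2: float) -> float:
    """Estimate membrane tension T (N/m) from the measured fundamental f1."""
    return areal_density_kg_m2 * (2 * math.pi * radius_m * f1_hz / 2.405) ** 2

# Illustrative values: 7 cm container (a = 0.035 m), thin latex sheet with an
# assumed areal density of ~0.2 kg/m^2, tapped fundamental of ~180 Hz
T = tension_from_fundamental(180.0, 0.035, 0.2)
print(f"Estimated tension: {T:.1f} N/m")
```

To use it, tap the membrane, read the dominant peak from any spectrum-analyzer app, and plug that frequency in; a higher fundamental means a tighter membrane.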


Adding the Reflective Mirror

espejo en membrana.jpeg

To visualize the vibration with a laser, I attached a small adhesive mirror directly to the membrane.

Steps:

  1. Use a small, lightweight adhesive mirror.
  2. Place it exactly at the center of the membrane.
  3. Press gently to ensure good adhesion without deforming the membrane.

This mirror reflects the laser beam, transforming membrane vibrations into visible light patterns.

Keep the mirror as light as possible to avoid damping the membrane motion.

https://amzn.eu/d/7R31fKI

Building the Support Structure

estructura.jpeg

To stabilize the system, I built a simple support structure using expanded polystyrene foam (poliespán).

The structure allows:

  1. secure placement of the membrane device
  2. easy adjustment of angle and height

In my setup:

  1. the device was tilted at approximately 40° relative to the base
  2. the structure faced a matte black screen placed 1 meter away

The dark, matte surface improves contrast and makes the laser patterns easier to see.

For educational purposes, exact measurements are not critical at this point; the only requirement is that the components remain stable.

Positioning the Laser

posicion laser.jpeg
estructura con laser.jpeg

A standard low-power laser pointer was used (https://amzn.eu/d/fZuiR8j).

Steps:

  1. Place the laser approximately 40 cm in front of the membrane device.
  2. Align the laser so that it points directly at the mirror.
  3. Adjust the angle until the reflected beam hits the screen.

Laser safety

  1. Use only low-power laser pointers
  2. Never point the laser at eyes, and watch for unintended reflections from shiny surfaces


Final Mechanical Setup

At this point, the physical system is complete:

  1. vibration speaker mounted in a tube
  2. tensioned latex membrane
  3. central reflective mirror
  4. stable support structure
  5. laser aligned with the mirror
  6. projection surface at fixed distance

Before moving on, it is recommended to:

  1. test the system with simple audio signals (e.g., sine waves)
  2. verify that the laser produces clear, stable patterns
  3. make small adjustments to membrane tension or alignment if needed


EEG Data Source and Preprocessing (background)

The EEG signals used in this project were obtained from an open-access EEG database, ensuring transparency and reproducibility.

EEG data source

The data come from the following OpenNeuro dataset:

https://openneuro.org/datasets/ds004504/versions/1.0.7

This dataset provides:

  1. raw (unprocessed) EEG recordings
  2. preprocessed EEG data
  3. detailed metadata and documentation

For this Instructable, EEG signals were selected and prepared to obtain clean and representative brain activity time series suitable for sonification.

EEG preprocessing (context)

EEG preprocessing typically includes steps such as:

  1. band-pass filtering
  2. artifact removal
  3. channel selection or averaging
  4. normalization

Important note

EEG preprocessing is a complex methodological process that goes beyond the scope of this Instructable. The focus here is on how already-prepared EEG signals can be physically visualized, not on teaching EEG cleaning techniques.

For educational and workshop purposes, this project assumes that a clean EEG time series is available as input.
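For context, a typical band-pass step can be sketched in a few lines with SciPy. This is an illustration on a synthetic signal, not the preprocessing pipeline used for the dataset; the 1–40 Hz cutoffs simply match the defaults of the sonification script later in this Instructable.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Minimal sketch of one preprocessing step: a 1-40 Hz band-pass filter
fs = 256.0  # EEG sampling rate (Hz)
sos = butter(4, [1.0, 40.0], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic "EEG": a 10 Hz alpha-like rhythm plus 50 Hz mains interference
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
y = sosfiltfilt(sos, x)  # zero-phase filtering preserves signal timing
# The 50 Hz component is strongly attenuated; the 10 Hz rhythm passes through.
```

Real EEG cleaning (artifact rejection, bad-channel handling) is far more involved, which is why the project starts from already-prepared signals.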


Full reproducibility and materials

To ensure full reproducibility, all materials required to replicate the experiment are publicly available, including:

  1. code for EEG preprocessing
  2. sonification scripts
  3. example EEG data
  4. documentation

They are archived in the following Zenodo repository:

https://doi.org/10.5281/zenodo.17639457

All code used in the following steps is provided in this repository.

This allows:

  1. independent replication
  2. educational reuse
  3. adaptation to other datasets


Sonifying the EEG Signal

Once a clean EEG signal is available, the next step is direct sonification.

In this project:

  1. the EEG signal is converted directly into an audio waveform
  2. no musical mapping is applied
  3. no notes, scales, or symbolic transformations are used

This preserves the temporal and spectral structure of the original brain signal.



EEG sonification code:


#!/usr/bin/env python3
"""
Direct EEG Sonification (time-compressed waveform) -> WAV

- Loads EDF EEG files (single file or directory).
- Optional preprocessing (resample, bandpass, notch, average reference).
- Direct waveform sonification by resampling EEG time-series to audible sample rate.
- Exports WAV (PCM16).

This is "direct sonification" in the strict sense:
audio(t) is derived directly from EEG(t), only time-compressed and normalized,
without event detection, musical mapping, presets, or synthesized tones.
"""

from __future__ import annotations

import argparse
import json
import os
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import Optional, List, Tuple

import numpy as np
import scipy.signal as sps
import scipy.io.wavfile as wav

# Optional dependency: MNE for EDF I/O and filtering
try:
    import mne
    MNE_AVAILABLE = True
except Exception:
    MNE_AVAILABLE = False


# -----------------------------
# Config dataclass (metadata)
# -----------------------------
@dataclass
class SonificationConfig:
    # EEG handling
    eeg_target_sfreq: Optional[float] = 256.0
    channel_mode: str = "mean"  # "mean" or "pick"
    pick_channel: Optional[str] = None

    # Preprocessing (optional)
    do_bandpass: bool = True
    l_freq: float = 1.0
    h_freq: float = 40.0
    do_notch: bool = False
    notch_freq: float = 50.0
    do_avg_ref: bool = True

    # Sonification (direct)
    audio_sfreq: int = 44100  # MUST be >= 200 Hz (we use 44.1 kHz by default)
    gain: float = 0.9  # headroom
    detrend: bool = True
    normalize: bool = True

    # Output
    export_metadata_json: bool = True


# -----------------------------
# Utilities
# -----------------------------
def list_edf_files(path: Path) -> List[Path]:
    if path.is_file() and path.suffix.lower() == ".edf":
        return [path]
    if path.is_dir():
        files = sorted([p for p in path.rglob("*.edf")])
        return files
    raise FileNotFoundError(f"Input path not found or not EDF/dir: {path}")


def safe_mkdir(p: Path) -> None:
    p.mkdir(parents=True, exist_ok=True)


def robust_resample(x: np.ndarray, fs_in: float, fs_out: float) -> np.ndarray:
    """
    Resample 1D signal from fs_in to fs_out using polyphase filtering.
    """
    if fs_in <= 0 or fs_out <= 0:
        raise ValueError("Sampling rates must be positive.")
    if np.isclose(fs_in, fs_out):
        return x.astype(np.float32, copy=False)

    # Use rational approximation for polyphase resampling.
    # Limit denominator size to keep it stable for typical rates.
    frac = fs_out / fs_in
    up, down = _rational_approx(frac, max_den=1000)
    y = sps.resample_poly(x, up=up, down=down).astype(np.float32, copy=False)
    return y


def _rational_approx(value: float, max_den: int = 1000) -> Tuple[int, int]:
    """
    Approximate a float as a rational number up/down with bounded denominator.
    """
    from fractions import Fraction
    frac = Fraction(value).limit_denominator(max_den)
    return frac.numerator, frac.denominator


def to_pcm16(x: np.ndarray) -> np.ndarray:
    x = np.clip(x, -1.0, 1.0)
    return (x * 32767.0).astype(np.int16)


# -----------------------------
# EEG loading / preprocessing
# -----------------------------
def load_edf_with_mne(edf_path: Path, target_sfreq: Optional[float]) -> "mne.io.BaseRaw":
    if not MNE_AVAILABLE:
        raise RuntimeError("mne is required to load EDF files. Install with: pip install mne")

    raw = mne.io.read_raw_edf(str(edf_path), preload=True, verbose="ERROR")

    # Optional: resample to standard EEG rate for consistency
    if target_sfreq is not None:
        if not np.isclose(raw.info["sfreq"], target_sfreq):
            raw.resample(target_sfreq, npad="auto", verbose="ERROR")
    return raw


def preprocess_raw(raw: "mne.io.BaseRaw", cfg: SonificationConfig) -> "mne.io.BaseRaw":
    r = raw.copy()

    if cfg.do_avg_ref:
        # average reference: helps reduce bias if channels are comparable
        try:
            r.set_eeg_reference("average", projection=False, verbose="ERROR")
        except Exception:
            # if channels are not EEG or reference fails, ignore gracefully
            pass

    if cfg.do_notch:
        try:
            r.notch_filter(freqs=[cfg.notch_freq], picks="all", verbose="ERROR")
        except Exception:
            pass

    if cfg.do_bandpass:
        try:
            r.filter(l_freq=cfg.l_freq, h_freq=cfg.h_freq, picks="all", verbose="ERROR")
        except Exception:
            pass

    return r


def raw_to_mono(raw: "mne.io.BaseRaw", cfg: SonificationConfig) -> Tuple[np.ndarray, float, dict]:
    data = raw.get_data()  # shape: (n_ch, n_samples)
    sfreq = float(raw.info["sfreq"])
    ch_names = list(raw.ch_names)

    if cfg.channel_mode == "pick":
        if not cfg.pick_channel:
            raise ValueError("channel_mode='pick' requires --pick-channel.")
        if cfg.pick_channel not in ch_names:
            raise ValueError(f"Channel '{cfg.pick_channel}' not found. Available: {ch_names[:10]} ...")
        idx = ch_names.index(cfg.pick_channel)
        mono = data[idx].astype(np.float32, copy=False)
        used = {"mode": "pick", "channel": cfg.pick_channel}
    else:
        # mean across channels
        mono = np.mean(data, axis=0).astype(np.float32, copy=False)
        used = {"mode": "mean", "channel": None}

    return mono, sfreq, used


# -----------------------------
# Direct sonification
# -----------------------------
def direct_sonify(mono_eeg: np.ndarray, eeg_sfreq: float, cfg: SonificationConfig) -> np.ndarray:
    x = mono_eeg.astype(np.float32, copy=False)

    if cfg.detrend:
        x = x - np.mean(x)

    if cfg.normalize:
        mx = float(np.max(np.abs(x))) if x.size else 0.0
        if mx > 0:
            x = x / mx

    # Time-compress to audible sample rate by resampling
    y = robust_resample(x, fs_in=eeg_sfreq, fs_out=float(cfg.audio_sfreq))

    # Final gain (keep headroom)
    my = float(np.max(np.abs(y))) if y.size else 0.0
    if my > 0:
        y = (y / my) * float(cfg.gain)

    return y


# -----------------------------
# Export
# -----------------------------
def export_wav(audio: np.ndarray, audio_sfreq: int, out_wav: Path) -> None:
    safe_mkdir(out_wav.parent)
    pcm = to_pcm16(audio)
    wav.write(str(out_wav), audio_sfreq, pcm)


def export_metadata(meta: dict, out_json: Path) -> None:
    safe_mkdir(out_json.parent)
    with open(out_json, "w", encoding="utf-8") as f:
        json.dump(meta, f, indent=2, ensure_ascii=False)


# -----------------------------
# CLI main
# -----------------------------
def build_argparser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(
        description="Direct EEG sonification (strict waveform) from EDF -> WAV",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    p.add_argument("--input", required=True, help="Path to an EDF file or a directory containing EDF files.")
    p.add_argument("--output-dir", required=True, help="Directory where WAV (and JSON metadata) will be saved.")
    p.add_argument("--audio-sfreq", type=int, default=44100, help="Audio sampling rate (Hz). Must be >= 200.")
    p.add_argument("--eeg-target-sfreq", type=float, default=256.0, help="Resample EEG to this rate (Hz). Use 0 to disable.")
    p.add_argument("--channel-mode", choices=["mean", "pick"], default="mean", help="Use mean across channels or pick one.")
    p.add_argument("--pick-channel", default=None, help="Channel name if channel-mode=pick (e.g., 'Pz').")

    # preprocessing toggles
    p.add_argument("--no-bandpass", action="store_true", help="Disable bandpass filtering.")
    p.add_argument("--l-freq", type=float, default=1.0, help="Bandpass low cutoff (Hz).")
    p.add_argument("--h-freq", type=float, default=40.0, help="Bandpass high cutoff (Hz).")
    p.add_argument("--notch", action="store_true", help="Enable notch filter.")
    p.add_argument("--notch-freq", type=float, default=50.0, help="Notch frequency (Hz).")
    p.add_argument("--no-avg-ref", action="store_true", help="Disable average reference.")

    # sonification
    p.add_argument("--gain", type=float, default=0.9, help="Output gain (0..1).")
    p.add_argument("--no-detrend", action="store_true", help="Disable DC removal (mean subtraction).")
    p.add_argument("--no-normalize", action="store_true", help="Disable amplitude normalization.")

    # metadata
    p.add_argument("--no-metadata", action="store_true", help="Do not export JSON metadata.")
    return p


def main() -> None:
    args = build_argparser().parse_args()

    in_path = Path(args.input).expanduser().resolve()
    out_dir = Path(args.output_dir).expanduser().resolve()

    if args.audio_sfreq < 200:
        raise ValueError("--audio-sfreq must be >= 200 Hz for the requested constraint.")

    cfg = SonificationConfig(
        eeg_target_sfreq=None if args.eeg_target_sfreq == 0 else float(args.eeg_target_sfreq),
        channel_mode=args.channel_mode,
        pick_channel=args.pick_channel,
        do_bandpass=not args.no_bandpass,
        l_freq=float(args.l_freq),
        h_freq=float(args.h_freq),
        do_notch=bool(args.notch),
        notch_freq=float(args.notch_freq),
        do_avg_ref=not args.no_avg_ref,
        audio_sfreq=int(args.audio_sfreq),
        gain=float(args.gain),
        detrend=not args.no_detrend,
        normalize=not args.no_normalize,
        export_metadata_json=not args.no_metadata,
    )

    edf_files = list_edf_files(in_path)

    if not MNE_AVAILABLE:
        raise RuntimeError(
            "mne is required for EDF loading/filtering.\n"
            "Install it with: pip install mne"
        )

    safe_mkdir(out_dir)

    for edf_path in edf_files:
        raw = load_edf_with_mne(edf_path, cfg.eeg_target_sfreq)
        raw = preprocess_raw(raw, cfg)

        mono, eeg_sr, used = raw_to_mono(raw, cfg)
        audio = direct_sonify(mono, eeg_sr, cfg)

        stem = edf_path.stem
        out_wav = out_dir / f"{stem}_direct_{cfg.audio_sfreq}Hz.wav"
        export_wav(audio, cfg.audio_sfreq, out_wav)

        if cfg.export_metadata_json:
            meta = {
                "input_file": str(edf_path),
                "output_wav": str(out_wav),
                "config": asdict(cfg),
                "eeg_sfreq_after_resample": float(eeg_sr),
                "channel_selection": used,
                "n_samples_eeg": int(mono.size),
                "n_samples_audio": int(audio.size),
            }
            out_json = out_dir / f"{stem}_direct_{cfg.audio_sfreq}Hz.json"
            export_metadata(meta, out_json)

        print(f"[OK] {edf_path.name} -> {out_wav.name}")

    print("\nDone.")
    print("Tip: If you want Bluetooth output, pair/select your BT speaker in OS audio settings, then play the WAV.")


if __name__ == "__main__":
    main()


The output of this step is a standard audio signal that can be played through any audio output device.
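To see what the script's core transformation does, the same steps (detrend, normalize, polyphase resample, final gain) can be replicated on a synthetic trace. This is a condensed sketch of the pipeline above, using a pure sine wave in place of real EEG:

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

# Condensed version of direct_sonify() applied to a synthetic 10 Hz trace
eeg_sfreq, audio_sfreq, gain = 256.0, 44100, 0.9
t = np.arange(0, 2.0, 1.0 / eeg_sfreq)           # 2 s of "EEG" at 256 Hz
x = np.sin(2 * np.pi * 10 * t).astype(np.float32)

x = x - x.mean()                       # detrend (DC removal)
x = x / np.max(np.abs(x))              # normalize to [-1, 1]
frac = Fraction(audio_sfreq / eeg_sfreq).limit_denominator(1000)
y = resample_poly(x, up=frac.numerator, down=frac.denominator)
y = (y / np.max(np.abs(y))) * gain     # final gain, leaving headroom

print(len(x), "EEG samples ->", len(y), "audio samples")
```

The resulting array is what the script writes out as a PCM16 WAV file.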

You can listen to the small attached sample (sonified eeg.mp3) to check the EEG sonification result.

Downloads

Sending the Audio Signal to the Vibration Speaker

The sonified EEG audio is sent wirelessly via Bluetooth to the vibration speaker.

Steps:

  1. Pair the computer or playback device with the vibration speaker via Bluetooth.
  2. Select the speaker as the default audio output.
  3. Play the sonified EEG audio signal.

Using Bluetooth makes the setup:

  1. cleaner
  2. more flexible
  3. easier to adapt to different workshop environments

Keep the volume at a moderate level to avoid damaging the speaker or the membrane.

Generating the Laser Projection

As the sonified EEG audio drives the vibration speaker:

  1. the membrane vibrates according to the brain signal
  2. the central mirror reflects the laser beam
  3. the reflected beam traces dynamic patterns on the projection surface

For best results:

  1. use a matte black surface
  2. place it at a fixed distance (≈1 meter in this setup)
  3. darken the room as much as possible

Any smooth black surface can be used, as long as ambient light is minimized.
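The size of the projected pattern follows simple reflection geometry: a mirror tilt of θ deflects the reflected beam by 2θ, so the pattern grows linearly with screen distance. A quick estimate (the tilt angle below is illustrative, not a measured value from this build):

```python
import math

# Small mirror tilts map to on-screen spot displacement as d = L * tan(2*theta):
# the reflected beam turns by twice the mirror's tilt angle.
def spot_displacement(theta_rad: float, screen_distance_m: float) -> float:
    return screen_distance_m * math.tan(2 * theta_rad)

# Illustrative: a 0.5 degree mirror tilt at the 1 m screen distance used here
theta = math.radians(0.5)
print(f"Spot displacement: {spot_displacement(theta, 1.0) * 100:.2f} cm")  # ~1.75 cm
```

This doubling effect is why even tiny membrane vibrations produce clearly visible patterns, and why moving the screen further away enlarges them.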

Recording the Laser Patterns

The resulting laser patterns were recorded using a camera.

In this setup:

  1. a smartphone camera (iPhone 15) was used
  2. the camera was placed facing the projection surface
  3. video recording was preferred over still images to capture dynamics

Tips:

  1. fix the camera on a tripod if possible
  2. avoid autofocus changes during recording
  3. keep exposure settings stable

These recordings can later be:

  1. visually compared
  2. analyzed with computer vision tools
  3. used for educational demonstrations


What You Are Seeing

(The video shows a brief example of laser pattern recording)

Each recorded laser pattern corresponds to:

  1. a specific EEG signal
  2. a specific brain state
  3. a specific vibration regime

Because the system is physical:

  1. the same signal produces the same pattern
  2. different signals produce different patterns

This makes the setup a physical data visualizer, where brain dynamics become directly observable as light and motion.


Educational and Experimental Use

This setup can be used for:

  1. teaching signal processing concepts
  2. demonstrating EEG dynamics
  3. exploring data sonification
  4. science–art workshops
  5. experimental visualization of complex data

This system is not intended for clinical diagnosis.

Its purpose is educational, experimental, and exploratory.