Spiking Neural Network Model of Von Economo Neurons: Connecting Computational Neuroscience to Spaceflight Cognitive Performance

Hi everyone! I’m Esila, a 2nd year Computer Science student at UWE Bristol specialising in computational neuroscience and AI.

I’ve recently built the first computational model of Von Economo neuron (VEN) function, which I call the Fast Lane Hypothesis. VENs are large bipolar neurons in the anterior cingulate cortex (ACC) that are selectively depleted in frontotemporal dementia (FTD) and altered in autism. My model implements them as fast leaky integrate-and-fire (LIF) neurons in a spiking neural network and demonstrates that VEN density modulates social decision speed rather than accuracy, with statistically significant differences between typical, autism-like, and FTD-like conditions (p < 0.0001). Full code: https://github.com/esila-keskin/fast-lane-hypothesis
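For anyone curious about the modelling side, the core idea can be sketched with a minimal LIF neuron where a shorter membrane time constant stands in for "fast" VEN-like dynamics. This is an illustrative toy, not the repository's actual code, and all parameter values below are made up for the example:

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau_m=5.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    A short membrane time constant (tau_m) is one simple way to model a
    'fast' neuron: the voltage tracks input more quickly, so spikes are
    emitted with shorter latency. All parameters here are illustrative.
    """
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(i_input):
        dv = (-(v - v_rest) + i_t) / tau_m
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant suprathreshold drive for 100 ms at dt = 0.1 ms:
drive = np.full(1000, 20.0)
fast = simulate_lif(drive, tau_m=5.0)    # VEN-like "fast lane"
slow = simulate_lif(drive, tau_m=20.0)   # ordinary pyramidal-like neuron
```

With identical input, the fast neuron reaches threshold sooner and fires more often, which is the intuition behind "density modulates speed, not accuracy": the decision signal arrives earlier, not more correctly.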

I want to extend this to spaceflight and I’d love to connect with the AWG community. My questions:

  • Does microgravity, radiation, or isolation affect ACC circuits where VENs operate?

  • Can OSDR behavioural data show signatures of VEN pathway degradation in astronauts?

  • Could we build a cognitive biomarker pipeline for early detection of ACC changes during long-duration missions?

I’m actively looking to collaborate, happy to bring the SNN modelling side if anyone has data or domain expertise. @BrainAWG

7 Likes

Those are some excellent questions! Shall we find a time to discuss?

1 Like

Update: Two analyses complete, significant findings in both

Since posting, I’ve run two open-data analyses and wanted to share the results with the community.

Analysis 1: Behavioral signatures (NASA OSD-618). Social approach behavior is largely preserved across all spaceflight stressor conditions (sociability p = 0.31, social memory p = 0.39), consistent with the Fast Lane prediction that VEN-relevant circuits modulate decision speed rather than capacity. Spatial memory (RAWM) shows a significant isolation effect (p = 0.003), and combined-stressor animals show altered balance beam performance (p = 0.008). Locomotion is unchanged across all conditions, ruling out motor confounds. Code and figures: https://github.com/esila-keskin/spaceflight-ven-analysis

Analysis 2: VEN gene signature in real ISS frontal cortex (NASA OSD-698 / GEO GSE239336). Using GeoMx Digital Spatial Profiling of frontal cortex from mice flown on SpaceX-24 (35 days on the ISS), I searched for 27 VEN-associated genes across five categories. Three reached significance, and a fourth trended:

  • CNP (myelination): log2FC = +0.62, p = 0.011, fast-conducting axon response

  • SOD2 (oxidative stress): log2FC = +0.44, p = 0.041, mitochondrial defence in large neurons

  • SNAP25 (fast signalling): log2FC = +0.42, p = 0.044, synaptic vesicle fusion machinery

  • NOS1 (direct VEN biochemical marker): log2FC = -0.35, p = 0.086, trending down

VEN genes are significant at 2.6x the background rate (Fisher OR=2.62), though n=12 limits power.
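The enrichment test can be reproduced with SciPy's Fisher exact test on a 2×2 contingency table of significant vs. non-significant genes in the VEN set vs. the background. The counts below are hypothetical placeholders chosen only to illustrate the calculation (the post does not give the full table):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (placeholder counts, not the actual analysis):
# rows = [VEN-associated genes, background genes]
# cols = [significant, not significant]
table = [[3, 24],       # e.g. 3 of 27 VEN genes significant
         [50, 1050]]    # illustrative background counts

# One-sided test for enrichment of significant hits among VEN genes.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
# sample odds ratio = (3 * 1050) / (24 * 50) = 2.625
print(odds_ratio, p_value)
```

With only a handful of significant genes the confidence interval on the odds ratio is wide, which is the "n = 12 limits power" caveat in statistical form.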

The myelination pattern is particularly interesting; CNP, MAG, and MBP all trend upward, suggesting fast-conducting axon pathways are responding to spaceflight stress. NOS1 trending down is notable because it is one of the few direct biochemical VEN markers known.

Code and figures: https://github.com/esila-keskin/ven-frontalcortex-spaceflight

Limitation to flag: mice don’t have VENs, so the frontal cortex is the best available proxy. These results motivate a search for datasets with more direct ACC/layer V resolution. I’d be happy to hear suggestions!

2 Likes

Thank you so much! I’d love to find a time to discuss. I’m based in the UK (GMT+1), so happy to work around any time that suits you. I’m also just getting started on a companion data analysis using OSD-618 behavioral data to test the Fast Lane Hypothesis predictions against open NASA datasets: https://github.com/esila-keskin/spaceflight-ven-analysis

Looking forward to connecting!

1 Like

Hi,

That sounds like a great analysis, and thank you for sharing. I agree with Windy, let’s schedule a meeting to discuss. We have some related work going on at our AWG, and this could be a great addition.

Cheers,

Nilufar

1 Like

@nilufarali Thank you so much! I’d love to hear about the related work you have going on; it sounds like there could be some real overlap. I’m completely flexible on timing (UK, GMT+1), so happy to work around whatever suits you and Windy. Looking forward to it!

1 Like

Do you mind emailing me? windymc@stanford.edu. I just started teaching so my schedule is crazy busy, probably easiest if I can sit down and look through my calendars.

1 Like

@windymc Just emailed you. Thank you so much for the offer to discuss! Very much looking forward to it.

1 Like

Great preliminary findings! Could you also try to replicate the analysis in the same study model on the ground, exposed to a different stressor? For example, pain and risk behaviour. I expect to see replication of the fast-conducting axon response and the SNAP25 synaptic vesicle fusion findings. Then the question becomes: what is unique to spaceflight?

1 Like

@anuiris You’re absolutely right that testing ground-based stress models is critical for establishing spaceflight specificity.

I’m analyzing OSD-202 right now (ground-based hindlimb unloading + low-dose radiation in mice with brain transcriptomics) as a direct comparison. It has the same stressor types as the spaceflight studies but in a ground environment, which should help tease apart what’s unique to spaceflight vs. general stress response.

I’m also searching for other ground-based datasets with frontal cortex tissue and different stress paradigms (social isolation, chronic restraint, pain models) to test whether the myelination response is specific to certain stressor combinations or a general stress adaptation.

I’ll run the same VEN gene signature analysis (myelination, fast signaling, oxidative stress categories) across datasets and compare the results. Should have preliminary findings in the next day or two.

I’ll post an update once I have the comparison results. Thank you for the comment!

1 Like

The script catches missing-file errors but still proceeds to generate figures and a JSON output, so it can produce apparent results even when the required input files were never actually loaded. The implementation has several issues, but this is a particularly important one.

In the case of your implementation with OSD-618, the main problem is that it changes the experimental design itself. The repository reduces the study to four pseudo-groups (control, radiation, isolation, combined), whereas the source paper describes a factorial design with three housing conditions (group, SI, SI+HU), two radiation doses (0 and 50 cGy), and analyses that are explicitly sex-dependent. That makes the downstream statistics non-equivalent to the analysis reported in the paper.

I could list the issues for you point by point, but my main recommendation is that when you start developing a method, you start from the raw data.

@angel Thank you for the feedback. I’ve edited the code on GitHub, addressing your comments.

On error handling: the broad try/except that let the script produce outputs on failed loads has been replaced with hard exits on missing metadata and a RuntimeError if zero animals match, so the script can no longer produce figures from bad data.
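The fail-fast pattern looks roughly like the sketch below. File layout and column names are illustrative placeholders, not the repository's actual identifiers:

```python
import sys
from pathlib import Path

import pandas as pd

def load_metadata(path):
    """Fail fast instead of silently continuing on bad inputs.

    The file name and 'condition' column are hypothetical stand-ins
    for the real metadata schema.
    """
    meta_file = Path(path)
    if not meta_file.exists():
        # Hard exit: no figures or JSON can be produced past this point.
        sys.exit(f"FATAL: metadata file not found: {meta_file}")
    meta = pd.read_csv(meta_file)
    matched = meta[meta["condition"].notna()]
    if matched.empty:
        raise RuntimeError(
            "Zero animals matched a condition; refusing to generate "
            "figures from unloaded or mismatched data."
        )
    return matched
```

The key design choice is that every failure mode ends the run loudly rather than letting downstream plotting code receive an empty or default-filled table.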

On the experimental design: you’re right that collapsing to four pseudo-groups was wrong. The block-range approach was built from my reading of the sample table, and animals outside those ranges silently defaulted to “control”, corrupting every group comparison. The condition assignment now parses OSD-618_metadata_OSD-618-ISA.zip directly, reads the factor value columns (sex, hindlimb unloading, ionizing radiation, housing condition), and derives six conditions (GH_sham, GH_GCR, SI_sham, SI_GCR, HU_sham, HU_GCR) with sex stratified throughout. This recovers all 118 animals correctly: 590 ISA rows deduplicate to 118 unique animals, with approximately 10 per condition per sex.
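The derivation step can be sketched in pandas. Column names and values below are simplified illustrations (the real ISA table uses "Factor Value[...]" headers), but the structure — deduplicate to one row per animal, then compose the condition label from the factor columns — is the same:

```python
import pandas as pd

def derive_conditions(isa_df):
    """Derive condition labels from ISA-style factor columns.

    Column names are illustrative stand-ins for the real
    'Factor Value[...]' headers. Duplicated assay rows are collapsed
    to one row per animal before any grouping.
    """
    df = isa_df.drop_duplicates(subset="animal_id").copy()

    housing = df["housing"].map({
        "group": "GH",
        "social isolation": "SI",
        "hindlimb unloading": "HU",
    })
    radiation = df["radiation_cGy"].map(
        lambda dose: "sham" if dose == 0 else "GCR"
    )
    df["condition"] = housing + "_" + radiation
    return df

demo = pd.DataFrame({
    "animal_id": [1, 1, 2, 3],  # animal 1 appears in two assay rows
    "housing": ["group", "group", "social isolation", "hindlimb unloading"],
    "radiation_cGy": [0, 0, 50, 50],
    "sex": ["M", "M", "F", "M"],
})
out = derive_conditions(demo)
print(out["condition"].tolist())  # ['GH_sham', 'SI_GCR', 'HU_GCR']
```

Deriving the label from the factor columns themselves (rather than from sample-ID ranges) means an animal that fails to match any factor value produces a NaN condition that can be caught, instead of silently becoming "control".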

Running this on the behavioral data reproduces the pattern reported in Rienecker’s paper: significant effects concentrated in males (RAWM one-way ANOVA p = 0.0005; increased locomotion in the open field radiation conditions), with females showing no significant differences across conditions.

Regarding the statistical approach: I’m currently running the analysis stratified by sex (separate comparisons for males and females), because the sex × condition interaction seems to be the key finding (differential vulnerability in males versus resilience in females) and stratification makes the pattern easier to visualize.

However, since each animal experiences only one condition, both factors are between-subjects here, so I could instead run a two-way ANOVA with sex and condition as between-subjects factors, which would formally test the sex × condition interaction term. Would that be more appropriate for this design? I want to make sure I’m using the right statistical framework given the factorial structure.

The current stratified approach shows clear male-specific deficits (RAWM p = 0.0005 in males, ns in females), which matches Rienecker’s findings, but I’m open to changing the analysis if a full factorial model would be better suited.

The more precise concern is that it does not start from the authoritative experimental layer needed for a faithful reanalysis. It uses transformed assay tables rather than raw data, and more importantly, it does not establish the analysis from the official experimental metadata in the way the study design requires.

That matters because most of the outputs it generates are downstream products of the same analytical choice. The figures, heatmaps, and summary JSON do not provide independent validation. They are only visual or aggregated restatements of the same pipeline. If the grouping logic, endpoint definition, or model specification is wrong, those additional outputs do not strengthen the analysis and do not recover scientific validity.

A second major concern is that the biological interpretation remains indirect. If the claim is ultimately about a difference in VEN-related response, this repository does not test that mechanism directly in the OSD-618 dataset. It applies a conceptual mapping between behavioral conditions and VEN model states. That can be used as a speculative interpretation, but it is not the same as experimental validation of the hypothesis.

@angel On the data layer: the ISA metadata parsing recovers the correct factorial design and reproduces published results, but starting from raw data would be more authoritative, so I will be working with that next.

On the biological interpretation: You’re right that this is indirect. OSD-618 doesn’t test the VEN mechanism directly, mice don’t have true VENs, and I’m mapping behavioral patterns onto model predictions. I framed this as “consistent with VEN hypothesis predictions” but I’ll make the distinction between consistency and validation more explicit in the documentation to avoid any misinterpretation.

The value of OSD-618 is establishing the behavioral dissociation (social preserved, spatial impaired) as a reproducible pattern. Whether VENs are causally involved needs direct testing with VEN-specific measurements in primates or circuit manipulation studies.

I’m currently searching OSDR for datasets with multi-region brain transcriptomics to test whether the myelination response I found in GSE239336 is ACC-specific or a general stress response. That spatial specificity test would more directly address whether this is VEN-circuit-related or broader compensation.

There are public resources that help characterize human VEN biology, but I am not aware of a public raw dataset specifically measuring VENs in FTD or autism in a way that would support direct mechanistic validation. In FTD the evidence is still mainly neuropathological and stereological, and in autism the literature is limited. Since mice do not have canonical VENs, OSD-618 and related mouse transcriptomic analyses can at most provide indirect consistency with a broader circuit or stress-response interpretation, not direct validation of a VEN-specific mechanism.

Agreed. That was actually my intent in the original post; I was looking for datasets that would allow more direct testing. The “limitation to flag” section there already noted that mice don’t have VENs and asked the community for suggestions on datasets with better ACC/layer V resolution.

The analyses I’ve run establish the behavioral and molecular patterns as consistent with predictions, but as you’ve confirmed, direct VEN validation would require primate data or circuit manipulation studies that I haven’t been able to find yet.

@anuiris I’ve run the analysis and now have preliminary results from the ground-based comparison using OSD-202 (21-day HLU + 0.04 Gy gamma radiation, whole-brain RNA-Seq, tissue collected 1 month post-treatment, GeneLab-processed DE results from NASA OSDR).

The myelination category shows the opposite response under ground stress compared to spaceflight. In GSE239336 (ISS frontal cortex), myelination genes were significantly upregulated (mean log2FC = +0.41, p = 0.016). In OSD-202, myelination genes are significantly downregulated across all three ground conditions:

  • Combined IR+HLU: mean log2FC = -0.22, t(4) = -3.00, p = 0.040
  • Radiation only: mean log2FC = -0.54, t(4) = -11.85, p = 0.0003
  • HLU only: mean log2FC = -0.31, t(4) = -2.97, p = 0.041
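The t(4) values suggest each category comparison is a one-sample t-test on five genes' log2 fold-changes against zero. A sketch of that test, with illustrative placeholder values rather than the actual OSD-202 numbers:

```python
from scipy.stats import ttest_1samp

# Hypothetical log2 fold-changes for one five-gene category (df = 4);
# these values are illustrative, not the actual OSD-202 results.
myelination_log2fc = [-0.20, -0.35, -0.15, -0.28, -0.12]

# One-sample t-test against 0: is the category shifted as a group?
t_stat, p_value = ttest_1samp(myelination_log2fc, popmean=0.0)
print(f"mean log2FC = {sum(myelination_log2fc) / 5:+.2f}, "
      f"t(4) = {t_stat:.2f}, p = {p_value:.3f}")
```

One caveat of this category-level test is that it treats the five genes as independent observations, which co-regulated myelination genes are not, so the nominal p-values are somewhat optimistic.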

This suggests the myelination upregulation in spaceflight is not a general stress response. Ground analogs of the same stressors produce the opposite effect on myelination genes, which could mean the compensatory upregulation requires something unique to the spaceflight environment that ground models don’t fully replicate.

Important caveats: OSD-202 uses whole brain (not frontal cortex), tissue was collected 1 month post-treatment (not immediately after exposure), radiation type and dose differ (0.04 Gy gamma vs mixed ISS field). These differences could contribute to the divergent response and need to be considered when interpreting the comparison.

Code and results: https://github.com/esila-keskin/ven-ground-stress-comparison

Would you interpret the opposite-direction myelination response as supporting spaceflight specificity, or do you think the tissue and timing differences between OSD-202 and GSE239336 are large enough that the comparison isn’t clean enough to draw that conclusion yet?