
CREMI

MICCAI Challenge on
Circuit Reconstruction from Electron Microscopy Images

Leaderboard Neuron Segmentation

Results for the neuron segmentation category, averaged over all samples.

Group  Submission  CREMI score  VOI split  VOI merge  ARAND
JRC    MALAMCv2    0.276        0.490      0.089      0.132
JRC    MALA v2     0.286        0.479      0.127      0.135
JRC    MALALMC2    0.294        0.400      0.168      0.154
JRC    MALAMCv3    0.300        0.584      0.062      0.140
JRC    MALAMC      0.317        0.497      0.170      0.152
JRC    MALA v1     0.327        0.524      0.165      0.156
JRC    MALALMC     0.352        0.640      0.116      0.165
IAL    LMC V5      0.398        0.597      0.272      0.184
IAL    LMC V1      0.399        0.646      0.211      0.187
IAL    LMC V3      0.445        0.655      0.308      0.208
IAL    LMC V2      0.446        0.629      0.285      0.219
IAL    LMC V4      0.447        0.622      0.295      0.219
IAL    LMC V6      0.516        0.672      0.415      0.245
DIVE   CRunet2     0.566        1.081      0.389      0.229
VCG    LFC         0.616        1.085      0.140      0.313
VCG    LearnC      0.618        1.093      0.139      0.313
DIVE   CRunet1     0.619        1.141      0.381      0.275
VCG    LearnCt2    0.623        1.091      0.148      0.316
DIVE   MPfcn       0.625        1.063      0.486      0.260
DIVE   Segment     0.670        1.275      0.315      0.287
DIVE   seg&BW      0.679        1.287      0.318      0.292
DIVE   SegCleft    0.679        1.286      0.319      0.292
DIVE   segBW1      0.693        1.296      0.314      0.304
DIVE   ftSeg1      0.695        1.224      0.355      0.311
DIVE   ftSeg2      0.698        1.218      0.436      0.299
(Showing 25 of 55 submissions.)

Legend

CREMI score
The geometric mean of (VOI split + VOI merge) and ARAND.
VOI split and merge
The Variation of Information between a segmentation X and ground truth Y. The split and merge parts correspond to the conditional entropies H(X|Y) and H(Y|X), respectively.
ARAND
The Adapted Rand Error, i.e., 1.0 - Rand F-Score.

For each value, lower is better.
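The legend above can be sketched in a few lines of NumPy. This is an illustrative reading of the definitions as stated, not the challenge's official evaluation code, and the helper names `voi_split_merge` and `cremi_score` are hypothetical:

```python
import numpy as np

def voi_split_merge(seg, gt):
    """H(X|Y) (split) and H(Y|X) (merge) for segmentation X and ground truth Y, in bits."""
    seg, gt = np.ravel(seg), np.ravel(gt)
    # joint distribution over (ground-truth label, segment label) pairs
    _, joint_counts = np.unique(np.stack([gt, seg]), axis=1, return_counts=True)
    p_joint = joint_counts / seg.size
    _, gt_counts = np.unique(gt, return_counts=True)
    _, seg_counts = np.unique(seg, return_counts=True)
    h_joint = -np.sum(p_joint * np.log2(p_joint))
    h_gt = -np.sum((gt_counts / gt.size) * np.log2(gt_counts / gt.size))
    h_seg = -np.sum((seg_counts / seg.size) * np.log2(seg_counts / seg.size))
    return h_joint - h_gt, h_joint - h_seg  # H(X|Y), H(Y|X)

def cremi_score(voi_split, voi_merge, arand):
    # geometric mean of the total VOI and the Adapted Rand Error
    return np.sqrt((voi_split + voi_merge) * arand)
```

A pure merge error, for instance, leaves VOI split at zero while VOI merge grows by the conditional entropy of the merged ground-truth labels.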

Leaderboard Synaptic Cleft Detection

Results for the synaptic cleft detection category, averaged over all samples.

Group  Submission  CREMI score  FP           FN       1 - F-score  ADGT    ADF
DIVE   SegCleft    77.69        82750.7      20166.7  0.166        122.48  32.89
DIVE   celft_1     77.69        82750.7      20166.7  0.166        122.48  32.89
DIVE   cleft       89.07        98391.7      20766.7  0.175        141.09  37.05
DIVE   cleft_hi    107.51       44945.0      51877.0  0.158        94.80   120.21
SDG    Synap0.0    118.31       197257.7     12504.0  0.272        186.36  50.26
SDG    SynapX.0    118.31       197257.7     12504.0  0.272        186.36  50.26
SDG    SynapX.X    121.74       210218.3     16149.7  0.289        195.53  47.96
SDG    SnapX.O     124.91       186232.0     16853.0  0.308        205.86  43.97
SDG    SnapF.V     124.91       186232.0     16853.0  0.308        205.86  43.97
IAL    full_mod    137.76       155264.0     61039.7  0.310        176.72  98.80
IAL    unaries     150.68       177553.0     59232.0  0.328        213.85  87.50
DIVE   clefitlo    151.11       106107.0     32455.3  0.220        190.42  111.81
IAL    A+test      153.93       198486.7     55128.7  0.340        232.59  75.27
DIVE   SegClef1    157.49       623651.0     3319.3   0.378        298.46  16.51
DIVE   cleftlol    180.29       741428.0     2714.0   0.381        336.07  24.51
DIVE   CRunet22    181.01       164339.3     23040.7  0.314        306.08  55.94
DIVE   cleftloh    194.33       22713.3      67203.3  0.236        102.43  286.24
IAL    3d_model    229.99       87620.7      83656.3  0.318        255.80  204.19
DIVE   CRunet2     319.04       108631573.0  4195.7   0.826        611.68  26.40

Legend

CREMI score
The mean of ADGT and ADF. We are not including the F-Score for now, as our current way of computing it can lead to unfair comparisons. We are working on a more robust measure and will update the results accordingly.
FP, FN, and F-Score
The False Positives, False Negatives, and the resulting F-Score in the cleft detection volume. See metrics for details.
ADGT
The average distance of any found cleft voxel to the closest ground truth cleft voxel.
ADF
The average distance of any ground truth cleft voxel to the closest found cleft voxel.

For each value, lower is better (F-Score shown as 1 - F-Score).
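Under the definitions above, ADGT and ADF follow directly from a Euclidean distance transform of each cleft mask. The sketch below is an illustrative reading of the legend with a hypothetical helper name, not the official evaluation code; both masks are assumed non-empty:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def cleft_distances(found, gt, voxel_size=None):
    """ADGT, ADF, and their mean (the cleft CREMI score) for boolean masks.

    ADGT: mean distance of found cleft voxels to the closest ground truth voxel.
    ADF:  mean distance of ground truth cleft voxels to the closest found voxel.
    """
    # distance of every voxel to the nearest foreground voxel of the other mask;
    # pass `voxel_size` as `sampling` to account for anisotropic voxels
    dist_to_gt = distance_transform_edt(~gt, sampling=voxel_size)
    dist_to_found = distance_transform_edt(~found, sampling=voxel_size)
    adgt = dist_to_gt[found].mean()
    adf = dist_to_found[gt].mean()
    return adgt, adf, (adgt + adf) / 2.0
```

This also explains the table: submissions with many false positives accumulate found voxels far from ground truth (large ADGT), while submissions with many false negatives leave ground truth voxels far from any detection (large ADF).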

Leaderboard Synaptic Partner Identification

Results for the synaptic partner identification category, averaged over all samples.

Group  Submission
IAL    PSP_unar
IAL    PSP_full

(Results are currently not shown until we receive more submissions.)

Legend

CREMI score
1 - the F-Score computed from the FP and FN counts.
FP, FN
The False Positive and False Negative synaptic partner pairs. See metrics for details.

For each value, lower is better.
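For reference, the relation between FP, FN, and the score can be sketched as follows; the true positive count `tp` is needed as well, and the function name is hypothetical:

```python
def partner_score(tp, fp, fn):
    """CREMI partner score as stated above: 1 - F-score over partner pairs."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2.0 * precision * recall / (precision + recall)
    return 1.0 - f_score
```

A perfect submission (no FP, no FN) thus scores 0, and the score grows toward 1 as either error type dominates.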