Code from "EEG forward operator with a template MRI" doesn't seem to work

I am trying to work through EEG source reconstruction and I'm stuck at creating the forward model. What I am trying to do: I use this tutorial to create a forward operator with fsaverage. I followed the steps of the tutorial, replacing the eegbci dataset with my own data and making other minor changes (fixing channel names in a specific way). The script worked fine right up to the make_forward_solution call, which stopped at "Loading the solution matrix…" and produced a traceback ending with "TypeError: only size-1 arrays can be converted to Python scalars". Unfortunately, there were no other messages that would clarify what was going on, so I decided to simply download the script attached to the tutorial and run it unchanged. The result was the same, and I have no idea what happened. What am I doing wrong?

MNE version: 1.2.2
operating system: Windows 10
Full console output from the tutorial script, including the exception traceback:

> 0 files missing from root.txt in C:\Users\Admin\mne_data\MNE-fsaverage-data
> 0 files missing from bem.txt in C:\Users\Admin\mne_data\MNE-fsaverage-data\fsaverage
> Extracting EDF parameters from C:\Users\Admin\mne_data\MNE-eegbci-data\files\eegmmidb\1.0.0\S001\S001R06.edf...
> EDF file detected
> Setting channel info structure...
> Creating structure...
> Reading 0 ... 19999  =      0.000 ...   124.994 secs...
> EEG channel type selected for re-referencing
> Adding average EEG reference projection.
> 1 projection items deactivated
> Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
> Reading C:\Users\Admin\mne_data\MNE-fsaverage-data\fsaverage\bem\fsaverage-ico-5-src.fif...
> Using for head surface.
> Channel types::	eeg: 64
> Projecting sensors to the head surface
> Source space          : C:\Users\Admin\mne_data\MNE-fsaverage-data\fsaverage\bem\fsaverage-ico-5-src.fif
> MRI -> head transform : E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\data\fsaverage\fsaverage-trans.fif
> Measurement data      : instance of Info
> Conductor model   : C:\Users\Admin\mne_data\MNE-fsaverage-data\fsaverage\bem\fsaverage-5120-5120-5120-bem-sol.fif
> Accurate field computations
> Do computations in head coordinates
> Free source orientations
> Reading C:\Users\Admin\mne_data\MNE-fsaverage-data\fsaverage\bem\fsaverage-ico-5-src.fif...
> Read 2 source spaces a total of 20484 active source locations
> Coordinate transformation: MRI (surface RAS) -> head
>      0.999994  0.003552  0.000202      -1.76 mm
>     -0.003558  0.998389  0.056626      31.09 mm
>     -0.000001 -0.056626  0.998395      39.60 mm
>      0.000000  0.000000  0.000000       1.00
> Read  64 EEG channels from info
> Head coordinate coil definitions created.
> Source spaces are now in head coordinates.
> Setting up the BEM model using C:\Users\Admin\mne_data\MNE-fsaverage-data\fsaverage\bem\fsaverage-5120-5120-5120-bem-sol.fif...
> Loading surfaces...
> Loading the solution matrix...
> Traceback (most recent call last):
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\spyder_kernels\", line 356, in compat_exec
>     exec(code, globals, locals)
>   File "e:\work\mbs\eeg\modelling\sources\smth\", line 78, in <module>
>     fwd = mne.make_forward_solution(, trans=trans, src=src,
>   File "<decorator-gen-394>", line 12, in make_forward_solution
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\forward\", line 591, in make_forward_solution
>     sensors, rr, info, update_kwargs, bem = _prepare_for_forward(
>   File "<decorator-gen-393>", line 12, in _prepare_for_forward
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\forward\", line 448, in _prepare_for_forward
>     bem = _setup_bem(bem, bem_extra, len(eegnames), mri_head_t,
>   File "<decorator-gen-390>", line 12, in _setup_bem
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\forward\", line 241, in _setup_bem
>     bem = read_bem_solution(bem)
>   File "<decorator-gen-74>", line 12, in read_bem_solution
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\", line 1560, in read_bem_solution
>     bem = _read_bem_solution_fif(fname)
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\", line 1643, in _read_bem_solution_fif
>     tag = find_tag(fid, bem_node, FIFF.FIFF_BEM_POT_SOLUTION)
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\io\", line 504, in find_tag
>     return read_tag(fid, subnode.pos)
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\io\", line 469, in read_tag
> = _read_matrix(fid, tag, shape, rlims, matrix_coding)
>   File "E:\Program Files\Anaconda\envs\mne\lib\site-packages\mne\io\", line 184, in _read_matrix
>     ndim = int(np.frombuffer(, dtype='>i4'))
> TypeError: only size-1 arrays can be converted to Python scalars

I just downloaded the file from the page you linked to and ran it from a terminal with ipython -i. It worked fine, no errors. This looks like it might be a Windows-specific bug… @larsoner, do you recognize this error?

Can you try downloading the example dataset again? That looks like a corrupted or incomplete file.
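One way to force a clean re-download is to delete the cached fsaverage directory so the next fetch has to download everything again. A minimal sketch (the helper name `force_redownload` and the default `~/mne_data` location are my assumptions, not an official MNE API):

```python
import shutil
from pathlib import Path

def force_redownload(data_root):
    """Delete the cached fsaverage data so the next fetch re-downloads it.

    data_root should be the MNE data directory, e.g. ~/mne_data.
    Returns the path that was removed (or that didn't exist).
    """
    fs_dir = Path(data_root) / "MNE-fsaverage-data"
    if fs_dir.exists():
        shutil.rmtree(fs_dir)  # remove the (possibly corrupted) cached copy
    return fs_dir

# After this, mne.datasets.fetch_fsaverage() would fetch a fresh copy.
```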


Alright, I dug into this and figured out what was going on. TL;DR: larsoner is right, one of the files was corrupted, and redownloading solved the problem.
I walked back through the traceback and set up debug messages here and there, trying to catch something. Long story short, it turned out that fsaverage-5120-5120-5120-bem-sol.fif was incomplete, so the solution matrix could not be loaded. What still bothered me a little, though, was that the files are supposed to be checked by their hashes, so I decided to explore this a bit more. The hash of the temporary zip file is checked after the file is downloaded (pooch/, stream_download function, which is, in turn, called from MNE's datasets/, _manifest_check_download function):

            with temporary_file(path=str(fname.parent)) as tmp:
                downloader(url, tmp, pooch)
                hash_matches(tmp, known_hash, strict=True, source=str(

This works just fine: if the downloaded file is good, nothing happens; otherwise hash_matches raises a ValueError indicating the mismatch. I didn't get any errors of this kind, but for good measure I compared the hashes of the correct and corrupted files, because things happen and an md5 hash collision is theoretically possible. Indeed, the hashes were different (which I'm really glad about, because otherwise I would have officially become the unluckiest person history has ever seen).
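For anyone wanting to repeat the comparison, it can be done with a small stdlib sketch (the function names here are mine; this is just the chunked `hashlib` pattern that hash checks like pooch's hash_matches are built on):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def hashes_match(path_a, path_b):
    """Return True if both files have identical md5 digests."""
    return md5sum(path_a) == md5sum(path_b)
```

Comparing a known-good copy of fsaverage-5120-5120-5120-bem-sol.fif against the suspect one this way immediately shows whether the file content differs.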
Well, it works just fine unless something really weird happens. I'm not entirely sure what went wrong, but I have two hypotheses: either it was the previous version of MNE's fetch_fsaverage (I first ran it on 1.0.0, hit problems, and updated to 1.2.2; 1.0.0 actually did raise the hash-mismatch error and might have left the data in an inconsistent state), or my hard drive suddenly ran out of free space in the middle of the download and caused hash_matches to never be called. Either way, the zip file was corrupted but got unpacked anyway. MNE itself actually has one more check (the first check, in fact, in datasets/, _manifest_check_download function):

    with open(manifest_path, 'r') as fid:
        names = [name.strip() for name in fid.readlines()]
    manifest_path = op.basename(manifest_path)
    need = list()
    for name in names:
        if not op.isfile(op.join(destination, name)):
            need.append(name)
    logger.info('%d file%s missing from %s in %s'
                % (len(need), _pl(need), manifest_path, destination))
    if len(need) > 0:
        with tempfile.TemporaryDirectory() as path:
            logger.info('Downloading missing files remotely')

            fname_path = op.join(path, '')

That is, it retrieves the list of required files from a manifest file, and if something is missing, the download process starts. Unfortunately, that was not my case: no file was actually missing, just one of them was incomplete (but still present).
At the end of the day, I'm not really sure what I should do now. On the one hand, it looks like a bug. Moreover, it could easily be fixed by checking not only the hash of the zip file but also the hashes of the unpacked files, instead of just their existence. On the other hand, it feels like a pretty unique situation, and I highly doubt it will ever happen again. Any suggestions?