How to Fix the “Gemma4 Not Supported” Error and the Missing mlx_vlm.models.gemma4 Module

If you are trying to load a Gemma 4 model on MLX and you see the error “Model type gemma4 not supported” or “No module named ‘mlx_vlm.models.gemma4’”, the problem usually comes from an outdated mlx-vlm install. Gemma 4 support was added to mlx-vlm in v0.4.3, so older versions cannot load the new module.

This guide explains what causes the error, how to update the package correctly, how to check whether your Python environment is using the right install, and what to do if the issue still appears inside apps like LM Studio. An open issue in mlx-engine reports the same loader error, which confirms that some bundled environments still ship an mlx-vlm build older than the latest release.

What Causes the Gemma4 Not Supported Error?

The error appears when your current environment tries to load Gemma 4, but the installed mlx_vlm package does not include the gemma4 model module. The latest mlx-vlm releases include a dedicated mlx_vlm/models/gemma4 module with working usage examples, which means your package must be updated to support Gemma 4.

In simple terms, your app asks for Gemma 4 support, but your installed package does not know what Gemma 4 is yet.
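You can confirm this directly. The sketch below uses only the Python standard library to check whether a module path is importable in the current environment; the `mlx_vlm.models.gemma4` name comes straight from the error message, while the `has_module` helper is our own:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package itself is not installed.
        return False

# If this prints False, the installed package predates Gemma 4 support.
print(has_module("mlx_vlm.models.gemma4"))
```

Run it with the same interpreter your app uses; a `False` result means the loader cannot possibly find the Gemma 4 model module.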

Quick Fix for Gemma4 Not Supported Error

Update mlx-vlm to the latest version first.

python -m pip install -U mlx-vlm

Gemma 4 support was added in mlx-vlm v0.4.3. If you are running an older build, the package may throw the exact “not supported” and “No module named mlx_vlm.models.gemma4” error.

After the update, restart your terminal, notebook kernel, or app before you try to load the model again.
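The restart matters because a running Python process keeps the old module code in memory even after pip upgrades the files on disk. In a notebook you can force a re-import with `importlib.reload`, though a full kernel restart is safer; this sketch uses `json` as a stand-in for mlx_vlm so it runs anywhere:

```python
import importlib
import json  # stand-in for mlx_vlm so this sketch runs without it installed

# After a pip upgrade, an already-imported module keeps its old code until
# the process restarts. importlib.reload re-imports it in place.
reloaded = importlib.reload(json)
print(reloaded.__name__)  # → json
```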

Step 1: Check your installed version

Run this command:

python -c "import mlx_vlm; print(mlx_vlm.__version__)"

You want to see 0.4.3 or newer. That is the release line where Gemma 4 support landed.

If the version is older, update again:

python -m pip install -U mlx-vlm
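If you want to script this check, you can compare the installed version against the 0.4.3 minimum programmatically. This is a sketch using `importlib.metadata`; the `version_tuple` and `gemma4_ready` helpers are our own, and the parser only reads the leading numeric parts (so a suffix like `rc1` is ignored):

```python
import re
from importlib.metadata import PackageNotFoundError, version

MIN_VERSION = (0, 4, 3)  # release line where Gemma 4 support reportedly landed

def version_tuple(v: str) -> tuple:
    """Parse the leading numeric parts of a dotted version string."""
    parts = []
    for piece in v.split(".")[:3]:
        match = re.match(r"\d+", piece)
        if not match:
            break
        parts.append(int(match.group()))
    return tuple(parts)

def gemma4_ready(package: str = "mlx-vlm") -> bool:
    """True if the installed package meets MIN_VERSION."""
    try:
        return version_tuple(version(package)) >= MIN_VERSION
    except PackageNotFoundError:
        return False

print(gemma4_ready())
```

A `False` result means either the package is missing from this interpreter or the installed build is too old.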

Step 2: Make sure Python is using the right environment

A lot of people update the package in one Python environment, then launch the model from another one. That makes it look like the update failed even though it installed correctly somewhere else.

Run this:

python -c "import mlx_vlm, sys; print(sys.executable); print(mlx_vlm.__file__); print(mlx_vlm.__version__)"

This command shows:

  • the Python executable you are using
  • the location of the installed mlx_vlm package
  • the actual installed version

If the path points to a different virtual environment than the one your app uses, that is your real problem.
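To dig into a suspected mismatch, this small standard-library sketch shows whether a virtual environment is active and where pip installs packages for this specific interpreter (the labels are ours; nothing here requires mlx-vlm):

```python
import sys
import sysconfig

# A virtual environment is active when sys.prefix differs from sys.base_prefix.
in_venv = sys.prefix != sys.base_prefix

print("interpreter:       ", sys.executable)
print("virtual env active:", in_venv)
# site-packages for this interpreter -- pip installs land here.
print("site-packages:     ", sysconfig.get_paths()["purelib"])
```

Compare the site-packages path printed here with the `mlx_vlm.__file__` path from the command above; if they live under different environments, the app and the install are out of sync.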

Step 3: Do a clean reinstall if the update did not fix it

Sometimes an old package stays behind or a broken install leaves stale files in place. In that case, remove and reinstall mlx-vlm.

python -m pip uninstall -y mlx-vlm
python -m pip install -U mlx-vlm

Then verify again:

python -c "import mlx_vlm; print(mlx_vlm.__version__)"

If it still does not show 0.4.3 or newer, your system is probably installing into the wrong interpreter.
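One way to confirm what this interpreter can actually see is `importlib.metadata`, which reports the install location of a distribution for the current interpreter only; the `locate` helper is our own:

```python
from importlib.metadata import PackageNotFoundError, distribution

def locate(package: str):
    """Return the install directory of a distribution, or None if absent."""
    try:
        dist = distribution(package)
    except PackageNotFoundError:
        return None
    return str(dist.locate_file(""))

# None here means this interpreter cannot see mlx-vlm at all, which points
# to the package having been installed under a different interpreter.
print(locate("mlx-vlm"))
```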

Step 4: Test with a direct Gemma 4 command

Recent mlx-vlm builds include CLI support for Gemma 4 models such as google/gemma-4-e4b-it. After updating the package, test it directly from the command line to confirm everything is working correctly.

python -m mlx_vlm.generate \
  --model google/gemma-4-e4b-it \
  --prompt "What is the capital of France?" \
  --max-tokens 200

If this works, your mlx-vlm install is fine and the remaining issue is probably specific to the app or wrapper you are using.

Step 5: If you are using LM Studio or another wrapper

This error is not always your fault. There is already an open mlx-engine issue reporting exactly this message when users try to load Gemma 4. That suggests some tools still bundle or call an older mlx-vlm build even when Gemma 4 support exists upstream.

If you are using LM Studio or another MLX-based front end, check these points:

1. Restart the app fully

Close the app completely and reopen it after updating Python packages.

2. Check whether the app uses its own bundled environment

Some desktop tools do not use your system Python. They ship their own internal dependencies.

3. Update the app itself

Even if your local Python environment is correct, the app may still ship an older MLX engine.

4. Try loading the model outside the app

If the CLI test works but the app fails, the app environment is the issue.

Supported Gemma 4 Models in MLX-VLM

Recent mlx-vlm builds support multiple Gemma 4 variants, including:

  • google/gemma-4-e2b-it
  • google/gemma-4-e4b-it
  • google/gemma-4-26b-a4b-it
  • google/gemma-4-31b-it

These models also bring multimodal capabilities, allowing you to work with text, images, and in some cases audio, depending on the model size and configuration.

Important Things to Know Before Using Gemma 4

Gemma 4 loading support is available now, but the wider MLX ecosystem is still catching up in some areas. That means one tool may support basic model loading while another wrapper may still fail until it updates its bundled dependencies. The current mlx-engine issue is a good example of that gap.

So if you fixed the package version and still hit errors in a third-party app, do not assume your install is wrong right away.

Copy and Run These Commands to Fix Gemma4 Error

If you want the quickest all-in-one repair flow, use this order:

python -m pip uninstall -y mlx-vlm
python -m pip install -U mlx-vlm
python -c "import mlx_vlm, sys; print(sys.executable); print(mlx_vlm.__file__); print(mlx_vlm.__version__)"

Then restart the app or shell and try loading Gemma 4 again.

If you see “Model type gemma4 not supported” or “No module named ‘mlx_vlm.models.gemma4’”, your environment is usually running an older mlx-vlm build that does not include Gemma 4 support. Updating to mlx-vlm v0.4.3 or newer fixes the problem in most cases. If it does not, the next likely cause is a Python environment mismatch or a desktop app that still bundles an older MLX dependency.
