Using MinerU
Quick Model Source Configuration
MinerU uses Hugging Face as the default model source. If you cannot access Hugging Face due to network restrictions, you can switch the model source to ModelScope via an environment variable:
export MINERU_MODEL_SOURCE=modelscope
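The export makes the setting persist for the current shell session. For a one-off run, standard shell syntax also allows setting the variable inline for a single command (the mineru command itself is introduced in the next section):
# use ModelScope for this invocation only
MINERU_MODEL_SOURCE=modelscope mineru -p <input_path> -o <output_path>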
Quick Usage via Command Line
MinerU includes a built-in command line tool that lets you parse PDFs quickly from the terminal:
# Default parsing using pipeline backend
mineru -p <input_path> -o <output_path>
Tip
<input_path>: Local PDF/image file or directory
<output_path>: Output directory
For more information about output files, please refer to Output File Documentation.
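As a concrete illustration (demo.pdf and ./output are placeholder names, not files shipped with MinerU):
# parse a single local PDF with the default pipeline backend
mineru -p ./demo.pdf -o ./output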
Note
The command line tool automatically attempts CUDA/MPS acceleration on Linux and macOS systems.
Windows users who need CUDA acceleration should visit the PyTorch official website and select the command matching their CUDA version to install acceleration-enabled torch and torchvision.
# Or specify vlm backend for parsing
mineru -p <input_path> -o <output_path> -b vlm-transformers
Tip
The vlm backend also supports vllm acceleration. Compared to the transformers backend, vllm can deliver a 20-30x speedup. Installation of the complete package with vllm acceleration support is covered in the Extension Modules Installation Guide.
If you need to adjust parsing options through custom parameters, see the more detailed Command Line Tools Usage Instructions in the documentation.
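As a sketch of such a custom invocation, under the assumption that the flag names below exist (verify them against the Command Line Tools Usage Instructions before relying on them):
# hypothetical example: force OCR-mode parsing with the pipeline backend
mineru -p <input_path> -o <output_path> -b pipeline -m ocr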
Advanced Usage via API, WebUI, http-client/server
- Direct Python API calls: Python Usage Example
- FastAPI calls:
mineru-api --host 0.0.0.0 --port 8000
Tip
Access http://127.0.0.1:8000/docs in your browser to view the API documentation.
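Once the API server is running, any HTTP client can call it. The route and form fields in the sketch below are assumptions for illustration only; the authoritative list is generated at the /docs page:
# hypothetical request; confirm the actual route and fields at /docs
curl -X POST http://127.0.0.1:8000/file_parse \
  -F "files=@demo.pdf" \
  -F "backend=pipeline"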
- Start the Gradio WebUI visual frontend:
# Using pipeline/vlm-transformers/vlm-http-client backends
mineru-gradio --server-name 0.0.0.0 --server-port 7860
# Or using vlm-vllm-engine/pipeline backends (requires vllm environment)
mineru-gradio --server-name 0.0.0.0 --server-port 7860 --enable-vllm-engine true
Tip
Access http://127.0.0.1:7860 in your browser to use the Gradio WebUI.
- Using the http-client/server method:
# Start vllm server (requires vllm environment)
mineru-vllm-server --port 30000
Tip
In another terminal, connect to the vllm server via the http client (this requires only CPU and network access, no vllm environment):
mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://127.0.0.1:30000
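Because the client side needs only CPU and network access, the two commands can also run on different machines (<server_ip> below is a placeholder for the GPU host's address):
# on the GPU machine
mineru-vllm-server --port 30000
# on any machine that can reach it over the network
mineru -p <input_path> -o <output_path> -b vlm-http-client -u http://<server_ip>:30000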
Note
All officially supported vllm parameters can be passed to MinerU as command line arguments for the following commands: mineru, mineru-vllm-server, mineru-gradio, mineru-api.
We have compiled some commonly used vllm parameters and their usage in the Advanced Command Line Parameters documentation.
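For instance, vllm's standard --gpu-memory-utilization option (a vllm flag, used here only to illustrate the pass-through mechanism; whether a given flag applies is determined by vllm, not MinerU) could be appended directly:
# pass a vllm parameter through to the vlm-vllm-engine backend
mineru -p <input_path> -o <output_path> -b vlm-vllm-engine --gpu-memory-utilization 0.5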
Extending MinerU Functionality with Configuration Files
MinerU is ready to use out of the box, and it also supports extending functionality through configuration files. You can edit the mineru.json file in your user directory to add custom configurations.
Important
The mineru.json file will be automatically generated when you use the built-in model download command mineru-models-download, or you can create it by copying the configuration template file to your user directory and renaming it to mineru.json.
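For example, running the download command once both fetches models and creates the file:
# downloads models and generates mineru.json in the user directory
mineru-models-download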
Here are some available configuration options:
- latex-delimiter-config:
  - Used to configure LaTeX formula delimiters
  - Defaults to the $ symbol; can be modified to other symbols or strings as needed.
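As an illustrative sketch only: the nested display/inline key layout below is an assumption modeled on the configuration template file and should be checked against your copy. It shows the delimiters switched from $ to \(...\) style, with backslashes escaped for JSON:
"latex-delimiter-config": {
    "display": {"left": "\\[", "right": "\\]"},
    "inline": {"left": "\\(", "right": "\\)"}
}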
- llm-aided-config:
  - Used to configure parameters for LLM-assisted title hierarchy
  - Compatible with all LLM models supporting the openai protocol; defaults to Alibaba Cloud Bailian's qwen3-next-80b-a3b-instruct model.
  - You need to configure your own API key and set enable to true to enable this feature.
  - If your API provider does not support the enable_thinking parameter, please remove it manually.
  - For example, in your configuration file, the llm-aided-config section may look like:
    "llm-aided-config": {
        "api_key": "your_api_key",
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "model": "qwen3-next-80b-a3b-instruct",
        "enable_thinking": false,
        "enable": false
    }
  - To remove the enable_thinking parameter, simply delete the line containing "enable_thinking": false, resulting in:
    "llm-aided-config": {
        "api_key": "your_api_key",
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "model": "qwen3-next-80b-a3b-instruct",
        "enable": false
    }
- models-dir:
  - Used to specify the local model storage directory
  - Please specify model directories for the pipeline and vlm backends separately.
  - After specifying the directory, you can use local models by setting the environment variable export MINERU_MODEL_SOURCE=local.
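A sketch of what this could look like (the pipeline/vlm key names mirror the two backends named above and should be checked against the configuration template file; the paths are placeholders):
"models-dir": {
    "pipeline": "/path/to/pipeline/models",
    "vlm": "/path/to/vlm/models"
}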