Installation and Quick Start
System Requirements
Minimum System Requirements
Operating system: Linux (Ubuntu 20.04+), macOS (10.15+), Windows 10+
Python version: Python 3.8 or later
Memory: at least 4GB RAM (8GB+ recommended)
Storage: at least 10GB of free space
Network: a stable internet connection
Recommended Configuration
CPU: 8 cores or more
Memory: 16GB RAM or more
Storage: SSD with 50GB+ of free space
GPU: NVIDIA GPU (CUDA-capable) for accelerated model inference
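Before installing, you can quickly check that your machine meets these requirements from a shell (nvidia-smi applies only if you plan to use GPU acceleration):
# Check the Python version (needs 3.8+)
python3 --version
# Check available memory (Linux) and free disk space
free -h
df -h ~
# Check for a CUDA-capable NVIDIA GPU (optional)
nvidia-smi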
Installation Methods
Method 1: Install via pip (recommended)
# Install the LLMESH core package
pip install llmesh-network
# Verify the installation
llmesh --version
Method 2: Install from Source
# Clone the repository
git clone https://github.com/llmesh-cor/llmesh.git
cd llmesh
# Create a virtual environment
python -m venv llmesh-env
source llmesh-env/bin/activate  # Linux/macOS
# Or on Windows:
# llmesh-env\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Install LLMESH in editable mode
pip install -e .
Method 3: Use Docker
# Pull the official image
docker pull llmesh/llmesh-node:latest
# Run the container
docker run -d \
  --name llmesh-node \
  -p 8080:8080 \
  -p 9000:9000 \
  -v ~/.llmesh:/root/.llmesh \
  llmesh/llmesh-node:latest
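Once the container is up, confirm it started cleanly and follow the node logs:
# Confirm the container is running
docker ps --filter name=llmesh-node
# Follow the node logs
docker logs -f llmesh-node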
Environment Configuration
Initialize the Configuration
# Initialize the LLMESH configuration
llmesh init
# This creates a default configuration file at ~/.llmesh/config.yaml
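To confirm the initialization succeeded, inspect the generated files:
# List the generated configuration directory
ls ~/.llmesh
# View the default configuration
cat ~/.llmesh/config.yaml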
Example Configuration File
# ~/.llmesh/config.yaml
node:
  id: "node_${RANDOM_ID}"
  name: "My LLMESH Node"
  description: "Personal LLMESH node for AI services"
network:
  listen_address: "0.0.0.0"
  listen_port: 9000
  bootstrap_nodes:
    - "bootstrap1.llmesh.network:9000"
    - "bootstrap2.llmesh.network:9000"
  max_peers: 50
models:
  storage_path: "~/.llmesh/models"
  cache_size: "5GB"
  supported_formats: ["onnx", "pytorch", "tensorflow"]
tokens:
  mesh_token_address: "0x1234567890123456789012345678901234567890"
  wallet_private_key: "${MESH_PRIVATE_KEY}"
logging:
  level: "INFO"
  file: "~/.llmesh/logs/llmesh.log"
Quick Start
1. Start Your First Node
# Start a node (requires staking tokens)
llmesh-node start --stake 1000
# Or start in test mode (no staking required)
llmesh-node start --testnet
On a successful start you will see output similar to:
🚀 LLMESH Node starting...
📡 Node ID: node_a1b2c3d4e5f6
🌐 Listening on: 0.0.0.0:9000
💰 Staked tokens: 1000 MESH
🔗 Connected to 12 peers
✅ Node is ready!
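If you start with --stake, the node presumably needs the wallet key referenced by ${MESH_PRIVATE_KEY} in the configuration above; set it in your shell before launching (the key value here is a placeholder):
# Export the wallet key referenced in config.yaml (replace with your own key)
export MESH_PRIVATE_KEY="0xYOUR_PRIVATE_KEY"
llmesh-node start --stake 1000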
2. Deploy Your First LLM Model
#!/usr/bin/env python3
"""Example: deploying a model."""
from llmesh import ModelDeployer, ModelConfig

def deploy_model():
    # Create the model configuration
    config = ModelConfig(
        name="my-gpt-model",
        model_path="./models/gpt-model.onnx",
        model_type="text-generation",
        description="Personal GPT model for text generation",
        fee_per_request=0.1,  # Charge 0.1 MESH per request
        max_context_length=2048,
        temperature=0.7
    )
    # Create the deployer
    deployer = ModelDeployer()
    # Deploy the model
    try:
        deployment = deployer.deploy(config)
        print("✅ Model deployed successfully!")
        print(f"📝 Model ID: {deployment.model_id}")
        print(f"🔗 Access URL: {deployment.access_url}")
        print(f"💰 Fee per request: {config.fee_per_request} MESH")
        return deployment
    except Exception as e:
        print(f"❌ Deployment failed: {e}")
        return None

if __name__ == "__main__":
    deploy_model()
Run the deployment script:
python deploy_model.py
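Before running it, you can pre-validate the model file with the ModelValidator utility shown in the troubleshooting section below; this catches unsupported formats before a deployment attempt:
#!/usr/bin/env python3
"""Pre-check the model file before deploying."""
from llmesh.utils import ModelValidator

validator = ModelValidator()
if validator.validate_model("./models/gpt-model.onnx"):
    print("✅ Model format looks good, safe to deploy")
else:
    print("❌ Unsupported format; supported:", validator.supported_formats)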
3. Deploy a Model with the Command-Line Tool
# Deploy an ONNX model
llmesh-deploy ./models/my-model.onnx \
  --name "my-llm" \
  --type "text-generation" \
  --fee 0.1 \
  --description "My personal LLM model"
# List deployed models
llmesh models list
# Example output:
# Model ID        Name      Type              Fee    Status
# abc123def456    my-llm    text-generation   0.1    Active
# def789ghi012    chat-bot  conversational    0.05   Active
4. Call Models on the Network
#!/usr/bin/env python3
"""Example: calling a model."""
import asyncio

from llmesh import LLMeshClient

async def main():
    # Create the client
    client = LLMeshClient()
    # Search for available models
    models = await client.search_models(
        model_type="text-generation",
        max_fee=0.2
    )
    print(f"Found {len(models)} available models:")
    for model in models:
        print(f"  - {model.name} (Fee: {model.fee} MESH)")
    if models:
        # Pick the first model for inference
        model = models[0]
        # Send the inference request
        response = await client.generate_text(
            model_id=model.id,
            prompt="What is the future of artificial intelligence?",
            max_tokens=200,
            temperature=0.7
        )
        print("\n🤖 Model Response:")
        print(f"📝 Text: {response.text}")
        print(f"💰 Cost: {response.cost} MESH")
        print(f"⏱️ Response time: {response.response_time}ms")

if __name__ == "__main__":
    asyncio.run(main())
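If search_models comes back empty, the fee cap may simply be too tight. A small variation that widens the budget step by step, using only the search_models call shown above:
#!/usr/bin/env python3
"""Retry a model search with progressively higher fee caps."""
import asyncio
from llmesh import LLMeshClient

async def find_models(client, model_type="text-generation"):
    # Try increasingly generous fee caps until something matches
    for max_fee in (0.1, 0.2, 0.5):
        models = await client.search_models(model_type=model_type, max_fee=max_fee)
        if models:
            print(f"Found {len(models)} models at max_fee={max_fee}")
            return models
    print("No models found at any fee cap")
    return []

if __name__ == "__main__":
    asyncio.run(find_models(LLMeshClient()))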
5. Using the Web Interface
After starting a node, you can access the web interface from a browser:
# Start a node with the web UI enabled
llmesh-node start --web-ui --port 8080
Then open http://localhost:8080 in your browser, where you will find:
Dashboard: node status, network statistics, and an earnings overview
Model management: deploy, configure, and monitor your models
Marketplace: browse and use other models on the network
Wallet: manage MESH tokens and transaction history
Advanced Configuration
GPU Acceleration
# config.yaml
compute:
  use_gpu: true
  gpu_devices: [0, 1]  # Use GPUs 0 and 1
  gpu_memory_limit: "8GB"
  batch_size: 8
models:
  inference_backend: "cuda"  # or "cpu", "opencl"
  mixed_precision: true
  optimization_level: "O2"
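Before setting use_gpu: true, it helps to confirm CUDA is actually visible from Python. A quick check, assuming PyTorch is installed (PyTorch is only used here as a convenient CUDA probe, not required by LLMESH itself):
import torch

# Report whether CUDA is available and which device indices
# config.yaml's gpu_devices can reference
print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")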
Cluster Deployment
#!/usr/bin/env python3
"""Cluster deployment script."""
from llmesh import ClusterManager, NodeConfig

def setup_cluster():
    cluster = ClusterManager()
    # Configure the master node
    master_config = NodeConfig(
        role="master",
        compute_resources={"cpu": 16, "memory": "32GB", "gpu": 2},
        services=["routing", "model_hosting", "load_balancing"]
    )
    # Configure three identical worker nodes
    worker_configs = []
    for _ in range(3):
        worker_config = NodeConfig(
            role="worker",
            compute_resources={"cpu": 8, "memory": "16GB", "gpu": 1},
            services=["model_hosting"]
        )
        worker_configs.append(worker_config)
    # Deploy the cluster
    cluster.deploy_master(master_config)
    for worker_config in worker_configs:
        cluster.add_worker(worker_config)
    print("✅ Cluster deployed successfully!")
    print(f"📊 Master node: {cluster.master_node.id}")
    print(f"👥 Worker nodes: {len(cluster.worker_nodes)}")

if __name__ == "__main__":
    setup_cluster()
Troubleshooting
Connection Issues
# Check node status
llmesh-node status
# Check network connectivity
llmesh-node ping-peers
# Reset network connections
llmesh-node reset-network
Model Loading Issues
# Check the model format
from llmesh.utils import ModelValidator

validator = ModelValidator()
is_valid = validator.validate_model("./my-model.onnx")
if not is_valid:
    print("❌ Model format is not supported")
    print("Supported formats:", validator.supported_formats)
Performance Tuning
# Performance tuning configuration
performance:
  worker_threads: 8
  async_io: true
  connection_pool_size: 100
  request_timeout: 30
  batch_processing: true
  cache_enabled: true
  cache_ttl: 3600
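To confirm that tuning changes actually help, measure client-side round-trip latency before and after each change. A minimal sketch using only the generate_text call from the quick-start (the model ID is the placeholder from the example listing above):
#!/usr/bin/env python3
"""Measure round-trip latency for a single inference request."""
import asyncio
import time
from llmesh import LLMeshClient

async def time_request(model_id):
    client = LLMeshClient()
    start = time.perf_counter()
    response = await client.generate_text(
        model_id=model_id,
        prompt="ping",
        max_tokens=8,
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Round trip: {elapsed_ms:.1f}ms (server reported {response.response_time}ms)")

if __name__ == "__main__":
    asyncio.run(time_request("abc123def456"))  # model ID from `llmesh models list`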
With the steps above, you should have LLMESH installed and a node up and running. From here, explore the more advanced features and configuration options.