### 2025-07-15 (cont.): Refactoring the Trainers and Core Invocation Layer

**Core goal**: Apply the new `ModelManager` uniformly to all remaining model trainers in the project, and refactor the core invocation logic so that the entire training pipeline shares a consistent architecture.

**1. Modified `server/trainers/kan_trainer.py`**
*   **Details**: Completely rewrote `kan_trainer.py`.
    *   **Aligned interface**: The function signature now matches `mlstm_trainer`, adding parameters such as `socketio`, `task_id`, and `patience`.
    *   **Integrated `ModelManager`**: Removed all of the old manual saving logic; the trainer now calls `model_manager` at the start of training to obtain the version number and save path.
    *   **Standardized artifact saving**: All artifacts (model, metadata, checkpoints, loss curve) are saved via `model_manager.save_model_artifact()`.
    *   **Improved robustness**: Added early stopping and best-checkpoint saving, as sketched below.
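
A condensed sketch of that early-stopping and best-checkpoint pattern. The `model_manager.save_model_artifact()` call and the checkpoint fields are taken from the diff further down; `train_one_epoch` and `evaluate` are hypothetical placeholders standing in for each trainer's model-specific loops:

```python
from utils.model_manager import model_manager

def train_with_early_stopping(model, optimizer, train_loader, test_loader,
                              epochs, patience, model_version_path,
                              train_one_epoch, evaluate):
    """Shared early-stopping skeleton of the refactored trainers (sketch)."""
    best_loss = float('inf')
    epochs_no_improve = 0
    for epoch in range(epochs):
        train_one_epoch(model, optimizer, train_loader)  # placeholder
        test_loss = evaluate(model, test_loader)         # placeholder
        if test_loss < best_loss:
            best_loss = test_loss
            epochs_no_improve = 0
            checkpoint = {
                'epoch': epoch + 1,
                'model_state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
            }
            # Every artifact goes through the ModelManager; no ad-hoc torch.save paths.
            model_manager.save_model_artifact(checkpoint, "checkpoint_best.pth",
                                              model_version_path)
        else:
            epochs_no_improve += 1
            if epochs_no_improve >= patience:
                break  # no improvement for `patience` consecutive epochs
    return model
```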

**2. Modified `server/trainers/tcn_trainer.py`**
*   **Details**: Completely rewrote `tcn_trainer.py`, applying exactly the same refactoring pattern as `kan_trainer`.
    *   Removed the old `save_checkpoint` helper and the `core.config`-based version management.
    *   Switched entirely to `model_manager` for version control and file saving.
    *   Unified the function signature and progress-reporting logic (see the signature sketch after this list).
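
The unified signature, as it appears for the KAN trainer in the diff further down; the other trainers share the same shape, minus model-specific flags such as `use_optimized`:

```python
def train_product_model_with_kan(
    product_id,
    product_df=None,               # data is now passed in by the caller
    store_id=None,
    training_mode='product',       # 'product' | 'store' | 'global'
    aggregation_method='sum',      # only used for global aggregation
    epochs=50,
    use_optimized=False,           # KAN-specific: use OptimizedKANForecaster
    socketio=None,                 # optional WebSocket channel for progress events
    task_id=None,
    progress_callback=None,        # multi-process-safe progress callback
    patience=10,                   # early-stopping patience
    learning_rate=0.001
):
    ...
```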

**3. Modified `server/trainers/transformer_trainer.py`**
*   **Details**: Completely rewrote `transformer_trainer.py`, completing the unified refactoring of all trainers.
    *   Removed all leftover filename-based path concatenation and saving logic.
    *   Implemented the same standardized, `ModelManager`-based training flow as the other trainers (condensed after this list).
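
That standardized flow, condensed from the trainer diffs below into one sketch. All `model_manager` method names come from the diff; `train_fn` is a hypothetical stand-in for the model-specific training loop:

```python
from utils.model_manager import model_manager

def standardized_training_flow(model_type, training_mode, scope,
                               aggregation_method, train_fn):
    """The four ModelManager steps every refactored trainer now follows (sketch)."""
    # 1. Build a stable identifier and allocate the next version number
    model_identifier = model_manager.get_model_identifier(
        model_type, training_mode, scope, aggregation_method)
    version = model_manager.get_next_version_number(model_identifier)

    # 2. Resolve the directory that will hold every artifact of this version
    model_version_path = model_manager.get_model_version_path(
        model_type, training_mode, scope, version, aggregation_method)

    # 3. Train, then save all artifacts through the manager
    model, metrics, final_model_data, metadata = train_fn(model_version_path)
    model_manager.save_model_artifact(final_model_data, "model.pth", model_version_path)
    model_manager.save_model_artifact(metadata, "metadata.json", model_version_path)

    # 4. Commit the new version number to the registry
    model_manager.update_version(model_identifier, version)
    return model, metrics, version, model_version_path
```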

**4. Modified `server/core/predictor.py`**
*   **Details**: Thoroughly refactored the core predictor class `PharmacyPredictor`.
    *   **Unified invocation interface**: The `train_model` method now calls all trainers (`mlstm`, `kan`, `tcn`, `transformer`) in a fully consistent way.
    *   **Removed legacy logic**: Deleted `_parse_model_filename` and all other filename-parsing methods.
    *   **Adapted to `ModelManager`**: Methods such as `list_models` and `delete_model` now delegate directly to the corresponding `model_manager` functionality instead of reimplementing it.
    *   **Simplified `predict`**: The prediction method now takes a standardized model version path (`model_version_path`) as its input, making the logic much clearer; a usage sketch follows.
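
A minimal usage sketch of the refactored predict path. The `get_model_version_path` and `predict` signatures come from the diff below; the import path, store/product IDs, and version number are illustrative assumptions:

```python
from core.predictor import PharmacyPredictor   # assumed import path
from utils.model_manager import model_manager

predictor = PharmacyPredictor()

# Resolve the standardized version directory for a store-level TCN model.
# 'S001' and 'P001' are hypothetical IDs; version 2 is likewise illustrative.
scope = "S001_P001"
model_version_path = model_manager.get_model_version_path(
    'tcn', 'store', scope, 2, 'sum')

# predict() now receives the resolved path directly; no filename parsing.
result = predictor.predict(
    model_version_path=model_version_path,
    future_days=7,
    analyze_result=True,
)
```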
Commit: e999ed4af2 (parent: 9bd824c389)
Author: xz2000, 2025-07-15 20:09:05 +08:00
11 changed files with 1060 additions and 2604 deletions

**Diff: server API routes (`start_training`, `predict`, model management endpoints)**

@@ -55,10 +55,9 @@ from analysis.metrics import evaluate_model, compare_models
# 导入配置和版本管理
from core.config import (
DEFAULT_MODEL_DIR, WEBSOCKET_NAMESPACE,
get_model_versions, get_latest_model_version, get_next_model_version,
get_model_file_path, save_model_version_info
DEFAULT_MODEL_DIR, WEBSOCKET_NAMESPACE
)
from utils.model_manager import model_manager
# 导入多店铺数据工具
from utils.multi_store_data_utils import (
@@ -1090,15 +1089,15 @@ def start_training():
logger.info(f"📋 任务详情: 训练 {model_type} 模型 - {scope_msg}, 轮次: {epochs}")
# 根据训练模式生成版本号和模型标识
if training_mode == 'product':
model_identifier = product_id
version = get_next_model_version(product_id, model_type) if version is None else version
elif training_mode == 'store':
model_identifier = f"store_{store_id}"
version = get_next_model_version(f"store_{store_id}", model_type) if version is None else version
if training_mode == 'store':
scope = f"{store_id}_{product_id}"
elif training_mode == 'global':
model_identifier = "global"
version = get_next_model_version("global", model_type) if version is None else version
scope = f"{product_id}" if product_id else "all"
else:
scope = f"{product_id}_all"
model_identifier = model_manager.get_model_identifier(model_type, training_mode, scope, aggregation_method)
version = model_manager.get_next_version_number(model_identifier)
thread_safe_print(f"🏷️ 版本信息: 版本号 {version}, 模型标识: {model_identifier}", "[VERSION]")
logger.info(f"🏷️ 版本信息: 版本号 {version}, 模型标识: {model_identifier}")
@@ -1196,7 +1195,7 @@ def start_training():
logger.info(f"📈 训练完成 - 结果类型: {type(metrics)}, 内容: {metrics}")
# 更新模型路径使用版本管理
model_path = get_model_file_path(model_identifier, model_type, version)
model_path = model_manager.get_model_version_path(model_type, training_mode, scope, version, aggregation_method)
thread_safe_print(f"💾 模型保存路径: {model_path}", "[SAVE]")
logger.info(f"💾 模型保存路径: {model_path}")
@@ -1210,7 +1209,20 @@ def start_training():
logger.info(f"✔️ 任务状态更新: 已完成, 版本: {version}, 任务ID: {task_id}")
# 保存模型版本信息到数据库
save_model_version_info(product_id, model_type, version, model_path, metrics)
try:
conn = get_db_connection()
cursor = conn.cursor()
cursor.execute('''
INSERT INTO model_versions (product_id, model_type, version, file_path, created_at, metrics)
VALUES (?, ?, ?, ?, ?, ?)
''', (
model_identifier, model_type, f'v{version}', model_path,
datetime.now().isoformat(), json.dumps(metrics)
))
conn.commit()
conn.close()
except Exception as db_error:
thread_safe_print(f"数据库保存失败: {db_error}", "[DB_ERROR]")
# 完成训练进度管理器
progress_manager.finish_training(success=True)
@@ -1531,23 +1543,30 @@ def predict():
# 如果指定了版本构造版本化的模型ID
model_id = f"{product_id}_{model_type}_{version}"
# 检查指定版本的模型是否存在
model_file_path = get_model_file_path(product_id, model_type, version)
if not os.path.exists(model_file_path):
return jsonify({"status": "error", "error": f"未找到产品 {product_id}{model_type} 类型模型版本 {version}"}), 404
# 新的预测逻辑
if store_id:
scope = f"{store_id}_{product_id}"
else:
# 如果没有指定版本,使用最新版本
latest_version = get_latest_model_version(product_id, model_type)
if latest_version:
model_id = f"{product_id}_{model_type}_{latest_version}"
version = latest_version
else:
# 兼容旧的无版本模型
model_id = get_latest_model_id(model_type, product_id)
if not model_id:
return jsonify({"status": "error", "error": f"未找到产品 {product_id}{model_type} 类型模型"}), 404
scope = f"{product_id}_all"
model_identifier = model_manager.get_model_identifier(model_type, training_mode, scope, 'sum') # Assuming 'sum' for now
if not version:
latest_version_num = model_manager._read_versions().get(model_identifier)
if latest_version_num is None:
return jsonify({"status": "error", "error": f"未找到模型 '{model_identifier}' 的任何版本。"}), 404
version = f"v{latest_version_num}"
version_num = int(version.replace('v',''))
model_version_path = model_manager.get_model_version_path(model_type, training_mode, scope, version_num, 'sum')
# 执行预测
prediction_result = run_prediction(model_type, product_id, model_id, future_days, start_date, version, store_id)
prediction_result = load_model_and_predict(
model_version_path=model_version_path,
future_days=future_days,
start_date=start_date,
analyze_result=include_visualization
)
if prediction_result is None:
return jsonify({"status": "error", "error": "预测失败预测器返回None"}), 500
@@ -2275,14 +2294,7 @@ def list_models():
logger.info(f"[API] 分页参数: page={page}, page_size={page_size}")
# 使用模型管理器获取模型列表
result = model_manager.list_models(
product_id=product_id_filter,
model_type=model_type_filter,
store_id=store_id_filter,
training_mode=training_mode_filter,
page=page,
page_size=page_size
)
result = model_manager.list_models(page=page, page_size=page_size)
models = result['models']
pagination = result['pagination']
@@ -2308,22 +2320,8 @@ def list_models():
else:
model_id = f"{model_type}_product_{product_id}_{version}"
formatted_model = {
'model_id': model_id,
'filename': model.get('filename', ''),
'product_id': model.get('product_id', ''),
'product_name': model.get('product_name', model.get('product_id', '')),
'model_type': model.get('model_type', ''),
'training_mode': model.get('training_mode', 'product'),
'store_id': model.get('store_id'),
'aggregation_method': model.get('aggregation_method'),
'version': model.get('version', 'v1'),
'created_at': model.get('created_at', model.get('modified_at', '')),
'file_size': model.get('file_size', 0),
'metrics': model.get('metrics', {}),
'config': model.get('config', {})
}
formatted_models.append(formatted_model)
# The new list_models returns a dictionary with all necessary info
formatted_models.append(model)
logger.info(f"[API] 成功获取 {len(formatted_models)} 个模型")
for i, model in enumerate(formatted_models):
@@ -2578,10 +2576,10 @@ def delete_model(model_id):
print(f" - {test_path}")
return jsonify({"status": "error", "error": "模型未找到"}), 404
# 删除模型文件
os.remove(model_path)
# 新的删除逻辑
shutil.rmtree(model_path)
return jsonify({"status": "success", "message": f"模型 {model_id} 已删除"})
return jsonify({"status": "success", "message": f"模型目录 {model_path} 已删除"})
except ValueError:
return jsonify({"status": "error", "error": "无效的model_id格式"}), 400
except Exception as e:
@@ -2639,8 +2637,13 @@ def export_model(model_id):
if not os.path.exists(model_path):
return jsonify({"status": "error", "error": "模型文件未找到"}), 404
# 新的导出逻辑
model_file_path = os.path.join(model_path, 'model.pth')
if not os.path.exists(model_file_path):
return jsonify({"status": "error", "error": "模型文件(model.pth)在目录中未找到"}), 404
return send_file(
model_path,
model_file_path,
as_attachment=True,
download_name=f'{model_id}.pth',
mimetype='application/octet-stream'
@@ -2703,92 +2706,6 @@ def get_product_name(product_id):
print(f"获取产品名称失败: {str(e)}")
return None
# 执行预测的辅助函数
def run_prediction(model_type, product_id, model_id, future_days, start_date, version=None, store_id=None):
"""执行模型预测"""
try:
scope_msg = f", store_id={store_id}" if store_id else ", 全局模型"
print(f"开始运行预测: model_type={model_type}, product_id={product_id}, model_id={model_id}, version={version}{scope_msg}")
# 创建预测器实例
predictor = PharmacyPredictor()
# 解析模型类型映射
predictor_model_type = model_type
if model_type == 'optimized_kan':
predictor_model_type = 'optimized_kan'
# 生成预测
prediction_result = predictor.predict(
product_id=product_id,
model_type=predictor_model_type,
store_id=store_id,
future_days=future_days,
start_date=start_date,
version=version
)
if prediction_result is None:
return {"status": "error", "error": "预测失败预测器返回None"}
# 添加版本信息到预测结果
prediction_result['version'] = version
prediction_result['model_id'] = model_id
# 转换数据结构为前端期望的格式
if 'predictions' in prediction_result and isinstance(prediction_result['predictions'], pd.DataFrame):
predictions_df = prediction_result['predictions']
# 将DataFrame转换为prediction_data格式
prediction_data = []
for _, row in predictions_df.iterrows():
item = {
'date': row['date'].strftime('%Y-%m-%d') if hasattr(row['date'], 'strftime') else str(row['date']),
'predicted_sales': float(row['sales']) if pd.notna(row['sales']) else 0.0,
'sales': float(row['sales']) if pd.notna(row['sales']) else 0.0 # 兼容字段
}
prediction_data.append(item)
prediction_result['prediction_data'] = prediction_data
# 获取历史数据用于对比
try:
# 读取原始数据
from utils.multi_store_data_utils import load_multi_store_data
df = load_multi_store_data()
product_df = df[df['product_id'] == product_id].copy()
if not product_df.empty:
# 获取最近30天的历史数据
product_df['date'] = pd.to_datetime(product_df['date'])
product_df = product_df.sort_values('date')
# 取最后30天的数据
recent_history = product_df.tail(30)
history_data = []
for _, row in recent_history.iterrows():
item = {
'date': row['date'].strftime('%Y-%m-%d'),
'sales': float(row['sales']) if pd.notna(row['sales']) else 0.0
}
history_data.append(item)
prediction_result['history_data'] = history_data
else:
prediction_result['history_data'] = []
except Exception as e:
print(f"获取历史数据失败: {str(e)}")
prediction_result['history_data'] = []
return prediction_result
except Exception as e:
import traceback
traceback.print_exc()
print(f"预测过程中发生错误: {str(e)}")
return {"status": "error", "error": str(e)}
# 添加新的API路由支持/api/models/{model_type}/{product_id}/details格式
@app.route('/api/models/<model_type>/<product_id>/details', methods=['GET'])
@@ -3768,22 +3685,9 @@ def get_model_types():
})
def get_model_versions_api(product_id, model_type):
"""获取模型版本列表API"""
try:
versions = get_model_versions(product_id, model_type)
latest_version = get_latest_model_version(product_id, model_type)
return jsonify({
"status": "success",
"data": {
"product_id": product_id,
"model_type": model_type,
"versions": versions,
"latest_version": latest_version
}
})
except Exception as e:
print(f"获取模型版本失败: {str(e)}")
return jsonify({"status": "error", "message": str(e)}), 500
# This endpoint needs to be re-evaluated based on the new directory structure.
# For now, it will be a placeholder.
return jsonify({"status": "info", "message": "Endpoint under construction after refactoring."})
@app.route('/api/models/store/<store_id>/<model_type>/versions', methods=['GET'])
def get_store_model_versions_api(store_id, model_type):

**Diff: `core/config.py` (legacy version-management helpers removed)**

@@ -70,217 +70,3 @@ TRAINING_UPDATE_INTERVAL = 1 # 训练进度更新间隔(秒)
# 创建模型保存目录
os.makedirs(DEFAULT_MODEL_DIR, exist_ok=True)
def get_next_model_version(product_id: str, model_type: str) -> str:
"""
获取指定产品和模型类型的下一个版本号
Args:
product_id: 产品ID
model_type: 模型类型
Returns:
下一个版本号格式如 'v2', 'v3'
"""
# 新格式:带版本号的文件
pattern_new = f"{model_type}_model_product_{product_id}_v*.pth"
existing_files_new = glob.glob(os.path.join(DEFAULT_MODEL_DIR, pattern_new))
# 旧格式:不带版本号的文件(兼容性支持)
pattern_old = f"{model_type}_model_product_{product_id}.pth"
old_file_path = os.path.join(DEFAULT_MODEL_DIR, pattern_old)
has_old_format = os.path.exists(old_file_path)
# 如果没有任何格式的文件,返回默认版本
if not existing_files_new and not has_old_format:
return DEFAULT_VERSION
# 提取新格式文件的版本号
versions = []
for file_path in existing_files_new:
filename = os.path.basename(file_path)
version_match = re.search(rf"_v(\d+)\.pth$", filename)
if version_match:
versions.append(int(version_match.group(1)))
# 如果存在旧格式文件将其视为v1
if has_old_format:
versions.append(1)
print(f"检测到旧格式模型文件: {old_file_path}将其视为版本v1")
if versions:
next_version_num = max(versions) + 1
return f"v{next_version_num}"
else:
return DEFAULT_VERSION
def get_model_file_path(product_id: str, model_type: str, version: str = None) -> str:
"""
生成模型文件路径
Args:
product_id: 产品ID
model_type: 模型类型
version: 版本号如果为None则获取下一个版本
Returns:
模型文件的完整路径
"""
if version is None:
version = get_next_model_version(product_id, model_type)
# 特殊处理v1版本检查是否存在旧格式文件
if version == "v1":
# 检查旧格式文件是否存在
old_format_filename = f"{model_type}_model_product_{product_id}.pth"
old_format_path = os.path.join(DEFAULT_MODEL_DIR, old_format_filename)
if os.path.exists(old_format_path):
print(f"找到旧格式模型文件: {old_format_path}将其作为v1版本")
return old_format_path
# 使用新格式文件名
filename = f"{model_type}_model_product_{product_id}_{version}.pth"
return os.path.join(DEFAULT_MODEL_DIR, filename)
def get_model_versions(product_id: str, model_type: str) -> list:
"""
获取指定产品和模型类型的所有版本
Args:
product_id: 产品ID
model_type: 模型类型
Returns:
版本列表按版本号排序
"""
# 新格式:带版本号的文件
pattern_new = f"{model_type}_model_product_{product_id}_v*.pth"
existing_files_new = glob.glob(os.path.join(DEFAULT_MODEL_DIR, pattern_new))
# 旧格式:不带版本号的文件(兼容性支持)
pattern_old = f"{model_type}_model_product_{product_id}.pth"
old_file_path = os.path.join(DEFAULT_MODEL_DIR, pattern_old)
has_old_format = os.path.exists(old_file_path)
versions = []
# 处理新格式文件
for file_path in existing_files_new:
filename = os.path.basename(file_path)
version_match = re.search(rf"_v(\d+)\.pth$", filename)
if version_match:
version_num = int(version_match.group(1))
versions.append(f"v{version_num}")
# 如果存在旧格式文件将其视为v1
if has_old_format:
if "v1" not in versions: # 避免重复添加
versions.append("v1")
print(f"检测到旧格式模型文件: {old_file_path}将其视为版本v1")
# 按版本号排序
versions.sort(key=lambda v: int(v[1:]))
return versions
def get_latest_model_version(product_id: str, model_type: str) -> str:
"""
获取指定产品和模型类型的最新版本
Args:
product_id: 产品ID
model_type: 模型类型
Returns:
最新版本号如果没有则返回None
"""
versions = get_model_versions(product_id, model_type)
return versions[-1] if versions else None
def save_model_version_info(product_id: str, model_type: str, version: str, file_path: str, metrics: dict = None):
"""
保存模型版本信息到数据库
Args:
product_id: 产品ID
model_type: 模型类型
version: 版本号
file_path: 模型文件路径
metrics: 模型性能指标
"""
import sqlite3
import json
from datetime import datetime
try:
conn = sqlite3.connect('prediction_history.db')
cursor = conn.cursor()
# 插入模型版本记录
cursor.execute('''
INSERT INTO model_versions (
product_id, model_type, version, file_path, created_at, metrics, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?)
''', (
product_id,
model_type,
version,
file_path,
datetime.now().isoformat(),
json.dumps(metrics) if metrics else None,
1 # 新模型默认为激活状态
))
conn.commit()
conn.close()
print(f"已保存模型版本信息: {product_id}_{model_type}_{version}")
except Exception as e:
print(f"保存模型版本信息失败: {str(e)}")
def get_model_version_info(product_id: str, model_type: str, version: str = None):
"""
从数据库获取模型版本信息
Args:
product_id: 产品ID
model_type: 模型类型
version: 版本号如果为None则获取最新版本
Returns:
模型版本信息字典
"""
import sqlite3
import json
try:
conn = sqlite3.connect('prediction_history.db')
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
if version:
cursor.execute('''
SELECT * FROM model_versions
WHERE product_id = ? AND model_type = ? AND version = ?
ORDER BY created_at DESC LIMIT 1
''', (product_id, model_type, version))
else:
cursor.execute('''
SELECT * FROM model_versions
WHERE product_id = ? AND model_type = ?
ORDER BY created_at DESC LIMIT 1
''', (product_id, model_type))
row = cursor.fetchone()
conn.close()
if row:
result = dict(row)
if result['metrics']:
result['metrics'] = json.loads(result['metrics'])
return result
return None
except Exception as e:
print(f"获取模型版本信息失败: {str(e)}")
return None

**Diff: `core/predictor.py` (`PharmacyPredictor`)**

@@ -1,14 +1,11 @@
"""
药店销售预测系统 - 核心预测器类
支持多店铺销售预测功能
药店销售预测系统 - 核心预测器类 (已重构)
支持多店铺销售预测功能并完全集成新的ModelManager
"""
import os
import pandas as pd
import numpy as np
import torch
import time
import matplotlib.pyplot as plt
from datetime import datetime
from trainers import (
@@ -18,14 +15,13 @@ from trainers import (
train_product_model_with_transformer
)
from predictors.model_predictor import load_model_and_predict
from utils.data_utils import prepare_data, prepare_sequences
from utils.multi_store_data_utils import (
load_multi_store_data,
load_multi_store_data,
get_store_product_sales_data,
aggregate_multi_store_data
)
from analysis.metrics import evaluate_model
from core.config import DEVICE, DEFAULT_MODEL_DIR, DEFAULT_DATA_PATH
from utils.model_manager import model_manager
class PharmacyPredictor:
"""
@@ -34,16 +30,8 @@ class PharmacyPredictor:
def __init__(self, data_path=None, model_dir=DEFAULT_MODEL_DIR):
"""
初始化预测器
参数:
data_path: 数据文件路径默认使用多店铺CSV文件
model_dir: 模型保存目录
"""
# 设置默认数据路径为多店铺CSV文件
if data_path is None:
data_path = DEFAULT_DATA_PATH
self.data_path = data_path
self.data_path = data_path if data_path else DEFAULT_DATA_PATH
self.model_dir = model_dir
self.device = DEVICE
@@ -52,497 +40,183 @@ class PharmacyPredictor:
print(f"使用设备: {self.device}")
# 尝试加载多店铺数据
try:
self.data = load_multi_store_data(data_path)
print(f"已加载多店铺数据,来源: {data_path}")
self.data = load_multi_store_data(self.data_path)
print(f"已加载多店铺数据,来源: {self.data_path}")
except Exception as e:
print(f"加载数据失败: {e}")
self.data = None
def train_model(self, product_id, model_type='transformer', epochs=100, batch_size=32,
learning_rate=0.001, sequence_length=30, forecast_horizon=7,
hidden_size=64, num_layers=2, dropout=0.1, use_optimized=False,
def train_model(self, product_id, model_type='transformer', epochs=100,
learning_rate=0.001, use_optimized=False,
store_id=None, training_mode='product', aggregation_method='sum',
socketio=None, task_id=None, version=None, continue_training=False,
progress_callback=None):
socketio=None, task_id=None, progress_callback=None, patience=10):
"""
训练预测模型 - 支持多店铺训练
参数:
product_id: 产品ID
model_type: 模型类型 ('transformer', 'mlstm', 'kan', 'tcn', 'optimized_kan')
epochs: 训练轮次
batch_size: 批次大小
learning_rate: 学习率
sequence_length: 输入序列长度
forecast_horizon: 预测天数
hidden_size: 隐藏层大小
num_layers: 层数
dropout: Dropout比例
use_optimized: 是否使用优化版KAN仅当model_type为'kan'时有效
store_id: 店铺ID仅当training_mode为'store'时使用
training_mode: 训练模式 ('product', 'store', 'global')
aggregation_method: 聚合方法 ('sum', 'mean', 'median') - 仅用于全局训练
返回:
metrics: 模型评估指标
训练预测模型 - 完全适配新的训练器接口
"""
# 创建统一的输出函数
def log_message(message, log_type='info'):
"""统一的日志输出函数"""
print(message, flush=True) # 始终输出到控制台
# 如果有进度回调,也发送到回调
print(f"[{log_type.upper()}] {message}", flush=True)
if progress_callback:
try:
progress_callback({
'log_type': log_type,
'message': message
})
progress_callback({'log_type': log_type, 'message': message})
except Exception as e:
print(f"进度回调失败: {e}", flush=True)
print(f"[ERROR] 进度回调失败: {e}", flush=True)
if self.data is None:
log_message("没有可用的数据,请先加载或生成数据", 'error')
return None
# 根据训练模式准备数据
if training_mode == 'product':
# 按产品训练:使用所有店铺的该产品数据
product_data = self.data[self.data['product_id'] == product_id].copy()
if product_data.empty:
log_message(f"找不到产品 {product_id} 的数据", 'error')
return None
log_message(f"按产品训练模式: 产品 {product_id}, 数据量: {len(product_data)}")
elif training_mode == 'store':
# 按店铺训练
if not store_id:
log_message("店铺训练模式需要指定 store_id", 'error')
return None
# 如果product_id是'unknown',则表示为店铺所有商品训练一个聚合模型
if product_id == 'unknown':
try:
# 使用新的聚合函数,按店铺聚合
product_data = aggregate_multi_store_data(
store_id=store_id,
aggregation_method=aggregation_method,
file_path=self.data_path
)
log_message(f"按店铺聚合训练: 店铺 {store_id}, 聚合方法 {aggregation_method}, 数据量: {len(product_data)}")
# 将product_id设置为店铺ID以便模型保存时使用有意义的标识
product_id = store_id
except Exception as e:
log_message(f"聚合店铺 {store_id} 数据失败: {e}", 'error')
return None
else:
# 为店铺的单个特定产品训练
try:
product_data = get_store_product_sales_data(
store_id=store_id,
product_id=product_id,
file_path=self.data_path
)
log_message(f"按店铺-产品训练: 店铺 {store_id}, 产品 {product_id}, 数据量: {len(product_data)}")
except Exception as e:
log_message(f"获取店铺产品数据失败: {e}", 'error')
return None
elif training_mode == 'global':
# 全局训练:聚合所有店铺的产品数据
try:
# 如果product_id是'unknown',则表示为全局所有商品训练一个聚合模型
if product_id == 'unknown':
product_data = aggregate_multi_store_data(
product_id=None, # 传递None以触发真正的全局聚合
aggregation_method=aggregation_method,
file_path=self.data_path
)
log_message(f"全局训练模式: 所有产品, 聚合方法 {aggregation_method}, 数据量: {len(product_data)}")
# 将product_id设置为一个有意义的标识符
product_id = 'all_products'
else:
product_data = aggregate_multi_store_data(
product_id=product_id,
aggregation_method=aggregation_method,
file_path=self.data_path
)
log_message(f"全局训练模式: 产品 {product_id}, 聚合方法 {aggregation_method}, 数据量: {len(product_data)}")
except Exception as e:
log_message(f"聚合全局数据失败: {e}", 'error')
return None
else:
log_message(f"不支持的训练模式: {training_mode}", 'error')
# --- 数据准备 ---
try:
if training_mode == 'store':
product_data = get_store_product_sales_data(store_id, product_id, self.data_path)
log_message(f"按店铺-产品训练: 店铺 {store_id}, 产品 {product_id}, 数据量: {len(product_data)}")
elif training_mode == 'global':
product_data = aggregate_multi_store_data(product_id, aggregation_method, self.data_path)
log_message(f"全局训练模式: 产品 {product_id}, 聚合方法 {aggregation_method}, 数据<E695B0><E68DAE><EFBFBD>: {len(product_data)}")
else: # 'product'
product_data = self.data[self.data['product_id'] == product_id].copy()
log_message(f"按产品训练模式: 产品 {product_id}, 数据量: {len(product_data)}")
except Exception as e:
log_message(f"数据准备失败: {e}", 'error')
return None
# 根据训练模式构建模型标识符
if training_mode == 'store':
model_identifier = f"{store_id}_{product_id}"
elif training_mode == 'global':
model_identifier = f"global_{product_id}_{aggregation_method}"
else:
model_identifier = product_id
if product_data.empty:
log_message(f"找不到产品 {product_id} 的数据", 'error')
return None
# --- 训练器选择与参数准备 ---
trainers = {
'transformer': train_product_model_with_transformer,
'mlstm': train_product_model_with_mlstm,
'tcn': train_product_model_with_tcn,
'kan': train_product_model_with_kan,
'optimized_kan': train_product_model_with_kan,
}
# 调用相应的训练函数
if model_type not in trainers:
log_message(f"不支持的模型类型: {model_type}", 'error')
return None
trainer_func = trainers[model_type]
# 统一所有训练器的参数
trainer_args = {
"product_id": product_id,
"product_df": product_data,
"store_id": store_id,
"training_mode": training_mode,
"aggregation_method": aggregation_method,
"epochs": epochs,
"socketio": socketio,
"task_id": task_id,
"progress_callback": progress_callback,
"patience": patience,
"learning_rate": learning_rate
}
# 为 KAN 模型添加特殊参数
if 'kan' in model_type:
trainer_args['use_optimized'] = (model_type == 'optimized_kan')
# --- 调用训练器 ---
try:
log_message(f"🤖 开始调用 {model_type} 训练器")
if model_type == 'transformer':
model_result, metrics, actual_version = train_product_model_with_transformer(
product_id=product_id,
product_df=product_data,
store_id=store_id,
training_mode=training_mode,
aggregation_method=aggregation_method,
epochs=epochs,
model_dir=self.model_dir,
version=version,
socketio=socketio,
task_id=task_id,
continue_training=continue_training
)
log_message(f"{model_type} 训练器返回: metrics={type(metrics)}, version={actual_version}", 'success')
elif model_type == 'mlstm':
_, metrics, _, _ = train_product_model_with_mlstm(
product_id=product_id,
product_df=product_data,
store_id=store_id,
training_mode=training_mode,
aggregation_method=aggregation_method,
epochs=epochs,
model_dir=self.model_dir,
socketio=socketio,
task_id=task_id,
progress_callback=progress_callback
)
elif model_type == 'kan':
_, metrics = train_product_model_with_kan(
product_id=product_id,
product_df=product_data,
store_id=store_id,
training_mode=training_mode,
aggregation_method=aggregation_method,
epochs=epochs,
use_optimized=use_optimized,
model_dir=self.model_dir
)
elif model_type == 'optimized_kan':
_, metrics = train_product_model_with_kan(
product_id=product_id,
product_df=product_data,
store_id=store_id,
training_mode=training_mode,
aggregation_method=aggregation_method,
epochs=epochs,
use_optimized=True,
model_dir=self.model_dir
)
elif model_type == 'tcn':
_, metrics, _, _ = train_product_model_with_tcn(
product_id=product_id,
product_df=product_data,
store_id=store_id,
training_mode=training_mode,
aggregation_method=aggregation_method,
epochs=epochs,
model_dir=self.model_dir,
socketio=socketio,
task_id=task_id
)
else:
log_message(f"不支持的模型类型: {model_type}", 'error')
return None
# 检查和打印返回的metrics
log_message(f"📊 训练完成检查返回的metrics: {metrics}")
model, metrics, version, model_version_path = trainer_func(**trainer_args)
log_message(f"{model_type} 训练器成功返回", 'success')
# 在返回的metrics中添加训练信息
if metrics:
log_message(f"✅ metrics不为空添加训练信息")
metrics.update({
'model_type': model_type,
'version': version,
'model_path': model_version_path,
'training_mode': training_mode,
'store_id': store_id,
'product_id': product_id,
'model_identifier': model_identifier,
'aggregation_method': aggregation_method if training_mode == 'global' else None
})
log_message(f"📈 最终返回的metrics: {metrics}", 'success')
return metrics
else:
log_message(f"⚠️ metrics为空或None", 'warning')
return metrics
log_message("⚠️ 训练器返回的metrics为空", 'warning')
return None
except Exception as e:
log_message(f"模型训练失败: {e}", 'error')
import traceback
log_message(f"模型训练过程中发生严重错误: {e}\n{traceback.format_exc()}", 'error')
return None
def predict(self, product_id, model_type, future_days=7, start_date=None, analyze_result=False, version=None,
store_id=None, training_mode='product', aggregation_method='sum'):
def predict(self, model_version_path, future_days=7, start_date=None, analyze_result=False):
"""
使用已训练的模型进行预测 - 支持多店铺预测
参数:
product_id: 产品ID
model_type: 模型类型
future_days: 预测未来天数
start_date: 预测起始日期
analyze_result: 是否分析预测结果
version: 模型版本如果为None则使用最新版本
store_id: 店铺ID仅当training_mode为'store'时使用
training_mode: 训练模式 ('product', 'store', 'global')
aggregation_method: 聚合方法 ('sum', 'mean', 'median') - 仅用于全局预测
返回:
预测结果和分析如果analyze_result为True
使用已训练的模型进行预测 - 直接使用模型版本路径
"""
# 根据训练模式构建模型标识符
if training_mode == 'store' and store_id:
model_identifier = f"{store_id}_{product_id}"
elif training_mode == 'global':
model_identifier = f"global_{product_id}_{aggregation_method}"
else:
model_identifier = product_id
if not os.path.exists(model_version_path):
raise FileNotFoundError(f"指定的模型路径不存在: {model_version_path}")
return load_model_and_predict(
model_identifier,
model_type,
future_days=future_days,
start_date=start_date,
analyze_result=analyze_result,
version=version
model_version_path=model_version_path,
future_days=future_days,
start_date=start_date,
analyze_result=analyze_result
)
def train_optimized_kan_model(self, product_id, epochs=100, batch_size=32,
learning_rate=0.001, sequence_length=30, forecast_horizon=7,
hidden_size=64, num_layers=2, dropout=0.1):
def list_models(self, **kwargs):
"""
训练优化版KAN模型便捷方法
参数与train_model相同但固定model_type为'kan'且use_optimized为True
列出所有可用的模型版本
直接调用 ModelManager list_models 方法
支持的过滤参数: model_type, training_mode, scope, version
"""
return self.train_model(
product_id=product_id,
model_type='kan',
epochs=epochs,
batch_size=batch_size,
learning_rate=learning_rate,
sequence_length=sequence_length,
forecast_horizon=forecast_horizon,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
use_optimized=True
)
def compare_kan_models(self, product_id, epochs=100, batch_size=32,
learning_rate=0.001, sequence_length=30, forecast_horizon=7,
hidden_size=64, num_layers=2, dropout=0.1):
return model_manager.list_models(**kwargs)
def delete_model(self, model_version_path):
"""
比较原始KAN和优化版KAN模型性能
参数与train_model相同
返回:
比较结果字典
删除一个指定的模型版本目录
"""
print(f"开始比较产品 {product_id} 的原始KAN和优化版KAN模型性能...")
return model_manager.delete_model_version(model_version_path)
def compare_models(self, product_id, epochs=50, **kwargs):
"""
在相同数据上训练并比较多个模型的性能
"""
results = {}
model_types_to_compare = ['tcn', 'mlstm', 'transformer', 'kan', 'optimized_kan']
# 训练原始KAN模型
print("\n训练原始KAN模型...")
kan_metrics = self.train_model(
product_id=product_id,
model_type='kan',
epochs=epochs,
batch_size=batch_size,
learning_rate=learning_rate,
sequence_length=sequence_length,
forecast_horizon=forecast_horizon,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
use_optimized=False
)
# 训练优化版KAN模型
print("\n训练优化版KAN模型...")
optimized_kan_metrics = self.train_model(
product_id=product_id,
model_type='kan',
epochs=epochs,
batch_size=batch_size,
learning_rate=learning_rate,
sequence_length=sequence_length,
forecast_horizon=forecast_horizon,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
use_optimized=True
)
# 比较结果
comparison = {
'kan': kan_metrics,
'optimized_kan': optimized_kan_metrics
}
for model_type in model_types_to_compare:
print(f"\n{'='*20} 训练模型: {model_type.upper()} {'='*20}")
try:
metrics = self.train_model(
product_id=product_id,
model_type=model_type,
epochs=epochs,
**kwargs
)
results[model_type] = metrics if metrics else {}
except Exception as e:
print(f"训练 {model_type} 模型失败: {e}")
results[model_type] = {'error': str(e)}
# 打印比较结果
print("\n模型性能比较:")
print(f"{'指标':<10} {'原始KAN':<15} {'优化版KAN':<15} {'改进百分比':<15}")
print("-" * 55)
print(f"\n{'='*25} 模型性能比较 {'='*25}")
for metric in ['mse', 'rmse', 'mae', 'mape']:
if metric in kan_metrics and metric in optimized_kan_metrics:
kan_value = kan_metrics[metric]
opt_value = optimized_kan_metrics[metric]
improvement = (kan_value - opt_value) / kan_value * 100 if kan_value != 0 else 0
print(f"{metric.upper():<10} {kan_value:<15.4f} {opt_value:<15.4f} {improvement:<15.2f}%")
# 准备数据
df_data = []
for model, metrics in results.items():
if metrics and 'rmse' in metrics:
df_data.append({
'Model': model.upper(),
'RMSE': metrics.get('rmse'),
'R²': metrics.get('r2'),
'MAPE (%)': metrics.get('mape'),
'Time (s)': metrics.get('training_time')
})
# R²值越高越好所以计算改进的方式不同
if 'r2' in kan_metrics and 'r2' in optimized_kan_metrics:
kan_r2 = kan_metrics['r2']
opt_r2 = optimized_kan_metrics['r2']
improvement = (opt_r2 - kan_r2) / (1 - kan_r2) * 100 if kan_r2 != 1 else 0
print(f"{'':<10} {kan_r2:<15.4f} {opt_r2:<15.4f} {improvement:<15.2f}%")
if not df_data:
print("没有可供比较的模型结果。")
return results
comparison_df = pd.DataFrame(df_data).set_index('Model')
print(comparison_df.to_string(float_format="%.4f"))
# 训练时间
if 'training_time' in kan_metrics and 'training_time' in optimized_kan_metrics:
kan_time = kan_metrics['training_time']
opt_time = optimized_kan_metrics['training_time']
time_diff = (opt_time - kan_time) / kan_time * 100 if kan_time != 0 else 0
print(f"{'时间(秒)':<10} {kan_time:<15.2f} {opt_time:<15.2f} {time_diff:<15.2f}%")
return comparison
def list_available_models(self, product_id=None, store_id=None, training_mode=None):
"""
列出可用的已训练模型 - 支持多店铺模型
参数:
product_id: 产品ID如果为None则列出所有模型
store_id: 店铺ID用于筛选店铺专属模型
training_mode: 训练模式筛选 ('product', 'store', 'global')
返回:
可用模型列表
"""
if not os.path.exists(self.model_dir):
print(f"模型目录 {self.model_dir} 不存在")
return []
model_files = os.listdir(self.model_dir)
models = []
for file in model_files:
if file.endswith('.pth'):
try:
# 解析模型文件名
model_info = self._parse_model_filename(file)
if model_info:
# 应用过滤条件
if product_id and model_info.get('product_id') != product_id:
continue
if store_id and model_info.get('store_id') != store_id:
continue
if training_mode and model_info.get('training_mode') != training_mode:
continue
model_info['file_name'] = file
model_info['file_path'] = os.path.join(self.model_dir, file)
models.append(model_info)
except Exception as e:
print(f"解析模型文件名失败: {file}, 错误: {e}")
continue
return models
def _parse_model_filename(self, filename):
"""
解析模型文件名提取模型信息
参数:
filename: 模型文件名
返回:
dict: 模型信息字典
"""
# 移除文件扩展名
name = filename.replace('.pth', '')
# 解析新的多店铺模型命名格式
if '_model_product_' in name:
parts = name.split('_model_product_')
model_type = parts[0]
product_part = parts[1]
# 检查是否是店铺模型 (格式: model_type_model_product_store_id_product_id)
if len(product_part.split('_')) > 1:
store_id = product_part.split('_')[0]
product_id = '_'.join(product_part.split('_')[1:])
training_mode = 'store'
# 检查是否是全局模型 (格式: model_type_model_product_global_product_id_method)
elif product_part.startswith('global_'):
parts = product_part.split('_')
if len(parts) >= 3:
product_id = '_'.join(parts[1:-1])
aggregation_method = parts[-1]
store_id = None
training_mode = 'global'
else:
product_id = product_part
store_id = None
training_mode = 'product'
else:
# 常规产品模型
product_id = product_part
store_id = None
training_mode = 'product'
# 处理优化版KAN模型
if 'optimized' in model_type:
model_type = 'optimized_kan'
return {
'model_type': model_type,
'product_id': product_id,
'store_id': store_id,
'training_mode': training_mode,
'aggregation_method': aggregation_method if training_mode == 'global' and 'aggregation_method' in locals() else None
}
# 处理旧格式的向后兼容性
elif "kan_optimized_model" in name:
model_type = "optimized_kan"
product_id = name.split('_product_')[1] if '_product_' in name else 'unknown'
return {
'model_type': model_type,
'product_id': product_id,
'store_id': None,
'training_mode': 'product',
'aggregation_method': None
}
return None
def delete_model(self, product_id, model_type):
"""
删除已训练的模型
参数:
product_id: 产品ID
model_type: 模型类型
返回:
是否成功删除
"""
model_suffix = '_optimized' if model_type == 'optimized_kan' else ''
model_name = f"{model_type}{model_suffix}_model_product_{product_id}.pth"
model_path = os.path.join(self.model_dir, model_name)
if os.path.exists(model_path):
os.remove(model_path)
print(f"已删除模型: {model_path}")
return True
else:
print(f"模型文件 {model_path} 不存在")
return False
return results

**Diff: `predictors/model_predictor.py` (`load_model_and_predict`)**

@@ -21,78 +21,37 @@ from models.optimized_kan_forecaster import OptimizedKANForecaster
from analysis.trend_analysis import analyze_prediction_result
from utils.visualization import plot_prediction_results
from utils.multi_store_data_utils import get_store_product_sales_data, aggregate_multi_store_data
from core.config import DEVICE, get_model_file_path
from core.config import DEVICE
def load_model_and_predict(product_id, model_type, store_id=None, future_days=7, start_date=None, analyze_result=False, version=None):
def load_model_and_predict(model_version_path: str, future_days=7, start_date=None, analyze_result=False):
"""
加载已训练的模型并进行预测
从指定的版本目录加载模型并进行预测
参数:
product_id: 产品ID
model_type: 模型类型 ('transformer', 'mlstm', 'kan', 'tcn', 'optimized_kan')
store_id: 店铺ID为None时使用全局模型
future_days: 预测未来天数
start_date: 预测起始日期如果为None则使用最后一个已知日期
analyze_result: 是否分析预测结果
version: 模型版本如果为None则使用最新版本
model_version_path: 模型版本目录的绝对路径
future_days: 预测未来天数
start_date: 预测起始日期如果为None则使用最后一个已知日期
analyze_result: 是否分析预测结果
返回:
预测结果和分析如果analyze_result为True
"""
try:
# 确定模型文件路径(支持多店铺)
model_path = None
# 从路径中解析元数据
metadata_path = os.path.join(model_version_path, 'metadata.json')
if not os.path.exists(metadata_path):
raise FileNotFoundError(f"在路径 {model_version_path} 中未找到 metadata.json")
if version:
# 使用版本管理系统获取正确的文件路径
model_path = get_model_file_path(product_id, model_type, version)
else:
# 根据store_id确定搜索目录
if store_id:
# 查找特定店铺的模型
possible_dirs = [
os.path.join('saved_models', model_type, store_id),
os.path.join('models', model_type, store_id)
]
else:
# 查找全局模型
possible_dirs = [
os.path.join('saved_models', model_type, 'global'),
os.path.join('models', model_type, 'global'),
os.path.join('saved_models', model_type), # 后向兼容
'saved_models' # 最基本的目录
]
# 文件名模式
model_suffix = '_optimized' if model_type == 'optimized_kan' else ''
file_model_type = 'kan' if model_type == 'optimized_kan' else model_type
possible_names = [
f"{product_id}_{model_type}_v1_model.pt", # 新多店铺格式
f"{product_id}_{model_type}_v1_global_model.pt", # 全局模型格式
f"{product_id}_{model_type}_v1.pth", # 旧版本格式
f"{file_model_type}{model_suffix}_model_product_{product_id}.pth", # 原始格式
f"{model_type}_model_product_{product_id}.pth" # 简化格式
]
# 搜索模型文件
for dir_path in possible_dirs:
if not os.path.exists(dir_path):
continue
for name in possible_names:
test_path = os.path.join(dir_path, name)
if os.path.exists(test_path):
model_path = test_path
break
if model_path:
break
if not model_path:
scope_msg = f"店铺 {store_id}" if store_id else "全局"
print(f"找不到产品 {product_id}{model_type} 模型文件 ({scope_msg})")
print(f"搜索目录: {possible_dirs}")
return None
with open(metadata_path, 'r', encoding='utf-8') as f:
metadata = json.load(f)
product_id = metadata.get('product_id')
model_type = metadata.get('model_type')
store_id = metadata.get('store_id')
training_mode = metadata.get('training_mode')
aggregation_method = metadata.get('aggregation_method')
model_path = os.path.join(model_version_path, 'model.pth')
print(f"尝试加载模型文件: {model_path}")
if not os.path.exists(model_path):

**Diff: `trainers/kan_trainer.py`**

@@ -1,5 +1,5 @@
"""
药店销售预测系统 - KAN模型训练函数
药店销售预测系统 - KAN模型训练函数 (已重构)
"""
import os
@@ -13,299 +13,264 @@ from torch.utils.data import DataLoader
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from tqdm import tqdm
from datetime import datetime
from models.kan_model import KANForecaster
from models.optimized_kan_forecaster import OptimizedKANForecaster
from utils.data_utils import create_dataset, PharmacyDataset
from utils.visualization import plot_loss_curve
from analysis.metrics import evaluate_model
from core.config import DEVICE, DEFAULT_MODEL_DIR, LOOK_BACK, FORECAST_HORIZON
from core.config import DEVICE, LOOK_BACK, FORECAST_HORIZON
from utils.model_manager import model_manager
def train_product_model_with_kan(product_id, product_df=None, store_id=None, training_mode='product', aggregation_method='sum', epochs=50, use_optimized=False, model_dir=DEFAULT_MODEL_DIR):
def train_product_model_with_kan(
product_id,
product_df=None,
store_id=None,
training_mode='product',
aggregation_method='sum',
epochs=50,
use_optimized=False,
socketio=None,
task_id=None,
progress_callback=None,
patience=10,
learning_rate=0.001
):
"""
使用KAN模型训练产品销售预测模型
参数:
product_id: 产品ID
epochs: 训练轮次
use_optimized: 是否使用优化版KAN
model_dir: 模型保存目录默认使用配置中的DEFAULT_MODEL_DIR
返回:
model: 训练好的模型
metrics: 模型评估指标
使用KAN模型训练产品销售预测模型 (已适配新的ModelManager)
"""
# 如果没有传入product_df则根据训练模式加载数据
if product_df is None:
from utils.multi_store_data_utils import load_multi_store_data, get_store_product_sales_data, aggregate_multi_store_data
def emit_progress(message, progress=None, metrics=None):
"""发送训练进度到前端"""
progress_data = {
'task_id': task_id,
'message': f"[KAN] {message}",
'timestamp': time.time()
}
if progress is not None:
progress_data['progress'] = progress
if metrics is not None:
progress_data['metrics'] = metrics
if progress_callback:
try:
progress_callback(progress_data)
except Exception as e:
print(f"[KAN] 进度回调失败: {e}")
if socketio and task_id:
try:
socketio.emit('training_progress', progress_data, namespace='/training')
except Exception as e:
print(f"[KAN] WebSocket发送失败: {e}")
print(f"[KAN] {message}", flush=True)
emit_progress("开始KAN模型训练...")
# 1. 确定模型标识符和版本
model_type = 'optimized_kan' if use_optimized else 'kan'
if training_mode == 'store':
scope = f"{store_id}_{product_id}"
elif training_mode == 'global':
scope = f"{product_id}" if product_id else "all"
else: # 'product' mode
scope = f"{product_id}_all"
model_identifier = model_manager.get_model_identifier(model_type, training_mode, scope, aggregation_method)
version = model_manager.get_next_version_number(model_identifier)
emit_progress(f"开始训练 {model_type} 模型 v{version}")
# 2. 获取模型版本路径
model_version_path = model_manager.get_model_version_path(model_type, training_mode, scope, version, aggregation_method)
emit_progress(f"模型将保存到: {model_version_path}")
# 3. 数据加载和预处理
if product_df is None:
# 此处保留了原有的数据加载逻辑作为后备
from utils.multi_store_data_utils import load_multi_store_data, get_store_product_sales_data, aggregate_multi_store_data
try:
if training_mode == 'store' and store_id:
# 加载特定店铺的数据
product_df = get_store_product_sales_data(
store_id,
product_id,
'pharmacy_sales_multi_store.csv'
)
training_scope = f"店铺 {store_id}"
product_df = get_store_product_sales_data(store_id, product_id, 'pharmacy_sales_multi_store.csv')
elif training_mode == 'global':
# 聚合所有店铺的数据
product_df = aggregate_multi_store_data(
product_id,
aggregation_method=aggregation_method,
file_path='pharmacy_sales_multi_store.csv'
)
training_scope = f"全局聚合({aggregation_method})"
product_df = aggregate_multi_store_data(product_id, aggregation_method=aggregation_method, file_path='pharmacy_sales_multi_store.csv')
else:
# 默认:加载所有店铺的产品数据
product_df = load_multi_store_data('pharmacy_sales_multi_store.csv', product_id=product_id)
training_scope = "所有店铺"
except Exception as e:
print(f"多店铺数据加载失败: {e}")
# 后备方案:尝试原始数据
emit_progress(f"多店铺数据加载失败: {e}, 尝试后备方案...")
df = pd.read_excel('pharmacy_sales.xlsx')
product_df = df[df['product_id'] == product_id].sort_values('date')
training_scope = "原始数据"
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
training_scope = f"全局聚合({aggregation_method})"
else:
# 如果传入了product_df直接使用
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
training_scope = f"全局聚合({aggregation_method})"
else:
training_scope = "所有店铺"
if product_df.empty:
raise ValueError(f"产品 {product_id} 没有可用的销售数据")
# 数据量检查
training_scope = "所有店铺"
min_required_samples = LOOK_BACK + FORECAST_HORIZON
if len(product_df) < min_required_samples:
error_msg = (
f"❌ 训练数据不足错误\n"
f"当前配置需要: {min_required_samples} 天数据 (LOOK_BACK={LOOK_BACK} + FORECAST_HORIZON={FORECAST_HORIZON})\n"
f"实际数据量: {len(product_df)}\n"
f"产品ID: {product_id}, 训练模式: {training_mode}\n"
f"建议解决方案:\n"
f"1. 生成更多数据: uv run generate_multi_store_data.py\n"
f"2. 调整配置参数: 减小 LOOK_BACK 或 FORECAST_HORIZON\n"
f"3. 使用全局训练模式聚合更多数据"
)
print(error_msg)
error_msg = f"数据不足: 需要 {min_required_samples} 天, 实际 {len(product_df)} 天。"
emit_progress(f"训练失败:{error_msg}")
raise ValueError(error_msg)
product_df = product_df.sort_values('date')
product_name = product_df['product_name'].iloc[0]
model_type = "优化版KAN" if use_optimized else "KAN"
print(f"使用{model_type}模型训练产品 '{product_name}' (ID: {product_id}) 的销售预测模型")
print(f"训练范围: {training_scope}")
print(f"使用设备: {DEVICE}")
print(f"模型将保存到目录: {model_dir}")
# 创建特征和目标变量
emit_progress(f"训练产品: '{product_name}' (ID: {product_id}) - {training_scope}")
emit_progress(f"使用设备: {DEVICE}, 数据量: {len(product_df)}")
features = ['sales', 'weekday', 'month', 'is_holiday', 'is_weekend', 'is_promotion', 'temperature']
# 预处理数据
X = product_df[features].values
y = product_df[['sales']].values # 保持为二维数组
y = product_df[['sales']].values
# 归一化数据
scaler_X = MinMaxScaler(feature_range=(0, 1))
scaler_y = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler_X.fit_transform(X)
y_scaled = scaler_y.fit_transform(y)
# 划分训练集和测试集80% 训练20% 测试)
train_size = int(len(X_scaled) * 0.8)
X_train, X_test = X_scaled[:train_size], X_scaled[train_size:]
y_train, y_test = y_scaled[:train_size], y_scaled[train_size:]
# 创建时间序列数据
trainX, trainY = create_dataset(X_train, y_train, LOOK_BACK, FORECAST_HORIZON)
testX, testY = create_dataset(X_test, y_test, LOOK_BACK, FORECAST_HORIZON)
# 转换为PyTorch的Tensor
trainX_tensor = torch.Tensor(trainX)
trainY_tensor = torch.Tensor(trainY)
testX_tensor = torch.Tensor(testX)
testY_tensor = torch.Tensor(testY)
train_loader = DataLoader(PharmacyDataset(torch.Tensor(trainX), torch.Tensor(trainY)), batch_size=32, shuffle=True)
test_loader = DataLoader(PharmacyDataset(torch.Tensor(testX), torch.Tensor(testY)), batch_size=32, shuffle=False)
# 创建数据加载器
train_dataset = PharmacyDataset(trainX_tensor, trainY_tensor)
test_dataset = PharmacyDataset(testX_tensor, testY_tensor)
batch_size = 32
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
# 初始化KAN模型
# 4. 模型初始化
input_dim = X_train.shape[1]
output_dim = FORECAST_HORIZON
hidden_size = 64
if use_optimized:
model = OptimizedKANForecaster(
input_features=input_dim,
hidden_sizes=[hidden_size, hidden_size*2, hidden_size],
output_sequence_length=output_dim
)
model = OptimizedKANForecaster(input_features=input_dim, hidden_sizes=[hidden_size, hidden_size*2, hidden_size], output_sequence_length=output_dim)
else:
model = KANForecaster(
input_features=input_dim,
hidden_sizes=[hidden_size, hidden_size*2, hidden_size],
output_sequence_length=output_dim
)
model = KANForecaster(input_features=input_dim, hidden_sizes=[hidden_size, hidden_size*2, hidden_size], output_sequence_length=output_dim)
# 将模型移动到设备上
model = model.to(DEVICE)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
# 训练模型
train_losses = []
test_losses = []
emit_progress("数据预处理完成,开始模型训练...", progress=10)
# 5. 训练循环
train_losses, test_losses = [], []
start_time = time.time()
best_loss = float('inf')
epochs_no_improve = 0
for epoch in range(epochs):
model.train()
epoch_loss = 0
for X_batch, y_batch in tqdm(train_loader, desc=f"Epoch {epoch+1}/{epochs}", leave=False):
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
if y_batch.dim() == 2: y_batch = y_batch.unsqueeze(-1)
# 确保目标张量有正确的形状 (batch_size, forecast_horizon, 1)
if y_batch.dim() == 2:
y_batch = y_batch.unsqueeze(-1)
# 前向传播
outputs = model(X_batch)
# 确保输出形状与目标匹配
if outputs.dim() == 2:
outputs = outputs.unsqueeze(-1)
if outputs.dim() == 2: outputs = outputs.unsqueeze(-1)
loss = criterion(outputs, y_batch)
# 如果是KAN模型加入正则化损失
if hasattr(model, 'regularization_loss'):
loss = loss + model.regularization_loss() * 0.01
# 反向传播和优化
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.item()
# 计算训练损失
train_loss = epoch_loss / len(train_loader)
train_losses.append(train_loss)
# 在测试集上评估
model.eval()
test_loss = 0
with torch.no_grad():
for X_batch, y_batch in test_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
# 确保目标张量有正确的形状
if y_batch.dim() == 2:
y_batch = y_batch.unsqueeze(-1)
if y_batch.dim() == 2: y_batch = y_batch.unsqueeze(-1)
outputs = model(X_batch)
# 确保输出形状与目标匹配
if outputs.dim() == 2:
outputs = outputs.unsqueeze(-1)
if outputs.dim() == 2: outputs = outputs.unsqueeze(-1)
loss = criterion(outputs, y_batch)
test_loss += loss.item()
test_loss = test_loss / len(test_loader)
test_loss /= len(test_loader)
test_losses.append(test_loss)
if (epoch + 1) % 10 == 0:
print(f"Epoch {epoch+1}/{epochs}, Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}")
# 计算训练时间
progress_percentage = 10 + ((epoch + 1) / epochs) * 85
emit_progress(f"Epoch {epoch+1}/{epochs} - Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}", progress=progress_percentage)
if test_loss < best_loss:
best_loss = test_loss
epochs_no_improve = 0
checkpoint_data = {
'epoch': epoch + 1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'scaler_X': scaler_X,
'scaler_y': scaler_y,
}
model_manager.save_model_artifact(checkpoint_data, "checkpoint_best.pth", model_version_path)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
else:
epochs_no_improve += 1
if epochs_no_improve >= patience:
emit_progress(f"连续 {patience} 个epoch测试损失未改善提前停止训练。")
break
training_time = time.time() - start_time
# 绘制损失曲线并保存到模型目录
model_name = 'optimized_kan' if use_optimized else 'kan'
loss_curve_path = plot_loss_curve(
train_losses,
test_losses,
product_name,
model_type,
model_dir=model_dir
)
print(f"损失曲线已保存到: {loss_curve_path}")
# 评估模型
# 6. 保存产物和评估
loss_fig = plt.figure(figsize=(10, 6))
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.title(f'{model_type} 损失曲线 - {product_name} (v{version}) - {training_scope}')
plt.xlabel('Epoch'); plt.ylabel('Loss'); plt.legend(); plt.grid(True)
model_manager.save_model_artifact(loss_fig, "loss_curve.png", model_version_path)
plt.close(loss_fig)
emit_progress(f"损失曲线已保存到: {os.path.join(model_version_path, 'loss_curve.png')}")
model.eval()
with torch.no_grad():
test_pred = model(testX_tensor.to(DEVICE)).cpu().numpy()
# 处理输出形状
if len(test_pred.shape) == 3:
test_pred = test_pred.squeeze(-1)
# 反归一化预测结果和真实值
test_pred_inv = scaler_y.inverse_transform(test_pred.reshape(-1, 1)).flatten()
test_true_inv = scaler_y.inverse_transform(testY.reshape(-1, 1)).flatten()
testX_tensor = torch.Tensor(testX).to(DEVICE)
test_pred = model(testX_tensor).cpu().numpy()
if len(test_pred.shape) == 3: test_pred = test_pred.squeeze(-1)
# 计算评估指标
metrics = evaluate_model(test_true_inv, test_pred_inv)
test_pred_inv = scaler_y.inverse_transform(test_pred.reshape(-1, FORECAST_HORIZON))
test_true_inv = scaler_y.inverse_transform(testY.reshape(-1, FORECAST_HORIZON))
metrics = evaluate_model(test_true_inv.flatten(), test_pred_inv.flatten())
metrics['training_time'] = training_time
# 打印评估指标
print("\n模型评估指标:")
print(f"MSE: {metrics['mse']:.4f}")
print(f"RMSE: {metrics['rmse']:.4f}")
print(f"MAE: {metrics['mae']:.4f}")
print(f"R²: {metrics['r2']:.4f}")
print(f"MAPE: {metrics['mape']:.2f}%")
print(f"训练时间: {training_time:.2f}")
# 使用统一模型管理器保存模型
from utils.model_manager import model_manager
model_type_name = 'optimized_kan' if use_optimized else 'kan'
model_data = {
emit_progress(f"模型评估完成 - RMSE: {metrics['rmse']:.4f}, R²: {metrics['r2']:.4f}")
# 7. 保存最终模型和元数据
final_model_data = {
'model_state_dict': model.state_dict(),
'scaler_X': scaler_X,
'scaler_y': scaler_y,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'hidden_sizes': [hidden_size, hidden_size*2, hidden_size],
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': model_type_name,
'use_optimized': use_optimized
},
'metrics': metrics,
'loss_history': {
'train': train_losses,
'test': test_losses,
'epochs': list(range(1, epochs + 1))
},
'loss_curve_path': loss_curve_path
}
model_manager.save_model_artifact(final_model_data, "model.pth", model_version_path)
metadata = {
'product_id': product_id, 'product_name': product_name, 'model_type': model_type,
'version': f'v{version}', 'training_mode': training_mode, 'scope': scope,
'aggregation_method': aggregation_method, 'training_scope_description': training_scope,
'timestamp': datetime.now().isoformat(), 'metrics': metrics,
'config': {
'input_dim': input_dim, 'output_dim': output_dim,
'hidden_sizes': [hidden_size, hidden_size*2, hidden_size],
'sequence_length': LOOK_BACK, 'forecast_horizon': FORECAST_HORIZON,
'use_optimized': use_optimized
}
}
model_manager.save_model_artifact(metadata, "metadata.json", model_version_path)
# 8. 更新版本文件
model_manager.update_version(model_identifier, version)
model_path = model_manager.save_model(
model_data=model_data,
product_id=product_id,
model_type=model_type_name,
version='v1', # KAN训练器默认使用v1
store_id=store_id,
training_mode=training_mode,
aggregation_method=aggregation_method,
product_name=product_name
)
emit_progress(f"{model_type}模型 v{version} 训练完成!", progress=100, metrics=metrics)
return model, metrics
return model, metrics, version, model_version_path

**Diff: `trainers/mlstm_trainer.py`**

@@ -12,97 +12,16 @@ import torch.optim as optim
from torch.utils.data import DataLoader
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from tqdm import tqdm
from datetime import datetime
from models.mlstm_model import MLSTMTransformer as MatrixLSTM
from utils.data_utils import create_dataset, PharmacyDataset
from utils.multi_store_data_utils import get_store_product_sales_data, aggregate_multi_store_data
from utils.visualization import plot_loss_curve
from analysis.metrics import evaluate_model
from core.config import (
DEVICE, DEFAULT_MODEL_DIR, LOOK_BACK, FORECAST_HORIZON,
get_next_model_version, get_model_file_path, get_latest_model_version
DEVICE, LOOK_BACK, FORECAST_HORIZON
)
from utils.training_progress import progress_manager
def save_checkpoint(checkpoint_data: dict, epoch_or_label, product_id: str,
model_type: str, model_dir: str, store_id=None,
training_mode: str = 'product', aggregation_method=None):
"""
保存训练检查点
Args:
checkpoint_data: 检查点数据
epoch_or_label: epoch编号或标签'best'
product_id: 产品ID
model_type: 模型类型
model_dir: 模型保存目录
store_id: 店铺ID
training_mode: 训练模式
aggregation_method: 聚合方法
"""
# 创建检查点目录
checkpoint_dir = os.path.join(model_dir, 'checkpoints')
os.makedirs(checkpoint_dir, exist_ok=True)
# 生成检查点文件名
if training_mode == 'store' and store_id:
filename = f"{model_type}_store_{store_id}_{product_id}_epoch_{epoch_or_label}.pth"
elif training_mode == 'global' and aggregation_method:
filename = f"{model_type}_global_{product_id}_{aggregation_method}_epoch_{epoch_or_label}.pth"
else:
filename = f"{model_type}_product_{product_id}_epoch_{epoch_or_label}.pth"
checkpoint_path = os.path.join(checkpoint_dir, filename)
# 保存检查点
torch.save(checkpoint_data, checkpoint_path)
print(f"[mLSTM] 检查点已保存: {checkpoint_path}", flush=True)
return checkpoint_path
def load_checkpoint(product_id: str, model_type: str, epoch_or_label,
model_dir: str, store_id=None, training_mode: str = 'product',
aggregation_method=None):
"""
加载训练检查点
Args:
product_id: 产品ID
model_type: 模型类型
epoch_or_label: epoch编号或标签
model_dir: 模型保存目录
store_id: 店铺ID
training_mode: 训练模式
aggregation_method: 聚合方法
Returns:
checkpoint_data: 检查点数据如果未找到返回None
"""
checkpoint_dir = os.path.join(model_dir, 'checkpoints')
# 生成检查点文件名
if training_mode == 'store' and store_id:
filename = f"{model_type}_store_{store_id}_{product_id}_epoch_{epoch_or_label}.pth"
elif training_mode == 'global' and aggregation_method:
filename = f"{model_type}_global_{product_id}_{aggregation_method}_epoch_{epoch_or_label}.pth"
else:
filename = f"{model_type}_product_{product_id}_epoch_{epoch_or_label}.pth"
checkpoint_path = os.path.join(checkpoint_dir, filename)
if os.path.exists(checkpoint_path):
try:
checkpoint_data = torch.load(checkpoint_path, map_location=DEVICE)
print(f"[mLSTM] 检查点已加载: {checkpoint_path}", flush=True)
return checkpoint_data
except Exception as e:
print(f"[mLSTM] 加载检查点失败: {e}", flush=True)
return None
else:
print(f"[mLSTM] 检查点文件不存在: {checkpoint_path}", flush=True)
return None
from utils.model_manager import model_manager
def train_product_model_with_mlstm(
product_id,
@@ -111,8 +30,6 @@ def train_product_model_with_mlstm(
training_mode='product',
aggregation_method='sum',
epochs=50,
model_dir=DEFAULT_MODEL_DIR,
version=None,
socketio=None,
task_id=None,
continue_training=False,
@@ -123,8 +40,7 @@ def train_product_model_with_mlstm(
):
"""
使用mLSTM训练产品销售预测模型
参数:
参数:
product_id: 产品ID
store_id: 店铺ID为None时使用全局数据
training_mode: 训练模式 ('product', 'store', 'global')
@@ -138,7 +54,8 @@ def train_product_model_with_mlstm(
progress_callback: 进度回调函数用于多进程训练
"""
# 创建WebSocket进度反馈函数支持多进程
# 创建WebSocket进度反馈函数支持多进程 """
def emit_progress(message, progress=None, metrics=None):
"""发送训练进度到前端"""
progress_data = {
@@ -151,14 +68,12 @@ def train_product_model_with_mlstm(
if metrics is not None:
progress_data['metrics'] = metrics
# 在多进程环境中使用progress_callback
if progress_callback:
try:
progress_callback(progress_data)
except Exception as e:
print(f"[mLSTM] 进度回调失败: {e}")
# 在单进程环境中使用socketio
if socketio and task_id:
try:
socketio.emit('training_progress', progress_data, namespace='/training')
@@ -166,47 +81,30 @@ def train_product_model_with_mlstm(
print(f"[mLSTM] WebSocket发送失败: {e}")
print(f"[mLSTM] {message}", flush=True)
# 强制刷新输出缓冲区
import sys
sys.stdout.flush()
sys.stderr.flush()
emit_progress("开始mLSTM模型训练...")
# 确定版本号
if version is None:
if continue_training:
version = get_latest_model_version(product_id, 'mlstm')
if version is None:
version = get_next_model_version(product_id, 'mlstm')
else:
version = get_next_model_version(product_id, 'mlstm')
emit_progress(f"开始训练 mLSTM 模型版本 {version}")
# 初始化训练进度管理器(如果还未初始化)
if socketio and task_id:
print(f"[mLSTM] 任务 {task_id}: 开始mLSTM训练器", flush=True)
try:
# 初始化进度管理器
if not hasattr(progress_manager, 'training_id') or progress_manager.training_id != task_id:
progress_manager.start_training(
training_id=task_id,
product_id=product_id,
model_type='mlstm',
training_mode=training_mode,
total_epochs=epochs,
total_batches=0, # 将在后面设置
batch_size=32, # 默认值
total_samples=0 # 将在后面设置
)
print(f"[mLSTM] 任务 {task_id}: 进度管理器已初始化", flush=True)
else:
print(f"[mLSTM] 任务 {task_id}: 使用现有进度管理器", flush=True)
except Exception as e:
print(f"[mLSTM] 任务 {task_id}: 进度管理器初始化失败: {e}", flush=True)
# 1. 确定模型标识符和版本
model_type = 'mlstm'
if training_mode == 'store':
scope = f"{store_id}_{product_id}"
elif training_mode == 'global':
scope = f"{product_id}" if product_id else "all"
else:
scope = f"{product_id}_all"
model_identifier = model_manager.get_model_identifier(model_type, training_mode, scope, aggregation_method)
version = model_manager.get_next_version_number(model_identifier)
emit_progress(f"开始训练 mLSTM 模型 v{version}")
# 2. 获取模型版本路径
model_version_path = model_manager.get_model_version_path(model_type, training_mode, scope, version, aggregation_method)
emit_progress(f"模型将保存到: {model_version_path}")
# 数据现在由调用方传入,不再在此处加载
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
@@ -214,319 +112,160 @@
else:
training_scope = "所有店铺"
# 数据量检查
min_required_samples = LOOK_BACK + FORECAST_HORIZON
if len(product_df) < min_required_samples:
error_msg = (
f"❌ 训练数据不足错误\n"
f"当前配置需要: {min_required_samples} 天数据 (LOOK_BACK={LOOK_BACK} + FORECAST_HORIZON={FORECAST_HORIZON})\n"
f"实际数据量: {len(product_df)}\n"
f"产品ID: {product_id}, 训练模式: {training_mode}\n"
f"建议解决方案:\n"
f"1. 生成更多数据: uv run generate_multi_store_data.py\n"
f"2. 调整配置参数: 减小 LOOK_BACK 或 FORECAST_HORIZON\n"
f"3. 使用全局训练模式聚合更多数据"
)
error_msg = f"数据不足: 需要 {min_required_samples} 天, 实际 {len(product_df)} 天。"
print(error_msg)
emit_progress(f"训练失败:数据不足 ({len(product_df)}/{min_required_samples} 天)")
emit_progress(f"训练失败:{error_msg}")
raise ValueError(error_msg)
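# 例:若 LOOK_BACK=5、FORECAST_HORIZON=3,则 min_required_samples=8,
# 不足 8 天即抛出 ValueError(数值仅为示意,实际取值见 core.config)。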
product_name = product_df['product_name'].iloc[0]
print(f"[mLSTM] 使用mLSTM模型训练产品 '{product_name}' (ID: {product_id}) 的销售预测模型", flush=True)
print(f"[mLSTM] 训练范围: {training_scope}", flush=True)
print(f"[mLSTM] 版本: {version}", flush=True)
print(f"[mLSTM] 版本: v{version}", flush=True)
print(f"[mLSTM] 使用设备: {DEVICE}", flush=True)
print(f"[mLSTM] 模型将保存到目录: {model_dir}", flush=True)
print(f"[mLSTM] 数据量: {len(product_df)} 条记录", flush=True)
emit_progress(f"训练产品: {product_name} (ID: {product_id}) - {training_scope}")
emit_progress(f"训练产品: {product_name} (ID: {product_id}) - {training_scope}")
# 创建特征和目标变量
features = ['sales', 'weekday', 'month', 'is_holiday', 'is_weekend', 'is_promotion', 'temperature']
print(f"[mLSTM] 开始数据预处理,特征: {features}", flush=True)
# 预处理数据
X = product_df[features].values
y = product_df[['sales']].values # 保持为二维数组
y = product_df[['sales']].values
print(f"[mLSTM] 特征矩阵形状: {X.shape}, 目标矩阵形状: {y.shape}", flush=True)
emit_progress("数据预处理中...")
# 归一化数据
scaler_X = MinMaxScaler(feature_range=(0, 1))
scaler_y = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler_X.fit_transform(X)
y_scaled = scaler_y.fit_transform(y)
print(f"[mLSTM] 数据归一化完成", flush=True)
# 划分训练集和测试集(80% 训练,20% 测试)
print(f"[mLSTM] 数据归一化完成", flush=True)
train_size = int(len(X_scaled) * 0.8)
X_train, X_test = X_scaled[:train_size], X_scaled[train_size:]
y_train, y_test = y_scaled[:train_size], y_scaled[train_size:]
# 创建时间序列数据
trainX, trainY = create_dataset(X_train, y_train, LOOK_BACK, FORECAST_HORIZON)
testX, testY = create_dataset(X_test, y_test, LOOK_BACK, FORECAST_HORIZON)
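# 滑动窗口示意(假设 create_dataset 按常规滑窗方式构造样本):
# 训练集共 N 行时,最多生成 N - LOOK_BACK - FORECAST_HORIZON + 1 个样本,
# 每个样本输入形状为 (LOOK_BACK, n_features),目标形状为 (FORECAST_HORIZON,)。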
# 转换为PyTorch的Tensor
trainX_tensor = torch.Tensor(trainX)
trainY_tensor = torch.Tensor(trainY)
testX_tensor = torch.Tensor(testX)
testY_tensor = torch.Tensor(testY)
train_loader = DataLoader(PharmacyDataset(torch.Tensor(trainX), torch.Tensor(trainY)), batch_size=32, shuffle=True)
test_loader = DataLoader(PharmacyDataset(torch.Tensor(testX), torch.Tensor(testY)), batch_size=32, shuffle=False)
# 创建数据加载器
train_dataset = PharmacyDataset(trainX_tensor, trainY_tensor)
test_dataset = PharmacyDataset(testX_tensor, testY_tensor)
batch_size = 32
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
# 更新进度管理器的批次信息
total_batches = len(train_loader)
total_samples = len(train_dataset)
print(f"[mLSTM] 数据加载器创建完成 - 批次数: {total_batches}, 样本数: {total_samples}", flush=True)
print(f"[mLSTM] 数据加载器创建完成 - 批次数: {total_batches}, 样本数: {total_samples}", flush=True)
emit_progress(f"数据加载器准备完成 - 批次数: {total_batches}, 样本数: {total_samples}")
# 初始化mLSTM结合Transformer模型
input_dim = X_train.shape[1]
output_dim = FORECAST_HORIZON
hidden_size = 128
num_heads = 4
dropout_rate = 0.1
num_blocks = 3
embed_dim = 32
dense_dim = 32
print(f"[mLSTM] 初始化模型 - 输入维度: {input_dim}, 输出维度: {output_dim}", flush=True)
print(f"[mLSTM] 模型参数 - 隐藏层: {hidden_size}, 注意力头: {num_heads}", flush=True)
emit_progress(f"初始化mLSTM模型 - 输入维度: {input_dim}, 隐藏层: {hidden_size}")
hidden_size, num_heads, dropout_rate, num_blocks, embed_dim, dense_dim = 128, 4, 0.1, 3, 32, 32
model = MatrixLSTM(
num_features=input_dim,
hidden_size=hidden_size,
mlstm_layers=2,
embed_dim=embed_dim,
dense_dim=dense_dim,
num_heads=num_heads,
dropout_rate=dropout_rate,
num_blocks=num_blocks,
output_sequence_length=output_dim
)
print(f"[mLSTM] 模型创建完成", flush=True)
num_features=input_dim, hidden_size=hidden_size, mlstm_layers=2, embed_dim=embed_dim,
dense_dim=dense_dim, num_heads=num_heads, dropout_rate=dropout_rate,
num_blocks=num_blocks, output_sequence_length=output_dim
).to(DEVICE)
print(f"[mLSTM] 模型创建完成", flush=True)
emit_progress("mLSTM模型初始化完成")
# 如果是继续训练,加载现有模型
if continue_training and version != 'v1':
try:
existing_model_path = get_model_file_path(product_id, 'mlstm', version)
if os.path.exists(existing_model_path):
checkpoint = torch.load(existing_model_path, map_location=DEVICE)
model.load_state_dict(checkpoint['model_state_dict'])
print(f"加载现有模型: {existing_model_path}")
emit_progress(f"加载现有模型版本 {version} 进行继续训练")
except Exception as e:
print(f"无法加载现有模型,将重新开始训练: {e}")
emit_progress("无法加载现有模型,重新开始训练")
# 将模型移动到设备上
model = model.to(DEVICE)
if continue_training:
emit_progress("继续训练模式启动,但当前重构版本将从头开始。")
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=patience // 2, factor=0.5)
emit_progress("数据预处理完成,开始模型训练...", progress=10)
# 训练模型
train_losses = []
test_losses = []
train_losses, test_losses = [], []
start_time = time.time()
# 配置检查点保存
checkpoint_interval = max(1, epochs // 10) # 每10%进度保存一次最少每1个epoch
checkpoint_interval = max(1, epochs // 10)
best_loss = float('inf')
epochs_no_improve = 0
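# 例:epochs=50 时 checkpoint_interval = max(1, 50 // 10) = 5,
# 即每 5 个 epoch 保存一次周期性检查点。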
emit_progress(f"开始训练 - 总epoch: {epochs}, 检查点间隔: {checkpoint_interval}, 耐心值: {patience}")
emit_progress(f"开始训练 - 总epoch: {epochs}, 检查点间隔: {checkpoint_interval}, 耐心值: {patience}")
for epoch in range(epochs):
emit_progress(f"开始训练 Epoch {epoch+1}/{epochs}")
emit_progress(f"开始训练 Epoch {epoch+1}/{epochs}")
model.train()
epoch_loss = 0
for batch_idx, (X_batch, y_batch) in enumerate(train_loader):
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
# 确保目标张量有正确的形状
# 前向传播
optimizer.zero_grad()
outputs = model(X_batch)
loss = criterion(outputs, y_batch)
# 反向传播和优化
optimizer.zero_grad()
loss.backward()
if clip_norm:
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_norm)
optimizer.step()
epoch_loss += loss.item()
# 计算训练损失
train_loss = epoch_loss / len(train_loader)
train_losses.append(train_loss)
# 在测试集上评估
model.eval()
test_loss = 0
with torch.no_grad():
for batch_idx, (X_batch, y_batch) in enumerate(test_loader):
for X_batch, y_batch in test_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
outputs = model(X_batch)
loss = criterion(outputs, y_batch)
test_loss += loss.item()
test_loss = test_loss / len(test_loader)
test_loss /= len(test_loader)
test_losses.append(test_loss)
# 更新学习率
scheduler.step(test_loss)
# 计算总体训练进度
epoch_progress = ((epoch + 1) / epochs) * 90 + 10 # 10-100% 范围
# 发送训练进度
current_metrics = {
'train_loss': train_loss,
'test_loss': test_loss,
'epoch': epoch + 1,
'total_epochs': epochs,
'learning_rate': optimizer.param_groups[0]['lr']
}
emit_progress(f"Epoch {epoch+1}/{epochs} 完成 - Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}",
progress=epoch_progress, metrics=current_metrics)
progress=10 + ((epoch + 1) / epochs) * 85)
# 定期保存检查点
if (epoch + 1) % checkpoint_interval == 0 or epoch == epochs - 1:
checkpoint_data = {
'epoch': epoch + 1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss': train_loss,
'test_loss': test_loss,
'train_losses': train_losses,
'test_losses': test_losses,
'scaler_X': scaler_X,
'scaler_y': scaler_y,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'num_heads': num_heads,
'dropout': dropout_rate,
'num_blocks': num_blocks,
'embed_dim': embed_dim,
'dense_dim': dense_dim,
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': 'mlstm'
},
'training_info': {
'product_id': product_id,
'product_name': product_name,
'training_mode': training_mode,
'store_id': store_id,
'aggregation_method': aggregation_method,
'training_scope': training_scope,
'timestamp': time.time()
}
}
# 保存检查点
save_checkpoint(checkpoint_data, epoch + 1, product_id, 'mlstm',
model_dir, store_id, training_mode, aggregation_method)
# 如果是最佳模型,额外保存一份
if test_loss < best_loss:
best_loss = test_loss
save_checkpoint(checkpoint_data, 'best', product_id, 'mlstm',
model_dir, store_id, training_mode, aggregation_method)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
epochs_no_improve = 0
else:
epochs_no_improve += 1
# 3. 保存检查点
checkpoint_data = {
'epoch': epoch + 1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'scaler_X': scaler_X,
'scaler_y': scaler_y,
}
if (epoch + 1) % checkpoint_interval == 0:
model_manager.save_model_artifact(checkpoint_data, f"checkpoint_epoch_{epoch+1}.pth", model_version_path)
emit_progress(f"💾 保存训练检查点 epoch_{epoch+1}")
if (epoch + 1) % 10 == 0:
print(f"Epoch {epoch+1}/{epochs}, Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}", flush=True)
# 提前停止逻辑
if test_loss < best_loss:
best_loss = test_loss
model_manager.save_model_artifact(checkpoint_data, "checkpoint_best.pth", model_version_path)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
epochs_no_improve = 0
else:
epochs_no_improve += 1
if epochs_no_improve >= patience:
emit_progress(f"连续 {patience} 个epoch测试损失未改善提前停止训练。")
break
# 计算训练时间
training_time = time.time() - start_time
emit_progress("生成损失曲线...", progress=95)
# 确定模型保存目录(支持多店铺)
if store_id:
# 为特定店铺创建子目录
store_model_dir = os.path.join(model_dir, 'mlstm', store_id)
os.makedirs(store_model_dir, exist_ok=True)
loss_curve_filename = f"{product_id}_mlstm_{version}_loss_curve.png"
loss_curve_path = os.path.join(store_model_dir, loss_curve_filename)
else:
# 全局模型保存在global目录
global_model_dir = os.path.join(model_dir, 'mlstm', 'global')
os.makedirs(global_model_dir, exist_ok=True)
loss_curve_filename = f"{product_id}_mlstm_{version}_global_loss_curve.png"
loss_curve_path = os.path.join(global_model_dir, loss_curve_filename)
# 绘制损失曲线并保存到模型目录
plt.figure(figsize=(10, 6))
loss_fig = plt.figure(figsize=(10, 6))
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
title_suffix = f" - {training_scope}" if store_id else " - 全局模型"
plt.title(f'mLSTM 模型训练损失曲线 - {product_name} ({version}){title_suffix}')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)
plt.savefig(loss_curve_path, dpi=300, bbox_inches='tight')
plt.close()
plt.title(f'mLSTM 损失曲线 - {product_name} (v{version}) - {training_scope}')
plt.xlabel('Epoch'); plt.ylabel('Loss'); plt.legend(); plt.grid(True)
model_manager.save_model_artifact(loss_fig, "loss_curve.png", model_version_path)
plt.close(loss_fig)
print(f"损失曲线已保存到: {os.path.join(model_version_path, 'loss_curve.png')}")
print(f"损失曲线已保存到: {loss_curve_path}")
emit_progress("模型评估中...", progress=98)
# 评估模型
model.eval()
with torch.no_grad():
test_pred = model(testX_tensor.to(DEVICE)).cpu().numpy()
test_true = testY
# 反归一化预测结果和真实值
test_pred_inv = scaler_y.inverse_transform(test_pred)
test_true_inv = scaler_y.inverse_transform(test_true)
test_pred = model(torch.Tensor(testX).to(DEVICE)).cpu().numpy()
# 计算评估指标
metrics = evaluate_model(test_true_inv, test_pred_inv)
metrics = evaluate_model(scaler_y.inverse_transform(testY), scaler_y.inverse_transform(test_pred))
metrics['training_time'] = training_time
metrics['version'] = version
# 打印评估指标
print("\n模型评估指标:")
print(f"MSE: {metrics['mse']:.4f}")
print(f"RMSE: {metrics['rmse']:.4f}")
@@ -534,65 +273,32 @@ def train_product_model_with_mlstm(
print(f"R²: {metrics['r2']:.4f}")
print(f"MAPE: {metrics['mape']:.2f}%")
print(f"训练时间: {training_time:.2f}")
emit_progress("保存最终模型...", progress=99)
# 保存最终训练完成的模型基于最终epoch
final_model_data = {
'epoch': epochs, # 最终epoch
'epoch': epoch + 1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss': train_losses[-1],
'test_loss': test_losses[-1],
'train_losses': train_losses,
'test_losses': test_losses,
'scaler_X': scaler_X,
'scaler_y': scaler_y,
}
model_manager.save_model_artifact(final_model_data, "model.pth", model_version_path)
metadata = {
'product_id': product_id, 'product_name': product_name, 'model_type': model_type,
'version': f'v{version}', 'training_mode': training_mode, 'scope': scope,
'aggregation_method': aggregation_method, 'training_scope_description': training_scope,
'timestamp': datetime.now().isoformat(), 'metrics': metrics,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'num_heads': num_heads,
'dropout': dropout_rate,
'num_blocks': num_blocks,
'embed_dim': embed_dim,
'dense_dim': dense_dim,
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': 'mlstm'
},
'metrics': metrics,
'loss_curve_path': loss_curve_path,
'training_info': {
'product_id': product_id,
'product_name': product_name,
'training_mode': training_mode,
'store_id': store_id,
'aggregation_method': aggregation_method,
'training_scope': training_scope,
'timestamp': time.time(),
'training_completed': True
'input_dim': input_dim, 'output_dim': output_dim, 'hidden_size': hidden_size,
'num_heads': num_heads, 'dropout': dropout_rate, 'num_blocks': num_blocks,
'embed_dim': embed_dim, 'dense_dim': dense_dim,
'sequence_length': LOOK_BACK, 'forecast_horizon': FORECAST_HORIZON,
}
}
model_manager.save_model_artifact(metadata, "metadata.json", model_version_path)
# 6. 更新版本文件
model_manager.update_version(model_identifier, version)
# 保存最终模型使用epoch标识
final_model_path = save_checkpoint(
final_model_data, f"final_epoch_{epochs}", product_id, 'mlstm',
model_dir, store_id, training_mode, aggregation_method
)
emit_progress(f"✅ mLSTM模型 v{version} 训练完成!", progress=100, metrics=metrics)
# 发送训练完成消息
final_metrics = {
'mse': metrics['mse'],
'rmse': metrics['rmse'],
'mae': metrics['mae'],
'r2': metrics['r2'],
'mape': metrics['mape'],
'training_time': training_time,
'final_epoch': epochs,
'model_path': final_model_path
}
emit_progress(f"✅ mLSTM模型训练完成最终epoch: {epochs} 已保存", progress=100, metrics=final_metrics)
return model, metrics, epochs, final_model_path
return model, metrics, version, model_version_path
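重构后训练器的调用方式示意(假设性示例,非本次提交中的实际调用代码;参数名以上文函数签名为准):

from trainers.mlstm_trainer import train_product_model_with_mlstm

# product_df 假设为调用方预先加载好的 DataFrame(包含 sales、weekday 等特征列)
model, metrics, version, model_version_path = train_product_model_with_mlstm(
    product_id='P001',          # 假设的产品ID
    product_df=product_df,
    training_mode='product',
    epochs=50,
    patience=10,
)
print(f"v{version} 已保存至 {model_version_path}, RMSE={metrics['rmse']:.4f}")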

View File

@@ -1,5 +1,5 @@
"""
药店销售预测系统 - TCN模型训练函数
药店销售预测系统 - TCN模型训练函数 (已重构)
"""
import os
@@ -12,50 +12,13 @@ import torch.optim as optim
from torch.utils.data import DataLoader
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from tqdm import tqdm
from datetime import datetime
from models.tcn_model import TCNForecaster
from utils.data_utils import create_dataset, PharmacyDataset
from utils.visualization import plot_loss_curve
from analysis.metrics import evaluate_model
from core.config import DEVICE, DEFAULT_MODEL_DIR, LOOK_BACK, FORECAST_HORIZON
from utils.training_progress import progress_manager
def save_checkpoint(checkpoint_data: dict, epoch_or_label, product_id: str,
model_type: str, model_dir: str, store_id=None,
training_mode: str = 'product', aggregation_method=None):
"""
保存训练检查点
Args:
checkpoint_data: 检查点数据
epoch_or_label: epoch编号或标签(如 'best')
product_id: 产品ID
model_type: 模型类型
model_dir: 模型保存目录
store_id: 店铺ID
training_mode: 训练模式
aggregation_method: 聚合方法
"""
# 创建检查点目录
checkpoint_dir = os.path.join(model_dir, 'checkpoints')
os.makedirs(checkpoint_dir, exist_ok=True)
# 生成检查点文件名
if training_mode == 'store' and store_id:
filename = f"{model_type}_store_{store_id}_{product_id}_epoch_{epoch_or_label}.pth"
elif training_mode == 'global' and aggregation_method:
filename = f"{model_type}_global_{product_id}_{aggregation_method}_epoch_{epoch_or_label}.pth"
else:
filename = f"{model_type}_product_{product_id}_epoch_{epoch_or_label}.pth"
checkpoint_path = os.path.join(checkpoint_dir, filename)
# 保存检查点
torch.save(checkpoint_data, checkpoint_path)
print(f"[TCN] 检查点已保存: {checkpoint_path}", flush=True)
return checkpoint_path
from core.config import DEVICE, LOOK_BACK, FORECAST_HORIZON
from utils.model_manager import model_manager
def train_product_model_with_tcn(
product_id,
@@ -64,181 +27,116 @@ def train_product_model_with_tcn(
training_mode='product',
aggregation_method='sum',
epochs=50,
model_dir=DEFAULT_MODEL_DIR,
version=None,
socketio=None,
task_id=None,
continue_training=False
progress_callback=None,
patience=10,
learning_rate=0.001
):
"""
使用TCN模型训练产品销售预测模型
参数:
product_id: 产品ID
epochs: 训练轮次
model_dir: 模型保存目录默认使用配置中的DEFAULT_MODEL_DIR
version: 指定版本号如果为None则自动生成
socketio: WebSocket对象用于实时反馈
task_id: 训练任务ID
continue_training: 是否继续训练现有模型
返回:
model: 训练好的模型
metrics: 模型评估指标
version: 实际使用的版本号
model_path: 模型文件路径
使用TCN模型训练产品销售预测模型 (已适配新的ModelManager)
"""
def emit_progress(message, progress=None, metrics=None):
"""发送训练进度到前端"""
progress_data = {
'task_id': task_id,
'message': f"[TCN] {message}",
'timestamp': time.time()
}
if progress is not None:
progress_data['progress'] = progress
if metrics is not None:
progress_data['metrics'] = metrics
if progress_callback:
try:
progress_callback(progress_data)
except Exception as e:
print(f"[TCN] 进度回调失败: {e}")
if socketio and task_id:
data = {
'task_id': task_id,
'message': message,
'timestamp': time.time()
}
if progress is not None:
data['progress'] = progress
if metrics is not None:
data['metrics'] = metrics
socketio.emit('training_progress', data, namespace='/training')
try:
socketio.emit('training_progress', progress_data, namespace='/training')
except Exception as e:
print(f"[TCN] WebSocket发送失败: {e}")
print(f"[TCN] {message}", flush=True)
emit_progress("开始TCN模型训练...")
# 1. 确定模型标识符和版本
model_type = 'tcn'
if training_mode == 'store':
scope = f"{store_id}_{product_id}"
elif training_mode == 'global':
scope = f"{product_id}" if product_id else "all"
else: # 'product' mode
scope = f"{product_id}_all"
model_identifier = model_manager.get_model_identifier(model_type, training_mode, scope, aggregation_method)
version = model_manager.get_next_version_number(model_identifier)
# 确定版本号
if version is None:
from core.config import get_latest_model_version, get_next_model_version
if continue_training:
version = get_latest_model_version(product_id, 'tcn')
if version is None:
version = get_next_model_version(product_id, 'tcn')
else:
version = get_next_model_version(product_id, 'tcn')
emit_progress(f"开始训练 TCN 模型版本 {version}")
# 如果没有传入product_df则根据训练模式加载数据
emit_progress(f"开始训练 {model_type} 模型 v{version}")
# 2. 获取模型版本路径
model_version_path = model_manager.get_model_version_path(model_type, training_mode, scope, version, aggregation_method)
emit_progress(f"模型将保存到: {model_version_path}")
# 3. 数据加载和预处理
if product_df is None:
from utils.multi_store_data_utils import load_multi_store_data, get_store_product_sales_data, aggregate_multi_store_data
try:
if training_mode == 'store' and store_id:
# 加载特定店铺的数据
product_df = get_store_product_sales_data(
store_id,
product_id,
'pharmacy_sales_multi_store.csv'
)
training_scope = f"店铺 {store_id}"
product_df = get_store_product_sales_data(store_id, product_id, 'pharmacy_sales_multi_store.csv')
elif training_mode == 'global':
# 聚合所有店铺的数据
product_df = aggregate_multi_store_data(
product_id,
aggregation_method=aggregation_method,
file_path='pharmacy_sales_multi_store.csv'
)
training_scope = f"全局聚合({aggregation_method})"
product_df = aggregate_multi_store_data(product_id, aggregation_method=aggregation_method, file_path='pharmacy_sales_multi_store.csv')
else:
# 默认:加载所有店铺的产品数据
product_df = load_multi_store_data('pharmacy_sales_multi_store.csv', product_id=product_id)
training_scope = "所有店铺"
except Exception as e:
print(f"多店铺数据加载失败: {e}")
# 后备方案:尝试原始数据
emit_progress(f"多店铺数据加载失败: {e}, 尝试后备方案...")
df = pd.read_excel('pharmacy_sales.xlsx')
product_df = df[df['product_id'] == product_id].sort_values('date')
training_scope = "原始数据"
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
training_scope = f"全局聚合({aggregation_method})"
else:
# 如果传入了product_df直接使用
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
training_scope = f"全局聚合({aggregation_method})"
else:
training_scope = "所有店铺"
if product_df.empty:
raise ValueError(f"产品 {product_id} 没有可用的销售数据")
# 数据量检查
training_scope = "所有店铺"
min_required_samples = LOOK_BACK + FORECAST_HORIZON
if len(product_df) < min_required_samples:
error_msg = (
f"❌ 训练数据不足错误\n"
f"当前配置需要: {min_required_samples} 天数据 (LOOK_BACK={LOOK_BACK} + FORECAST_HORIZON={FORECAST_HORIZON})\n"
f"实际数据量: {len(product_df)}\n"
f"产品ID: {product_id}, 训练模式: {training_mode}\n"
f"建议解决方案:\n"
f"1. 生成更多数据: uv run generate_multi_store_data.py\n"
f"2. 调整配置参数: 减小 LOOK_BACK 或 FORECAST_HORIZON\n"
f"3. 使用全局训练模式聚合更多数据"
)
print(error_msg)
emit_progress(f"训练失败:数据不足 ({len(product_df)}/{min_required_samples} 天)")
error_msg = f"数据不足: 需要 {min_required_samples} 天, 实际 {len(product_df)} 天。"
emit_progress(f"训练失败:{error_msg}")
raise ValueError(error_msg)
product_df = product_df.sort_values('date')
product_name = product_df['product_name'].iloc[0]
print(f"使用TCN模型训练产品 '{product_name}' (ID: {product_id}) 的销售预测模型")
print(f"训练范围: {training_scope}")
print(f"版本: {version}")
print(f"使用设备: {DEVICE}")
print(f"模型将保存到目录: {model_dir}")
emit_progress(f"训练产品: {product_name} (ID: {product_id})")
# 创建特征和目标变量
emit_progress(f"训练产品: '{product_name}' (ID: {product_id}) - {training_scope}")
emit_progress(f"使用设备: {DEVICE}, 数据量: {len(product_df)}")
features = ['sales', 'weekday', 'month', 'is_holiday', 'is_weekend', 'is_promotion', 'temperature']
# 预处理数据
X = product_df[features].values
y = product_df[['sales']].values # 保持为二维数组
y = product_df[['sales']].values
# 设置数据预处理阶段
progress_manager.set_stage("data_preprocessing", 0)
emit_progress("数据预处理中...")
# 归一化数据
scaler_X = MinMaxScaler(feature_range=(0, 1))
scaler_y = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler_X.fit_transform(X)
y_scaled = scaler_y.fit_transform(y)
# 划分训练集和测试集80% 训练20% 测试)
train_size = int(len(X_scaled) * 0.8)
X_train, X_test = X_scaled[:train_size], X_scaled[train_size:]
y_train, y_test = y_scaled[:train_size], y_scaled[train_size:]
progress_manager.set_stage("data_preprocessing", 50)
# 创建时间序列数据
trainX, trainY = create_dataset(X_train, y_train, LOOK_BACK, FORECAST_HORIZON)
testX, testY = create_dataset(X_test, y_test, LOOK_BACK, FORECAST_HORIZON)
# 转换为PyTorch的Tensor
trainX_tensor = torch.Tensor(trainX)
trainY_tensor = torch.Tensor(trainY)
testX_tensor = torch.Tensor(testX)
testY_tensor = torch.Tensor(testY)
train_loader = DataLoader(PharmacyDataset(torch.Tensor(trainX), torch.Tensor(trainY)), batch_size=32, shuffle=True)
test_loader = DataLoader(PharmacyDataset(torch.Tensor(testX), torch.Tensor(testY)), batch_size=32, shuffle=False)
# 创建数据加载器
train_dataset = PharmacyDataset(trainX_tensor, trainY_tensor)
test_dataset = PharmacyDataset(testX_tensor, testY_tensor)
batch_size = 32
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
# 更新进度管理器的批次信息
total_batches = len(train_loader)
total_samples = len(train_dataset)
progress_manager.total_batches_per_epoch = total_batches
progress_manager.batch_size = batch_size
progress_manager.total_samples = total_samples
progress_manager.set_stage("data_preprocessing", 100)
# 初始化TCN模型
# 4. 模型初始化
input_dim = X_train.shape[1]
output_dim = FORECAST_HORIZON
hidden_size = 64
@@ -252,265 +150,121 @@ def train_product_model_with_tcn(
num_channels=[hidden_size] * num_layers,
kernel_size=kernel_size,
dropout=dropout_rate
)
# 如果是继续训练,加载现有模型
if continue_training and version != 'v1':
try:
from core.config import get_model_file_path
existing_model_path = get_model_file_path(product_id, 'tcn', version)
if os.path.exists(existing_model_path):
checkpoint = torch.load(existing_model_path, map_location=DEVICE)
model.load_state_dict(checkpoint['model_state_dict'])
print(f"加载现有模型: {existing_model_path}")
emit_progress(f"加载现有模型版本 {version} 进行继续训练")
except Exception as e:
print(f"无法加载现有模型,将重新开始训练: {e}")
emit_progress("无法加载现有模型,重新开始训练")
# 将模型移动到设备上
model = model.to(DEVICE)
).to(DEVICE)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
emit_progress("开始模型训练...")
# 训练模型
train_losses = []
test_losses = []
emit_progress("数据预处理完成,开始模型训练...", progress=10)
# 5. 训练循环
train_losses, test_losses = [], []
start_time = time.time()
# 配置检查点保存
checkpoint_interval = max(1, epochs // 10) # 每10%进度保存一次最少每1个epoch
best_loss = float('inf')
progress_manager.set_stage("model_training", 0)
emit_progress(f"开始训练 - 总epoch: {epochs}, 检查点间隔: {checkpoint_interval}")
epochs_no_improve = 0
for epoch in range(epochs):
# 开始新的轮次
progress_manager.start_epoch(epoch)
model.train()
epoch_loss = 0
for batch_idx, (X_batch, y_batch) in enumerate(train_loader):
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
if y_batch.dim() == 2: y_batch = y_batch.unsqueeze(-1)
# 确保目标张量有正确的形状 (batch_size, forecast_horizon, 1)
if y_batch.dim() == 2:
y_batch = y_batch.unsqueeze(-1)
# 前向传播
outputs = model(X_batch)
# 确保输出和目标形状匹配
loss = criterion(outputs, y_batch)
# 反向传播和优化
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.item()
# 更新批次进度每10个批次更新一次
if batch_idx % 10 == 0 or batch_idx == len(train_loader) - 1:
current_lr = optimizer.param_groups[0]['lr']
progress_manager.update_batch(batch_idx, loss.item(), current_lr)
# 计算训练损失
train_loss = epoch_loss / len(train_loader)
train_losses.append(train_loss)
# 设置验证阶段
progress_manager.set_stage("validation", 0)
# 在测试集上评估
model.eval()
test_loss = 0
with torch.no_grad():
for batch_idx, (X_batch, y_batch) in enumerate(test_loader):
for X_batch, y_batch in test_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
# 确保目标张量有正确的形状
if y_batch.dim() == 2:
y_batch = y_batch.unsqueeze(-1)
if y_batch.dim() == 2: y_batch = y_batch.unsqueeze(-1)
outputs = model(X_batch)
loss = criterion(outputs, y_batch)
test_loss += loss.item()
# 更新验证进度
if batch_idx % 5 == 0 or batch_idx == len(test_loader) - 1:
val_progress = (batch_idx / len(test_loader)) * 100
progress_manager.set_stage("validation", val_progress)
test_loss = test_loss / len(test_loader)
test_loss /= len(test_loader)
test_losses.append(test_loss)
# 完成当前轮次
progress_manager.finish_epoch(train_loss, test_loss)
# 发送训练进度(保持与旧系统的兼容性)
if (epoch + 1) % 5 == 0 or epoch == epochs - 1:
progress = ((epoch + 1) / epochs) * 100
current_metrics = {
'train_loss': train_loss,
'test_loss': test_loss,
'epoch': epoch + 1,
'total_epochs': epochs
}
emit_progress(f"Epoch {epoch+1}/{epochs}, Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}",
progress=progress, metrics=current_metrics)
# 定期保存检查点
if (epoch + 1) % checkpoint_interval == 0 or epoch == epochs - 1:
progress_percentage = 10 + ((epoch + 1) / epochs) * 85
emit_progress(f"Epoch {epoch+1}/{epochs} - Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}", progress=progress_percentage)
if test_loss < best_loss:
best_loss = test_loss
epochs_no_improve = 0
checkpoint_data = {
'epoch': epoch + 1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss': train_loss,
'test_loss': test_loss,
'train_losses': train_losses,
'test_losses': test_losses,
'scaler_X': scaler_X,
'scaler_y': scaler_y,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'num_layers': num_layers,
'dropout': dropout_rate,
'kernel_size': kernel_size,
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': 'tcn'
},
'training_info': {
'product_id': product_id,
'product_name': product_name,
'training_mode': training_mode,
'store_id': store_id,
'aggregation_method': aggregation_method,
'timestamp': time.time()
}
}
# 保存检查点
save_checkpoint(checkpoint_data, epoch + 1, product_id, 'tcn',
model_dir, store_id, training_mode, aggregation_method)
# 如果是最佳模型,额外保存一份
if test_loss < best_loss:
best_loss = test_loss
save_checkpoint(checkpoint_data, 'best', product_id, 'tcn',
model_dir, store_id, training_mode, aggregation_method)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
emit_progress(f"💾 保存训练检查点 epoch_{epoch+1}")
model_manager.save_model_artifact(checkpoint_data, "checkpoint_best.pth", model_version_path)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
else:
epochs_no_improve += 1
if (epoch + 1) % 10 == 0:
print(f"Epoch {epoch+1}/{epochs}, Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}")
# 计算训练时间
if epochs_no_improve >= patience:
emit_progress(f"连续 {patience} 个epoch测试损失未改善提前停止训练。")
break
training_time = time.time() - start_time
# 设置模型保存阶段
progress_manager.set_stage("model_saving", 0)
emit_progress("训练完成,正在保存模型...")
# 绘制损失曲线并保存到模型目录
loss_curve_path = plot_loss_curve(
train_losses,
test_losses,
product_name,
'TCN',
model_dir=model_dir
)
print(f"损失曲线已保存到: {loss_curve_path}")
# 评估模型
# 6. 保存产物和评估
loss_fig = plt.figure(figsize=(10, 6))
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.title(f'{model_type.upper()} 损失曲线 - {product_name} (v{version}) - {training_scope}')
plt.xlabel('Epoch'); plt.ylabel('Loss'); plt.legend(); plt.grid(True)
model_manager.save_model_artifact(loss_fig, "loss_curve.png", model_version_path)
plt.close(loss_fig)
emit_progress(f"损失曲线已保存到: {os.path.join(model_version_path, 'loss_curve.png')}")
model.eval()
with torch.no_grad():
# 确保测试数据的形状正确
test_pred = model(testX_tensor.to(DEVICE))
# 将输出转换为二维数组 [samples, forecast_horizon]
test_pred = test_pred.squeeze(-1).cpu().numpy()
# 反归一化预测结果和真实值
test_pred_inv = scaler_y.inverse_transform(test_pred.reshape(-1, 1)).flatten()
test_true_inv = scaler_y.inverse_transform(testY.reshape(-1, 1)).flatten()
testX_tensor = torch.Tensor(testX).to(DEVICE)
test_pred = model(testX_tensor).cpu().numpy()
# 计算评估指标
metrics = evaluate_model(test_true_inv, test_pred_inv)
test_pred_inv = scaler_y.inverse_transform(test_pred.reshape(-1, FORECAST_HORIZON))
test_true_inv = scaler_y.inverse_transform(testY.reshape(-1, FORECAST_HORIZON))
metrics = evaluate_model(test_true_inv.flatten(), test_pred_inv.flatten())
metrics['training_time'] = training_time
# 打印评估指标
print("\n模型评估指标:")
print(f"MSE: {metrics['mse']:.4f}")
print(f"RMSE: {metrics['rmse']:.4f}")
print(f"MAE: {metrics['mae']:.4f}")
print(f"R²: {metrics['r2']:.4f}")
print(f"MAPE: {metrics['mape']:.2f}%")
print(f"训练时间: {training_time:.2f}")
# 保存最终训练完成的模型基于最终epoch
emit_progress(f"模型评估完成 - RMSE: {metrics['rmse']:.4f}, R²: {metrics['r2']:.4f}")
# 7. 保存最终模型和元数据
final_model_data = {
'epoch': epochs, # 最终epoch
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss': train_losses[-1],
'test_loss': test_losses[-1],
'train_losses': train_losses,
'test_losses': test_losses,
'scaler_X': scaler_X,
'scaler_y': scaler_y,
}
model_manager.save_model_artifact(final_model_data, "model.pth", model_version_path)
metadata = {
'product_id': product_id, 'product_name': product_name, 'model_type': model_type,
'version': f'v{version}', 'training_mode': training_mode, 'scope': scope,
'aggregation_method': aggregation_method, 'training_scope_description': training_scope,
'timestamp': datetime.now().isoformat(), 'metrics': metrics,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'num_layers': num_layers,
'dropout': dropout_rate,
'kernel_size': kernel_size,
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': 'tcn'
},
'metrics': metrics,
'loss_curve_path': loss_curve_path,
'training_info': {
'product_id': product_id,
'product_name': product_name,
'training_mode': training_mode,
'store_id': store_id,
'aggregation_method': aggregation_method,
'timestamp': time.time(),
'training_completed': True
'input_dim': input_dim, 'output_dim': output_dim, 'hidden_size': hidden_size,
'num_layers': num_layers, 'kernel_size': kernel_size, 'dropout': dropout_rate,
'sequence_length': LOOK_BACK, 'forecast_horizon': FORECAST_HORIZON,
}
}
model_manager.save_model_artifact(metadata, "metadata.json", model_version_path)
# 8. 更新版本文件
model_manager.update_version(model_identifier, version)
progress_manager.set_stage("model_saving", 50)
emit_progress(f"{model_type.upper()}模型 v{version} 训练完成!", progress=100, metrics=metrics)
# 保存最终模型使用epoch标识
final_model_path = save_checkpoint(
final_model_data, f"final_epoch_{epochs}", product_id, 'tcn',
model_dir, store_id, training_mode, aggregation_method
)
progress_manager.set_stage("model_saving", 100)
final_metrics = {
'mse': metrics['mse'],
'rmse': metrics['rmse'],
'mae': metrics['mae'],
'r2': metrics['r2'],
'mape': metrics['mape'],
'training_time': training_time,
'final_epoch': epochs
}
emit_progress(f"模型训练完成最终epoch: {epochs}", progress=100, metrics=final_metrics)
return model, metrics, epochs, final_model_path
return model, metrics, version, model_version_path
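训练产物的加载示意(最小草图;版本目录为假设路径,遵循上文 ModelManager 的目录规则):

import os, json, torch

version_dir = 'saved_models/product/P001_all/tcn/v1'  # 假设的版本目录

# model.pth 中除 state_dict 外还包含归一化器等 pickle 对象,需 weights_only=False
data = torch.load(os.path.join(version_dir, 'model.pth'),
                  map_location='cpu', weights_only=False)
with open(os.path.join(version_dir, 'metadata.json'), 'r', encoding='utf-8') as f:
    metadata = json.load(f)

config = metadata['config']
scaler_X, scaler_y = data['scaler_X'], data['scaler_y']
# 依据 config 重建网络后再加载权重,例如:
# model = TCNForecaster(num_features=config['input_dim'], ...)
# model.load_state_dict(data['model_state_dict'])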

View File

@@ -1,5 +1,5 @@
"""
药店销售预测系统 - Transformer模型训练函数
药店销售预测系统 - Transformer模型训练函数 (已重构)
"""
import os
@@ -12,57 +12,14 @@ import torch.optim as optim
from torch.utils.data import DataLoader
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from tqdm import tqdm
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from datetime import datetime
from models.transformer_model import TimeSeriesTransformer
from utils.data_utils import create_dataset, PharmacyDataset
from utils.multi_store_data_utils import get_store_product_sales_data, aggregate_multi_store_data
from utils.visualization import plot_loss_curve
from analysis.metrics import evaluate_model
from core.config import (
DEVICE, DEFAULT_MODEL_DIR, LOOK_BACK, FORECAST_HORIZON,
get_next_model_version, get_model_file_path, get_latest_model_version
)
from utils.training_progress import progress_manager
from core.config import DEVICE, LOOK_BACK, FORECAST_HORIZON
from utils.model_manager import model_manager
def save_checkpoint(checkpoint_data: dict, epoch_or_label, product_id: str,
model_type: str, model_dir: str, store_id=None,
training_mode: str = 'product', aggregation_method=None):
"""
保存训练检查点
Args:
checkpoint_data: 检查点数据
epoch_or_label: epoch编号或标签(如 'best')
product_id: 产品ID
model_type: 模型类型
model_dir: 模型保存目录
store_id: 店铺ID
training_mode: 训练模式
aggregation_method: 聚合方法
"""
# 创建检查点目录
checkpoint_dir = os.path.join(model_dir, 'checkpoints')
os.makedirs(checkpoint_dir, exist_ok=True)
# 生成检查点文件名
if training_mode == 'store' and store_id:
filename = f"{model_type}_store_{store_id}_{product_id}_epoch_{epoch_or_label}.pth"
elif training_mode == 'global' and aggregation_method:
filename = f"{model_type}_global_{product_id}_{aggregation_method}_epoch_{epoch_or_label}.pth"
else:
filename = f"{model_type}_product_{product_id}_epoch_{epoch_or_label}.pth"
checkpoint_path = os.path.join(checkpoint_dir, filename)
# 保存检查点
torch.save(checkpoint_data, checkpoint_path)
print(f"[Transformer] 检查点已保存: {checkpoint_path}", flush=True)
return checkpoint_path
def train_product_model_with_transformer(
product_id,
product_df=None,
@@ -70,191 +27,118 @@ def train_product_model_with_transformer(
training_mode='product',
aggregation_method='sum',
epochs=50,
model_dir=DEFAULT_MODEL_DIR,
version=None,
socketio=None,
task_id=None,
continue_training=False,
progress_callback=None,
patience=10,
learning_rate=0.001,
clip_norm=1.0
):
"""
使用Transformer模型训练产品销售预测模型
参数:
product_id: 产品ID
epochs: 训练轮次
model_dir: 模型保存目录默认使用配置中的DEFAULT_MODEL_DIR
version: 指定版本号如果为None则自动生成
socketio: WebSocket对象用于实时反馈
task_id: 训练任务ID
continue_training: 是否继续训练现有模型
返回:
model: 训练好的模型
metrics: 模型评估指标
version: 实际使用的版本号
使用Transformer模型训练产品销售预测模型 (已适配新的ModelManager)
"""
# WebSocket进度反馈函数
def emit_progress(message, progress=None, metrics=None):
"""发送训练进度到前端"""
progress_data = {
'task_id': task_id,
'message': f"[Transformer] {message}",
'timestamp': time.time()
}
if progress is not None:
progress_data['progress'] = progress
if metrics is not None:
progress_data['metrics'] = metrics
if progress_callback:
try:
progress_callback(progress_data)
except Exception as e:
print(f"[Transformer] 进度回调失败: {e}")
if socketio and task_id:
data = {
'task_id': task_id,
'message': message,
'timestamp': time.time()
}
if progress is not None:
data['progress'] = progress
if metrics is not None:
data['metrics'] = metrics
socketio.emit('training_progress', data, namespace='/training')
print(f"[{time.strftime('%H:%M:%S')}] {message}", flush=True)
# 强制刷新输出缓冲区
import sys
sys.stdout.flush()
sys.stderr.flush()
try:
socketio.emit('training_progress', progress_data, namespace='/training')
except Exception as e:
print(f"[Transformer] WebSocket发送失败: {e}")
print(f"[Transformer] {message}", flush=True)
emit_progress("开始Transformer模型训练...")
# 1. 确定模型标识符和版本
model_type = 'transformer'
if training_mode == 'store':
scope = f"{store_id}_{product_id}"
elif training_mode == 'global':
scope = f"{product_id}" if product_id else "all"
else: # 'product' mode
scope = f"{product_id}_all"
model_identifier = model_manager.get_model_identifier(model_type, training_mode, scope, aggregation_method)
version = model_manager.get_next_version_number(model_identifier)
# 获取训练进度管理器实例
try:
from utils.training_progress import progress_manager
except ImportError:
# 如果无法导入,创建一个空的管理器以避免错误
class DummyProgressManager:
def set_stage(self, *args, **kwargs): pass
def start_training(self, *args, **kwargs): pass
def start_epoch(self, *args, **kwargs): pass
def update_batch(self, *args, **kwargs): pass
def finish_epoch(self, *args, **kwargs): pass
def finish_training(self, *args, **kwargs): pass
progress_manager = DummyProgressManager()
# 如果没有传入product_df则根据训练模式加载数据
emit_progress(f"开始训练 {model_type} 模型 v{version}")
# 2. 获取模型版本路径
model_version_path = model_manager.get_model_version_path(model_type, training_mode, scope, version, aggregation_method)
emit_progress(f"模型将保存到: {model_version_path}")
# 3. 数据加载和预处理
if product_df is None:
from utils.multi_store_data_utils import load_multi_store_data, get_store_product_sales_data, aggregate_multi_store_data
try:
if training_mode == 'store' and store_id:
# 加载特定店铺的数据
product_df = get_store_product_sales_data(
store_id,
product_id,
'pharmacy_sales_multi_store.csv'
)
training_scope = f"店铺 {store_id}"
product_df = get_store_product_sales_data(store_id, product_id, 'pharmacy_sales_multi_store.csv')
elif training_mode == 'global':
# 聚合所有店铺的数据
product_df = aggregate_multi_store_data(
product_id,
aggregation_method=aggregation_method,
file_path='pharmacy_sales_multi_store.csv'
)
training_scope = f"全局聚合({aggregation_method})"
product_df = aggregate_multi_store_data(product_id, aggregation_method=aggregation_method, file_path='pharmacy_sales_multi_store.csv')
else:
# 默认:加载所有店铺的产品数据
product_df = load_multi_store_data('pharmacy_sales_multi_store.csv', product_id=product_id)
training_scope = "所有店铺"
except Exception as e:
print(f"多店铺数据加载失败: {e}")
# 后备方案:尝试原始数据
emit_progress(f"多店铺数据加载失败: {e}, 尝试后备方案...")
df = pd.read_excel('pharmacy_sales.xlsx')
product_df = df[df['product_id'] == product_id].sort_values('date')
training_scope = "原始数据"
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
training_scope = f"全局聚合({aggregation_method})"
else:
# 如果传入了product_df直接使用
if training_mode == 'store' and store_id:
training_scope = f"店铺 {store_id}"
elif training_mode == 'global':
training_scope = f"全局聚合({aggregation_method})"
else:
training_scope = "所有店铺"
if product_df.empty:
raise ValueError(f"产品 {product_id} 没有可用的销售数据")
# 数据量检查
training_scope = "所有店铺"
min_required_samples = LOOK_BACK + FORECAST_HORIZON
if len(product_df) < min_required_samples:
error_msg = (
f"❌ 训练数据不足错误\n"
f"当前配置需要: {min_required_samples} 天数据 (LOOK_BACK={LOOK_BACK} + FORECAST_HORIZON={FORECAST_HORIZON})\n"
f"实际数据量: {len(product_df)}\n"
f"产品ID: {product_id}, 训练模式: {training_mode}\n"
f"建议解决方案:\n"
f"1. 生成更多数据: uv run generate_multi_store_data.py\n"
f"2. 调整配置参数: 减小 LOOK_BACK 或 FORECAST_HORIZON\n"
f"3. 使用全局训练模式聚合更多数据"
)
print(error_msg)
error_msg = f"数据不足: 需要 {min_required_samples} 天, 实际 {len(product_df)} 天。"
emit_progress(f"训练失败:{error_msg}")
raise ValueError(error_msg)
product_df = product_df.sort_values('date')
product_name = product_df['product_name'].iloc[0]
print(f"[Transformer] 训练产品 '{product_name}' (ID: {product_id}) 的销售预测模型", flush=True)
print(f"[Device] 使用设备: {DEVICE}", flush=True)
print(f"[Model] 模型将保存到目录: {model_dir}", flush=True)
# 创建特征和目标变量
emit_progress(f"训练产品: '{product_name}' (ID: {product_id}) - {training_scope}")
emit_progress(f"使用设备: {DEVICE}, 数据量: {len(product_df)}")
features = ['sales', 'weekday', 'month', 'is_holiday', 'is_weekend', 'is_promotion', 'temperature']
# 设置数据预处理阶段
progress_manager.set_stage("data_preprocessing", 0)
emit_progress("数据预处理中...")
# 预处理数据
X = product_df[features].values
y = product_df[['sales']].values # 保持为二维数组
y = product_df[['sales']].values
# 归一化数据
scaler_X = MinMaxScaler(feature_range=(0, 1))
scaler_y = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler_X.fit_transform(X)
y_scaled = scaler_y.fit_transform(y)
progress_manager.set_stage("data_preprocessing", 40)
# 划分训练集和测试集80% 训练20% 测试)
train_size = int(len(X_scaled) * 0.8)
X_train, X_test = X_scaled[:train_size], X_scaled[train_size:]
y_train, y_test = y_scaled[:train_size], y_scaled[train_size:]
# 创建时间序列数据
trainX, trainY = create_dataset(X_train, y_train, LOOK_BACK, FORECAST_HORIZON)
testX, testY = create_dataset(X_test, y_test, LOOK_BACK, FORECAST_HORIZON)
progress_manager.set_stage("data_preprocessing", 70)
# 转换为PyTorch的Tensor
trainX_tensor = torch.Tensor(trainX)
trainY_tensor = torch.Tensor(trainY)
testX_tensor = torch.Tensor(testX)
testY_tensor = torch.Tensor(testY)
# 创建数据加载器
train_dataset = PharmacyDataset(trainX_tensor, trainY_tensor)
test_dataset = PharmacyDataset(testX_tensor, testY_tensor)
batch_size = 32
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
train_loader = DataLoader(PharmacyDataset(torch.Tensor(trainX), torch.Tensor(trainY)), batch_size=batch_size, shuffle=True)
test_loader = DataLoader(PharmacyDataset(torch.Tensor(testX), torch.Tensor(testY)), batch_size=batch_size, shuffle=False)
# 更新进度管理器的批次信息
total_batches = len(train_loader)
total_samples = len(train_dataset)
progress_manager.total_batches_per_epoch = total_batches
progress_manager.batch_size = batch_size
progress_manager.total_samples = total_samples
progress_manager.set_stage("data_preprocessing", 100)
emit_progress("数据预处理完成,开始模型训练...")
# 初始化Transformer模型
# 4. 模型初始化
input_dim = X_train.shape[1]
output_dim = FORECAST_HORIZON
hidden_size = 64
@@ -270,258 +154,124 @@ def train_product_model_with_transformer(
dim_feedforward=hidden_size * 2,
dropout=dropout_rate,
output_sequence_length=output_dim,
seq_length=LOOK_BACK,
batch_size=batch_size
)
# 将模型移动到设备上
model = model.to(DEVICE)
seq_length=LOOK_BACK
).to(DEVICE)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=patience // 2, factor=0.5)
# 训练模型
train_losses = []
test_losses = []
emit_progress("数据预处理完成,开始模型训练...", progress=10)
# 5. 训练循环
train_losses, test_losses = [], []
start_time = time.time()
# 配置检查点保存
checkpoint_interval = max(1, epochs // 10) # 每10%进度保存一次最少每1个epoch
best_loss = float('inf')
epochs_no_improve = 0
progress_manager.set_stage("model_training", 0)
emit_progress(f"开始训练 - 总epoch: {epochs}, 检查点间隔: {checkpoint_interval}, 耐心值: {patience}")
for epoch in range(epochs):
# 开始新的轮次
progress_manager.start_epoch(epoch)
model.train()
epoch_loss = 0
for batch_idx, (X_batch, y_batch) in enumerate(train_loader):
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
# 确保目标张量有正确的形状
# 前向传播
outputs = model(X_batch)
loss = criterion(outputs, y_batch)
# 反向传播和优化
optimizer.zero_grad()
loss.backward()
if clip_norm:
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_norm)
optimizer.step()
epoch_loss += loss.item()
# 更新批次进度
if batch_idx % 5 == 0 or batch_idx == len(train_loader) - 1:
current_lr = optimizer.param_groups[0]['lr']
progress_manager.update_batch(batch_idx, loss.item(), current_lr)
# 计算训练损失
train_loss = epoch_loss / len(train_loader)
train_losses.append(train_loss)
# 设置验证阶段
progress_manager.set_stage("validation", 0)
# 在测试集上评估
model.eval()
test_loss = 0
with torch.no_grad():
for batch_idx, (X_batch, y_batch) in enumerate(test_loader):
for X_batch, y_batch in test_loader:
X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)
# 确保目标张量有正确的形状
outputs = model(X_batch)
loss = criterion(outputs, y_batch)
test_loss += loss.item()
# 更新验证进度
if batch_idx % 3 == 0 or batch_idx == len(test_loader) - 1:
val_progress = (batch_idx / len(test_loader)) * 100
progress_manager.set_stage("validation", val_progress)
test_loss = test_loss / len(test_loader)
test_loss /= len(test_loader)
test_losses.append(test_loss)
# 更新学习率
scheduler.step(test_loss)
# 完成当前轮次
progress_manager.finish_epoch(train_loss, test_loss)
# 发送训练进度
if (epoch + 1) % 5 == 0 or epoch == epochs - 1:
progress = ((epoch + 1) / epochs) * 100
current_metrics = {
'train_loss': train_loss,
'test_loss': test_loss,
'epoch': epoch + 1,
'total_epochs': epochs
}
emit_progress(f"Epoch {epoch+1}/{epochs}, Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}",
progress=progress, metrics=current_metrics)
# 定期保存检查点
if (epoch + 1) % checkpoint_interval == 0 or epoch == epochs - 1:
progress_percentage = 10 + ((epoch + 1) / epochs) * 85
emit_progress(f"Epoch {epoch+1}/{epochs} - Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}", progress=progress_percentage)
if test_loss < best_loss:
best_loss = test_loss
epochs_no_improve = 0
checkpoint_data = {
'epoch': epoch + 1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss': train_loss,
'test_loss': test_loss,
'train_losses': train_losses,
'test_losses': test_losses,
'scaler_X': scaler_X,
'scaler_y': scaler_y,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'num_heads': num_heads,
'dropout': dropout_rate,
'num_layers': num_layers,
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': 'transformer'
},
'training_info': {
'product_id': product_id,
'product_name': product_name,
'training_mode': training_mode,
'store_id': store_id,
'aggregation_method': aggregation_method,
'timestamp': time.time()
}
}
# 保存检查点
save_checkpoint(checkpoint_data, epoch + 1, product_id, 'transformer',
model_dir, store_id, training_mode, aggregation_method)
# 如果是最佳模型,额外保存一份
if test_loss < best_loss:
best_loss = test_loss
save_checkpoint(checkpoint_data, 'best', product_id, 'transformer',
model_dir, store_id, training_mode, aggregation_method)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
epochs_no_improve = 0
emit_progress(f"💾 保存训练检查点 epoch_{epoch+1}")
model_manager.save_model_artifact(checkpoint_data, "checkpoint_best.pth", model_version_path)
emit_progress(f"💾 保存最佳模型检查点 (epoch {epoch+1}, test_loss: {test_loss:.4f})")
else:
epochs_no_improve += 1
if (epoch + 1) % 10 == 0:
print(f"📊 Epoch {epoch+1}/{epochs}, 训练损失: {train_loss:.4f}, 测试损失: {test_loss:.4f}", flush=True)
# 提前停止逻辑
if epochs_no_improve >= patience:
emit_progress(f"连续 {patience} 个epoch测试损失未改善提前停止训练。")
break
# 计算训练时间
training_time = time.time() - start_time
# 设置模型保存阶段
progress_manager.set_stage("model_saving", 0)
emit_progress("训练完成,正在保存模型...")
# 绘制损失曲线并保存到模型目录
loss_curve_path = plot_loss_curve(
train_losses,
test_losses,
product_name,
'Transformer',
model_dir=model_dir
)
print(f"📈 损失曲线已保存到: {loss_curve_path}", flush=True)
# 评估模型
# 6. 保存产物和评估
loss_fig = plt.figure(figsize=(10, 6))
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.title(f'{model_type.upper()} 损失曲线 - {product_name} (v{version}) - {training_scope}')
plt.xlabel('Epoch'); plt.ylabel('Loss'); plt.legend(); plt.grid(True)
model_manager.save_model_artifact(loss_fig, "loss_curve.png", model_version_path)
plt.close(loss_fig)
emit_progress(f"损失曲线已保存到: {os.path.join(model_version_path, 'loss_curve.png')}")
model.eval()
with torch.no_grad():
test_pred = model(testX_tensor.to(DEVICE)).cpu().numpy()
test_true = testY
# 反归一化预测结果和真实值
test_pred_inv = scaler_y.inverse_transform(test_pred)
test_true_inv = scaler_y.inverse_transform(test_true)
testX_tensor = torch.Tensor(testX).to(DEVICE)
test_pred = model(testX_tensor).cpu().numpy()
# 计算评估指标
metrics = evaluate_model(test_true_inv, test_pred_inv)
test_pred_inv = scaler_y.inverse_transform(test_pred)
test_true_inv = scaler_y.inverse_transform(testY)
metrics = evaluate_model(test_true_inv.flatten(), test_pred_inv.flatten())
metrics['training_time'] = training_time
# 打印评估指标
print(f"\n📊 模型评估指标:", flush=True)
print(f" MSE: {metrics['mse']:.4f}", flush=True)
print(f" RMSE: {metrics['rmse']:.4f}", flush=True)
print(f" MAE: {metrics['mae']:.4f}", flush=True)
print(f" R²: {metrics['r2']:.4f}", flush=True)
print(f" MAPE: {metrics['mape']:.2f}%", flush=True)
print(f" ⏱️ 训练时间: {training_time:.2f}", flush=True)
# 保存最终训练完成的模型基于最终epoch
emit_progress(f"模型评估完成 - RMSE: {metrics['rmse']:.4f}, R²: {metrics['r2']:.4f}")
# 7. 保存最终模型和元数据
final_model_data = {
'epoch': epochs, # 最终epoch
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss': train_losses[-1],
'test_loss': test_losses[-1],
'train_losses': train_losses,
'test_losses': test_losses,
'scaler_X': scaler_X,
'scaler_y': scaler_y,
}
model_manager.save_model_artifact(final_model_data, "model.pth", model_version_path)
metadata = {
'product_id': product_id, 'product_name': product_name, 'model_type': model_type,
'version': f'v{version}', 'training_mode': training_mode, 'scope': scope,
'aggregation_method': aggregation_method, 'training_scope_description': training_scope,
'timestamp': datetime.now().isoformat(), 'metrics': metrics,
'config': {
'input_dim': input_dim,
'output_dim': output_dim,
'hidden_size': hidden_size,
'num_heads': num_heads,
'dropout': dropout_rate,
'num_layers': num_layers,
'sequence_length': LOOK_BACK,
'forecast_horizon': FORECAST_HORIZON,
'model_type': 'transformer'
},
'metrics': metrics,
'loss_curve_path': loss_curve_path,
'training_info': {
'product_id': product_id,
'product_name': product_name,
'training_mode': training_mode,
'store_id': store_id,
'aggregation_method': aggregation_method,
'timestamp': time.time(),
'training_completed': True
'input_dim': input_dim, 'output_dim': output_dim, 'd_model': hidden_size,
'nhead': num_heads, 'num_encoder_layers': num_layers, 'dim_feedforward': hidden_size * 2,
'dropout': dropout_rate, 'sequence_length': LOOK_BACK, 'forecast_horizon': FORECAST_HORIZON,
}
}
model_manager.save_model_artifact(metadata, "metadata.json", model_version_path)
# 8. 更新版本文件
model_manager.update_version(model_identifier, version)
progress_manager.set_stage("model_saving", 50)
emit_progress(f"{model_type.upper()}模型 v{version} 训练完成!", progress=100, metrics=metrics)
# 保存最终模型使用epoch标识
final_model_path = save_checkpoint(
final_model_data, f"final_epoch_{epochs}", product_id, 'transformer',
model_dir, store_id, training_mode, aggregation_method
)
progress_manager.set_stage("model_saving", 100)
emit_progress(f"模型已保存到 {final_model_path}")
print(f"💾 模型已保存到 {final_model_path}", flush=True)
# 准备最终返回的指标
final_metrics = {
'mse': metrics['mse'],
'rmse': metrics['rmse'],
'mae': metrics['mae'],
'r2': metrics['r2'],
'mape': metrics['mape'],
'training_time': training_time,
'final_epoch': epochs
}
return model, final_metrics, epochs
return model, metrics, version, model_version_path
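上文各训练器均通过 evaluate_model 计算指标,返回字段可由打印语句推断;以下为一个最小等价草图(仅为示意,实际实现在 analysis/metrics.py):

import numpy as np

def evaluate_model_sketch(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    return {
        'mse': mse,
        'rmse': float(np.sqrt(mse)),
        'mae': float(np.mean(np.abs(err))),
        'r2': float(1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)),
        # MAPE 以百分比表示;分母加下限保护,避免真实值为 0 时除零
        'mape': float(np.mean(np.abs(err) / np.clip(np.abs(y_true), 1e-8, None)) * 100),
    }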

View File

@@ -1,6 +1,7 @@
"""
统一模型管理工具
处理模型文件的统一命名、存储和检索
遵循层级式目录结构和文件版本管理规则
"""
import os
@@ -8,376 +9,218 @@ import json
import torch
import glob
from datetime import datetime
from typing import List, Dict, Optional, Tuple
from typing import List, Dict, Optional, Any
from threading import Lock
from core.config import DEFAULT_MODEL_DIR
class ModelManager:
"""统一模型管理器"""
"""
统一模型管理器,采用结构化目录和版本文件进行管理
"""
VERSION_FILE = 'versions.json'
def __init__(self, model_dir: str = DEFAULT_MODEL_DIR):
self.model_dir = model_dir
self.model_dir = os.path.abspath(model_dir)
self.versions_path = os.path.join(self.model_dir, self.VERSION_FILE)
self._lock = Lock()
self.ensure_model_dir()
def ensure_model_dir(self):
"""确保模型目录存在"""
if not os.path.exists(self.model_dir):
os.makedirs(self.model_dir)
def generate_model_filename(self,
product_id: str,
model_type: str,
version: str,
store_id: Optional[str] = None,
training_mode: str = 'product',
"""确保模型根目录存在"""
os.makedirs(self.model_dir, exist_ok=True)
def _read_versions(self) -> Dict[str, int]:
"""线程安全地读取版本文件"""
with self._lock:
if not os.path.exists(self.versions_path):
return {}
try:
with open(self.versions_path, 'r', encoding='utf-8') as f:
return json.load(f)
except (json.JSONDecodeError, IOError):
return {}
def _write_versions(self, versions: Dict[str, int]):
"""线程安全地写入版本文件"""
with self._lock:
with open(self.versions_path, 'w', encoding='utf-8') as f:
json.dump(versions, f, indent=2)
def get_model_identifier(self,
model_type: str,
training_mode: str,
scope: str,
aggregation_method: Optional[str] = None) -> str:
"""
生成模型的唯一标识符用于版本文件中的key
"""
if training_mode == 'global':
return f"{training_mode}_{scope}_{aggregation_method}_{model_type}"
return f"{training_mode}_{scope}_{model_type}"
def get_next_version_number(self, model_identifier: str) -> int:
"""
获取指定模型的下一个版本号整数
"""
versions = self._read_versions()
current_version = versions.get(model_identifier, 0)
return current_version + 1
def update_version(self, model_identifier: str, new_version: int):
"""
更新模型的最新版本号
"""
versions = self._read_versions()
versions[model_identifier] = new_version
self._write_versions(versions)
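# versions.json 内容示意(键为 get_model_identifier 生成的标识,值为最新版本号;ID 为假设值):
# {
#   "product_P001_all_mlstm": 1,
#   "store_S001_P001_tcn": 2,
#   "global_all_sum_transformer": 1
# }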
def get_model_version_path(self,
model_type: str,
training_mode: str,
scope: str,
version: int,
aggregation_method: Optional[str] = None) -> str:
"""
生成统一的模型文件名
格式规范:
- 产品模式: {model_type}_product_{product_id}_{version}.pth
- 店铺模式: {model_type}_store_{store_id}_{product_id}_{version}.pth
- 全局模式: {model_type}_global_{product_id}_{aggregation_method}_{version}.pth
根据新规则生成模型版本目录的完整路径
"""
if training_mode == 'store' and store_id:
return f"{model_type}_store_{store_id}_{product_id}_{version}.pth"
elif training_mode == 'global' and aggregation_method:
return f"{model_type}_global_{product_id}_{aggregation_method}_{version}.pth"
base_path = os.path.join(self.model_dir, training_mode, scope)
if training_mode == 'global' and aggregation_method:
base_path = os.path.join(base_path, str(aggregation_method))
version_path = os.path.join(base_path, model_type, f'v{version}')
return version_path
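# 路径构造示意(model_dir 以 'saved_models' 为例,ID 为假设值):
# get_model_version_path('tcn', 'product', 'P001_all', 1)
#   -> saved_models/product/P001_all/tcn/v1
# get_model_version_path('transformer', 'global', 'all', 2, 'sum')
#   -> saved_models/global/all/sum/transformer/v2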
def save_model_artifact(self,
artifact_data: Any,
artifact_name: str,
model_version_path: str):
"""
在指定的模型版本目录下保存一个产物
Args:
artifact_data: 要保存的数据 (e.g., model state dict, figure object).
artifact_name: 标准化的产物文件名 (e.g., 'model.pth', 'loss_curve.png').
model_version_path: 模型版本目录的路径.
"""
os.makedirs(model_version_path, exist_ok=True)
full_path = os.path.join(model_version_path, artifact_name)
if artifact_name.endswith('.pth'):
torch.save(artifact_data, full_path)
elif artifact_name.endswith('.png') and hasattr(artifact_data, 'savefig'):
artifact_data.savefig(full_path, dpi=300, bbox_inches='tight')
elif artifact_name.endswith('.json'):
with open(full_path, 'w', encoding='utf-8') as f:
json.dump(artifact_data, f, indent=2, ensure_ascii=False)
else:
# 默认产品模式
return f"{model_type}_product_{product_id}_{version}.pth"
def list_models(self,
page: Optional[int] = None,
page_size: Optional[int] = None) -> Dict:
"""
通过扫描目录结构来列出所有模型
参数:
page: 页码从1开始 (可选)
page_size: 每页数量 (可选)
返回:
包含模型列表和分页信息的字典
"""
all_models = []
for training_mode in os.listdir(self.model_dir):
mode_path = os.path.join(self.model_dir, training_mode)
if not os.path.isdir(mode_path) or training_mode == 'checkpoints' or training_mode == self.VERSION_FILE:
continue
for scope in os.listdir(mode_path):
scope_path = os.path.join(mode_path, scope)
if not os.path.isdir(scope_path): continue
is_global_agg_level = False
if training_mode == 'global' and os.listdir(scope_path):
try:
first_item_path = os.path.join(scope_path, os.listdir(scope_path)[0])
if os.path.isdir(first_item_path):
is_global_agg_level = True
except IndexError:
continue
if is_global_agg_level:
for agg_method in os.listdir(scope_path):
agg_path = os.path.join(scope_path, agg_method)
if not os.path.isdir(agg_path): continue
for model_type in os.listdir(agg_path):
type_path = os.path.join(agg_path, model_type)
if not os.path.isdir(type_path): continue
for version_folder in os.listdir(type_path):
if version_folder.startswith('v'):
version_path = os.path.join(type_path, version_folder)
model_info = self._parse_info_from_path(version_path)
if model_info:
all_models.append(model_info)
else:
for model_type in os.listdir(scope_path):
type_path = os.path.join(scope_path, model_type)
if not os.path.isdir(type_path): continue
for version_folder in os.listdir(type_path):
if version_folder.startswith('v'):
version_path = os.path.join(type_path, version_folder)
model_info = self._parse_info_from_path(version_path)
if model_info:
all_models.append(model_info)
total_count = len(all_models)
if page and page_size:
start_index = (page - 1) * page_size
end_index = start_index + page_size
paginated_models = all_models[start_index:end_index]
else:
paginated_models = all_models
return {
'models': paginated_models,
'pagination': {
'total': total_count,
'page': page or 1,
'page_size': page_size or total_count,
'total_pages': (total_count + page_size - 1) // page_size if page_size and page_size > 0 else 1,
}
}
def delete_model(self, model_path: str) -> bool:
"""删除模型文件或模型版本目录(新结构下删除目标通常是整个版本目录)"""
import shutil
try:
if os.path.isdir(model_path):
shutil.rmtree(model_path)
print(f"已删除模型版本目录: {model_path}")
return True
elif os.path.exists(model_path):
os.remove(model_path)
print(f"已删除模型文件: {model_path}")
return True
else:
print(f"模型路径不存在: {model_path}")
return False
except Exception as e:
print(f"删除模型失败: {e}")
return False
def get_model_by_id(self, model_id: str) -> Optional[Dict]:
"""根据模型ID获取模型信息(假设以模型版本目录路径作为唯一ID)"""
result = self.list_models()
for model in result.get('models', []):
if model.get('model_path') == model_id:
return model
return None
def _parse_info_from_path(self, version_path: str) -> Optional[Dict]:
"""从版本目录路径解析模型信息"""
try:
norm_path = os.path.normpath(version_path)
norm_model_dir = os.path.normpath(self.model_dir)
relative_path = os.path.relpath(norm_path, norm_model_dir)
parts = relative_path.split(os.sep)
info = {
'model_path': version_path,
'version': parts[-1],
'model_type': parts[-2]
}
training_mode = parts[0]
info['training_mode'] = training_mode
if training_mode == 'global':
info['scope'] = parts[1]
info['aggregation_method'] = parts[2]
info['model_identifier'] = self.get_model_identifier(info['model_type'], training_mode, info['scope'], info['aggregation_method'])
else:
info['scope'] = parts[1]
info['aggregation_method'] = None
info['model_identifier'] = self.get_model_identifier(info['model_type'], training_mode, info['scope'])
metadata_path = os.path.join(version_path, 'metadata.json')
if os.path.exists(metadata_path):
with open(metadata_path, 'r', encoding='utf-8') as f:
metadata = json.load(f)
info.update(metadata)
return info
except (IndexError, IOError) as e:
print(f"解析路径失败 {version_path}: {e}")
return None
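# 示例:路径 saved_models/global/all/sum/tcn/v3 将被解析为
# {'training_mode': 'global', 'scope': 'all', 'aggregation_method': 'sum',
#  'model_type': 'tcn', 'version': 'v3', ...}(并合并该目录下 metadata.json 的字段)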
# 全局模型管理器实例
# 确保使用项目根目录的saved_models而不是相对于当前工作目录
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.dirname(os.path.dirname(current_dir)) # 向上两级到项目根目录
absolute_model_dir = os.path.join(project_root, 'saved_models')
model_manager = ModelManager(absolute_model_dir)

View File

@ -800,4 +800,74 @@
3. 在脚本中导入了新的 `Shop` 图标。
### 结果
仪表盘现在直接提供到“店铺管理”页面的快捷入口,提高了操作效率。
---
### 2025-07-15: 模型保存与管理系统重构
**核心目标**: 根据 `xz训练模型保存规则.md` 的规范重构整个模型的保存、读取、版本管理和API交互逻辑以提高系统的可维护性、健壮性和可扩展性。
**1. 修改 `server/utils/model_manager.py`**
* **内容**: 完全重写了 `ModelManager` 类。
* **版本管理**: 引入了 `saved_models/versions.json` 文件作为版本号的唯一来源,并实现了线程安全的读写操作。
* **路径构建**: 实现了 `get_model_version_path` 方法,用于根据训练模式、范围、类型和版本生成结构化的目录路径 (e.g., `saved_models/product/{scope}/{type}/v{N}/`)。
* **产物保存**: 实现了 `save_model_artifact` 方法,用于将模型、检查点、图表和元数据等所有训练产物统一保存到指定的版本目录中。
* **模型发现**: 重写了 `list_models` 方法,使其通过扫描目录结构来发现和列出所有模型及其元数据。
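结合上述接口,一次训练任务与 `ModelManager` 的典型交互流程大致如下(示意代码:产品ID、产物对象等均为假设的占位值,实际逻辑以各训练器为准):

```python
import torch

from utils.model_manager import model_manager

# 1. 计算模型唯一标识,并从 versions.json 申请下一个版本号
identifier = model_manager.get_model_identifier('mlstm', 'product', 'P001')
version = model_manager.get_next_version_number(identifier)

# 2. 按规则生成本次训练的版本目录路径
version_path = model_manager.get_model_version_path('mlstm', 'product', 'P001', version)

# 3. 所有产物统一通过 save_model_artifact 落盘(此处为占位数据)
model_state = {'state_dict': {'w': torch.zeros(1)}}
metadata = {'model_type': 'mlstm', 'version': f'v{version}'}
model_manager.save_model_artifact(model_state, 'model.pth', version_path)
model_manager.save_model_artifact(metadata, 'metadata.json', version_path)

# 4. 仅在训练成功结束后写回注册表,保证 versions.json 只记录成功版本
model_manager.update_version(identifier, version)
```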
**2. 修改 `server/trainers/mlstm_trainer.py`**
* **内容**: 集成新的 `ModelManager`
* 移除了旧的、手动的 `save_checkpoint``load_checkpoint` 函数。
* 在训练开始时,调用 `model_manager` 来获取模型的唯一标识符和下一个版本号。
* 在训练过程中,统一使用 `model_manager.save_model_artifact()` 来保存所有产物(`model.pth`, `checkpoint_best.pth`, `loss_curve.png`, `metadata.json`)。
* 在训练成功后,调用 `model_manager.update_version()` 来更新 `versions.json`
**3. 修改 `server/core/config.py`**
* **内容**: 清理废弃的函数。
* 删除了 `get_next_model_version`, `get_model_file_path`, `get_model_versions`, `get_latest_model_version`, 和 `save_model_version_info` 等所有与旧的、基于文件名的版本管理相关的函数。
**4. 修改 `server/core/predictor.py`**
* **内容**: 解耦与旧路径逻辑的依赖。
* 更新了 `train_model` 方法,将版本和路径管理的职责完全下放给具体的训练器。
* 更新了 `predict` 方法,使其调用新的 `load_model_and_predict`,并传递标准的模型版本目录路径。
**5. 修改 `server/predictors/model_predictor.py`**
* **内容**: 适配新的模型加载逻辑。
* 重写了 `load_model_and_predict` 函数,使其接受一个模型版本目录路径 (`model_version_path`) 作为输入。
* 函数现在从该目录下的 `metadata.json` 读取元数据,并从 `model.pth` 加载模型和 `scaler` 对象。
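其加载流程可示意如下(仅演示与版本目录的交互方式,函数内部的模型重建细节为假设):

```python
import os
import json
import torch

def load_model_and_predict(model_version_path: str):
    # 元数据(训练参数等)来自版本目录下的 metadata.json
    with open(os.path.join(model_version_path, 'metadata.json'), 'r', encoding='utf-8') as f:
        metadata = json.load(f)
    # 模型权重与 scaler 对象来自 model.pth
    checkpoint = torch.load(os.path.join(model_version_path, 'model.pth'),
                            map_location='cpu', weights_only=False)
    # 此处省略:根据 metadata 重建网络结构、反标准化并执行预测
    return metadata, checkpoint
```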
**6. 修改 `server/api.py`**
* **内容**: 更新API端点以适应新的模型管理系统。
* `/api/models`: 调用 `model_manager.list_models()` 来获取模型列表。
* `/api/models/<model_id>`: 更新详情和删除逻辑,以处理基于目录的结构。
* `/api/prediction`: 更新调用 `predictor.predict()` 的方式。
* `/api/training`: 更新了数据库保存逻辑,现在向 `model_versions` 表中存入的是模型版本目录的路径。
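以分页获取模型列表为例,前端或脚本的调用方式大致如下(URL、端口与返回字段均为示意):

```python
import requests

resp = requests.get('http://localhost:5000/api/models',
                    params={'page': 1, 'page_size': 10})
data = resp.json()
for model in data.get('models', []):
    print(model.get('model_path'), model.get('version'))
```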
---
### 2025-07-15 (续): 训练器与核心调用层重构
**核心目标**: 将新的 `ModelManager` 统一应用到项目中所有剩余的模型训练器,并重构核心调用逻辑,确保整个训练链路的架构一致性。
**1. 修改 `server/trainers/kan_trainer.py`**
* **内容**: 完全重写了 `kan_trainer.py`
* **适配接口**: 函数签名与 `mlstm_trainer` 对齐,增加了 `socketio`, `task_id`, `patience` 等参数。
* **集成 `ModelManager`**: 移除了所有旧的、手动的保存逻辑,改为在训练开始时调用 `model_manager` 获取版本号和路径。
* **标准化产物保存**: 所有产物(模型、元数据、检查点、损失曲线)均通过 `model_manager.save_model_artifact()` 保存。
* **增加健壮性**: 引入了早停Early Stopping和保存最佳检查点Best Checkpoint的逻辑。
**2. 修改 `server/trainers/tcn_trainer.py`**
* **内容**: 完全重写了 `tcn_trainer.py`,应用了与 `kan_trainer` 完全相同的重构模式。
* 移除了旧的 `save_checkpoint` 辅助函数和基于 `core.config` 的版本管理。
* 全面转向使用 `model_manager` 进行版本控制和文件保存。
* 统一了函数签名和进度反馈逻辑。
**3. 修改 `server/trainers/transformer_trainer.py`**
* **内容**: 完全重写了 `transformer_trainer.py`,完成了对所有训练器的统一重构。
* 移除了所有遗留的、基于文件名的路径拼接和保存逻辑。
* 实现了与其它训练器一致的、基于 `ModelManager` 的标准化训练流程。
**4. 修改 `server/core/predictor.py`**
* **内容**: 对核心预测器类 `PharmacyPredictor` 进行了彻底重构。
* **统一调用接口**: `train_model` 方法现在以完全一致的方式调用所有(`mlstm`, `kan`, `tcn`, `transformer`)训练器。
* **移除旧逻辑**: 删除了 `_parse_model_filename` 等所有基于文件名解析的旧方法。
* **适配 `ModelManager`**: `list_models``delete_model` 等方法现在直接调用 `model_manager` 的相应功能,不再自己实现逻辑。
* **简化 `predict`**: 预测方法现在直接接收标准化的模型版本路径 (`model_version_path`) 作为输入,逻辑更清晰。

View File

@ -1,4 +1,49 @@
根文件夹 saved_models
### 新模型文件系统设计
我们已经从“一个文件包含所有信息”的模式,转向了“目录结构本身就是信息”的模式。
基本结构:
```txt
saved_models/
├── product/
│ ├── all/
│ │ ├── MLSTM/
│ │ │ ├── v1/
│ │ │ │ ├── model.pth
│ │ │ │ ├── metadata.json
│ │ │ │ ├── loss_curve.png
│ │ │ │ └── checkpoint_best.pth
│ │ │ └── v2/
│ │ │ └── ...
│ │ └── TCN/
│ │ └── v1/
│ │ └── ...
│ └── {product_id}/
│ └── ...
├── user/
│ └── ...
└── versions.json
```
关键点解读:
*   **versions.json**: 这是整个系统的“注册表”。它记录了每一种模型(由 mode, scope, type 唯一确定)的最新版本号。所有新的训练任务都会先读取这个文件来确定下一个版本号,从而避免了冲突(内容示例见本节末尾)。
*   **目录路径**: 模型的路径现在包含了它的核心元数据。例如,路径 `saved_models/product/all/MLSTM/v1` 清晰地告诉我们:
    *   训练模式 (Mode): `product` (产品模式)
    *   范围 (Scope): `all` (适用于所有产品)
    *   模型类型 (Type): `MLSTM`
    *   版本 (Version): `v1`
*   **版本目录内容**: 每个版本目录(如 `v1/`)下都包含了一次完整训练的所有产物,并且文件名是标准化的:
    *   `model.pth`: 最终保存的、用于预测的模型。
    *   `metadata.json`: 包含训练参数、数据标准化 (scaler) 对象等重要元数据。
    *   `loss_curve.png`: 训练过程中的损失曲线图。
    *   `checkpoint_best.pth`: 训练过程中验证集上表现最好的模型检查点。
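`versions.json` 的内容形如下例(模型标识符与版本号均为示意,键由 `get_model_identifier` 生成):

```json
{
  "product_P001_mlstm": 2,
  "store_S001_P001_tcn": 1,
  "global_all_sum_transformer": 3
}
```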
## 按药品训练 ##