**Version**: 4.0 (final)
**Core idea**: The logical path is encoded into the filename itself, producing a completely flat file layout.

---

## 1. File-Saving Rules

### 1.1. Core Principle

All metadata is encoded into the filename. A logical hierarchical path (e.g. `product/P001_all/mlstm/v2`) is converted into an underscore-joined filename prefix (`product_P001_all_mlstm_v2`).

### 1.2. Storage Locations

- **Final artifacts**: all final models, metadata files, loss plots, etc. are stored directly under the `saved_models/` root.
- **Intermediate files**: all checkpoints produced during training are stored under `saved_models/checkpoints/`.

### 1.3. Filename Generation

1. **Build the logical path** from the training parameters (mode, scope, model type, version).
   - *Example*: `product/P001_all/mlstm/v2`
2. **Generate the filename prefix** by replacing every `/` in the logical path with `_`.
   - *Example*: `product_P001_all_mlstm_v2`
3. **Append a file-type suffix** to the prefix:
   - `_model.pth`
   - `_metadata.json`
   - `_loss_curve.png`
   - `_checkpoint_best.pth`
   - `_checkpoint_epoch_{N}.pth`

#### **Complete examples:**

- **Final model**: `saved_models/product_P001_all_mlstm_v2_model.pth`
- **Metadata**: `saved_models/product_P001_all_mlstm_v2_metadata.json`
- **Best checkpoint**: `saved_models/checkpoints/product_P001_all_mlstm_v2_checkpoint_best.pth`
- **Epoch 50 checkpoint**: `saved_models/checkpoints/product_P001_all_mlstm_v2_checkpoint_epoch_50.pth`

---

## 2. File-Reading Rules

1. **Determine the model metadata**: decide which training mode, scope, model type, and version to load.
2. **Build the filename prefix** using the same logic as when saving (e.g. `product_P001_all_mlstm_v2`).
3. **Locate the file**:
   - To load the final model, look up: `saved_models/{prefix}_model.pth`.
   - To load the best checkpoint, look up: `saved_models/checkpoints/{prefix}_checkpoint_best.pth`.

---

## 3. Database Storage Rules

The database serves as an index and should store just enough metadata to reconstruct the filename prefix.

#### **Suggested `models` table schema:**

| Field | Type | Description | Example |
| :--- | :--- | :--- | :--- |
| `id` | INTEGER | Primary key | 1 |
| `filename_prefix` | TEXT | **Full filename prefix; usable as a unique identifier** | `product_P001_all_mlstm_v2` |
| `model_identifier` | TEXT | Identifier used for version control (without the version) | `product_P001_all_mlstm` |
| `version` | INTEGER | Version number | `2` |
| `status` | TEXT | Model status | `completed`, `training`, `failed` |
| `created_at` | TEXT | Creation time | `2025-07-21 02:29:00` |
| `metrics_summary` | TEXT | JSON string of key performance metrics | `{"rmse": 10.5, "r2": 0.89}` |

#### **Save logic:**

- After training completes, insert one record into the table. The `filename_prefix` field is the key for locating every file associated with that training run.

---

## 4. Version-Tracking Rules

Version management relies on a `versions.json` file in the root directory so that version numbers can be incremented atomically and in a thread-safe way.

- **Filename**: `versions.json`
- **Location**: `saved_models/versions.json`
- **Structure**: a JSON object whose keys are identifiers without the version number and whose values are the latest version number (integer) for that identifier.
  - **Key**: `{prefix_core}_{model_type}` (e.g. `product_P001_all_mlstm`)
  - **Value**: `Integer`

#### **`versions.json` example:**

```json
{
  "product_P001_all_mlstm": 2,
  "store_S001_P002_transformer": 1
}
```

#### **Version-management flow:**

1. **Get a new version**: before training starts, build the key, read `versions.json`, and look up the key's value. The new version number is `value + 1` (or `1` if the key does not exist).
2. **Update the version**: after training succeeds, write the new version number back to `versions.json`. This step **must use a file lock** to prevent concurrent conflicts.

Debugging of the product-level and store-level prediction paths is complete.
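The prefix construction and locked version bump described above can be sketched as follows. This is a minimal sketch, not the project's actual implementation: the helper names `build_prefix`, `get_next_version`, and `commit_version` are illustrative, and the lock uses POSIX `fcntl.flock`, so a Windows deployment would need a different locking primitive.

```python
import fcntl
import json
import os

def build_prefix(logical_path: str) -> str:
    """Turn a logical path like 'product/P001_all/mlstm/v2'
    into the flat filename prefix 'product_P001_all_mlstm_v2'."""
    return logical_path.replace('/', '_')

def get_next_version(key: str, versions_path: str = 'saved_models/versions.json') -> int:
    """Read versions.json under an exclusive file lock and return key's next version."""
    os.makedirs(os.path.dirname(versions_path), exist_ok=True)
    # 'a+' creates the file if it does not exist yet without truncating it.
    with open(versions_path, 'a+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # released when the 'with' block exits
        f.seek(0)
        raw = f.read()
        versions = json.loads(raw) if raw.strip() else {}
        return versions.get(key, 0) + 1

def commit_version(key: str, version: int, versions_path: str = 'saved_models/versions.json') -> None:
    """After a successful training run, write the new version back under the same lock."""
    os.makedirs(os.path.dirname(versions_path), exist_ok=True)
    with open(versions_path, 'a+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.seek(0)
        raw = f.read()
        versions = json.loads(raw) if raw.strip() else {}
        versions[key] = version
        f.seek(0)
        f.truncate()  # rewrite the whole file in place
        json.dump(versions, f, indent=2)

# Usage: derive the final-model path from a logical path.
prefix = build_prefix('product/P001_all/mlstm/v2')
model_path = f"saved_models/{prefix}_model.pth"
```

Reading and writing inside the same locked `open` block is what makes the `value + 1` increment atomic with respect to other processes following the same protocol.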
"""
|
||
药店销售预测系统 - KAN模型训练函数
|
||
"""
|
||
|
||
import os
|
||
import time
|
||
import pandas as pd
|
||
import numpy as np
|
||
import torch
|
||
import torch.nn as nn
|
||
import torch.optim as optim
|
||
from torch.utils.data import DataLoader
|
||
from sklearn.preprocessing import MinMaxScaler
|
||
import matplotlib.pyplot as plt
|
||
from tqdm import tqdm
|
||
|
||
from models.kan_model import KANForecaster
|
||
from models.optimized_kan_forecaster import OptimizedKANForecaster
|
||
from utils.data_utils import create_dataset, PharmacyDataset
|
||
from utils.visualization import plot_loss_curve
|
||
from analysis.metrics import evaluate_model
|
||
from core.config import DEVICE, DEFAULT_MODEL_DIR, LOOK_BACK, FORECAST_HORIZON
|
||
|
||
def train_product_model_with_kan(product_id, product_df=None, store_id=None, training_mode='product', aggregation_method='sum', epochs=50, use_optimized=False, path_info=None, **kwargs):
    """
    Train a product sales forecasting model using a KAN model.

    Args:
        product_id: product ID
        product_df: optional pre-loaded sales DataFrame; loaded from disk if None
        store_id: store ID (used when training_mode == 'store')
        training_mode: 'product', 'store', or 'global'
        aggregation_method: aggregation method for 'global' mode
        epochs: number of training epochs
        use_optimized: whether to use the optimized KAN variant
        path_info: dict containing all output paths

    Returns:
        model: the trained model
        metrics: model evaluation metrics
    """
    if not path_info:
        raise ValueError("train_product_model_with_kan requires the 'path_info' argument.")
    # If product_df was not supplied, load data according to the training mode
    if product_df is None:
        from utils.multi_store_data_utils import load_multi_store_data, get_store_product_sales_data, aggregate_multi_store_data

        try:
            if training_mode == 'store' and store_id:
                # Load data for a specific store
                product_df = get_store_product_sales_data(
                    store_id,
                    product_id,
                    'pharmacy_sales_multi_store.csv'
                )
                training_scope = f"store {store_id}"
            elif training_mode == 'global':
                # Aggregate data across all stores
                product_df = aggregate_multi_store_data(
                    product_id,
                    aggregation_method=aggregation_method,
                    file_path='pharmacy_sales_multi_store.csv'
                )
                training_scope = f"global aggregate ({aggregation_method})"
            else:
                # Default: load this product's data from all stores
                product_df = load_multi_store_data('pharmacy_sales_multi_store.csv', product_id=product_id)
                training_scope = "all stores"
        except Exception as e:
            print(f"Failed to load multi-store data: {e}")
            # Fallback: try the original dataset
            df = pd.read_excel('pharmacy_sales.xlsx')
            product_df = df[df['product_id'] == product_id].sort_values('date')
            training_scope = "original data"
    else:
        # product_df was supplied; use it directly
        if training_mode == 'store' and store_id:
            training_scope = f"store {store_id}"
        elif training_mode == 'global':
            training_scope = f"global aggregate ({aggregation_method})"
        else:
            training_scope = "all stores"

    if product_df.empty:
        raise ValueError(f"No sales data available for product {product_id}")
    # Data-volume check
    min_required_samples = LOOK_BACK + FORECAST_HORIZON
    if len(product_df) < min_required_samples:
        error_msg = (
            f"❌ Insufficient training data\n"
            f"Current configuration requires: {min_required_samples} days of data (LOOK_BACK={LOOK_BACK} + FORECAST_HORIZON={FORECAST_HORIZON})\n"
            f"Actual data volume: {len(product_df)} days\n"
            f"Product ID: {product_id}, training mode: {training_mode}\n"
            f"Suggested fixes:\n"
            f"1. Generate more data: uv run generate_multi_store_data.py\n"
            f"2. Adjust configuration: reduce LOOK_BACK or FORECAST_HORIZON\n"
            f"3. Use global training mode to aggregate more data"
        )
        print(error_msg)
        raise ValueError(error_msg)
    product_df = product_df.sort_values('date')
    product_name = product_df['product_name'].iloc[0]

    model_type = "optimized KAN" if use_optimized else "KAN"
    print(f"Training a {model_type} sales forecasting model for product '{product_name}' (ID: {product_id})")
    print(f"Training scope: {training_scope}")
    print(f"Device: {DEVICE}")
    print(f"Model will be saved to: {path_info['base_dir']}")

    # Feature columns and target variable
    features = ['sales', 'weekday', 'month', 'is_holiday', 'is_weekend', 'is_promotion', 'temperature']

    # Preprocess the data
    X = product_df[features].values
    y = product_df[['sales']].values  # keep as a 2-D array

    # Normalize to [0, 1]
    scaler_X = MinMaxScaler(feature_range=(0, 1))
    scaler_y = MinMaxScaler(feature_range=(0, 1))

    X_scaled = scaler_X.fit_transform(X)
    y_scaled = scaler_y.fit_transform(y)

    # Train/test split (80% train, 20% test)
    train_size = int(len(X_scaled) * 0.8)
    X_train, X_test = X_scaled[:train_size], X_scaled[train_size:]
    y_train, y_test = y_scaled[:train_size], y_scaled[train_size:]

    # Build sliding-window time-series samples
    trainX, trainY = create_dataset(X_train, y_train, LOOK_BACK, FORECAST_HORIZON)
    testX, testY = create_dataset(X_test, y_test, LOOK_BACK, FORECAST_HORIZON)

    # Convert to PyTorch tensors
    trainX_tensor = torch.Tensor(trainX)
    trainY_tensor = torch.Tensor(trainY)
    testX_tensor = torch.Tensor(testX)
    testY_tensor = torch.Tensor(testY)

    # Create data loaders
    train_dataset = PharmacyDataset(trainX_tensor, trainY_tensor)
    test_dataset = PharmacyDataset(testX_tensor, testY_tensor)

    batch_size = 32
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

    # KAN model hyperparameters
    input_dim = X_train.shape[1]
    output_dim = FORECAST_HORIZON
    hidden_size = 64
    if use_optimized:
        model = OptimizedKANForecaster(
            input_features=input_dim,
            hidden_sizes=[hidden_size, hidden_size*2, hidden_size],
            output_sequence_length=output_dim
        )
    else:
        model = KANForecaster(
            input_features=input_dim,
            hidden_sizes=[hidden_size, hidden_size*2, hidden_size],
            output_sequence_length=output_dim
        )

    # Move the model to the target device
    model = model.to(DEVICE)

    criterion = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Training loop state
    train_losses = []
    test_losses = []
    start_time = time.time()
    for epoch in range(epochs):
        model.train()
        epoch_loss = 0
        for X_batch, y_batch in tqdm(train_loader, desc=f"Epoch {epoch+1}/{epochs}", leave=False):
            X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)

            # Ensure the target tensor has shape (batch_size, forecast_horizon, 1)
            if y_batch.dim() == 2:
                y_batch = y_batch.unsqueeze(-1)

            # Forward pass
            outputs = model(X_batch)

            # Ensure the output shape matches the target
            if outputs.dim() == 2:
                outputs = outputs.unsqueeze(-1)

            loss = criterion(outputs, y_batch)

            # For KAN models, add the regularization loss
            if hasattr(model, 'regularization_loss'):
                loss = loss + model.regularization_loss() * 0.01

            # Backward pass and optimization
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            epoch_loss += loss.item()

        # Average training loss for this epoch
        train_loss = epoch_loss / len(train_loader)
        train_losses.append(train_loss)

        # Evaluate on the test set
        model.eval()
        test_loss = 0
        with torch.no_grad():
            for X_batch, y_batch in test_loader:
                X_batch, y_batch = X_batch.to(DEVICE), y_batch.to(DEVICE)

                # Ensure the target tensor has the correct shape
                if y_batch.dim() == 2:
                    y_batch = y_batch.unsqueeze(-1)

                outputs = model(X_batch)

                # Ensure the output shape matches the target
                if outputs.dim() == 2:
                    outputs = outputs.unsqueeze(-1)

                loss = criterion(outputs, y_batch)
                test_loss += loss.item()

        test_loss = test_loss / len(test_loader)
        test_losses.append(test_loss)

        if (epoch + 1) % 10 == 0:
            print(f"Epoch {epoch+1}/{epochs}, Train Loss: {train_loss:.4f}, Test Loss: {test_loss:.4f}")

    # Total training time
    training_time = time.time() - start_time
    # Plot the loss curve and save it to the model directory
    loss_curve_path = path_info['loss_curve_path']
    plot_loss_curve(
        train_losses,
        test_losses,
        product_name,
        model_type,
        save_path=loss_curve_path
    )
    print(f"Loss curve saved to: {loss_curve_path}")

    # Evaluate the model
    model.eval()
    with torch.no_grad():
        test_pred = model(testX_tensor.to(DEVICE)).cpu().numpy()

    # Drop a possible trailing singleton dimension
    if len(test_pred.shape) == 3:
        test_pred = test_pred.squeeze(-1)

    # Inverse-transform predictions and ground truth back to the original scale
    test_pred_inv = scaler_y.inverse_transform(test_pred.reshape(-1, 1)).flatten()
    test_true_inv = scaler_y.inverse_transform(testY.reshape(-1, 1)).flatten()

    # Compute evaluation metrics
    metrics = evaluate_model(test_true_inv, test_pred_inv)
    metrics['training_time'] = training_time

    # Print evaluation metrics
    print("\nModel evaluation metrics:")
    print(f"MSE: {metrics['mse']:.4f}")
    print(f"RMSE: {metrics['rmse']:.4f}")
    print(f"MAE: {metrics['mae']:.4f}")
    print(f"R²: {metrics['r2']:.4f}")
    print(f"MAPE: {metrics['mape']:.2f}%")
    print(f"Training time: {training_time:.2f}s")

    model_type_name = 'optimized_kan' if use_optimized else 'kan'
    model_data = {
        'model_state_dict': model.state_dict(),
        'scaler_X': scaler_X,
        'scaler_y': scaler_y,
        'config': {
            'input_dim': input_dim,
            'output_dim': output_dim,
            'hidden_size': hidden_size,
            'hidden_sizes': [hidden_size, hidden_size*2, hidden_size],
            'sequence_length': LOOK_BACK,
            'forecast_horizon': FORECAST_HORIZON,
            'model_type': model_type_name,
            'use_optimized': use_optimized
        },
        'metrics': metrics,
        'loss_history': {
            'train': train_losses,
            'test': test_losses,
            'epochs': list(range(1, epochs + 1))
        },
        'loss_curve_path': loss_curve_path
    }

    # The R² performance gate was removed; always save the model when metrics exist
    if metrics:
        # Save the model to the path supplied via path_info
        model_path = path_info['model_path']
        torch.save(model_data, model_path)
        print(f"Model saved to: {model_path}")
    else:
        print("No evaluation metrics were produced during training; the final model will not be saved.")

    return model, metrics
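For reference, a sketch of how a caller might assemble the `path_info` dict this function expects. `make_path_info` is a hypothetical helper, not part of the codebase, but the three keys are exactly the ones the function body reads (`base_dir`, `model_path`, `loss_curve_path`), and the prefix follows the flat-naming rules in section 1.

```python
import os

def make_path_info(prefix: str, base_dir: str = 'saved_models') -> dict:
    """Build the path_info dict consumed by train_product_model_with_kan,
    deriving every path from a single flat filename prefix."""
    return {
        'base_dir': base_dir,
        'model_path': os.path.join(base_dir, f'{prefix}_model.pth'),
        'loss_curve_path': os.path.join(base_dir, f'{prefix}_loss_curve.png'),
    }

# Example call site (prefix built per the naming spec):
path_info = make_path_info('product_P001_all_kan_v1')
# model, metrics = train_product_model_with_kan('P001', epochs=50, path_info=path_info)
```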