Compare commits

10 commits: `98506927be` ... `6fe41c54cf`

| SHA1 |
|---|
| `6fe41c54cf` |
| `8ad2fb7cd9` |
| `576a83f406` |
| `34e0b42c2f` |
| `19d7f921be` |
| `82c71f26d4` |
| `8412ca739c` |
| `6486d733b7` |
| `92ebb5db20` |
| `7679e21024` |
`.gitignore` (vendored): 6 changes
```diff
@@ -10,3 +10,9 @@
 /out/
 *.iml
 /target
+
+# Eclipse / STS
+/.classpath
+/.project
+/.factorypath
+/.settings/
```
`AGENTS.md` (deleted): 40 changes
@@ -1,40 +0,0 @@

# Repository Guidelines

## Project Structure & Module Organization

- Source code: `src/main/java/com/point/strategy/**` (controllers, services, mappers, beans)
- Resources: `src/main/resources/` (MyBatis `mapper/*.xml`, fonts, config, codegen `generatorConfig.xml`)
- Web assets: `src/main/webapp/**` (static pages, templates, pdf assets)
- Libraries: `src/main/lib/*.jar` (bundled into WAR via `maven-war-plugin`)
- SQL and logs: `sql/`, `logs/`
- Build output: `target/` (WAR: `point-strategy.war`)

## Build, Test, and Development Commands

- Verify environment: `mvn -v` (Java 8, Maven required)
- Build WAR: `mvn clean package -DskipTests`
- Run tests: `mvn test`
- Regenerate MyBatis artifacts (optional): `mvn mybatis-generator:generate` (uses `src/main/resources/generatorConfig.xml`)
- Deploy: copy WAR from `target/` to an external Servlet container (e.g., Tomcat 9+). This project packages as `war` with Tomcat set to `provided`.

## Coding Style & Naming Conventions

- Java 8, 4-space indentation, max line ~120 chars.
- Packages: `com.point.strategy.<module>`
- Classes `UpperCamelCase`; methods/fields `lowerCamelCase`; constants `UPPER_SNAKE_CASE`.
- Suffixes: controllers `*Controller`, services `*Service`, mappers `*Mapper`, entities/VOs `*Entity`/`*VO`.
- Prefer Lombok where present; avoid boilerplate duplication.

## Testing Guidelines

- Framework: Spring Boot Test + JUnit (vintage excluded).
- Naming: place tests under the same package; file names end with `*Test.java`.
- Run all tests: `mvn test`; run a class: `mvn -Dtest=ClassNameTest test`.
- Aim for coverage of services and mappers; add lightweight controller tests for critical endpoints.

## Commit & Pull Request Guidelines

- Commit messages: short imperative subject; optionally follow Conventional Commits (e.g., `feat: ...`, `fix: ...`).
- PRs must include: concise description, rationale, screenshots for UI-impacting changes, and a linked issue (e.g., `Closes #123`).
- Keep PRs focused; update tests/resources when touching mappers or SQL.

## Security & Configuration Tips

- Do not commit secrets; externalize DB credentials and keystores.
- Verify bundled JARs in `src/main/lib/` are necessary and licensed.
- Large file outputs and logs should be gitignored; use `logs/` for local runs.
`Dockerfile`: 20 changes
```diff
@@ -75,7 +75,7 @@ RUN mvn install:install-file \
     -DartifactId=aspose-cells \
     -Dversion=8.5.2 \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 RUN mvn install:install-file \
     -Dfile=/tmp/local-jars/aspose-words-15.8.0-jdk16.jar \
@@ -83,7 +83,7 @@ RUN mvn install:install-file \
     -DartifactId=aspose-words \
     -Dversion=15.8.0 \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 RUN mvn install:install-file \
     -Dfile=/tmp/local-jars/jai_codec-1.1.3.jar \
@@ -91,7 +91,7 @@ RUN mvn install:install-file \
     -DartifactId=jai_codec \
     -Dversion=1.1.3 \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 RUN mvn install:install-file \
     -Dfile=/tmp/local-jars/jai_core.jar \
@@ -99,7 +99,7 @@ RUN mvn install:install-file \
     -DartifactId=jai_core \
     -Dversion=1.0.0-SNAPSHOT \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 RUN mvn install:install-file \
     -Dfile=/tmp/local-jars/jce-0.0.1.jar \
@@ -107,7 +107,7 @@ RUN mvn install:install-file \
     -DartifactId=jce \
     -Dversion=0.0.1 \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 RUN mvn install:install-file \
     -Dfile=/tmp/local-jars/agent-1.0.0.jar \
@@ -115,7 +115,7 @@ RUN mvn install:install-file \
     -DartifactId=scofd \
     -Dversion=1.0.1 \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 # Install the twain4java JAR (used for the scanner feature)
 RUN mvn install:install-file \
@@ -124,7 +124,7 @@ RUN mvn install:install-file \
     -DartifactId=twain4java \
     -Dversion=0.3.3-all \
     -Dpackaging=jar \
-    -B -s /root/.m2/settings.xml
+    -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml

 # Ensure correct permissions on the Maven repository
 RUN chown -R root:root /root/.m2 && \
@@ -139,12 +139,12 @@ RUN echo "=== 验证Maven环境 ===" && \
     echo "=== 验证本地JAR文件 ===" && \
     ls -la /tmp/local-jars/ && \
     echo "=== 下载依赖(不编译) ===" && \
-    mvn dependency:go-offline -B -s /root/.m2/settings.xml -e || \
-    mvn dependency:resolve -B -s /root/.m2/settings.xml -e
+    mvn dependency:go-offline -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml -e || \
+    mvn dependency:resolve -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml -e

 # Build the application (offline mode removed to allow dependency downloads)
 RUN echo "=== 开始构建应用 ===" && \
-    mvn clean package -DskipTests -B -s /root/.m2/settings.xml -e \
+    mvn clean package -DskipTests -B -gs /root/.m2/settings.xml -s /root/.m2/settings.xml -e \
     -Dmaven.test.skip=true \
     -Dmaven.compiler.optimize=true \
     && \
```
@@ -1,94 +0,0 @@

# 🚨 Project Hardcoded Path Analysis Report

## ✅ **Paths Already Configured Correctly**

The following paths correctly come from configuration files and are no longer hardcoded:

### 1. **Base file paths** ✅
- `@Value("${img.upload}")` - image upload path
- `@Value("${temp.path}")` - temporary file path
- `@Value("${upload.path}")` - upload root path
- `@Value("${report.path}")` - report path
- `@Value("${unzip.path}")` - unzip path

### 2. **Main files using configuration** ✅
- `ImportService.java` - tempPath configured
- `CompactShelvingController.java` - tempPath configured
- `ReportTemplateService.java` - reportPath configured
- `OADockingIml.java` - unzipPath configured

## ❌ **Problem Paths Found**

### 1. **Hardcoded UReport download paths** 🚨

#### **File**: `src/main/java/com/point/strategy/docSimpleArrange/controller/DocSimpleController.java`
```java
String fileName = "创建文书简化pdf文件.pdf";
String downLoadPath = "D:\\ureportfiles\\"+fileName; // ❌ hardcoded
```

#### **File**: `src/main/java/com/point/strategy/oaDocking/controller/DocTraditionVolumeOaController.java`
```java
String downLoadPath = "C:\\ureportfiles\\"+fileName; // ❌ hardcoded
outputStream = new FileOutputStream(new File("C:\\ureportfiles\\"+fileName)); // ❌ hardcoded
```

### 2. **Hardcoded paths in test code** ⚠️

#### **File**: `src/main/java/com/point/strategy/oaDocking/service/YcjSystemIntegration.java`
```java
// test code in the main method
new FileInputStream(new File("D:\\2\\222.pdf")); // ⚠️ hardcoded test path
new File("D:\\2\\222.pdf").length(); // ⚠️ hardcoded test path
```

### 3. **Example paths in comments** 💡
```java
// These are comments and examples; they normally do not affect runtime
// DbOperate.dbBackUp("root", "123456", "zaizhi", "d:/3", backName);
// if (exportDatabaseTool("192.168.1.112", "3306","root", "123456", "d:/3", "zaizhi.sql", "zaizhi")) {
```

## 🔧 **Suggested Fixes**

### 1. **Fix immediately: the UReport download path**

#### **Option A: add a configuration entry**
```yaml
# add to application-prod.yml
ureport:
  download:
    path: ${UREPORT_DOWNLOAD_PATH:/app/data/ureport}
```

#### **Option B: reuse report.path**
```java
// suggested change
@Value("${report.path}")
private String reportPath;

String downLoadPath = reportPath + File.separator + fileName;
```
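A related note on Option B: `java.nio.file.Paths` joins segments with the platform's separator, so the download path stays portable without concatenating `File.separator` by hand. A minimal standalone sketch (class name and literal values are illustrative, not from the project):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class DownloadPathDemo {
    public static void main(String[] args) {
        // In the real service, reportPath would be injected via @Value("${report.path}")
        String reportPath = "/app/data/ureport";
        String fileName = "example.pdf";

        // Paths.get joins segments with the OS-specific separator
        Path downLoadPath = Paths.get(reportPath, fileName);
        System.out.println(downLoadPath);
    }
}
```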

### 2. **Clean up the test code**
```java
// Remove the hardcoded paths from the main method, or make them configurable
```

## 📊 **Risk Assessment**

| Issue type | Impact | Fix effort | Priority |
|---|---|---|---|
| UReport download path | 🔴 High | 🟢 Low | **P0** |
| Hardcoded test paths | 🟡 Medium | 🟢 Low | P2 |
| Example paths in comments | 🟢 Low | 🟢 Low | P3 |

## 🎯 **Recommended Fix Order**

1. **Step 1**: fix the hardcoded UReport download path (P0)
2. **Step 2**: clean up the YcjSystemIntegration test code (P2)
3. **Step 3**: consolidate path handling in a shared utility class (P1)

---

**Summary**: the main problem is that UReport file downloads use hardcoded Windows paths; this needs to be fixed immediately so the system can be deployed on Linux/Mac.
`IFLOW.md` (deleted): 213 changes
@@ -1,213 +0,0 @@

# Digital Archive Standard System (数字档案标准系统)

## Project Overview

This is an enterprise digital archive management system built on Spring Boot 2.3.7, designed for the digitization, storage, retrieval, and use of archives. The system supports multiple databases (MySQL, KingbaseES, Dameng) and integrates OCR, full-text search, report generation, and workflow features.

### Core Technology Stack

- **Backend framework**: Spring Boot 2.3.7.RELEASE
- **Databases**: MySQL 5.1.6 / KingbaseES 8.6.0 / Dameng
- **ORM**: MyBatis 3.4.5 + MyBatis-Spring-Boot-Starter 1.2.0
- **Pagination**: PageHelper 4.1.0
- **Connection pool**: Druid 1.1.9
- **Cache**: Redis
- **Search engine**: Elasticsearch
- **Reporting**: UReport2 2.2.9
- **Document processing**:
  - Aspose Words 15.8.0 (Word documents)
  - Aspose Cells 8.5.2 (Excel documents)
  - PDFBox 2.0.27 (PDF)
  - Tess4j 4.5.3 (OCR)
- **API docs**: Swagger 2.9.2
- **Utilities**: Hutool 5.5.2, Lombok 1.18.16

## Project Structure

```
src/main/java/com/point/strategy/
├── controller/      # controllers
├── service/         # services
├── dao/             # data access layer
├── bean/            # entity classes
├── common/          # shared utilities
├── workFlow/        # workflow module
├── elasticsearch/   # search module
├── ocr/             # OCR module
├── pdf/             # PDF processing module
├── webSocket/       # WebSocket module
├── webService/      # WebService module
└── PointStrategyApplication.java  # main entry point
```

## Core Feature Modules

### 1. Archive Management
- **File management** (`FileManageController`): upload, download, batch operations
- **Archive intake** (`ArchivesReceiveController`): receiving and registering archives
- **Archive transfer** (`ArchivesTransferController`): transfer workflows
- **Archive lending** (`BorrowingFilesController`): lending management

### 2. Storehouse Management
- **Storehouse entities** (`StorehouseEntityController`): storehouse entity management
- **Storehouse locations** (`StorehousePointController`): location management
- **Temperature/humidity monitoring** (`TemperatureController`): environment monitoring
- **Archive containers** (`FileBoxController`, `FileFrameController`): container management

### 3. Search & Statistics
- **Full-text search** (`FulltextSearchLogController`): Elasticsearch-based full-text search
- **Statistics** (`StatisticsController`, `HomeStatisticsController`): statistical reports
- **Four-attribute checks** (`fourCheck`): archive four-attribute verification

### 4. Workflow & Approval
- **Workflow** (`workFlow`): archive approval flows
- **Approval settings** (`ApproveSettingController`): approval flow configuration
- **Appointments** (`ArchiveAppointmentController`): archive reservations

### 5. Metadata Management
- **Metadata standards** (`metaData`): metadata standard management
- **Entity structure** (`TentityStructDescriptionController`): entity structure descriptions
- **Data dictionary** (`DictController`): data dictionary management

## Build & Run

### Requirements
- JDK 1.8+
- Maven 3.6+
- MySQL 5.7+ / KingbaseES 8.6.0 / Dameng
- Redis 3.0+
- Elasticsearch 7.x (optional)

### Build Commands

```bash
# compile
mvn clean compile

# package
mvn clean package

# package, skipping tests
mvn clean package -DskipTests

# run
mvn spring-boot:run

# generate MyBatis code
mvn mybatis-generator:generate
```

### Runtime Configuration

#### Development (application-dev.yml)
- Port: 9081
- Database: MySQL (100.64.11.2:3311)
- Redis: 100.64.11.2:6379

#### Production
Adjust the database connection and related settings in `application.properties`:

```properties
# database
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://your-host:port/database
spring.datasource.username=your-username
spring.datasource.password=your-password

# KingbaseES (uncomment to use)
#spring.datasource.driverClassName=com.kingbase8.Driver
#spring.datasource.url=jdbc:kingbase8://your-host:port/database
```

## Development Conventions

### Code Style
- Use Lombok to reduce boilerplate
- Follow RESTful API design
- Always return `AjaxJson` as the response format
- Document APIs with Swagger annotations

### Database Conventions
- Table names use lowercase letters and underscores
- Column names map to camelCase fields; MyBatis converts automatically
- The primary key is always `id`
- Timestamp columns use the `datetime` type
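The automatic underscore-to-camelCase conversion mentioned above relies on a MyBatis setting; with MyBatis-Spring-Boot it can be switched on in configuration. A sketch (the property name is the standard MyBatis option; its exact placement in this project's config is an assumption):

```properties
# map column names like create_time to fields like createTime automatically
mybatis.configuration.map-underscore-to-camel-case=true
```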

### File Storage Conventions
- Upload path: `${upload.path}`
- Temp path: `${temp.path}`
- Unzip path: `${unzip.path}`
- Image upload path: `${img.upload}`
- Report output path: `${report.path}`

## Deployment

### Traditional
1. Build the WAR file
2. Deploy to Tomcat 8.5+
3. Configure database and Redis connections
4. Configure the file upload paths

### Docker
```dockerfile
FROM openjdk:8-jre-alpine
COPY target/point-strategy.war app.war
EXPOSE 9081
CMD ["java", "-jar", "app.war"]
```

## Third-Party Integrations

### OCR
- Youhong OCR: called over an HTTP API
- Tess4j: local OCR

### Reporting
- UReport2: visual report designer
- Path: `/ureport/*`

### Full-Text Search
- Elasticsearch via Spring Data Elasticsearch
- Multi-field search with highlighting

## Monitoring & Logging

### Logging
- Location: `logs/`
- Levels: DEBUG/INFO/WARN/ERROR
- Rotation: by size and time
- Max file size: 500MB
- Retention: 20 days

### Performance Monitoring
- Database connection pool monitoring
- Redis connection pool monitoring
- API response time monitoring

## FAQ

### Database connection issues
1. Check that the database service is running
2. Verify the connection string and credentials
3. Confirm network connectivity

### File upload issues
1. Check upload path permissions
2. Confirm sufficient disk space
3. Verify file size limits

### OCR issues
1. Confirm the OCR service configuration
2. Check image format and quality
3. Verify the local Tess4j environment

## Recommended Tools

- IDE: IntelliJ IDEA
- Database: DBeaver, Navicat
- API testing: Postman, Swagger UI
- Version control: Git

## Contact

For questions, contact the development team or file an issue.
@@ -1,156 +0,0 @@

# ImportService.java Hardcoded Path Deep Analysis Report

## Overview
A deep analysis of the `hookUp` family of methods in `src/main/java/com/point/strategy/datas/service/ImportService.java` found multiple hardcoded path problems.

## Hardcoded Path Findings

### 1. hookUpNet method (line 1611)
**Location**: `ImportService.java:1611-1640`
**Hardcoded paths**:
```java
if(system.equals("win")){
    filePath = "D:\\fileAll\\";    // hardcoded Windows path
    targetPath = "D:\\testFile";   // hardcoded Windows path
    FileUtil2.makedir(filePath);
}else {
    filePath = "/home/fileAll/";   // hardcoded Linux path
    targetPath = "/home/testFile"; // hardcoded Linux path
    FileUtil2.makedir(filePath);
}
```

**Impact**: this method handles network file uploads; on a different environment the paths will not exist and the upload will fail.

### 2. hookUpTwoNet method (line 1650)
**Location**: `ImportService.java:1650-1679`
**Hardcoded paths**:
```java
if(system.equals("win")){
    filePath = "D:\\fileAllTwo\\";    // hardcoded Windows path
    targetPath = "D:\\testFileTwo";   // hardcoded Windows path
    FileUtil2.makedir(filePath);
}else {
    filePath = "/home/fileAllTwo/";   // hardcoded Linux path
    targetPath = "/home/testFileTwo"; // hardcoded Linux path
    FileUtil2.makedir(filePath);
}
```

**Impact**: this is the method the user specifically flagged; it handles dual-network file uploads and has the same hardcoded-path problem.

### 3. hookUpJztNet method (line 2742)
**Location**: `ImportService.java:2742-2770`
**Hardcoded paths**:
```java
if(system.equals("win")){
    filePath = "D:\\fileAll\\";   // hardcoded Windows path
    targetPath = "D:\\testFile";  // hardcoded Windows path
    FileUtil2.makedir(filePath);
}else {
    filePath = "/opt/fileAll/";   // hardcoded Linux path (note: /opt here, not /home)
    targetPath = "/opt/testFile"; // hardcoded Linux path
    FileUtil2.makedir(filePath);
}
```

**Impact**: this method handles JZT (极态通) file uploads; the Linux paths use `/opt` while the other methods use `/home`, which is inconsistent.

### 4. hookUpXiaoGan method (line 2780)
**Location**: `ImportService.java:2780+`
**Hardcoded table name**:
```java
String tableName = "wsdajh_table_20220402164528"; // hardcoded table name
```

**Impact**: this method implements Xiaogan-specific business logic; the hardcoded table name cannot adapt to different database configurations.
## Related Methods

### 5. hookUp method (line 891)
**Location**: `ImportService.java:891-990`
**Path usage**: this method uses configured paths, which is comparatively good:
```java
String saveUrl = uploadPath + File.separator + "uploadFile" + File.separator;
```

### 6. hookUpTwoZip method (line 2003)
**Location**: `ImportService.java:2003-2100`
**Path usage**: this method also uses configured paths:
```java
String saveUrl = uploadPath + File.separator + "uploadFile" + File.separator;
```

## Why Hardcoded Paths Are a Problem

### 1. Environment coupling
- **Windows paths**: `D:\fileAll\`, `D:\testFile`, etc. do not exist on Linux
- **Linux paths**: `/home/fileAll/` may lack the required permissions on some distributions
- **Inconsistency**: different methods use different Linux prefixes (`/home/` vs `/opt/`)

### 2. Deployment risk
- The paths may not exist inside Docker containers
- Cloud servers may have different permission setups
- Cross-platform deployments will fail

### 3. Maintainability
- Path changes require code changes
- Different environments need different code versions
- Violates the 12-Factor App principle

## Suggested Solutions

### 1. Configured paths
Add configuration to `application.yml`:
```yaml
file:
  upload:
    temp:
      win: "D:\\fileAll\\"
      linux: "/home/fileAll/"
    processing:
      win: "D:\\testFile"
      linux: "/home/testFile"
```

### 2. Environment variables
Use system environment variables:
```java
String filePath = System.getenv("ARCHIVE_FILE_PATH") != null ?
        System.getenv("ARCHIVE_FILE_PATH") : "/default/path";
```

### 3. Dynamic path generation
Derive paths from the application's base directory:
```java
String basePath = System.getProperty("user.home") + File.separator + "archive";
String filePath = basePath + File.separator + "fileAll" + File.separator;
```

### 4. Centralized path management
Create a dedicated path management class:
```java
@Component
public class PathManager {
    @Value("${archive.temp.path:./temp}")
    private String tempPath;

    @Value("${archive.upload.path:./uploads}")
    private String uploadPath;

    public String getTempFilePath() {
        return tempPath + File.separator + "fileAll" + File.separator;
    }
}
```

## Fix Priorities

1. **High**: the `hookUpTwoNet` method - the feature the user specifically flagged
2. **High**: the `hookUpJztNet` method - has the path inconsistency
3. **Medium**: the `hookUpNet` method - regular network upload
4. **Low**: the `hookUpXiaoGan` method - niche business feature

## Summary

The `hookUp` methods in `ImportService.java` contain multiple hardcoded paths, concentrated in the network file-upload methods. They cause cross-platform deployment failures, tight environment coupling, and maintenance pain. They should be moved into configuration (files or environment variables) as soon as possible to improve portability and maintainability.
@@ -1,258 +0,0 @@

# ImportService Hardcoded Path Fix Summary

## Overview
All hardcoded file paths in `ImportService.java` (everything except one table name) have been moved into configuration, improving the system's portability and maintainability.

## Changes

### 1. Configuration files

#### application-dev.yml (development)
```yaml
# new network-upload file path configuration
net:
  upload:
    win:
      filePath: "D:\\fileAll\\"
      targetPath: "D:\\testFile"
    linux:
      filePath: "/home/fileAll/"
      targetPath: "/home/testFile"
  upload-two:
    win:
      filePath: "D:\\fileAllTwo\\"
      targetPath: "D:\\testFileTwo"
    linux:
      filePath: "/home/fileAllTwo/"
      targetPath: "/home/testFileTwo"
  jzt:
    win:
      filePath: "D:\\fileAll\\"
      targetPath: "D:\\testFile"
    linux:
      filePath: "/opt/fileAll/"
      targetPath: "/opt/testFile"
```

#### application-prod.yml (production)
```yaml
# new network-upload file path configuration (Docker-safe paths)
net:
  upload:
    win:
      filePath: ${NET_UPLOAD_WIN_FILEPATH:"D:\\fileAll\\"}
      targetPath: ${NET_UPLOAD_WIN_TARGETPATH:"D:\\testFile"}
    linux:
      filePath: ${NET_UPLOAD_LINUX_FILEPATH:"/app/data/fileAll/"}
      targetPath: ${NET_UPLOAD_LINUX_TARGETPATH:"/app/data/testFile"}
  upload-two:
    win:
      filePath: ${NET_UPLOAD_TWO_WIN_FILEPATH:"D:\\fileAllTwo\\"}
      targetPath: ${NET_UPLOAD_TWO_WIN_TARGETPATH:"D:\\testFileTwo"}
    linux:
      filePath: ${NET_UPLOAD_TWO_LINUX_FILEPATH:"/app/data/fileAllTwo/"}
      targetPath: ${NET_UPLOAD_TWO_LINUX_TARGETPATH:"/app/data/testFileTwo"}
  jzt:
    win:
      filePath: ${NET_JZT_WIN_FILEPATH:"D:\\fileAll\\"}
      targetPath: ${NET_JZT_WIN_TARGETPATH:"D:\\testFile"}
    linux:
      filePath: ${NET_JZT_LINUX_FILEPATH:"/app/data/fileAll/"}
      targetPath: ${NET_JZT_LINUX_TARGETPATH:"/app/data/testFile"}
```
### 2. Java changes

#### 2.1 New configuration properties
Twelve configuration properties were added to `ImportService.java`:
```java
// network-upload file path configuration
@Value("${net.upload.win.filePath}")
private String netUploadWinFilePath;

@Value("${net.upload.win.targetPath}")
private String netUploadWinTargetPath;

@Value("${net.upload.linux.filePath}")
private String netUploadLinuxFilePath;

@Value("${net.upload.linux.targetPath}")
private String netUploadLinuxTargetPath;

@Value("${net.upload-two.win.filePath}")
private String netUploadTwoWinFilePath;

@Value("${net.upload-two.win.targetPath}")
private String netUploadTwoWinTargetPath;

@Value("${net.upload-two.linux.filePath}")
private String netUploadTwoLinuxFilePath;

@Value("${net.upload-two.linux.targetPath}")
private String netUploadTwoLinuxTargetPath;

@Value("${net.jzt.win.filePath}")
private String netJztWinFilePath;

@Value("${net.jzt.win.targetPath}")
private String netJztWinTargetPath;

@Value("${net.jzt.linux.filePath}")
private String netJztLinuxFilePath;

@Value("${net.jzt.linux.targetPath}")
private String netJztLinuxTargetPath;
```

#### 2.2 New path-resolution helper
```java
/**
 * Resolve the network-upload path configuration for the current operating system.
 * @param type path type: upload, upload-two, jzt
 * @return a Map containing filePath and targetPath
 */
private Map<String, String> getNetUploadPathConfig(String type) {
    Map<String, String> pathConfig = new HashMap<>();

    if ("upload".equals(type)) {
        if (system.equals("win")) {
            pathConfig.put("filePath", netUploadWinFilePath);
            pathConfig.put("targetPath", netUploadWinTargetPath);
        } else {
            pathConfig.put("filePath", netUploadLinuxFilePath);
            pathConfig.put("targetPath", netUploadLinuxTargetPath);
        }
    } else if ("upload-two".equals(type)) {
        if (system.equals("win")) {
            pathConfig.put("filePath", netUploadTwoWinFilePath);
            pathConfig.put("targetPath", netUploadTwoWinTargetPath);
        } else {
            pathConfig.put("filePath", netUploadTwoLinuxFilePath);
            pathConfig.put("targetPath", netUploadTwoLinuxTargetPath);
        }
    } else if ("jzt".equals(type)) {
        if (system.equals("win")) {
            pathConfig.put("filePath", netJztWinFilePath);
            pathConfig.put("targetPath", netJztWinTargetPath);
        } else {
            pathConfig.put("filePath", netJztLinuxFilePath);
            pathConfig.put("targetPath", netJztLinuxTargetPath);
        }
    }

    return pathConfig;
}
```

#### 2.3 Modified methods

**hookUpNet method** (line 1611):
```java
// before
if(system.equals("win")){
    filePath = "D:\\fileAll\\";
    targetPath = "D:\\testFile";
    FileUtil2.makedir(filePath);
}else {
    filePath = "/home/fileAll/";
    targetPath = "/home/testFile";
    FileUtil2.makedir(filePath);
}

// after
Map<String, String> pathConfig = getNetUploadPathConfig("upload");
String filePath = pathConfig.get("filePath");
String targetPath = pathConfig.get("targetPath");
FileUtil2.makedir(filePath);
```

**hookUpTwoNet method** (line 1650) - specifically flagged by the user:
```java
// before
if(system.equals("win")){
    filePath = "D:\\fileAllTwo\\";
    targetPath = "D:\\testFileTwo";
    FileUtil2.makedir(filePath);
}else {
    filePath = "/home/fileAllTwo/";
    targetPath = "/home/testFileTwo";
    FileUtil2.makedir(filePath);
}

// after
Map<String, String> pathConfig = getNetUploadPathConfig("upload-two");
String filePath = pathConfig.get("filePath");
String targetPath = pathConfig.get("targetPath");
FileUtil2.makedir(filePath);
```

**hookUpJztNet method** (line 2742):
```java
// before
if(system.equals("win")){
    filePath = "D:\\fileAll\\";
    targetPath = "D:\\testFile";
    FileUtil2.makedir(filePath);
}else {
    filePath = "/opt/fileAll/";
    targetPath = "/opt/testFile";
    FileUtil2.makedir(filePath);
}

// after
Map<String, String> pathConfig = getNetUploadPathConfig("jzt");
String filePath = pathConfig.get("filePath");
String targetPath = pathConfig.get("targetPath");
FileUtil2.makedir(filePath);
```

### 3. Hardcoding kept

As requested, the following hardcoded value was kept:
- the table name in the `hookUpXiaoGan` method: `String tableName = "wsdajh_table_20220402164528";`

## Results

### 1. Environment compatibility
- ✅ Cross-platform deployment (Windows/Linux)
- ✅ Docker container environments
- ✅ Per-environment configuration differences

### 2. Maintainability
- ✅ Path configuration managed in one place
- ✅ Easy to change and extend
- ✅ Follows the 12-Factor App principles

### 3. Security
- ✅ Production uses safe Docker paths under `/app/data/`
- ✅ Environment-variable overrides supported
- ✅ No hardcoded sensitive paths

### 4. Backward compatibility
- ✅ Development keeps the original paths
- ✅ Existing business logic unaffected
- ✅ API unchanged

## Usage

### Development
Use the default configured paths, or edit them in `application-dev.yml`.

### Production
Configuration can be overridden with environment variables:
```bash
export NET_UPLOAD_LINUX_FILEPATH="/custom/path/fileAll/"
export NET_UPLOAD_LINUX_TARGETPATH="/custom/path/testFile"
```

### Docker
Set the environment variables in the Docker Compose or Kubernetes configuration.
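For example, a Docker Compose service entry could pass the overrides like this (the service name and values are illustrative, not from the project):

```yaml
services:
  archive-app:
    image: point-strategy:latest
    environment:
      NET_UPLOAD_LINUX_FILEPATH: /app/data/fileAll/
      NET_UPLOAD_LINUX_TARGETPATH: /app/data/testFile
```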

## Verification
- ✅ Compiles: `mvn compile -q`
- ✅ No syntax errors
- ✅ No residual hardcoded paths
- ✅ Original functionality intact

## Summary
The hardcoded paths in the three network file-upload methods of `ImportService.java` are now fully configuration-driven, improving portability, maintainability, and deployment flexibility while leaving the original business logic untouched, which keeps the system stable.
@@ -1,99 +0,0 @@

# Jar Optimization Plan - Solving the 800MB Problem

## Problem Analysis

The project's 800MB jar comes mainly from the following dependency classes:

### 1. System-scope dependencies (packaged directly)
- aspose-words-15.8.0-jdk16.jar (9.8MB)
- aspose-cells-8.5.2.jar (5.8MB)
- twain4java-0.3.3-all.jar (2.5MB)
- jai_core.jar (1.5MB)
- agent-1.0.0.jar (224KB)

### 2. Video processing dependencies (largest contributor)
- javacv + ffmpeg-platform (typically tens to hundreds of MB)

### 3. Duplicate dependencies
- jxl is declared twice

### 4. Redundant PDF libraries
- pdfbox, itextpdf, and ofdrw-full likely overlap in functionality

## Optimization Strategies

### Option 1: modularize by feature (recommended)
```xml
<!-- move processing into a standalone module instead of the main app -->
<dependency>
    <groupId>com.point.strategy</groupId>
    <artifactId>document-processor</artifactId>
    <version>1.0.0</version>
    <scope>runtime</scope> <!-- runtime only -->
</dependency>
```

### Option 2: split deployments
- Main app (roughly 50-100MB)
- Document-processing service (deployed separately)
- OCR service (deployed separately)

### Option 3: reduce scope=system dependencies
```xml
<!-- change to provided, or exclude -->
<dependency>
    <groupId>com.aspose</groupId>
    <artifactId>aspose-words</artifactId>
    <version>15.8.0</version>
    <scope>provided</scope> <!-- compile-time only; not packaged -->
</dependency>
```

## Immediately Actionable Optimizations

### 1. Remove duplicate dependencies
```xml
<!-- delete the duplicate jxl dependency -->
```

### 2. Change the video-processing scope
```xml
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv</artifactId>
    <version>1.4.1</version>
    <scope>provided</scope> <!-- changed to provided -->
</dependency>
```

### 3. Use Spring Boot layered packaging
```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <requiresUnpack>
                    <dependency>
                        <groupId>com.aspose</groupId>
                        <artifactId>aspose-words</artifactId>
                    </dependency>
                </requiresUnpack>
            </configuration>
        </plugin>
    </plugins>
</build>
```
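On the layered-packaging idea above: besides `requiresUnpack`, the same plugin (Spring Boot 2.3+) can also emit Docker-friendly layers, so heavy, rarely-changing dependencies are cached in their own image layer. A sketch of the extra configuration, assuming spring-boot-maven-plugin 2.3 or later:

```xml
<configuration>
    <layers>
        <enabled>true</enabled>
    </layers>
</configuration>
```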

## Expected Results

Optimization should shrink the 800MB jar to:
- **Conservative estimate**: 200-300MB (70% reduction)
- **Aggressive optimization**: 50-100MB (90% reduction)

## Rollout

1. **Phase 1**: remove duplicate dependencies, adjust scopes
2. **Phase 2**: modularize the document-processing features
3. **Phase 3**: consider a microservice split
@@ -1,202 +0,0 @@

# Digital Archive System Script Usage Guide

## Unified Management Script

All functionality is consolidated into a single `archive-manager.sh` script that provides complete system management.

### Basic Usage

```bash
# show help
./archive-manager.sh -h

# build the image
./archive-manager.sh build

# deploy (default directory)
./archive-manager.sh deploy

# deploy to a specific directory
./archive-manager.sh deploy /opt/digital-archive

# check the environment
./archive-manager.sh check

# start services
./archive-manager.sh start

# stop services
./archive-manager.sh stop

# restart services
./archive-manager.sh restart

# view logs
./archive-manager.sh logs
./archive-manager.sh logs -f  # follow logs

# show status
./archive-manager.sh status

# update services
./archive-manager.sh update

# clean up resources
./archive-manager.sh clean
```

### Advanced Options

```bash
# verbose output
./archive-manager.sh -v build

# force (skip confirmations)
./archive-manager.sh -f deploy

# quiet mode
./archive-manager.sh -q build
```
## Features

### 1. Unified management
- One script for every task
- Auto-detects the Docker Compose version
- Consistent logging and error handling
- Colored output

### 2. Environment checks
- Checks the Docker environment
- Verifies the proxy network
- Checks required containers (MySQL, Redis, Elasticsearch)
- Tests network connectivity

### 3. Smart deployment
- Creates the deployment directory structure automatically
- Generates environment configuration files
- Creates management scripts
- Sets correct permissions

### 4. Service management
- Start/stop/restart services
- View logs and status
- Update services (pull the latest images)
- Clean up Docker resources

### 5. Optimized configuration
- Fault-tolerant Dockerfile (Dockerfile.robust)
- Maven mirrors in mainland China for faster builds
- Alpine package-manager mirrors in mainland China
- Multi-mirror fallback mechanism

## Deployment Directory Layout

```
/root/server/archive/
├── docker-compose.yml   # service orchestration
├── .env                 # environment variables
├── start.sh             # start script
├── stop.sh              # stop script
├── update.sh            # update script
├── data/                # data directory
│   ├── upload/
│   ├── temp/
│   ├── unzip/
│   ├── images/
│   ├── reports/
│   └── elasticsearch/
└── logs/                # log directory
```
## Requirements

### Required services
- Docker and Docker Compose
- The proxy network: `docker network create proxy`
- A MySQL container (attached to the proxy network)
- A Redis container (attached to the proxy network)

### Optional services
- An Elasticsearch container (attached to the proxy network)

## Troubleshooting

### Build failures
```bash
# check the Docker environment
./archive-manager.sh -v check

# rebuild
./archive-manager.sh clean
./archive-manager.sh build
```

### Service startup failures
```bash
# follow the logs
./archive-manager.sh logs -f

# check service status
./archive-manager.sh status

# restart services
./archive-manager.sh restart
```

### Network issues
```bash
# inspect the network configuration
docker network ls
docker network inspect proxy

# test connectivity
docker run --rm --network proxy alpine ping mysql
```
## Configuration files

### settings.xml
Maven configuration, containing:
- Several Chinese mirror repositories
- Repository and plugin-repository settings
- Optional proxy settings

### Dockerfile.robust
Hardened Dockerfile, containing:
- A multi-stage build
- Chinese Alpine mirror sources
- Mirror-fallback logic
- Security hardening

### docker-compose.simple.yml
Simplified orchestration file, containing:
- The application service
- The Elasticsearch service
- Network configuration
## Best practices

1. **First deployment**
   ```bash
   ./archive-manager.sh check
   ./archive-manager.sh build
   ./archive-manager.sh deploy
   ./archive-manager.sh start
   ```

2. **Routine maintenance**
   ```bash
   ./archive-manager.sh status   # Check status
   ./archive-manager.sh logs -f  # Follow logs
   ./archive-manager.sh update   # Update services
   ```

3. **Debugging**
   ```bash
   ./archive-manager.sh -v status  # Detailed status
   ./archive-manager.sh clean      # Clean up resources
   ./archive-manager.sh build      # Rebuild the image
   ```

A single `archive-manager.sh` script now covers every management task.
@@ -1,75 +0,0 @@
# 🎯 800MB jar optimization plan

## Current state

✅ **Done**:
- Removed duplicate dependencies (jxl, metadata-extractor, commons-imaging)
- Expected saving: roughly 2-5MB

❌ **Main problems**:
- The JavaCV + FFmpeg platform natives take 400-500MB
- Bundled system jars take about 20MB
- The code depends on JavaCV classes directly
## 🏆 Recommended solutions

### Option 1: split into microservices (best practice)
```yaml
# Main application (target size: 80-120MB)
point-strategy-main/
├── core archive management
├── file upload/download
├── database access
└── basic OCR

# Video-processing service (deployed separately)
video-processing-service/
├── video transcoding
├── JavaCV + FFmpeg
└── talks to the main app over an API
```
### Option 2: externalize dependencies (quick fix)
```bash
# 1. Move the JavaCV jars to an external lib directory
cp ffmpeg-platform*.jar /app/lib/
cp javacv*.jar /app/lib/

# 2. Adjust the launch command
java -cp "point-strategy.jar:/app/lib/*" com.point.strategy.PointStrategyApplication

# 3. Expected main jar size: 120-150MB
```
### Option 3: Docker layer optimization
```dockerfile
# Layered Dockerfile: copy the large, rarely-changing video libs first
# so their layer is cached across application rebuilds
FROM openjdk:8-jre-alpine
COPY video-libs/ /app/lib/
COPY point-strategy.jar app.jar
CMD ["java", "-Djava.library.path=/app/lib", "-jar", "app.jar"]
```
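Before choosing an option it helps to see where the megabytes actually go. One way is to group the `unzip -l` listing of the fat jar by library name and sum the entry sizes. A sketch, run against sample lines (the paths and sizes below are invented for illustration; point it at `unzip -l point-strategy.jar` in practice):

```shell
# Group jar entries under BOOT-INF/lib/ by library name and sum the sizes.
# sample_listing mimics `unzip -l` body lines: <size> <date> <time> <path>
sample_listing='
 104857600 2024-01-01 10:00 BOOT-INF/lib/ffmpeg-platform-5.0.jar
  52428800 2024-01-01 10:00 BOOT-INF/lib/javacv-1.5.jar
   1048576 2024-01-01 10:00 BOOT-INF/lib/commons-io-2.11.jar
'
printf '%s\n' "$sample_listing" | awk '
    $4 ~ /BOOT-INF\/lib\// {
        n = $4; sub(/.*\//, "", n); sub(/-[0-9].*/, "", n)   # strip path and version
        sz[n] += $1
    }
    END { for (n in sz) printf "%-20s %d MB\n", n, sz[n] / 1048576 }
' | sort -k2 -rn
```

With the sample input this reports `ffmpeg-platform` at 100 MB, `javacv` at 50 MB, and `commons-io` at 1 MB, which is the shape of the problem described above: a couple of platform-native libraries dominate the jar.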
## 🚀 Immediate stop-gap

For a quick win, use **Option 2**:

1. **Keep the JavaCV dependency** (needed at compile time)
2. **Externalize it at deploy time** (separate it at startup)
3. **Expected main jar size**: 150-200MB (roughly a 75% reduction)

## 📈 Expected results

| Option | Main jar size | Deployment complexity | Recommendation |
|------|-------------|------------|--------|
| Microservice split | 80-120MB | High | ⭐⭐⭐⭐⭐ |
| External dependencies | 150-200MB | Medium | ⭐⭐⭐⭐ |
| Status quo | 598MB | Low | ⭐ |

## 🎯 Suggested roadmap

1. **Short term**: externalize dependencies to shrink the jar quickly
2. **Mid term**: split out the video-processing module
3. **Long term**: full microservice refactor

Which option should be implemented?
@@ -1,622 +0,0 @@
#!/bin/bash

# Unified management script for the digital archive system
set -e

# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Project configuration
PROJECT_NAME="digital-archive"
VERSION="latest"
DEFAULT_DEPLOY_DIR="/root/server/archive"
DOCKERFILE="Dockerfile"
VERBOSE=false

# Print the banner
show_banner() {
    echo -e "${CYAN}"
    echo "╔══════════════════════════════════════════════════╗"
    echo "║        Digital Archive Management Script         ║"
    echo "║                                                  ║"
    echo "║  Build, deploy, check, and manage the system     ║"
    echo "║                      v2.0                        ║"
    echo "╚══════════════════════════════════════════════════╝"
    echo -e "${NC}"
}

# Print usage information
show_help() {
    show_banner
    echo -e "${GREEN}Usage: $0 <command> [options] [args]${NC}"
    echo ""
    echo -e "${BLUE}Commands:${NC}"
    echo "  build      Build the Docker image"
    echo "  deploy     Deploy the application to a directory"
    echo "  check      Check the environment"
    echo "  start      Start services"
    echo "  stop       Stop services"
    echo "  restart    Restart services"
    echo "  logs       Show logs"
    echo "  status     Show service status"
    echo "  update     Update services"
    echo "  clean      Clean up resources"
    echo ""
    echo -e "${BLUE}Options:${NC}"
    echo "  -h, --help     Show this help"
    echo "  -v, --verbose  Verbose output"
    echo "  -f, --force    Force (skip confirmation)"
    echo "  -q, --quiet    Quiet mode"
    echo ""
    echo -e "${BLUE}Arguments:${NC}"
    echo "  deploy dir     Target deployment directory (default: ${DEFAULT_DEPLOY_DIR})"
    echo ""
    echo -e "${YELLOW}Examples:${NC}"
    echo "  $0 build                # Build the image"
    echo "  $0 deploy /opt/myapp    # Deploy to a directory"
    echo "  $0 check                # Check the environment"
    echo "  $0 start                # Start services"
    echo "  $0 stop                 # Stop services"
    echo "  $0 logs -f              # Follow logs"
    echo ""
}
# Logging helpers
log_info() {
    if [ "$VERBOSE" = true ]; then
        echo -e "${GREEN}[INFO]${NC} $1"
    fi
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

# Check the Docker environment
check_docker() {
    if ! command -v docker &> /dev/null; then
        log_error "Docker is not installed"
        exit 1
    fi

    # Detect which Docker Compose variant is available
    if docker compose version &> /dev/null; then
        COMPOSE_CMD="docker compose"
    elif command -v docker-compose &> /dev/null; then
        COMPOSE_CMD="docker-compose"
    else
        log_error "Docker Compose is not installed"
        exit 1
    fi

    log_info "Using: ${COMPOSE_CMD}"
}

# Check required files
check_scripts() {
    local files=("settings.xml")
    for file in "${files[@]}"; do
        if [ ! -f "$file" ]; then
            log_error "Required file $file is missing"
            exit 1
        fi
    done
}

# Does the image exist?
image_exists() {
    local image_ref="$1"
    docker image inspect "$image_ref" >/dev/null 2>&1
}

# Build the image
build_image() {
    log_info "Building the Docker image..."

    # Check required files
    check_scripts

    # Build; test the exit status directly so a failed build is handled
    # here instead of aborting via `set -e`
    if docker build -f ${DOCKERFILE} -t ${PROJECT_NAME}:${VERSION} .; then
        log_success "Image built: ${PROJECT_NAME}:${VERSION}"

        # Show image details
        if [ "$VERBOSE" = true ]; then
            echo -e "${YELLOW}Image details:${NC}"
            docker images ${PROJECT_NAME}:${VERSION} --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.CreatedAt}}"
        fi
    else
        log_error "Image build failed"
        exit 1
    fi
}
# Check the environment
check_environment() {
    log_info "Checking the environment..."

    # Check the Docker network
    if ! docker network ls | grep -q proxy; then
        log_error "The proxy network does not exist; create it with: docker network create proxy"
        exit 1
    fi

    # Look for the required containers on the proxy network
    local mysql_container=$(docker network inspect proxy --format '{{range .Containers}}{{.Name}} {{end}}' 2>/dev/null | tr ' ' '\n' | grep -i mysql | head -1 || true)
    local redis_container=$(docker network inspect proxy --format '{{range .Containers}}{{.Name}} {{end}}' 2>/dev/null | tr ' ' '\n' | grep -i redis | head -1 || true)
    local es_container=$(docker network inspect proxy --format '{{range .Containers}}{{.Name}} {{end}}' 2>/dev/null | tr ' ' '\n' | grep -w "es" | head -1 || true)

    if [ -z "$mysql_container" ]; then
        log_error "No MySQL container found on the proxy network"
        exit 1
    fi

    if [ -z "$redis_container" ]; then
        log_error "No Redis container found on the proxy network"
        exit 1
    fi

    if [ -z "$es_container" ]; then
        log_warn "No Elasticsearch container found on the proxy network (optional)"
    fi

    log_success "Environment check passed"
    log_info "MySQL: $mysql_container"
    log_info "Redis: $redis_container"
    [ ! -z "$es_container" ] && log_info "Elasticsearch: $es_container"
}

# Deploy the application
deploy_app() {
    local deploy_dir=${1:-$DEFAULT_DEPLOY_DIR}

    log_info "Deploying to: $deploy_dir"

    # Check the image (build it automatically if it is missing)
    if ! image_exists "${PROJECT_NAME}:${VERSION}"; then
        log_warn "Image ${PROJECT_NAME}:${VERSION} not found; building it..."
        build_image || {
            log_error "Automatic build failed; run manually: $0 build"
            exit 1
        }
    fi

    # Create the deployment directory tree
    mkdir -p "$deploy_dir"/{data/{upload,temp,unzip,images,reports,elasticsearch},logs,nginx}

    # Make the log directory writable by the in-container app user
    chmod 755 "$deploy_dir/logs"
    chown -R 1001:1001 "$deploy_dir/logs" 2>/dev/null || true

    # Generate docker-compose.yml
    cat > "$deploy_dir/docker-compose.yml" << EOF
version: '3.8'

services:
  # Main application service
  app:
    image: ${PROJECT_NAME}:${VERSION}
    container_name: digital-archive-app
    ports:
      - "9081:9081"
    volumes:
      - ./data/upload:/app/data/upload
      - ./data/temp:/app/data/temp
      - ./data/unzip:/app/data/unzip
      - ./data/images:/app/data/images
      - ./data/reports:/app/data/reports
      - ./logs:/app/logs
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SERVER_PORT=9081
      # MySQL
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=enterprise_digital_archives
      - DB_USERNAME=root
      - DB_PASSWORD=Abc@123456
      - DB_DRIVER=com.mysql.cj.jdbc.Driver
      # Redis
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=Abc123456
      # Elasticsearch - reuses the existing "es" container
      - ELASTICSEARCH_HOST=es
      - ELASTICSEARCH_PORT=9200
      - ELASTICSEARCH_SCHEME=http
      # OCR
      - TESS_PATH=/usr/bin/tesseract
      # Misc
      - SWAGGER_SHOW=false
      - LOG_ROOT_LEVEL=info
      - LOG_APP_LEVEL=info
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9081/point-strategy/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  # Elasticsearch: reuses the existing "es" container
  # Note: make sure the existing "es" container is attached to the proxy network

networks:
  proxy:
    external: true
EOF
    # Generate the environment file
    cat > "$deploy_dir/.env" << EOF
COMPOSE_PROJECT_NAME=digital-archive

# Server
SERVER_PORT=9081
SERVER_CONTEXT_PATH=/point-strategy

# MySQL
DB_HOST=mysql
DB_PORT=3306
DB_NAME=enterprise_digital_archives
DB_USERNAME=root
DB_PASSWORD=Abc@123456
DB_DRIVER=com.mysql.cj.jdbc.Driver

# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=Abc123456

# Elasticsearch - reuses the existing "es" container
ELASTICSEARCH_HOST=es
ELASTICSEARCH_PORT=9200
ELASTICSEARCH_SCHEME=http

# OCR
TESS_PATH=/usr/bin/tesseract

# Misc
SWAGGER_SHOW=false
LOG_ROOT_LEVEL=info
LOG_APP_LEVEL=info
JAVA_OPTS=-Xmx2g -Xms1g -XX:+UseG1GC -XX:MaxGCPauseMillis=200
EOF
    # Generate the management scripts
    cat > "$deploy_dir/start.sh" << 'EOF'
#!/bin/bash
echo "Starting the digital archive system..."

# Detect the Docker Compose variant
if docker compose version &> /dev/null; then
    COMPOSE_CMD="docker compose"
else
    COMPOSE_CMD="docker-compose"
fi

${COMPOSE_CMD} up -d

echo "Waiting for services to start..."
sleep 30

echo "Checking service status..."
${COMPOSE_CMD} ps

echo "Services started."
EOF

    cat > "$deploy_dir/stop.sh" << 'EOF'
#!/bin/bash
echo "Stopping the digital archive system..."

# Detect the Docker Compose variant
if docker compose version &> /dev/null; then
    COMPOSE_CMD="docker compose"
else
    COMPOSE_CMD="docker-compose"
fi

${COMPOSE_CMD} down

echo "Pruning unused images and containers..."
docker system prune -f
EOF

    cat > "$deploy_dir/update.sh" << 'EOF'
#!/bin/bash
echo "Updating the digital archive system..."

# Detect the Docker Compose variant
if docker compose version &> /dev/null; then
    COMPOSE_CMD="docker compose"
else
    COMPOSE_CMD="docker-compose"
fi

echo "Stopping services..."
${COMPOSE_CMD} down

echo "Pulling the latest images..."
${COMPOSE_CMD} pull

echo "Starting services..."
${COMPOSE_CMD} up -d

echo "Pruning old images..."
docker image prune -f
EOF

    # Make the scripts executable
    chmod +x "$deploy_dir"/{start.sh,stop.sh,update.sh}

    log_success "Deployment complete: $deploy_dir"
    log_info "Management commands:"
    echo "  Start:  cd $deploy_dir && ./start.sh"
    echo "  Stop:   cd $deploy_dir && ./stop.sh"
    echo "  Update: cd $deploy_dir && ./update.sh"
}

# Start services
start_services() {
    local deploy_dir=${1:-$DEFAULT_DEPLOY_DIR}

    if [ ! -d "$deploy_dir" ]; then
        log_error "Deployment directory does not exist: $deploy_dir"
        exit 1
    fi

    cd "$deploy_dir"

    if [ ! -f "start.sh" ]; then
        log_error "Start script missing; deploy first: $0 deploy $deploy_dir"
        exit 1
    fi

    ./start.sh
    log_success "Services started"

    if [ "$VERBOSE" = true ]; then
        echo -e "${YELLOW}URL: http://localhost:9081/point-strategy${NC}"
    fi
}

# Stop services
stop_services() {
    local deploy_dir=${1:-$DEFAULT_DEPLOY_DIR}

    if [ ! -d "$deploy_dir" ]; then
        log_error "Deployment directory does not exist: $deploy_dir"
        exit 1
    fi

    cd "$deploy_dir"

    if [ -f "stop.sh" ]; then
        ./stop.sh
        log_success "Services stopped"
    else
        log_warn "Stop script missing; falling back to Docker Compose"
        if docker compose version &> /dev/null; then
            docker compose down
        else
            docker-compose down
        fi
    fi
}

# Restart services
restart_services() {
    local deploy_dir=${1:-$DEFAULT_DEPLOY_DIR}

    log_info "Restarting services..."
    stop_services "$deploy_dir"
    sleep 2
    start_services "$deploy_dir"
    log_success "Services restarted"
}

# Show logs
show_logs() {
    # Accept "logs -f", "logs <dir>", or "logs <dir> -f"; previously a
    # leading -f was mistaken for the deployment directory
    local deploy_dir=$DEFAULT_DEPLOY_DIR
    local follow=""
    for arg in "$@"; do
        if [ "$arg" = "-f" ]; then
            follow="-f"
        else
            deploy_dir=$arg
        fi
    done

    if [ ! -d "$deploy_dir" ]; then
        log_error "Deployment directory does not exist: $deploy_dir"
        exit 1
    fi

    cd "$deploy_dir"

    if [ "$follow" = "-f" ]; then
        if docker compose version &> /dev/null; then
            docker compose logs -f app
        else
            docker-compose logs -f app
        fi
    else
        if docker compose version &> /dev/null; then
            docker compose logs --tail=100 app
        else
            docker-compose logs --tail=100 app
        fi
    fi
}

# Show status
show_status() {
    local deploy_dir=${1:-$DEFAULT_DEPLOY_DIR}

    if [ ! -d "$deploy_dir" ]; then
        log_error "Deployment directory does not exist: $deploy_dir"
        exit 1
    fi

    cd "$deploy_dir"

    echo -e "${CYAN}=== Digital archive service status ===${NC}"

    if docker compose version &> /dev/null; then
        docker compose ps
    else
        docker-compose ps
    fi

    echo ""
    echo -e "${CYAN}=== Container resource usage ===${NC}"
    docker stats --no-stream
}

# Update services
update_services() {
    local deploy_dir=${1:-$DEFAULT_DEPLOY_DIR}

    if [ ! -d "$deploy_dir" ]; then
        log_error "Deployment directory does not exist: $deploy_dir"
        exit 1
    fi

    cd "$deploy_dir"

    if [ -f "update.sh" ]; then
        ./update.sh
        log_success "Services updated"
    else
        log_warn "Update script missing; deploy first: $0 deploy $deploy_dir"
    fi
}

# Clean up resources
clean_resources() {
    log_info "Cleaning up Docker resources..."

    # Stopped containers
    docker container prune -f

    # Unused images
    docker image prune -f

    # Unused networks
    docker network prune -f

    # Unused volumes
    docker volume prune -f

    log_success "Cleanup complete"
}
# Main entry point
main() {
    # Parse command-line arguments
    COMMAND=""
    VERBOSE=false
    FORCE=false

    while [[ $# -gt 0 ]]; do
        case $1 in
            build|deploy|check|start|stop|restart|logs|status|update|clean)
                COMMAND=$1
                shift
                # Everything after the command belongs to the command itself
                # (e.g. "logs -f"); global options must precede the command
                break
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            -v|--verbose)
                VERBOSE=true
                shift
                ;;
            -f|--force)
                FORCE=true
                shift
                ;;
            -q|--quiet)
                # Silence all output; without the shift this loop never ends
                exec 1>/dev/null 2>&1
                shift
                ;;
            -*)
                log_error "Unknown option: $1"
                show_help
                exit 1
                ;;
            *)
                # Remaining arguments go to the command
                break
                ;;
        esac
    done

    # No command: show help
    if [ -z "$COMMAND" ]; then
        show_help
        exit 0
    fi

    # Show the banner
    if [ "$VERBOSE" = true ]; then
        show_banner
    fi

    # Check the Docker environment
    check_docker

    # Dispatch
    case $COMMAND in
        build)
            build_image
            ;;
        deploy)
            deploy_app "$@"
            ;;
        check)
            check_environment
            ;;
        start)
            start_services "$@"
            ;;
        stop)
            stop_services "$@"
            ;;
        restart)
            restart_services "$@"
            ;;
        logs)
            show_logs "$@"
            ;;
        status)
            show_status "$@"
            ;;
        update)
            update_services "$@"
            ;;
        clean)
            clean_resources
            ;;
        *)
            log_error "Unknown command: $COMMAND"
            show_help
            exit 1
            ;;
    esac
}

# Run
main "$@"
125
build-push-acr.sh
Executable file
@@ -0,0 +1,125 @@
#!/usr/bin/env bash
set -euo pipefail

script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

load_env_file() {
    local file="$1"
    [[ -f "$file" ]] || return 0

    while IFS= read -r line || [[ -n "$line" ]]; do
        line="${line#${line%%[![:space:]]*}}"
        line="${line%${line##*[![:space:]]}}"
        [[ -z "$line" ]] && continue
        [[ "$line" == \#* ]] && continue
        [[ "$line" != *=* ]] && continue

        local key="${line%%=*}"
        local val="${line#*=}"
        key="${key#${key%%[![:space:]]*}}"
        key="${key%${key##*[![:space:]]}}"
        [[ "$key" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]] || continue

        if [[ "$val" =~ ^\".*\"$ ]]; then
            val="${val:1:${#val}-2}"
        elif [[ "$val" =~ ^\'.*\'$ ]]; then
            val="${val:1:${#val}-2}"
        fi

        if [[ -z "${!key:-}" ]]; then
            export "$key=$val"
        fi
    done < "$file"
}

usage() {
    cat <<'EOF' >&2
Usage:
  bash build-push-acr.sh <acr_password>
  # or:
  ACR_PASSWORD=... ACR_USERNAME=... REPO_URL=... NAMESPACE=... REPO_NAME=... IMAGE_TAG=... bash build-push-acr.sh

Required:
  acr_password (positional arg #1) OR env ACR_PASSWORD

Optional:
  ENV_FILE      env file to load (default: ../deploy/.env if exists)
  ACR_USERNAME  default: aipper@qq.com
  REPO_URL      registry host (ACR). Default: registry.cn-hangzhou.aliyuncs.com
  NAMESPACE     ACR namespace. Default: aipper
  REPO_NAME     repository name. Default: digital-archive-server
  IMAGE_TAG     default: YYYYMMDDHHMM
  DRY_RUN=1     print computed image ref and exit

Compatibility:
  REPO_URL  -> ACR_REGISTRY
  NAMESPACE -> ACR_NAMESPACE
EOF
}

if [[ -n "${ENV_FILE:-}" ]]; then
    load_env_file "$ENV_FILE"
else
    load_env_file "$script_dir/../deploy/.env"
fi

if [[ -z "${ACR_PASSWORD:-}" && -n "${1:-}" ]]; then
    export ACR_PASSWORD="$1"
fi
if [[ -z "${ACR_USERNAME:-}" && -n "${2:-}" ]]; then
    export ACR_USERNAME="$2"
fi

if [[ -z "${ACR_REGISTRY:-}" && -n "${REPO_URL:-}" ]]; then
    export ACR_REGISTRY="$REPO_URL"
fi
if [[ -z "${ACR_NAMESPACE:-}" && -n "${NAMESPACE:-}" ]]; then
    export ACR_NAMESPACE="$NAMESPACE"
fi

export REPO_URL="${REPO_URL:-${ACR_REGISTRY:-registry.cn-hangzhou.aliyuncs.com}}"
export NAMESPACE="${NAMESPACE:-${ACR_NAMESPACE:-aipper}}"
export REPO_NAME="${REPO_NAME:-${IMAGE_REPO:-digital-archive-server}}"
export IMAGE_TAG="${IMAGE_TAG:-$(date +"%Y%m%d%H%M")}"

export ACR_REGISTRY="${ACR_REGISTRY:-$REPO_URL}"
export ACR_NAMESPACE="${ACR_NAMESPACE:-$NAMESPACE}"
export IMAGE_REPO="${IMAGE_REPO:-$REPO_NAME}"
export ACR_USERNAME="${ACR_USERNAME:-aipper@qq.com}"

if [[ -z "${ACR_PASSWORD:-}" ]]; then
    echo "Error: pass the ACR password when running the script, e.g.: bash build-push-acr.sh your-acr-password" >&2
    usage
    exit 1
fi

image_repo="$IMAGE_REPO"
image_tag="$IMAGE_TAG"

image_ref="${ACR_REGISTRY}/${ACR_NAMESPACE}/${image_repo}:${image_tag}"

if [[ "${DRY_RUN:-}" == "1" ]]; then
    echo "DRY_RUN=1"
    echo "IMAGE_REF=$image_ref"
    exit 0
fi

printf '%s' "$ACR_PASSWORD" | docker login "$ACR_REGISTRY" -u "$ACR_USERNAME" --password-stdin

if docker buildx version >/dev/null 2>&1; then
    docker buildx build \
        -f "$script_dir/Dockerfile" \
        -t "$image_ref" \
        --load \
        "$script_dir"
else
    docker build \
        -f "$script_dir/Dockerfile" \
        -t "$image_ref" \
        "$script_dir"
fi

docker push "$image_ref"

echo "Pushed: $image_ref"
echo "SERVER_IMAGE=$image_ref"
@@ -1,68 +0,0 @@
#!/bin/bash

echo "=== Docker Compose YAML format check ==="
echo

DEPLOY_DIR="/root/server/archive"
COMPOSE_FILE="$DEPLOY_DIR/docker-compose.yml"

if [ ! -f "$COMPOSE_FILE" ]; then
    echo "❌ docker-compose.yml not found: $COMPOSE_FILE"
    echo "Deploy the application first: ./archive-manager.sh deploy"
    exit 1
fi

echo "✓ Found docker-compose.yml"
echo

echo "🔍 Checking YAML syntax..."

# Validate YAML syntax with Python
if command -v python3 &> /dev/null; then
    python3 -c "
import yaml
import sys
try:
    with open('$COMPOSE_FILE', 'r') as f:
        yaml.safe_load(f)
    print('✅ YAML syntax is valid')
except yaml.YAMLError as e:
    print(f'❌ YAML syntax error: {e}')
    sys.exit(1)
"
else
    echo "⚠️ Python3 is unavailable; skipping the syntax check"
fi

echo
echo "🔍 Checking the Docker Compose configuration..."

# Detect the Docker Compose variant
if docker compose version &> /dev/null; then
    COMPOSE_CMD="docker compose"
elif docker-compose version &> /dev/null; then
    COMPOSE_CMD="docker-compose"
else
    echo "❌ Docker Compose is unavailable"
    exit 1
fi

echo "Using: $COMPOSE_CMD"

# Validate the compose file
if $COMPOSE_CMD -f "$COMPOSE_FILE" config --quiet 2>/dev/null; then
    echo "✅ Docker Compose configuration is valid"
else
    echo "❌ Docker Compose configuration is invalid"
    echo "Details:"
    $COMPOSE_CMD -f "$COMPOSE_FILE" config
    exit 1
fi

echo
echo "📋 Configured services:"
$COMPOSE_CMD -f "$COMPOSE_FILE" config --services

echo
echo "=== Check complete ==="
echo "You can now start the services: ./archive-manager.sh start"
@@ -1,65 +0,0 @@
#!/bin/bash

echo "=== Elasticsearch container connectivity check ==="
echo

# Does the es container exist?
if ! docker ps -a --format "{{.Names}}" | grep -q "^es$"; then
    echo "❌ No container named 'es' found"
    echo "Make sure the Elasticsearch container exists and is named 'es'"
    exit 1
fi

echo "✓ Found the 'es' container"

# Does the proxy network exist?
if ! docker network ls --format "{{.Name}}" | grep -q "^proxy$"; then
    echo "❌ The 'proxy' network was not found"
    echo "Creating the proxy network..."
    docker network create proxy
    echo "✓ proxy network created"
else
    echo "✓ Found the 'proxy' network"
fi

# Is the es container attached to the proxy network?
if ! docker network inspect proxy --format '{{range .Containers}}{{.Name}} {{end}}' | grep -qw "es"; then
    echo "⚠️ The 'es' container is not attached to the 'proxy' network"
    echo "Attaching 'es' to 'proxy'..."
    docker network connect proxy es
    echo "✓ 'es' attached to 'proxy'"
else
    echo "✓ 'es' is already attached to 'proxy'"
fi

# Container state
ES_STATUS=$(docker inspect es --format '{{.State.Status}}')
if [ "$ES_STATUS" = "running" ]; then
    echo "✓ 'es' is running"
else
    echo "⚠️ 'es' container state: $ES_STATUS"
    echo "Starting 'es'..."
    docker start es
    echo "✓ 'es' started"
fi

# Test the Elasticsearch endpoint
echo
echo "🧪 Testing the Elasticsearch endpoint..."
if docker exec es curl -s http://localhost:9200/_cluster/health > /dev/null; then
    echo "✓ Elasticsearch is reachable"
    docker exec es curl -s http://localhost:9200/_cluster/health | grep -o '"status":"[^"]*"' | cut -d'"' -f4
else
    echo "❌ Elasticsearch is unreachable"
    echo "Check the Elasticsearch configuration"
fi

echo
echo "=== Check complete ==="
echo
echo "📋 Containers on the network:"
docker network inspect proxy --format '{{range .Containers}}{{.Name}} ({{.IPv4Address}}) {{end}}'

echo
echo "🚀 You can now deploy the digital archive system:"
echo "  ./archive-manager.sh deploy"
@@ -1,50 +0,0 @@
#!/bin/bash

echo "=== Font configuration check ==="
echo

CONTAINER_NAME="digital-archive-app"

# Is the container running?
if ! docker ps | grep -q "$CONTAINER_NAME"; then
    echo "❌ Container $CONTAINER_NAME is not running"
    echo "Start it first: ./archive-manager.sh start"
    exit 1
fi

echo "✓ Found running container: $CONTAINER_NAME"

echo
echo "🔍 Checking the font configuration:"

# Font directory
echo "1. Font directory:"
docker exec "$CONTAINER_NAME" ls -la /usr/share/fonts/ 2>/dev/null || echo "  Font directory does not exist"

# Font configuration
echo
echo "2. Font configuration:"
docker exec "$CONTAINER_NAME" ls -la /etc/fonts/ 2>/dev/null || echo "  Font configuration directory does not exist"

# Available fonts
echo
echo "3. Available fonts:"
docker exec "$CONTAINER_NAME" fc-list 2>/dev/null | head -10 || echo "  Unable to list fonts"

# Java font environment (java cannot run inline code without a compiled
# class, so inspect the JVM's AWT-related properties instead)
echo
echo "4. Java font environment:"
docker exec "$CONTAINER_NAME" sh -c 'java -Djava.awt.headless=true -XshowSettings:properties -version 2>&1 | grep -Ei "awt|font|headless" || true' || echo "  Unable to inspect the Java font environment"

echo
echo "🧪 Testing font matching:"
docker exec "$CONTAINER_NAME" sh -c "fc-match -v sans 2>/dev/null || echo 'Font matching failed'"

echo
echo "💡 If font problems persist, try:"
echo "1. Rebuilding the image with font packages included"
echo "2. Adding Chinese font support"
echo "3. Adding -Djava.awt.headless=true to JAVA_OPTS"

echo
echo "=== Check complete ==="
@@ -1,42 +0,0 @@
#!/bin/bash

echo "=== MySQL compatibility check ==="
echo

# Find the MySQL container
MYSQL_CONTAINER=$(docker ps --format "{{.Names}}" | grep -i mysql | head -1)

if [ -z "$MYSQL_CONTAINER" ]; then
    echo "❌ No running MySQL container found"
    echo "Make sure the MySQL container is running"
    exit 1
fi

echo "✓ Found MySQL container: $MYSQL_CONTAINER"

echo
echo "🔍 MySQL version:"
MYSQL_VERSION=$(docker exec "$MYSQL_CONTAINER" mysql --version 2>/dev/null || echo "unable to determine the version")
echo "MySQL version: $MYSQL_VERSION"

echo
echo "🔍 Authentication plugin:"
docker exec "$MYSQL_CONTAINER" mysql -u root -pAbc@123456 -e "SELECT plugin FROM mysql.user WHERE User='root';" 2>/dev/null || echo "Unable to query the authentication plugin"

echo
echo "🔍 User privileges:"
docker exec "$MYSQL_CONTAINER" mysql -u root -pAbc@123456 -e "SHOW GRANTS FOR 'root'@'%';" 2>/dev/null || echo "Unable to query user privileges"

echo
echo "💡 If connection problems persist, try the following:"
echo "1. Change the user's authentication method in MySQL:"
echo "   ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'Abc@123456';"
echo "   FLUSH PRIVILEGES;"
echo
echo "2. Check default-authentication-plugin in the MySQL configuration"
echo
echo "3. Confirm the network path from the application container:"
echo "   docker exec digital-archive-app ping mysql"

echo
echo "=== Check complete ==="
@@ -1,55 +0,0 @@
#!/bin/bash

echo "=== Container debug script ==="
echo

CONTAINER_NAME="digital-archive-app"

# Is the container running?
if ! docker ps | grep -q "$CONTAINER_NAME"; then
    echo "❌ Container $CONTAINER_NAME is not running"
    echo "Starting a throwaway container for debugging..."

    # Run a temporary container for interactive debugging
    docker run --rm --name debug-container -it --entrypoint /bin/sh digital-archive:stable
    exit 0
fi

echo "✓ Found running container: $CONTAINER_NAME"

echo
echo "🔍 Inspecting the container's file layout:"

# Working directory
echo "1. Working directory contents:"
docker exec "$CONTAINER_NAME" ls -la /app || echo "❌ Cannot access /app"

# JAR file
echo
echo "2. JAR file check:"
docker exec "$CONTAINER_NAME" ls -la app.jar || echo "❌ app.jar is missing"

# JAR file type
echo
echo "3. JAR file type:"
docker exec "$CONTAINER_NAME" file app.jar || echo "❌ Cannot determine the file type"

# User and permissions
echo
echo "4. User and permissions:"
docker exec "$CONTAINER_NAME" whoami && docker exec "$CONTAINER_NAME" id

# Java environment
echo
echo "5. Java environment:"
docker exec "$CONTAINER_NAME" java -version || echo "❌ Java is unavailable"

echo
echo "🧪 Running the JAR directly:"
docker exec "$CONTAINER_NAME" sh -c "cd /app && java -jar app.jar --help" || echo "❌ JAR failed to run"

echo
echo "=== Debugging complete ==="
echo
echo "💡 To get a shell in the container:"
echo "  docker exec -it $CONTAINER_NAME /bin/sh"
@@ -1,76 +0,0 @@
#!/bin/bash

echo "=== Maven build debug script ==="
echo

# Check project files
echo "🔍 Checking project files:"
if [ -f "pom.xml" ]; then
    echo "✓ pom.xml exists"
    echo "  Project info:"
    grep -E "<groupId>|<artifactId>|<version>" pom.xml | head -3
else
    echo "❌ pom.xml is missing"
    exit 1
fi

if [ -d "src" ]; then
    echo "✓ src directory exists"
    echo "  Source layout:"
    find src -name "*.java" | wc -l | xargs echo "  Java file count:"
else
    echo "❌ src directory is missing"
    exit 1
fi

if [ -d "src/main/lib" ]; then
    echo "✓ lib directory exists"
    echo "  JAR files:"
    ls -la src/main/lib/*.jar 2>/dev/null | wc -l | xargs echo "  JAR file count:"
else
    echo "❌ lib directory is missing"
fi

echo
echo "🧪 Local Maven test build:"

# Build in a scratch directory
mkdir -p temp-build
cd temp-build

# Copy what the build needs
cp ../pom.xml .
cp -r ../src .
if [ -f "../settings.xml" ]; then
    cp ../settings.xml .
fi

echo "Starting the Maven test build..."
echo "This may take a few minutes..."

# pipefail so the `tee` into the log does not mask a failing mvn exit code
set -o pipefail
if mvn clean compile -B -s settings.xml 2>&1 | tee build.log; then
    echo "✓ Compilation succeeded"
else
    echo "❌ Compilation failed; see build.log"
    tail -20 build.log
    exit 1
fi

echo
echo "Packaging..."
if mvn package -DskipTests -B -s settings.xml 2>&1 | tee package.log; then
    echo "✓ Packaging succeeded"
    echo "Generated artifacts:"
    ls -la target/*.jar 2>/dev/null || echo "  No JAR files found"
    ls -la target/ | grep -E "\.(jar|war)$" || echo "  No archives found"
else
    echo "❌ Packaging failed; see package.log"
    tail -30 package.log
    exit 1
fi

cd ..
echo
echo "=== Debugging complete ==="
echo "If the local build succeeds, the problem is likely in the Docker environment"
doc/1.md
@@ -1,255 +0,0 @@
# Municipal Company Business Data Document

## 1. Monthly cigarette purchase, sales, and inventory data

**Note**: One export at the beginning of each month. The exported data is the purchase-sales-inventory warehouse data, including the confiscation warehouse.

| Field description | Column name |
| :--- | :--- |
| Date | `biz_date` |
| Warehouse name | `stor_name` |
| Product code | `product_code` |
| Product name | `product_name` |
| Unit of measure | `unit_name` |
| Opening quantity | `last_qty` |
| Purchased quantity | `buy_qty` |
| Sold quantity | `sale_qty` |
| Reported-loss quantity | `dec_qty` |
| Closing quantity | `rest_qty` |

## 2. Annual in-sale and withdrawn cigarette brand/specification data

### 1. In-sale brands

**Note**: One export at the beginning of each month. The exported data covers the in-sale product specifications and is used for archiving.

| Field description | Column name |
| :--- | :--- |
| Cigarette ID | `product_uuid` |
| Cigarette code | `product_code` |
| Cigarette name | `product_name` |
| Manufacturer short name | `factory_simple_name` |
| Brand name | `brand_name` |
| Irregular packaging (1: yes; 0: no) | `is_abnormity` |
| Package length (mm) | `length` |
| Package width (mm) | `width` |
| Package height (mm) | `height` |
| Tar content (mg) | `tar_qty` |
| Pack barcode | `bar_code` |
| Sticks per pack | `package_qty` |
| Carton barcode | `bar_code2` |
| Sticks per carton | `package_qty2` |
| Case barcode | `bar_code3` |
| Sticks per case | `package_qty3` |
| Cigarette price class | `price_type_code` |
| Guide wholesale price | `direct_whole_price` |
| Guide retail price | `direct_retail_price` |
| In-province cigarette flag | `is_province` |
| Seized-cigarette handling enabled | `is_seized` |
| Adjustment price | `adjust_price` |
| Retail price | `retail_price` |
| Wholesale price | `whole_sale_price` |
| Introduction date | `in_begin_date` |
| Launch date | `sale_begin_date` |
| Withdrawal date | `out_begin_date` |

### 2. Withdrawn specifications

**Note**: One export at the beginning of each month. The exported data covers approval-completed records and is used for archiving.

| Field description | Column name |
| :--- | :--- |
| Cigarette ID | `product_uuid` |
| Cigarette code | `product_code` |
| Cigarette name | `product_name` |
| Manufacturer short name | `factory_simple_name` |
| Brand name | `brand_name` |
| Irregular packaging (1: yes; 0: no) | `is_abnormity` |
| Package length (mm) | `length` |
| Package width (mm) | `width` |
| Package height (mm) | `height` |
| Tar content (mg) | `tar_qty` |
| Pack barcode | `bar_code` |
| Sticks per pack | `package_qty` |
| Carton barcode | `bar_code2` |
| Sticks per carton | `package_qty2` |
| Case barcode | `bar_code3` |
| Sticks per case | `package_qty3` |
| Cigarette price class | `price_type_code` |
| Guide wholesale price | `direct_whole_price` |
| Guide retail price | `direct_retail_price` |
| In-province cigarette flag | `is_province` |
| Seized-cigarette handling enabled | `is_seized` |
| Adjustment price | `adjust_price` |
| Retail price | `retail_price` |
| Wholesale price | `whole_sale_price` |
| Withdrawal date | `out_begin_date` |

## 3. Terminal construction full-process archive data

**Note**: One export at the beginning of each month. The exported data covers approval-completed records and is used for archiving.

| Field description | Column name |
| :--- | :--- |
| Marketing department | `depart_uuid` |
| Department name | `depart_name` |
| Sales line | `saler_dept_uuid` |
| License number | `license_code` |
| Customer name | `cust_name` |
| Business address | `address` |
| Operator | `manage_person_name` |
| Customer tier name | `cust_type_name` |
| Business format | `busi_place_code` |
| Current terminal level | `terminal_level_before` |
| Proposed terminal level | `terminal_level_after` |
| Application notes | `apply_remark` |
| Handling notes | `deal_remark` |
| Acceptance status | `accept_status` |
| Applicant name | `creator_name` |
| Creation time | `syscreatedate` |

----
```sql
CREATE TABLE `cc_tbc_product_ez` (
  `product_uuid` char(32) COLLATE utf8_bin NOT NULL,
  `product_code` varchar(20) COLLATE utf8_bin NOT NULL,
  `product_name` varchar(100) COLLATE utf8_bin NOT NULL,
  `factory_simple_name` varchar(20) COLLATE utf8_bin DEFAULT NULL,
  `brand_name` varchar(100) COLLATE utf8_bin DEFAULT NULL,
  `is_abnormity` char(1) COLLATE utf8_bin NOT NULL DEFAULT '0',
  `length` decimal(9,0) DEFAULT NULL,
  `width` decimal(9,0) DEFAULT NULL,
  `height` decimal(9,0) DEFAULT NULL,
  `tar_qty` decimal(9,2) DEFAULT NULL,
  `bar_code` varchar(20) COLLATE utf8_bin DEFAULT NULL,
  `package_qty` decimal(9,0) DEFAULT NULL,
  `bar_code2` varchar(20) COLLATE utf8_bin DEFAULT NULL,
  `package_qty2` decimal(9,0) DEFAULT NULL,
  `bar_code3` varchar(20) COLLATE utf8_bin DEFAULT NULL,
  `package_qty3` decimal(9,0) DEFAULT NULL,
  `price_type_code` varchar(5) COLLATE utf8_bin NOT NULL,
  `direct_whole_price` decimal(9,2) NOT NULL DEFAULT '0.00',
  `direct_retail_price` decimal(9,2) NOT NULL DEFAULT '0.00',
  `is_province` char(1) COLLATE utf8_bin NOT NULL,
  `is_seized` char(1) COLLATE utf8_bin NOT NULL,
  `adjust_price` decimal(9,2) DEFAULT NULL,
  `retail_price` decimal(9,2) DEFAULT NULL,
  `whole_sale_price` decimal(9,2) DEFAULT NULL,
  `in_begin_date` char(10) COLLATE utf8_bin DEFAULT NULL,
  `sale_begin_date` char(10) COLLATE utf8_bin DEFAULT NULL,
  `out_begin_date` char(10) COLLATE utf8_bin DEFAULT NULL,
  PRIMARY KEY (`product_uuid`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC;


CREATE TABLE `cc_tbc_drawout_ez` (
  `drawout_uuid` char(32) NOT NULL,
  `org_name` varchar(100) DEFAULT NULL,
  `quit_uuid` char(32) NOT NULL,
  `drawout_date` char(10) DEFAULT NULL,
  `quit_date` char(10) DEFAULT NULL,
  `comment` varchar(1000) DEFAULT NULL,
  `creater_name` varchar(20) DEFAULT NULL,
  `SYSCREATEDATE` varchar(25) DEFAULT NULL,
  PRIMARY KEY (`drawout_uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;


CREATE TABLE `scm_rpt_bizstordayreport_ez` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `biz_date` char(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `manage_unit_uuid` char(32) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `stor_uuid` char(32) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `stor_name` varchar(500) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `product_uuid` char(32) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `product_code` varchar(100) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `product_name` varchar(500) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `unit_uuid` char(32) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `unit_name` varchar(100) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `last_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `last_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `buy_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `buy_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `buy_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `movein_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `movein_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `movein_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `movein_cost` decimal(15,2) NOT NULL DEFAULT '0.00',
  `moveout_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `moveout_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `moveout_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `moveout_cost` decimal(15,2) NOT NULL DEFAULT '0.00',
  `sale_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `sale_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `sale_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `sale_cost` decimal(15,2) NOT NULL DEFAULT '0.00',
  `sale_gross_profit` decimal(15,2) NOT NULL DEFAULT '0.00',
  `allot_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `allot_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `allot_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `allot_cost` decimal(15,2) NOT NULL DEFAULT '0.00',
  `allot_gross_profit` decimal(15,2) NOT NULL DEFAULT '0.00',
  `dec_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `dec_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `dec_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `inc_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `inc_notax_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `inc_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `rest_qty` decimal(15,2) NOT NULL DEFAULT '0.00',
  `rest_amount` decimal(15,2) NOT NULL DEFAULT '0.00',
  `cost_price` decimal(18,6) NOT NULL DEFAULT '0.000000',
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE KEY `pk_idx` (`biz_date`,`stor_uuid`,`product_uuid`) USING BTREE,
  KEY `auto_shard_key_manage_unit_uuid` (`manage_unit_uuid`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=38406392 DEFAULT CHARSET=utf8mb3 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC;


CREATE TABLE `ec_exp_apply_accept_ez` (
  `accept_uuid` char(32) NOT NULL,
  `org_name` varchar(500) NOT NULL,
  `org_name2` varchar(500) NOT NULL,
  `license_code` varchar(20) DEFAULT NULL,
  `cust_name` varchar(100) NOT NULL,
  `address` varchar(500) NOT NULL,
  `manage_person_name` varchar(100) DEFAULT NULL,
  `cust_type_name` varchar(100) NOT NULL,
  `busi_place_code` varchar(5) NOT NULL,
  `terminal_level_before` varchar(5) NOT NULL,
  `terminal_level_after` varchar(5) DEFAULT NULL,
  `apply_remark` varchar(255) DEFAULT NULL,
  `deal_remark` varchar(255) DEFAULT NULL,
  `accept_status` varchar(5) NOT NULL,
  `syscreatedate` varchar(25) NOT NULL,
  `updator_name` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`accept_uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
```

## 1. Query in-sale brands

```sql
select product_uuid,
       product_code,
       product_name,
       factory_simple_name,
       brand_name,
       is_abnormity,
       length,
       width,
       height,
       tar_qty,
       bar_code,
       package_qty,
       bar_code2,
       package_qty2,
       bar_code3,
       package_qty3,
       price_type_code,
       direct_whole_price,
       direct_retail_price,
       is_province,
       is_seized,
       adjust_price,
       retail_price,
       whole_sale_price,
       in_begin_date,
       sale_begin_date,
       out_begin_date
from CC_TBC_PRODUCT_ez;
```

## 2. Query withdrawn specifications

```sql
select org_name, quit_uuid, drawout_date, quit_date, comment, creater_name, SYSCREATEDATE
from cc_tbc_drawout_ez;
```

## 3. Municipal company monthly cigarette purchase, sales, and inventory data

```sql
select * from SCM_RPT_BizStorDayReport_ez;
```

## 4. Municipal company terminal construction full-process archive data

```sql
select * from ec_exp_apply_accept_ez;
```
@@ -1,111 +0,0 @@
#!/bin/bash

echo "=== Fixing docker-compose.yml directly ==="
echo

DEPLOY_DIR="/root/server/archive"
COMPOSE_FILE="$DEPLOY_DIR/docker-compose.yml"

if [ ! -f "$COMPOSE_FILE" ]; then
    echo "❌ docker-compose.yml does not exist"
    echo "Redeploying..."
    ./archive-manager.sh deploy -f
    exit 0
fi

echo "📋 Current file contents (first 20 lines):"
head -20 "$COMPOSE_FILE"

echo
echo "🔧 Regenerating a correct docker-compose.yml..."

# Back up the original file
cp "$COMPOSE_FILE" "$COMPOSE_FILE.backup"

# Generate the corrected docker-compose.yml
cat > "$COMPOSE_FILE" << 'EOF'
version: '3.8'

services:
  # Main application service
  app:
    image: digital-archive:fast
    container_name: digital-archive-app
    ports:
      - "9081:9081"
    volumes:
      - ./data/upload:/app/data/upload
      - ./data/temp:/app/data/temp
      - ./data/unzip:/app/data/unzip
      - ./data/images:/app/data/images
      - ./data/reports:/app/data/reports
      - ./logs:/app/logs
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SERVER_PORT=9081
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=enterprise_digital_archives
      - DB_USERNAME=root
      - DB_PASSWORD=Abc@123456
      - DB_DRIVER=com.mysql.cj.jdbc.Driver
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=Abc123456
      - ELASTICSEARCH_HOST=es
      - ELASTICSEARCH_PORT=9200
      - ELASTICSEARCH_SCHEME=http
      - TESS_PATH=/usr/bin/tesseract
      - SWAGGER_SHOW=false
      - LOG_ROOT_LEVEL=info
      - LOG_APP_LEVEL=info
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9081/point-strategy/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  # Elasticsearch - reuse the existing "es" container
  # Note: make sure the existing "es" container is attached to the proxy network

networks:
  proxy:
    external: true
EOF

echo "✅ docker-compose.yml regenerated"

echo
echo "🧪 Testing the new configuration..."

# Detect which Docker Compose command is available
if docker compose version &> /dev/null; then
    COMPOSE_CMD="docker compose"
else
    COMPOSE_CMD="docker-compose"
fi

echo "Using command: $COMPOSE_CMD"

# Validate the configuration
if $COMPOSE_CMD -f "$COMPOSE_FILE" config --quiet 2>/dev/null; then
    echo "✅ Configuration is valid"
else
    echo "❌ Validation failed; showing errors:"
    $COMPOSE_CMD -f "$COMPOSE_FILE" config
    echo
    echo "Restoring the backup file..."
    mv "$COMPOSE_FILE.backup" "$COMPOSE_FILE"
    exit 1
fi

echo
echo "🚀 You can now start the services:"
echo "   ./archive-manager.sh start"
echo
echo "Or run directly:"
echo "   cd $DEPLOY_DIR && $COMPOSE_CMD up -d"
@@ -1,39 +0,0 @@
#!/bin/bash

echo "=== Log permission fix script ==="
echo

DEPLOY_DIR="/root/server/archive"
LOGS_DIR="$DEPLOY_DIR/logs"

if [ ! -d "$LOGS_DIR" ]; then
    echo "❌ Log directory does not exist: $LOGS_DIR"
    echo "Creating log directory..."
    mkdir -p "$LOGS_DIR"
fi

echo "🔧 Fixing log directory permissions..."

# Set directory permissions
chmod 755 "$LOGS_DIR"

# Set the owner (the app user's UID/GID is 1001)
chown -R 1001:1001 "$LOGS_DIR" 2>/dev/null || echo "⚠️ Could not set the owner, but permissions were set"

# Show permission info
echo "✓ Permission fix complete"
echo
echo "📋 Directory permissions:"
ls -la "$LOGS_DIR"

echo
echo "📋 Directory ownership:"
ls -la "$DEPLOY_DIR" | grep logs

echo
echo "🚀 You can now restart the service:"
echo "   ./archive-manager.sh restart"
echo
echo "Or restart the container manually:"
echo "   docker stop digital-archive-app"
echo "   docker start digital-archive-app"
@@ -1,35 +0,0 @@
#!/bin/bash

echo "=== Network connectivity fix ==="
echo

echo "🔧 What was fixed:"
echo "1. Added Aliyun mirror configuration"
echo "2. Removed the tini dependency (not required)"
echo "3. Simplified base package installation"
echo

echo "📋 Network problem analysis:"
echo "❌ Original problem: connection to deb.debian.org timed out"
echo "✅ Fix: use mirrors.aliyun.com instead"
echo

echo "🚀 You can now rebuild:"
echo "./archive-manager.sh build"
echo

echo "💡 If network problems persist, you can:"
echo "1. Check the Docker daemon network configuration"
echo "2. Use a proxy server"
echo "3. Switch to another mirror"
echo

echo "🔍 Alternative mirrors (if Aliyun is also unreachable):"
echo "# Huawei Cloud"
echo "sed -i 's/deb.debian.org/repo.huaweicloud.com/g' /etc/apt/sources.list"
echo
echo "# Tencent Cloud"
echo "sed -i 's/deb.debian.org/mirrors.cloud.tencent.com/g' /etc/apt/sources.list"
echo

echo "=== Starting build test ==="
@@ -1,54 +0,0 @@
#!/bin/bash

# File permission fix script for the digital archive system
# Resolves file permission problems in Docker deployments

echo "Fixing digital archive system file permissions..."

# Data directories
DATA_DIRS=(
    "./data/upload"
    "./data/temp"
    "./data/unzip"
    "./data/images"
    "./data/reports"
    "./logs"
)

# Create the directories if they do not exist
echo "Creating data directories..."
for dir in "${DATA_DIRS[@]}"; do
    if [ ! -d "$dir" ]; then
        echo "Creating directory: $dir"
        mkdir -p "$dir"
    else
        echo "Directory already exists: $dir"
    fi
done

# Set ownership to 1001:1001 (matches the app user in the Dockerfile)
echo "Setting directory ownership to 1001:1001..."
for dir in "${DATA_DIRS[@]}"; do
    echo "Setting permissions: $dir"
    sudo chown -R 1001:1001 "$dir"
    sudo chmod -R 755 "$dir"
done

# Verify the permission settings
echo "Verifying permissions..."
for dir in "${DATA_DIRS[@]}"; do
    echo "Directory: $dir"
    ls -ld "$dir"
done

echo "File permission fix complete!"
echo ""
echo "Important notes:"
echo "1. The current system user needs read/write access to the data directories"
echo "2. If user 1001 does not exist, make sure the app user inside the Docker container has UID 1001"
echo "3. On Linux, use 'id -u' and 'id -g' to check the current user's IDs"
echo ""
echo "If you still hit permission problems, try:"
echo "1. Add the current user to the docker group: sudo usermod -aG docker \$USER"
echo "2. Log in again for the change to take effect"
echo "3. Or run as root: sudo ./fix-permissions.sh"
@@ -1,144 +0,0 @@
# Operation Log (Codex)

- Time: 2025-11-01 19:35 (UTC+8)
- Action: modified the application-user creation command in the Dockerfile
- Location: Dockerfile (runtime stage, "create application user and directories")
- Change: replaced the original addgroup/adduser invocation with robust branching that works on both Debian/Ubuntu and Alpine (prefer `groupadd/useradd`, fall back to `addgroup/adduser`, choosing arguments depending on whether `--gid` is supported).
- Purpose: fix the build error "Option g is ambiguous (gecos, gid, group)" caused by toolchain differences between base images.
- Tool: apply_patch (patch write)
- Result: patch applied; rebuild with `docker build --no-cache` to verify.

- Time: 2025-11-01 19:40 (UTC+8)
- Action: fixed the image check in the deployment script archive-manager.sh
- Location: archive-manager.sh (deploy_app function)
- Change: added an `image_exists` check that uses `docker image inspect` to test whether the image exists locally; if not, the `deploy` flow automatically calls `build_image`, so `docker compose up` no longer fails trying to pull a non-existent remote image.
- Purpose: fix the "image not found" error during deployment and make one-command deploys smoother.
- Tool: apply_patch (patch write)
- Result: patch applied; `./archive-manager.sh deploy <dir>` now deploys directly, building the image on first use.

- Time: 2025-11-18 17:43 (UTC+8)
- Action: adjusted log output per environment
- Location: src/main/resources/logback-spring.xml
- Change: added springProfile branches; the dev profile logs to the console only, while the prod profile writes to `/app/logs` (matching the Dockerfile) and keeps the per-level RollingFileAppender setup.
- Purpose: dev should not write log files; prod should, in a directory aligned with the container.
- Tool: apply_patch (rewrote the config file)
- Result: config updated; start with the dev/prod profiles to verify console vs. file output.
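The profile split described in this entry can be sketched roughly as follows (appender details and names here are illustrative assumptions, not the project's actual configuration):

```xml
<configuration>
    <!-- Console appender shared by every profile (pattern is illustrative) -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder><pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern></encoder>
    </appender>

    <!-- dev: console only, nothing written to disk -->
    <springProfile name="dev">
        <root level="INFO"><appender-ref ref="CONSOLE"/></root>
    </springProfile>

    <!-- prod: also roll files under /app/logs, matching the Dockerfile -->
    <springProfile name="prod">
        <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>/app/logs/app.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>/app/logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
                <maxHistory>30</maxHistory>
            </rollingPolicy>
            <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger - %msg%n</pattern></encoder>
        </appender>
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE"/>
        </root>
    </springProfile>
</configuration>
```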

- Time: 2025-11-18 17:46 (UTC+8)
- Action: fixed a logback configuration parse error
- Location: src/main/resources/logback-spring.xml
- Change: escaped the logical AND in the springProfile condition as `&amp;`, resolving the XML parse error ("the entity name must immediately follow the '&'").
- Purpose: ensure Spring Boot can load logback-spring.xml.
- Tool: apply_patch (patch write)
- Result: config corrected; restart to verify.

- Time: 2025-11-18 18:00 (UTC+8)
- Action: changed how OperatorTable obtains its data source
- Location: src/main/java/com/point/strategy/table/OperatorTable.java
- Change: no longer reads the data source from a fixed application.properties; instead injects the Spring Environment and resolves `spring.datasource.*` from the active profile, throwing a clear exception when unconfigured; the existing static method API is unchanged, and connection creation now follows the active yml.
- Purpose: ensure the table-creation DDL uses the database connection of the currently active profile, fixing table creation failures under the dev/prod yml files.
- Tool: apply_patch (patch write)
- Result: updated; trigger table creation by adding a new document category under the desired profile.

- Time: 2025-11-20 09:45 (UTC+8)
- Action: located the productSalesDocking method
- Tool: shell (rg -n "productSalesDocking" src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java)
- Output: found the method start at line 1136, confirming the analysis scope

- Time: 2025-11-20 09:45 (UTC+8)
- Action: viewed lines 1100-1400 around the method
- Tool: shell (sed -n '1100,1400p' src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java)
- Output: captured the first half of the method; noted the logging and statistics logic

- Time: 2025-11-20 09:46 (UTC+8)
- Action: extended the read to lines 1170-1900
- Tool: shell (sed -n '1170,1900p' src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java)
- Output: obtained the loop-processing logic, though the large output was truncated

- Time: 2025-11-20 09:46 (UTC+8)
- Action: focused on lines 1500-1700
- Tool: shell (sed -n '1500,1700p' src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java)
- Output: reviewed the 100-column assembly logic

- Time: 2025-11-20 09:47 (UTC+8)
- Action: tried to extract the full method with python
- Tool: shell (python - <<'PY' ...)
- Output: command failed with "python: command not found"

- Time: 2025-11-20 09:47 (UTC+8)
- Action: retried the extraction with python3
- Tool: shell (python3 - <<'PY' ...)
- Output: exported the method text, though it was cut off by the output limit

- Time: 2025-11-20 09:48 (UTC+8)
- Action: located the LIMIT 10 statements
- Tool: shell (rg -n "LIMIT 10" src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java)
- Output: confirmed LIMIT 10 at lines 1230/1232

- Time: 2025-11-20 09:49 (UTC+8)
- Action: inspected the code around the LIMIT clauses
- Tool: shell (nl -ba src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java | sed -n '1205,1245p')
- Output: confirmed the totalPending count and the LIMIT 10 fetch live at lines 1206-1233

- Time: 2025-11-20 09:52 (UTC+8)
- Action: removed the LIMIT 10 restriction from productSalesDocking
- Tool: apply_patch (edited src/main/java/com/point/strategy/oaDocking/controller/ArchivesUploadController.java)
- Output: dropped LIMIT 10 from the fetch SQL while keeping the biz_date filter and ordering, so all rows are read in one pass

- Time: 2025-11-20 09:42 (UTC+8)
- Action: checked the .claude directory
- Tool: shell (ls .claude)
- Output: directory does not exist; returned "ls: .claude: No such file or directory"

- Time: 2025-11-20 09:43 (UTC+8)
- Action: listed the repository root
- Tool: shell (ls)
- Output: recorded the main files and directories in the repository root

- Time: 2025-11-20 09:44 (UTC+8)
- Action: viewed the contents of operations-log.md
- Tool: shell (cat operations-log.md)
- Output: read the log history and confirmed the record format

- Time: 2025-11-20 09:44 (UTC+8)
- Action: queried the current system time
- Tool: shell (date '+%Y-%m-%d %H:%M:%S %Z')
- Output: 2025-11-20 09:44:48 CST

- Time: 2025-11-24 14:47 (UTC+8)
- Action: confirmed whether the .claude directory exists
- Tool: shell (ls .claude)
- Output: directory does not exist; the command returned "No such file or directory"

- Time: 2025-11-24 14:48 (UTC+8)
- Action: read the AnjuanAndJuanneiController.java source
- Tool: shell (sed -n '1,960p' src/main/java/com/point/strategy/originBatchUpload/AnjuanAndJuanneiController.java)
- Output: obtained uploadSimpleFilesAnjuan and the related upload implementation for OOM analysis

- Time: 2025-11-24 14:49 (UTC+8)
- Action: reviewed the PdfFileHelper.image2Pdf implementation
- Tool: shell (sed -n '520,620p' src/main/java/com/point/strategy/common/PdfFileHelper.java)
- Output: confirmed the image-to-PDF logic loads each image twice, once via ImageIO.read and once via the iText Image

- Time: 2025-11-24 14:50 (UTC+8)
- Action: queried the current system time
- Tool: shell (date '+%Y-%m-%d %H:%M:%S %Z')
- Output: 2025-11-24 14:50:02 CST

- Time: 2025-11-24 14:55 (UTC+8)
- Action: added thread-pool throttling for OCR in the legacy archive upload
- Location: src/main/java/com/point/strategy/originBatchUpload/AnjuanAndJuanneiController.java
- Change: replaced per-file thread creation with a bounded thread pool, limiting concurrency and queue depth so that many simultaneous files no longer spawn a flood of OCR threads and spike memory.
- Tool: apply_patch
- Result: code updated

- Time: 2025-11-24 14:56 (UTC+8)
- Action: reduced the memory footprint of image-to-PDF conversion
- Location: src/main/java/com/point/strategy/common/PdfFileHelper.java
- Change: removed the duplicate ImageIO load of large images; the page size is now set directly from the iText Image, saving one full in-memory copy.
- Tool: apply_patch
- Result: code updated

- Time: 2025-11-24 14:59 (UTC+8)
- Action: added concurrency limiting to the upload endpoint (at most 3 files at a time)
- Location: src/main/java/com/point/strategy/originBatchUpload/AnjuanAndJuanneiController.java
- Change: added a Semaphore to cap globally concurrent file processing at 3, and resized the OCR thread pool to 1-3 threads with a shorter queue to match the throttling policy.
- Tool: apply_patch
- Result: code updated
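The throttling pattern recorded in this entry can be sketched as follows; the class, numbers, and sleep times mirror the description above but are illustrative, not the project's actual code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class UploadThrottleDemo {

    // Global cap: at most 3 files processed concurrently, regardless of thread count
    static final Semaphore UPLOAD_SLOTS = new Semaphore(3);
    static final AtomicInteger current = new AtomicInteger();
    static final AtomicInteger observedMax = new AtomicInteger();

    static void processFile(int id) throws InterruptedException {
        UPLOAD_SLOTS.acquire();
        try {
            int now = current.incrementAndGet();
            observedMax.accumulateAndGet(now, Math::max); // track peak concurrency
            Thread.sleep(20); // stand-in for OCR / PDF work
        } finally {
            current.decrementAndGet();
            UPLOAD_SLOTS.release();
        }
    }

    public static void main(String[] args) throws Exception {
        // 1-3 worker threads with a short queue; overflow runs on the caller thread,
        // but the semaphore still caps actual concurrency at 3
        ExecutorService pool = new ThreadPoolExecutor(
                1, 3, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    processFile(id);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("max concurrent files = " + observedMax.get());
    }
}
```

Even though the caller-runs policy can add a fourth executing thread, the semaphore guarantees the observed maximum never exceeds 3.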
pom.xml
@@ -22,6 +22,12 @@
    <repositories>
        <repository>
            <id>com.e-iceblue</id>
            <name>e-iceblue</name>
            <url>https://repo.e-iceblue.com/nexus/content/groups/public/</url>
        </repository>
        <repository>
            <id>com.e-iceblue-cn</id>
            <name>e-iceblue (CN mirror)</name>
            <url>https://repo.e-iceblue.cn/repository/maven-public/</url>
        </repository>
    </repositories>
@@ -342,6 +348,8 @@
            <groupId>e-iceblue</groupId>
            <artifactId>spire.pdf.free</artifactId>
            <version>5.1.0</version>
            <scope>system</scope>
            <systemPath>${basedir}/src/main/lib/spire.pdf.free-5.1.0.jar</systemPath>
        </dependency>
settings.xml
@@ -4,8 +4,8 @@
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                        http://maven.apache.org/xsd/settings-1.0.0.xsd">

    <!-- Local repository location -->
    <localRepository>/root/.m2/repository</localRepository>
    <!-- Local repository location (user.home=/root in the container; in local development user.home is the current user's directory) -->
    <localRepository>${user.home}/.m2/repository</localRepository>

    <!-- Proxy configuration -->
    <proxies>
@@ -120,19 +120,6 @@
            </snapshots>
        </repository>

        <!-- JAI repository -->
        <repository>
            <id>jai-repository</id>
            <name>Java Advanced Imaging Repository</name>
            <url>https://maven.java.net/content/repositories/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>

        <!-- Atlassian repository -->
        <repository>
            <id>atlassian-public</id>
@@ -144,6 +131,30 @@
                <enabled>true</enabled>
            </snapshots>
        </repository>

        <!-- e-iceblue Spire repository (for spire.pdf.free) -->
        <repository>
            <id>com.e-iceblue</id>
            <url>https://repo.e-iceblue.com/nexus/content/groups/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>

        <!-- e-iceblue Spire repository (CN mirror fallback) -->
        <repository>
            <id>com.e-iceblue-cn</id>
            <url>https://repo.e-iceblue.cn/repository/maven-public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>

        <!-- JBoss early-access repository -->
        <repository>
@@ -192,4 +203,4 @@
    <activeProfiles>
        <activeProfile>default</activeProfile>
    </activeProfiles>
</settings>
</settings>
BIN src/main/lib/spire.pdf.free-5.1.0.jar (new file; binary file not shown)
@@ -1,3 +0,0 @@
This is where language files should be placed.

Please DO NOT translate these directly; use this service: https://www.transifex.com/projects/p/tinymce/
src/main/webapp/dist/tinymce/readme.md (vendored)
@@ -1,99 +0,0 @@
TinyMCE - JavaScript Library for Rich Text Editing
===================================================

Building TinyMCE
-----------------
Install [Node.js](https://nodejs.org/en/) on your system.
Clone this repository on your system
```
$ git clone https://github.com/tinymce/tinymce.git
```
Open a console and go to the project directory.
```
$ cd tinymce/
```
Install the `grunt` command line tool globally.
```
$ npm i -g grunt-cli
```
Install all package dependencies.
```
$ npm install
```
Now, build TinyMCE by using `grunt`.
```
$ grunt
```

Build tasks
------------
`grunt`
Lints, compiles, minifies and creates release packages for TinyMCE. This will produce the production-ready packages.

`grunt start`
Starts a webpack-dev-server that compiles the core, themes, plugins and all demos. Go to `localhost:3000` for a list of links to all the demo pages.

`grunt dev`
Runs tsc, webpack and less. This will only produce the bare essentials for a development build and is a lot faster.

`grunt test`
Runs all tests on PhantomJS.

`grunt bedrock-manual`
Runs all tests manually in a browser.

`grunt bedrock-auto:<browser>`
Runs all tests through selenium; supported browsers are chrome, firefox, ie, MicrosoftEdge, chrome-headless and phantomjs.

`grunt webpack:core`
Builds the demo js files for the core part of tinymce; this is required to get the core demos working.

`grunt webpack:plugins`
Builds the demo js files for the plugins part of tinymce; this is required to get the plugins demos working.

`grunt webpack:themes`
Builds the demo js files for the themes part of tinymce; this is required to get the themes demos working.

`grunt webpack:<name>-plugin`
Builds the demo js files for the specific plugin.

`grunt webpack:<name>-theme`
Builds the demo js files for the specific theme.

`grunt --help`
Displays the various build tasks.

Bundle themes and plugins into a single file
---------------------------------------------
`grunt bundle --themes=modern --plugins=table,paste`

Minifies the core, adds the modern theme and adds the table and paste plugins into tinymce.min.js.

Contributing to the TinyMCE project
------------------------------------
TinyMCE is an open source software project and we encourage developers to contribute patches and code to be included in the main package of TinyMCE.

__Basic Rules__

* Contributed code will be licensed under the LGPL license but not limited to LGPL
* Copyright notices will be changed to Ephox Corporation, contributors will get credit for their work
* All third party code will be reviewed, tested and possibly modified before being released
* All contributors will have to have signed the Contributor License Agreement

These basic rules ensure that the contributed code remains open source and under the LGPL license.

__How to Contribute to the Code__

The TinyMCE source code is [hosted on Github](https://github.com/tinymce/tinymce). Through Github you can submit pull requests and log new bugs and feature requests.

When you submit a pull request, you will get a notice about signing the __Contributors License Agreement (CLA)__.
You should have a __valid email address on your GitHub account__, and you will be sent a key to verify your identity and digitally sign the agreement.

After you have signed, your pull request will automatically be ready for review & merge.

__How to Contribute to the Docs__

Docs are hosted on Github in the [tinymce-docs](https://github.com/tinymce/tinymce-docs) repo.

[How to contribute](https://www.tinymce.com/docs/advanced/contributing-docs/) to the docs, including a style guide, can be found on the TinyMCE website.
@@ -1,3 +0,0 @@
This is where language files should be placed.

Please DO NOT translate these directly; use this service: https://www.transifex.com/projects/p/tinymce/
@@ -1 +0,0 @@
Icons are generated and provided by the http://icomoon.io service.
@@ -1,46 +0,0 @@
#!/bin/bash

echo "=== Multi-stage build optimization test ==="
echo

echo "🔧 Docker multi-stage build optimization:"
echo

echo "📊 Build-time comparison:"
echo "┌──────────────────┬─────────────┬─────────────┬───────────────────┐"
echo "│ Build mode       │ First build │ Code change │ Dependency change │"
echo "├──────────────────┼─────────────┼─────────────┼───────────────────┤"
echo "│ Single-stage     │ 10-15 min   │ 10-15 min   │ 10-15 min         │"
echo "│ Multi-stage      │ 10-15 min   │ 2-3 min     │ 10-15 min         │"
echo "└──────────────────┴─────────────┴─────────────┴───────────────────┘"
echo

echo "🚀 How it works:"
echo "1. Base image stage: install system dependencies (apt, fonts, etc.)"
echo "2. Maven build stage: compile and package the code"
echo "3. Runtime stage: reuse the base image + the JAR file"
echo

echo "💡 Benefits:"
echo "✅ Code changes reuse the base image (saves 8-12 minutes)"
echo "✅ System dependencies are installed only once"
echo "✅ Higher build-cache hit rate"
echo "✅ Faster development iteration"
echo

echo "🧪 How to test:"
echo "# First build (full build)"
echo "./archive-manager.sh build"
echo
echo "# Build after a code change (reuses the base image)"
echo "./archive-manager.sh build"
echo "# Should take only 2-3 minutes"
echo

echo "📋 Current multi-stage layout:"
echo "Stage 1: base    - system dependencies and fonts (reusable)"
echo "Stage 2: builder - Maven build (rebuilt on code changes)"
echo "Stage 3: final   - copies the JAR file (fast)"

echo
echo "=== Starting build test ==="
@@ -1,159 +0,0 @@
# Production Path-Concatenation Bug Fix Summary

## Problem description
In the production environment, `uploadPath + File.separator + "uploadFile"` effectively became `uploadPath + "uploadFile"`: the two parts were not cleanly separated by `File.separator`.

## Root-cause analysis

### 1. Production configuration problem
In `application-prod.yml`:
```yaml
img:
  upload: ${IMG_UPLOAD_PATH:/app/data/images}
```

If production sets the environment variable `IMG_UPLOAD_PATH=/app/data/images/` (with a trailing slash), and the code then does:
```java
String saveUrl = uploadPath + File.separator + "uploadFile" + File.separator;
```

the result is `/app/data/images//uploadFile/` (a double-slash problem).

### 2. Path-concatenation defect in the code
Line 3976 of `ImportService.java` contains:
```java
String fullPath = uploadPath + File.separator + dir;
```

which has the same concatenation problem.
|
||||
|
||||
## 修复方案
|
||||
|
||||
### 1. 统一路径处理方法
|
||||
在 `ImportService.java` 中添加了 `combinePath` 方法,自动处理路径分隔符:
|
||||
|
||||
```java
|
||||
/**
|
||||
* 安全地拼接路径,避免路径分隔符重复
|
||||
* @param basePath 基础路径
|
||||
* @param additionalPath 要追加的路径
|
||||
* @return 拼接后的路径
|
||||
*/
|
||||
private String combinePath(String basePath, String additionalPath) {
|
||||
if (basePath == null || basePath.trim().isEmpty()) {
|
||||
return additionalPath;
|
||||
}
|
||||
if (additionalPath == null || additionalPath.trim().isEmpty()) {
|
||||
return basePath;
|
||||
}
|
||||
|
||||
// 确保basePath不以分隔符结尾
|
||||
String normalizedBasePath = basePath;
|
||||
if (basePath.endsWith("/") || basePath.endsWith("\\")) {
|
||||
normalizedBasePath = basePath.substring(0, basePath.length() - 1);
|
||||
}
|
||||
|
||||
// 确保additionalPath不以分隔符开头
|
||||
String normalizedAdditionalPath = additionalPath;
|
||||
if (additionalPath.startsWith("/") || additionalPath.startsWith("\\")) {
|
||||
normalizedAdditionalPath = additionalPath.substring(1);
|
||||
}
|
||||
|
||||
return normalizedBasePath + File.separator + normalizedAdditionalPath;
|
||||
}
|
||||
```
|
||||
|
||||
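To see the helper's normalization in isolation, here is a standalone, runnable version of the same logic. The class name `PathUtil` is invented for this sketch; in the project the method is a private member of `ImportService`.

```java
import java.io.File;

// Standalone sketch of the combinePath helper described above.
public class PathUtil {
    static String combinePath(String basePath, String additionalPath) {
        if (basePath == null || basePath.trim().isEmpty()) {
            return additionalPath;
        }
        if (additionalPath == null || additionalPath.trim().isEmpty()) {
            return basePath;
        }
        // Strip one trailing separator from the base path
        String base = basePath;
        if (base.endsWith("/") || base.endsWith("\\")) {
            base = base.substring(0, base.length() - 1);
        }
        // Strip one leading separator from the appended segment
        String add = additionalPath;
        if (add.startsWith("/") || add.startsWith("\\")) {
            add = add.substring(1);
        }
        return base + File.separator + add;
    }

    public static void main(String[] args) {
        // Both calls produce the same, separator-clean result
        System.out.println(combinePath("/app/data/images/", "uploadFile"));
        System.out.println(combinePath("/app/data/images", "/uploadFile"));
    }
}
```

Whether the configured base path ends in a slash or the appended segment starts with one, the joined path contains exactly one separator at the seam.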
### 2. Fix every concatenation site

#### Main changes:
- `uploadPath + File.separator + "uploadFile" + File.separator` → `combinePath(uploadPath, "uploadFile") + File.separator`
- `uploadPath + File.separator + dir` → `combinePath(uploadPath, dir)`

#### Statistics:
- 19 concatenation sites fixed
- Includes every concatenation in the `hookUp` family of methods
- Also covers the previously missed concatenation at line 3976

### 3. Production configuration cleanup

#### application-prod.yml changes:
```yaml
# Production file-path configuration (Docker-safe paths)
# Note: no path should end with a slash, to avoid double slashes when joining
upload:
  path: ${UPLOAD_PATH:/app/data/upload}
temp:
  path: ${TEMP_PATH:/app/data/temp}
unzip:
  path: ${UNZIP_PATH:/app/data/unzip}
img:
  upload: ${IMG_UPLOAD_PATH:/app/data/images}  # note: no trailing slash
report:
  path: ${REPORT_PATH:/app/data/reports}  # note: no trailing slash
```

## Results

### 1. Path safety
- ✅ Duplicate separators are handled automatically
- ✅ Cross-platform path handling (Windows/Linux)
- ✅ Concatenation mistakes are prevented

### 2. Production readiness
- ✅ Docker-safe path configuration
- ✅ Paths can be overridden via environment variables
- ✅ Configuration comments make the rules explicit

### 3. Code robustness
- ✅ One shared path-joining routine
- ✅ Edge cases (null values, duplicate separators) handled automatically
- ✅ Follows the 12-Factor App principles

## Deployment Recommendations

### 1. Environment variables
When overriding paths, make sure the values carry no trailing slash:
```bash
# Correct
export IMG_UPLOAD_PATH=/app/data/images
export UPLOAD_PATH=/app/data/upload

# Wrong (causes the problem)
export IMG_UPLOAD_PATH=/app/data/images/
export UPLOAD_PATH=/app/data/upload/
```

### 2. Docker configuration
In Docker Compose or Kubernetes:
```yaml
environment:
  - IMG_UPLOAD_PATH=/app/data/images
  - UPLOAD_PATH=/app/data/upload
  - TEMP_PATH=/app/data/temp
```

### 3. Path verification
After deployment, verify the paths via the logs:
```java
logger.info("Image upload path: {}", uploadPath);
logger.info("Final path: {}", combinePath(uploadPath, "uploadFile"));
```

## Troubleshooting Guide

If path problems still occur in production:

### 1. Check the environment variables
```bash
echo $IMG_UPLOAD_PATH
echo $UPLOAD_PATH
```

### 2. Check the log output
Inspect the path values logged by the application and confirm their format is correct.

### 3. Check file-system permissions
Make sure the application may create files and directories under the configured paths.

## Summary
Introducing a single path-joining helper and cleaning up the production configuration resolves the path-concatenation problem in production. The fix keeps the application stable across environments and prevents failures such as broken file uploads caused by malformed paths.
226 资源映射目录配置总结.md
@@ -1,226 +0,0 @@
# Summary of the Project's Resource-Mapping Configuration

## Overview
The static-resource mappings of this Spring Boot project are defined mainly in `WebAppConfig.java`. They cover uploaded files, images, temporary files, and several other kinds of resources.

## Static-Resource Mappings

### 1. Uploaded files
**Configured in**: `WebAppConfig.java`, lines 57-60
```java
// Access mapping for uploaded files
registry.addResourceHandler("/upload/**")
        .addResourceLocations("file:" + uploadPath + "/");
```
- **Virtual path**: `/upload/**`
- **Physical path**: taken from `${upload.path}`
- **Development**: `/Users/ab/Desktop/tmp/data/tomcat/webapps/upload/`
- **Production**: `/app/data/upload/` (Docker)

### 2. Image files
**Configured in**: `WebAppConfig.java`, lines 61-64
```java
// Access mapping for image files (external storage)
registry.addResourceHandler("/img/**")
        .addResourceLocations("file:" + imgUploadPath + "/");
```
- **Virtual path**: `/img/**`
- **Physical path**: taken from `${img.upload}`
- **Development**: `/Users/ab/Desktop/tmp/data/upload/`
- **Production**: `/app/data/images/` (Docker)

### 3. Temporary files
**Configured in**: `WebAppConfig.java`, lines 65-68
```java
// Access mapping for temporary files
registry.addResourceHandler("/temp/**")
        .addResourceLocations("file:" + tempPath + "/");
```
- **Virtual path**: `/temp/**`
- **Physical path**: taken from `${temp.path}`
- **Development**: `/Users/ab/Desktop/tmp/data/tempPath/`
- **Production**: `/app/data/temp/` (Docker)

### 4. Unzipped files
**Configured in**: `WebAppConfig.java`, lines 69-72
```java
// Access mapping for unzipped files
registry.addResourceHandler("/unzip/**")
        .addResourceLocations("file:" + unzipPath + "/");
```
- **Virtual path**: `/unzip/**`
- **Physical path**: taken from `${unzip.path}`
- **Development**: `/Users/ab/Desktop/tmp/data/unzip/`
- **Production**: `/app/data/unzip/` (Docker)

### 5. Report files
**Configured in**: `WebAppConfig.java`, lines 73-76
```java
// Access mapping for report files
registry.addResourceHandler("/report/**")
        .addResourceLocations("file:" + reportPath + "/");
```
- **Virtual path**: `/report/**`
- **Physical path**: taken from `${report.path}`
- **Development**: `/Users/ab/Desktop/tmp/data/report/path/`
- **Production**: `/app/data/reports/` (Docker)

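The five file-system mappings above all follow the same `addResourceHandler` / `addResourceLocations("file:…/")` pattern, so they could be registered in a loop. This is a sketch only: the project's `WebAppConfig` registers them individually, and the field and class names here are assumed.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Sketch: register the five "file:" mappings from one table.
// The @Value keys mirror the configuration properties documented above.
@Configuration
public class WebAppConfigSketch implements WebMvcConfigurer {

    @Value("${upload.path}") private String uploadPath;
    @Value("${img.upload}")  private String imgUploadPath;
    @Value("${temp.path}")   private String tempPath;
    @Value("${unzip.path}")  private String unzipPath;
    @Value("${report.path}") private String reportPath;

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        String[][] mappings = {
            {"/upload/**", uploadPath},
            {"/img/**",    imgUploadPath},
            {"/temp/**",   tempPath},
            {"/unzip/**",  unzipPath},
            {"/report/**", reportPath},
        };
        for (String[] m : mappings) {
            // The trailing "/" on the location is required for Spring to
            // resolve sub-paths under the directory.
            registry.addResourceHandler(m[0])
                    .addResourceLocations("file:" + m[1] + "/");
        }
    }
}
```

Keeping the table in one place also makes it harder for the mapping list and the interceptor exclusion list to drift apart.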
### 6. Webapp static resources

#### 6.1 PDF files
**Configured in**: `WebAppConfig.java`, lines 77-79
```java
// Access mapping for webapp static resources
registry.addResourceHandler("/pdffile/**")
        .addResourceLocations("classpath:/pdffile/");
```
- **Virtual path**: `/pdffile/**`
- **Physical path**: `src/main/resources/pdffile/`
- **Actual location**: resource files inside the JAR

#### 6.2 Images
**Configured in**: `WebAppConfig.java`, lines 80-82
```java
registry.addResourceHandler("/images/**")
        .addResourceLocations("classpath:/images/");
```
- **Virtual path**: `/images/**`
- **Physical path**: `src/main/webapp/images/`
- **Actual location**: static resources under the web application directory

#### 6.3 Templates
**Configured in**: `WebAppConfig.java`, lines 83-85
```java
registry.addResourceHandler("/template/**")
        .addResourceLocations("classpath:/template/");
```
- **Virtual path**: `/template/**`
- **Physical path**: `src/main/resources/templates/`
- **Actual location**: template resources inside the JAR

### 7. Swagger API documentation
**Configured in**: `SwaggerConfig.java`
```java
// Swagger UI access mapping
registry.addResourceHandler("swagger-ui.html")
        .addResourceLocations("classpath:/META-INF/resources/");

registry.addResourceHandler("/webjars/**")
        .addResourceLocations("classpath:/META-INF/resources/webjars/");
```
- **Virtual paths**: `/swagger-ui.html`, `/webjars/**`
- **Physical path**: Swagger resources inside the JAR

## Interceptor Exclusions

### Paths excluded from TokenInterceptor
**Configured in**: `WebAppConfig.java`, lines 28-30
```java
registry.addInterceptor(tokenInterceptor)
        .addPathPatterns("/**")
        .excludePathPatterns("/upload/**", "/images/**", "/temp/**", "/unzip/**", "/report/**", "/pdffile/**", "/template/**");
```
- All static-resource paths are excluded from token interception
- Static resources can therefore be accessed directly, without authentication

## CORS Configuration

### Cross-origin settings
**Configured in**: `WebAppConfig.java`, lines 34-42
```java
registry.addMapping("/**")
        .allowedOrigins("*")
        .allowedMethods("*")
        .allowedHeaders("*")
        .allowCredentials(true)
        .exposedHeaders(HttpHeaders.SET_COOKIE)
        .maxAge(3600L);
```
- Allows cross-origin access from any origin
- Allows all HTTP methods
- Allows all request headers

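One caveat worth recording: since Spring Framework 5.3, combining `allowedOrigins("*")` with `allowCredentials(true)` is rejected at startup, because a wildcard origin cannot be echoed back when credentials are allowed. If the project is ever upgraded past 5.3, `allowedOriginPatterns` is the replacement. A sketch under that assumption (class name invented):

```java
import org.springframework.http.HttpHeaders;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Sketch only: on Spring Framework 5.3+ the wildcard must move from
// allowedOrigins to allowedOriginPatterns when credentials are allowed.
public class CorsSketch implements WebMvcConfigurer {
    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**")
                .allowedOriginPatterns("*")   // instead of allowedOrigins("*")
                .allowedMethods("*")
                .allowedHeaders("*")
                .allowCredentials(true)
                .exposedHeaders(HttpHeaders.SET_COOKIE)
                .maxAge(3600L);
    }
}
```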
## Static-Resource Directory Layout

### src/main/webapp/
```
webapp/
├── checkCode.html           # captcha page
├── downLoad.html            # download page
├── image.html               # image display page
├── index.html               # home page
├── jquery.min.js            # jQuery library
├── login.html               # login page
├── orginSingleUpload.html   # single-file upload page
├── upload.html              # upload page
├── dist/                    # front-end build output
├── images/                  # front-end image assets
├── pdffile/                 # PDF-related assets
├── pdfjs/                   # PDF.js library
├── pdfxml/                  # PDF templates
├── template/                # front-end templates
└── WEB-INF/                 # web application configuration
```

### src/main/resources/
```
resources/
├── application-dev.yml        # development configuration
├── application-prod.yml       # production configuration
├── application.yml            # default configuration
├── logback-spring.xml         # logging configuration
├── ocr.properties             # OCR configuration
├── ureport-config.properties  # reporting configuration
├── templates/                 # back-end templates
├── mapper/                    # MyBatis mapping files
└── SIMYOU.TTF                 # font file
```

## Access Examples

### 1. Uploaded files
```
http://localhost:9081/point-strategy/upload/filename.pdf
```

### 2. Images
```
http://localhost:9081/point-strategy/img/image.jpg
```

### 3. Temporary files
```
http://localhost:9081/point-strategy/temp/tempfile.pdf
```

### 4. Static resources
```
http://localhost:9081/point-strategy/images/logo.png
http://localhost:9081/point-strategy/pdffile/template.pdf
http://localhost:9081/point-strategy/template/report.html
```

## Key Points

### 1. Path conventions
- Every physical path should end with `/`
- Virtual paths end with `/**` so that sub-paths resolve
- The `file:` prefix is used to reach the external file system

### 2. Environment differences
- Development uses local-disk paths
- Production uses Docker-container-safe paths
- Paths can be customized via environment variables

### 3. Security
- Static resources are excluded from authentication interception
- Cross-origin access is supported
- Virtual paths hide the physical paths

### 4. Performance
- Static-resource mappings support browser caching
- Excluding the interceptor reduces per-request overhead
- The cross-origin preflight cache time is set to a reasonable value

## Summary
The project's static-resource configuration is comprehensive, covering uploaded files, temporary files, report files, and more. Managing all mappings in one place keeps file access both safe and performant.
126 路径拼接问题修复总结.md
@@ -1,126 +0,0 @@
# Summary of the Path-Concatenation Fix

## Problem
A path-concatenation problem was found in `ImportService.java`: in production, `uploadPath + File.separator + "uploadFile"` effectively behaved like `uploadPath + "uploadFile"`, without a clean `File.separator` between the segments.

## Cause
In `application-dev.yml`, `img.upload` is set to:
```yaml
img:
  upload: /Users/ab/Desktop/tmp/data/upload/
```

The path already ends with `/`, yet the code does:
```java
String saveUrl = uploadPath + File.separator + "uploadFile" + File.separator;
```

which concatenates `/Users/ab/Desktop/tmp/data/upload/` + `File.separator` + `uploadFile`.

The result differs by operating system:
- Linux: `/Users/ab/Desktop/tmp/data/upload//uploadFile` (double slash)
- Windows: `\Users\Desktop\tmp\data\upload/\uploadFile` (mixed separators)

## Fix

### 1. A new path-joining helper
A `combinePath` method was added to `ImportService.java`:

```java
/**
 * Safely join two paths, avoiding duplicated separators.
 * @param basePath base path
 * @param additionalPath path segment to append
 * @return the joined path
 */
private String combinePath(String basePath, String additionalPath) {
    if (basePath == null || basePath.trim().isEmpty()) {
        return additionalPath;
    }
    if (additionalPath == null || additionalPath.trim().isEmpty()) {
        return basePath;
    }

    // Ensure basePath does not end with a separator
    String normalizedBasePath = basePath;
    if (basePath.endsWith("/") || basePath.endsWith("\\")) {
        normalizedBasePath = basePath.substring(0, basePath.length() - 1);
    }

    // Ensure additionalPath does not start with a separator
    String normalizedAdditionalPath = additionalPath;
    if (additionalPath.startsWith("/") || additionalPath.startsWith("\\")) {
        normalizedAdditionalPath = additionalPath.substring(1);
    }

    return normalizedBasePath + File.separator + normalizedAdditionalPath;
}
```

### 2. Bulk replacement of the concatenations
All 19 concatenation sites were changed from:
```java
String saveUrl = uploadPath + File.separator + "uploadFile" + File.separator;
```

to:
```java
String saveUrl = combinePath(uploadPath, "uploadFile") + File.separator;
```

## Where the Fixes Landed
19 concatenation sites were fixed in total, including:

### Concatenations in the main methods
- `hookUp`
- `hookUpTwoZip`
- `hookUpNew`
- `hookUpTwo`
- `hookUpJzt`
- `hookUpXiaoGan`
- other related methods

### Commented-out methods
Concatenations inside some commented-out methods were replaced as well, for consistency.

## Results

### 1. Path correctness
- ✅ No duplicated separators
- ✅ Cross-platform path handling
- ✅ Consistent path format

### 2. Code robustness
- ✅ Null values and edge cases are handled
- ✅ Path format is normalized automatically
- ✅ Works across operating systems

### 3. Maintainability
- ✅ One shared path-joining routine
- ✅ Easy to understand and maintain
- ✅ Reusable helper method

## Verification
- ✅ Compilation passes: `mvn compile -q`
- ✅ No syntax errors
- ✅ No remaining concatenation problems
- ✅ Business logic unchanged

## Usage Example

### Before
```java
// Suppose uploadPath = "/path/to/upload/"
String saveUrl = uploadPath + File.separator + "uploadFile" + File.separator;
// Result: "/path/to/upload//uploadFile/" (Linux)
// Result: "\path\to\upload/\uploadFile\" (Windows)
```

### After
```java
String saveUrl = combinePath(uploadPath, "uploadFile") + File.separator;
// Result: "/path/to/upload/uploadFile/" (no duplicated separator on any platform)
```

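An alternative worth noting: `java.nio.file.Paths` collapses duplicate separators on its own, so it can serve the same purpose as a hand-written helper. This is a sketch of that option, not what the project uses; the class name is invented.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class NioJoinDemo {
    public static void main(String[] args) {
        // Paths.get collapses the duplicate separator that plain string
        // concatenation would produce, so trailing slashes in the
        // configuration become harmless.
        Path p = Paths.get("/path/to/upload/", "uploadFile");
        System.out.println(p); // /path/to/upload/uploadFile on POSIX systems
    }
}
```

A hand-written helper remains useful where the result must stay a `String` and mixed `/` and `\` inputs are possible, but for new code the NIO API avoids the problem class entirely.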
## Summary
The path-concatenation problem in `ImportService.java` is resolved. Introducing a single path-joining helper keeps paths consistent and correct across operating systems and configuration environments, improves the robustness and maintainability of the code, and prevents production failures caused by malformed paths.