commit 89119f3c8bfcc41871253ef051dd2d5e2ce3ab52
Author: fjj <1066869486@qq.com>
Date:   Fri Dec 8 11:44:35 2023 +0800

Changes

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..46e1f4b
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,25 @@
# Compiled class file
*.class

# Log file
logs/
*.log

# BlueJ files
*.ctxt

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
.idea
*.iml
*.war
*.nar
*.ear
*.tar.gz
*.rar
target/

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..24c10e4
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,9 @@
SRT License

Copyright (c) 2023 天津数睿通科技有限公司

The Shuruitong 2.0 data platform may be used for learning and exchange, and for secondary development and commercial use within projects, but it may not be modified and then sold as source code for profit.
The intellectual property rights of the software belong to Tianjin Shuruitong Technology Co., Ltd. Violators will be held legally responsible!

数睿通 2.0 数据中台可用于学习交流、二开商用于项目之中,但不可经过修改,换化之后通过售卖源码谋利
软件知识产权归天津数睿通科技有限公司所有,违者将追究其法律责任!

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..8fe7027
--- /dev/null
+++ b/README.md
@@ -0,0 +1,304 @@
## Project Overview

srt-cloud is the new-generation Shuruitong (数睿通) data platform, built with current technologies such as Vue3, TypeScript, Spring Cloud Alibaba, Spring Security, Spring Cloud Gateway, Spring Boot, Nacos, Redis, Mybatis-Plus, TiDB, Flink, and Hadoop. It comprises six major modules: data integration, data development, data governance, data assets, data services, and data mart. It tackles data silos, establishes unified data standards, and supports custom data development tasks, helping enterprises, governments, and others solve their data problems. The project is under active development, and a usable release will be delivered as soon as possible.

## Feature Modules

The global management, application management, log management, system management, data integration, data development, and data service modules are essentially complete.

- Data Integration
  - Database management — manages user-added data sources; supports MYSQL/ORACLE/SQLSERVER/POSTGRESQL/GREENPLUM/MARIADB/DB2/DM/OSCAR/KINGBASE8/GBASE8A/HIVE/SQLITE3/SYBASE/DORIS; supports browsing schemas and tables, testing connections, and more
  - File management — manages data files uploaded by users
  - Data access — ingests data from external sources into the platform **ODS** layer, or into a custom target source; supports one-off full synchronization and periodic incremental synchronization; mapping rules for table and column names can be customized, with regular-expression matching; execution records and detailed results can be inspected, including rows synced, data size, counts of successful and failed tables, and success/failure messages, down to each table's rows, size, and errors, giving users a complete picture of every data access run
  - Source-aligned data — browse the tables and data ingested into the ODS layer, with each table's synchronization history
- Data Development
  - Data production — code-based editing of data jobs, custom DDL modeling, running, debugging, and so on
  - Scheduling center
    - Schedule management — flow-based editing and visual scheduling of production jobs
    - Schedule records — view scheduling results, logs, and more
  - Operations center — operational management of job execution
  - Resource center
    - Flink cluster instances — manage Flink resources
    - Hadoop cluster configuration — manage Hadoop resources
  - Configuration center — manage FlinkSql execution configuration
- Data Services
  - API catalog — user-defined API catalogs; APIs are generated dynamically and exposed as services
  - API permissions — authorization of private APIs
  - API logs — view API invocation logs
- Data Governance
  - Metadata
    - Metamodel — metadata that describes metadata, mainly defining metadata attributes; metamodels are usually built in, e.g. the table metamodel and the column metamodel
    - Metadata harvesting — collects metadata according to the defined metamodels; each metamodel typically has its own built-in collection logic, and a collection schedule can be configured
    - Metadata management — browse and manage the harvested metadata
  - Data lineage — builds lineage graphs automatically from the relations between data access and data production flows, tracing where data goes (70%)
  - Data standards
  - Data quality
- Data Assets (70%)
  - Resource management — custom resource catalogs; under each catalog, resources such as databases and APIs can be defined and mounted
  - Asset overview — aggregate statistics over platform resources
- Data Mart (50%)
  - Resource catalog — browse the platform resource catalog and its resources, with support for applying for resources
  - API catalog — browse the platform API catalog and its APIs, with support for applying for APIs
  - My applications — view your own application records and approval results
  - Service approval — administrators approve other roles' applications; once approved, the applicant is notified and can use the requested service resource
- Global Management
  - Data project management — management of platform projects (tenants); users can be associated with each project and can only see data under their own projects; each project selects its own data warehouse, and all module data belongs to a project
  - Warehouse layer display — displays and explains the layering of the platform data warehouse
- Application Management
  - Message management
    - SMS platforms — integrates common SMS platforms such as Alibaba and Tencent
    - SMS logs — logs produced by SMS calls
- Log Management
  - Login logs — logs generated by system logins
- System Management
  - User management — manage system users
  - Menu management — manage system menus, used to implement dynamic menus
  - Scheduled tasks — custom scheduled tasks with scheduled execution
  - Data dictionary — the system's dictionary data
  - Organization management — organization data; where module data carries an owning organization, it can be used for data permissions
  - Post management — management of posts
  - Role management — per-role customization of menu visibility and organization-level data permissions
  - Attachment management — system attachments, with upload and download

## Data Warehouse Architecture

Each tenant's data warehouse can be configured in global project management. After data integration lands data in the ODS layer, it can be developed further through data production. The overall data flow is shown below:

###### ![数睿通数仓架构图](images/数睿通数仓架构图.png)

Why the warehouse is layered: layering helps manage the data, and each query only needs to read the pre-computed, analysis-ready results rather than recomputing from the raw source every time, which avoids wasting compute. Source data is typically large and the intermediate processing logic fairly complex, so a layered modeling approach is used, and table names are conventionally prefixed with their layer, as sketched below.
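To make the prefix convention concrete, here is a minimal MySQL sketch of promoting data one layer up. The `ods_people` table name matches the seed data shipped in this dump, but its columns and the `dwd_people` target table are assumptions invented for illustration:

```sql
-- Hypothetical DWD table; real column definitions depend on the modeled source.
CREATE TABLE IF NOT EXISTS dwd_people (
    id   BIGINT PRIMARY KEY COMMENT 'primary key',
    name VARCHAR(50) COMMENT 'cleaned name',
    age  INT COMMENT 'validated age'
) COMMENT 'DWD detail layer for people (prefix dwd_)';

-- Promote from the ODS layer with light cleaning
-- (this could run as a data production job).
INSERT INTO dwd_people (id, name, age)
SELECT id, TRIM(name), age
FROM ods_people
WHERE age BETWEEN 0 AND 150;
```

Downstream DWS and ADS tables would then aggregate from `dwd_people` in the same prefixed style.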
## Core Technology Stack

Front end:

- vue3
- vite
- typeScript
- element-plus
- pinia
- ...

Back end:

- Spring Cloud Alibaba
- SpringSecurity
- Spring Cloud Gateway
- SpringBoot
- Nacos
- Redis
- Mybatis-Plus
- mysql8.0
- tidb
- doris
- flink
- flink cdc
- flink sql
- neo4j
- ...

## Running the System

### Download Nacos

Download Nacos from GitHub: https://github.com/alibaba/nacos/releases
Use version 2.1.1, since that is the version this project is built against; a mismatched version will cause errors when the project starts later.

### Nacos database

Note: Nacos currently supports only the MySQL database here; install MySQL 8.0 to avoid other errors.

Create a database named `nacos_config` and run the `conf/nacos-mysql.sql` file to initialize it.
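A minimal sketch of that step from the mysql command-line client follows; the utf8mb4 character set is an assumption, as the README does not specify one for nacos_config:

```sql
-- Create the Nacos configuration database (the charset is an assumption).
CREATE DATABASE IF NOT EXISTS nacos_config DEFAULT CHARACTER SET utf8mb4;
USE nacos_config;

-- SOURCE is a mysql client builtin; the script ships with the Nacos distribution.
SOURCE conf/nacos-mysql.sql;
```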
### Edit the Nacos configuration file

Append the following configuration to the end of `conf/application.properties`:

```bash
# Fill in your own IP address (124.223.48.209 is the author's example host)
nacos.inetutils.ip-address=124.223.48.209

spring.datasource.platform=mysql
db.num=1
# Fill in your own database connection and password
db.url.0=jdbc:mysql://124.223.48.209:3306/nacos_config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=root
```

### Start Nacos

```bash
Windows:
startup.cmd -m standalone

Linux:
sh startup.sh -m standalone

# For cluster startup, see the 螺旋编程极客 WeChat official account
```

Open the Nacos console (http://localhost:8848/nacos); the initial username and password are both nacos. After logging in it looks like this:

![image-20221030203128590](images/nacos.png)

### Create the system database

Create the `srt_cloud` database with encoding `utf8mb4`,
then execute the `db/srt_cloud.sql` script to initialize it.

### Import the Nacos configuration

Import the Nacos configuration file shipped with the project at `deploy/nacos_config.zip`, as shown below:

![](images/nacos-config.png)

After importing, you still need to edit datasource.yaml in Nacos, e.g. the Redis and MySQL settings.

### Download and install neo4j

For details, see

[neo4j installation](https://blog.csdn.net/weixin_44593504/article/details/119903908)

Install the neo4j-community-3.5.3-unix.tar.gz version (included in the installer folder); other versions may not be compatible.

### Start the back end

Import the project into IDEA. Note that some of the JDBC driver jars do not exist in the official Maven repository and must be installed locally with `mvn install`; the jars can be picked up from the JDBC driver folder on the network drive. After importing the project, tick the flink1.14 profile in the Maven panel (top right) and refresh a few times. Once the Maven dependencies have finished importing, start the services in order:

#### Start srt-cloud-gateway

Run GatewayApplication.java

#### Start srt-cloud-system

Run SystemApplication.java

#### Start srt-cloud-data-integrate

Run DataIntegrateApplication.java

#### Start srt-cloud-data-development

Run DataDevelopmentApplication.java

#### Start srt-cloud-data-service

Run DataServiceApplication.java

#### Start srt-cloud-data-governance

Run DataGovernanceApplication.java

#### Start srt-cloud-data-assets

Run DataAssetsApplication.java

#### Start srt-cloud-quartz

Run QuartzApplication.java

#### Start srt-cloud-message

Run MessageApplication.java

### Start the front end

Install `nodejs` version `16.15.0`. If another version is already installed, uninstall it first. nvm is recommended for installing node.js, as it makes switching between versions easy.

1. First uninstall the locally installed `nodejs`, then download nvm from:
   https://github.com/coreybutler/nvm-windows/releases
2. Normally, find the latest release and download the `nvm-setup.exe` file; after downloading, double-click it to install.
3. Open a command line with `PowerShell`; note that `PowerShell` must be run as `Administrator`.
4. `nvm version` shows the nvm version number.
5. `nvm ls available` lists the available `nodejs` versions.
6. `nvm install 16.15.0` installs `nodejs` version `16.15.0`.
7. `nvm list` shows the installed versions.
8. `nvm use 16.15.0` switches to `nodejs` version `16.15.0`; `node -v` now shows the current `nodejs` version.
9. `nvm uninstall 16.15.0` uninstalls `nodejs` version `16.15.0`.

Open srt-cloud-web with vscode or hbuildx.

Install dependencies:

```bash
npm install
```

Run the project:

```bash
npm run dev
```

Build the project:

```bash
npm run build
```

## Screenshots

![image-20221030205835569](images/login.png)

![](images/首页.png)

![image-20221030210227005](images/数据库管理.png)

![image-20221030210420292](images/修改数据库.png)

![image-20221030210549858](images/查看库表.png)

![image-20221030210702083](images/数据接入.png)

![image-20221030210802981](images/接入查看.png)

![image-20221030210913467](images/接入编辑.png)

![image-20221030211158654](images/执行记录.png)

![image-20221030211424876](images/同步结果.png)

![数据生产-sql](images/数据生产-sql.png)

![数据生产-flinksql校验](images/数据生产-flinksql校验.png)

![数据生产-mysql-cdc](images/数据生产-mysql-cdc.png)

![数据生产-调度](images/数据生产-调度.png)

![数据生产-执行](images/数据生产-执行.png)

![运维中心](images/运维中心.png)

![元模型](images/元模型.png)

![元数据采集](images/元数据采集.png)

![元数据采集记录](images/元数据采集记录.png)

![采集日志](images/采集日志.png)

![元数据管理](images/元数据管理.png)

![资产目录](images/资产目录.png)

![资产详情](images/资产详情.png)

![开放范围](images/开放范围.png)

![资源挂载](images/资源挂载.png)

![挂载数据库表](images/挂载数据库表.png)

![挂载API](images/挂载API.png)

![数据库表](images/数据库表.png)

![资源查阅](images/资源查阅.png)

## Help and Support

To learn more, follow the WeChat official account **螺旋编程极客**, add the author on WeChat, or join the Knowledge Planet (知识星球) from the menu bar, so we can improve and grow together.

diff --git a/README.pdf b/README.pdf
new file mode 100644
index 0000000..10e7f3b
Binary files /dev/null and b/README.pdf differ
diff --git a/assembly/assembly-linux.xml b/assembly/assembly-linux.xml
new file mode 100644
index 0000000..7b96a40
--- /dev/null
+++ b/assembly/assembly-linux.xml
@@ -0,0 +1,41 @@
<assembly>
	<id>linux</id>
	<includeBaseDirectory>false</includeBaseDirectory>
	<formats>
		<format>tar.gz</format>
	</formats>
	<fileSets>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/bin</directory>
			<outputDirectory>${project.artifactId}/bin</outputDirectory>
			<fileMode>0755</fileMode>
			<includes>
				<include>${project.artifactId}</include>
				<include>wrapper-linux*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/lib</directory>
			<outputDirectory>${project.artifactId}/lib</outputDirectory>
			<includes>
				<include>*.jar</include>
				<include>libwrapper-linux*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/conf</directory>
			<outputDirectory>${project.artifactId}/conf</outputDirectory>
			<includes>
				<include>*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/logs</directory>
			<outputDirectory>${project.artifactId}/logs</outputDirectory>
			<includes>
				<include>**/*</include>
			</includes>
		</fileSet>
	</fileSets>
</assembly>
diff --git a/assembly/assembly-win.xml b/assembly/assembly-win.xml
new file mode 100644
index 0000000..bb2970b
--- /dev/null
+++ b/assembly/assembly-win.xml
@@ -0,0 +1,40 @@
<assembly>
	<id>win</id>
	<includeBaseDirectory>false</includeBaseDirectory>
	<formats>
		<format>tar.gz</format>
	</formats>
	<fileSets>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/bin</directory>
			<outputDirectory>${project.artifactId}/bin</outputDirectory>
			<fileMode>0755</fileMode>
			<includes>
				<include>${project.artifactId}.bat</include>
				<include>wrapper-windows*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/lib</directory>
			<outputDirectory>${project.artifactId}/lib</outputDirectory>
			<includes>
				<include>*.jar</include>
				<include>wrapper-windows*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/conf</directory>
			<outputDirectory>${project.artifactId}/conf</outputDirectory>
			<includes>
				<include>*</include>
			</includes>
		</fileSet>
		<fileSet>
			<directory>target/generated-resources/appassembler/jsw/${project.artifactId}/logs</directory>
			<outputDirectory>${project.artifactId}/logs</outputDirectory>
			<includes>
				<include>**/*</include>
			</includes>
		</fileSet>
	</fileSets>
</assembly>
diff --git a/deploy/nacos_config.zip b/deploy/nacos_config.zip
new file mode 100644
index 0000000..7c8e540
Binary files /dev/null and b/deploy/nacos_config.zip differ
diff --git a/images/login.png b/images/login.png
new file mode 100644
index 0000000..6fe03e7
Binary files /dev/null and b/images/login.png differ
diff --git a/images/nacos-config.png b/images/nacos-config.png
new file mode 100644
index 0000000..4090370
Binary files /dev/null and b/images/nacos-config.png differ
diff --git a/images/nacos.png b/images/nacos.png
new file mode 100644
index 0000000..451843b
Binary files /dev/null and
b/images/nacos.png differ diff --git a/images/修改数据库.png b/images/修改数据库.png new file mode 100644 index 0000000..2b98ab5 Binary files /dev/null and b/images/修改数据库.png differ diff --git a/images/元数据管理.png b/images/元数据管理.png new file mode 100644 index 0000000..5d47792 Binary files /dev/null and b/images/元数据管理.png differ diff --git a/images/元数据采集.png b/images/元数据采集.png new file mode 100644 index 0000000..7942c9a Binary files /dev/null and b/images/元数据采集.png differ diff --git a/images/元数据采集记录.png b/images/元数据采集记录.png new file mode 100644 index 0000000..e097cbf Binary files /dev/null and b/images/元数据采集记录.png differ diff --git a/images/元模型.png b/images/元模型.png new file mode 100644 index 0000000..f15eed0 Binary files /dev/null and b/images/元模型.png differ diff --git a/images/同步结果.png b/images/同步结果.png new file mode 100644 index 0000000..a395bfc Binary files /dev/null and b/images/同步结果.png differ diff --git a/images/开放范围.png b/images/开放范围.png new file mode 100644 index 0000000..449856e Binary files /dev/null and b/images/开放范围.png differ diff --git a/images/执行记录.png b/images/执行记录.png new file mode 100644 index 0000000..ab01d4a Binary files /dev/null and b/images/执行记录.png differ diff --git a/images/挂载api.png b/images/挂载api.png new file mode 100644 index 0000000..397c5d9 Binary files /dev/null and b/images/挂载api.png differ diff --git a/images/挂载数据库表.png b/images/挂载数据库表.png new file mode 100644 index 0000000..f2a0294 Binary files /dev/null and b/images/挂载数据库表.png differ diff --git a/images/接入查看.png b/images/接入查看.png new file mode 100644 index 0000000..cf078b8 Binary files /dev/null and b/images/接入查看.png differ diff --git a/images/接入编辑.png b/images/接入编辑.png new file mode 100644 index 0000000..5e5f0d0 Binary files /dev/null and b/images/接入编辑.png differ diff --git a/images/数据库管理.png b/images/数据库管理.png new file mode 100644 index 0000000..a84655d Binary files /dev/null and b/images/数据库管理.png differ diff --git a/images/数据库表.png b/images/数据库表.png new file mode 100644 index 0000000..9324f2f Binary files /dev/null and b/images/数据库表.png differ diff --git a/images/数据接入.png b/images/数据接入.png new file mode 100644 index 0000000..51d051d Binary files /dev/null and b/images/数据接入.png differ diff --git a/images/数据生产-flinksql校验.png b/images/数据生产-flinksql校验.png new file mode 100644 index 0000000..ec25440 Binary files /dev/null and b/images/数据生产-flinksql校验.png differ diff --git a/images/数据生产-mysql-cdc.png b/images/数据生产-mysql-cdc.png new file mode 100644 index 0000000..060a720 Binary files /dev/null and b/images/数据生产-mysql-cdc.png differ diff --git a/images/数据生产-sql.png b/images/数据生产-sql.png new file mode 100644 index 0000000..1cd1d59 Binary files /dev/null and b/images/数据生产-sql.png differ diff --git a/images/数据生产-执行.png b/images/数据生产-执行.png new file mode 100644 index 0000000..44c2717 Binary files /dev/null and b/images/数据生产-执行.png differ diff --git a/images/数据生产-调度.png b/images/数据生产-调度.png new file mode 100644 index 0000000..34c3b81 Binary files /dev/null and b/images/数据生产-调度.png differ diff --git a/images/数睿通数仓架构图.png b/images/数睿通数仓架构图.png new file mode 100644 index 0000000..3209313 Binary files /dev/null and b/images/数睿通数仓架构图.png differ diff --git a/images/查看库表.png b/images/查看库表.png new file mode 100644 index 0000000..3087162 Binary files /dev/null and b/images/查看库表.png differ diff --git a/images/资产目录.png b/images/资产目录.png new file mode 100644 index 0000000..56791e0 Binary files /dev/null and b/images/资产目录.png differ diff --git a/images/资产详情.png b/images/资产详情.png new file mode 100644 index 0000000..9a413a9 Binary files 
/dev/null and b/images/资产详情.png differ diff --git a/images/资源挂载.png b/images/资源挂载.png new file mode 100644 index 0000000..4779666 Binary files /dev/null and b/images/资源挂载.png differ diff --git a/images/资源查阅.png b/images/资源查阅.png new file mode 100644 index 0000000..ef37e46 Binary files /dev/null and b/images/资源查阅.png differ diff --git a/images/运维中心.png b/images/运维中心.png new file mode 100644 index 0000000..644ed89 Binary files /dev/null and b/images/运维中心.png differ diff --git a/images/采集日志.png b/images/采集日志.png new file mode 100644 index 0000000..4b5ee50 Binary files /dev/null and b/images/采集日志.png differ diff --git a/images/首页.png b/images/首页.png new file mode 100644 index 0000000..acb3dc2 Binary files /dev/null and b/images/首页.png differ diff --git a/pom.xml b/pom.xml new file mode 100644 index 0000000..f9bed64 --- /dev/null +++ b/pom.xml @@ -0,0 +1,238 @@ + + 4.0.0 + net.srt + srt-cloud + 2.0.0 + pom + + srt-cloud + 新一代数睿通数据中台 + + + org.springframework.boot + spring-boot-starter-parent + 2.6.11 + + + + srt-cloud-framework + srt-cloud-api + srt-cloud-module + srt-cloud-data-integrate + srt-cloud-system + srt-cloud-gateway + + + + 2.0.0 + true + UTF-8 + UTF-8 + 1.8 + 2021.0.1 + 2021.0.1.0 + 3.5.1 + 3.0.3 + 1.6.8 + 1.6.2 + 1.4.2.Final + 2.1.1 + 5.8.4 + 3.8.0 + 8.1.2.79 + 2.0.18 + 3.1.574 + 7.11.0 + 8.4.3 + 5.6.89 + 3.22.3 + 6.2.2 + + 8.0.16 + 42.2.18 + 19.3.0.0 + 6.0 + 3.0 + 3.0 + 3.0 + 5.1.4 + 1.0.0 + 8.6.0 + 3.1.2 + 0.4.0 + 2.3 + 1.21.0 + 1.21.0 + 4.3 + 1.2.8 + + + + + org.projectlombok + lombok + true + + + org.mapstruct + mapstruct + + + org.mapstruct + mapstruct-jdk8 + + + org.mapstruct + mapstruct-processor + + + + + + + org.springframework.cloud + spring-cloud-dependencies + ${spring.cloud.version} + pom + import + + + com.alibaba.cloud + spring-cloud-alibaba-dependencies + ${spring.cloud.alibaba.version} + pom + import + + + com.alibaba + druid-spring-boot-starter + ${druid-starter-version} + + + com.baomidou + mybatis-plus-boot-starter + ${mybatisplus.version} + + + com.github.xiaoymin + knife4j-springdoc-ui + ${knife4j.version} + + + org.springdoc + springdoc-openapi-webmvc-core + ${springdoc.version} + + + org.springdoc + springdoc-openapi-ui + ${springdoc.version} + + + org.springdoc + springdoc-openapi-webflux-ui + ${springdoc.version} + + + com.github.whvcse + easy-captcha + ${captcha.version} + + + org.mapstruct + mapstruct + ${mapstruct.version} + + + org.mapstruct + mapstruct-jdk8 + ${mapstruct.version} + + + org.mapstruct + mapstruct-processor + ${mapstruct.version} + + + com.alibaba.nacos + nacos-client + ${nacos.version} + + + cn.hutool + hutool-all + ${hutool.version} + + + com.aliyun.oss + aliyun-sdk-oss + ${aliyun.oss.version} + + + com.dameng + DmJdbcDriver18 + ${dameng.version} + + + com.aliyun + dysmsapi20170525 + ${aliyun.dysmsapi.version} + + + com.tencentcloudapi + tencentcloud-sdk-java + ${tencentcloud.sdk.version} + + + com.qiniu + qiniu-java-sdk + ${qiniu.version} + + + io.minio + minio + ${minio.version} + + + com.qcloud + cos_api + ${qcloud.cos.version} + + + com.huaweicloud + esdk-obs-java-bundle + ${huaweicloud.obs.version} + + + + + + + + public + 阿里云公共仓库 + https://maven.aliyun.com/repository/public/ + + true + + + + + + public + 阿里云公共仓库 + https://maven.aliyun.com/repository/public/ + + true + + + false + + + + diff --git a/sql/srt_cloud2.0.sql b/sql/srt_cloud2.0.sql new file mode 100644 index 0000000..d042f65 --- /dev/null +++ b/sql/srt_cloud2.0.sql @@ -0,0 +1,2027 @@ +/* +Navicat MySQL Data Transfer + +Source Server : 124.223.48.209 +Source Server 
Version : 80033 +Source Host : 124.223.48.209:3310 +Source Database : srt_cloud2.0 + +Target Server Type : MYSQL +Target Server Version : 80033 +File Encoding : 65001 + +Date: 2023-09-11 14:49:06 +*/ + +SET FOREIGN_KEY_CHECKS=0; + +-- ---------------------------- +-- Table structure for data_access +-- ---------------------------- +DROP TABLE IF EXISTS `data_access`; +CREATE TABLE `data_access` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `task_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '任务名称', + `description` varchar(300) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '描述', + `project_id` bigint NOT NULL COMMENT '项目id', + `source_database_id` bigint NOT NULL COMMENT '源端数据库id', + `target_database_id` bigint DEFAULT NULL COMMENT '目的端数据库id(同步方式为1有此值)', + `access_mode` int NOT NULL COMMENT '接入方式 1-接入ods 2-自定义接入', + `task_type` int NOT NULL COMMENT '任务类型 1-实时同步 2-一次性全量同步 3-一次性全量周期性增量', + `cron` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'cron表达式(秒 分 时 日 月 星期 年,例如 0 0 3 * * ? 表示每天凌晨三点执行)', + `status` int NOT NULL DEFAULT '0' COMMENT '发布状态 0-未发布 1-已发布', + `run_status` int NOT NULL DEFAULT '1' COMMENT '最新状态 1-等待中 2-运行中 3-正常结束 4-异常结束', + `data_access_json` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '数据接入基础配置json', + `start_time` datetime DEFAULT NULL COMMENT '最近开始时间', + `end_time` datetime DEFAULT NULL COMMENT '最近结束时间', + `release_time` datetime DEFAULT NULL COMMENT '发布时间', + `note` varchar(300) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '备注', + `release_user_id` bigint DEFAULT NULL COMMENT '发布人id', + `next_run_time` datetime DEFAULT NULL COMMENT '下次执行时间', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=120014 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='数据集成-数据接入'; + +-- ---------------------------- +-- Records of data_access +-- ---------------------------- +INSERT INTO `data_access` VALUES ('120008', '人口数据-》中台库', '人口测试数据-》中台库', '10002', '5', null, '1', '3', '0/30 * * * * ? 
*', '0', '4', '{\"source\":[{\"sourceProductType\":\"MYSQL\",\"url\":\"jdbc:mysql://124.223.48.209:3306/srt_cloud_test?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"username\":\"root\",\"password\":\"root\",\"connectionTimeout\":60000,\"maxLifeTime\":3600000,\"fetchSize\":10000,\"tableType\":null,\"sourceSchema\":\"srt_cloud_test\",\"includeOrExclude\":1,\"sourceIncludes\":\"people,people_cdc_test\",\"sourceExcludes\":\"\",\"regexTableMapper\":[],\"regexColumnMapper\":[]}],\"target\":{\"targetProductType\":\"MYSQL\",\"url\":\"jdbc:mysql://124.223.48.209:3306/srt_data_warehouse_p_10002?useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai&nullCatalogMeansCurrent=true\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"username\":\"srt_data_warehouse_p_10002\",\"password\":\"BschnvGuVatChHQ7\",\"connectionTimeout\":60000,\"maxLifeTime\":3600000,\"targetSchema\":\"srt_data_warehouse_p_10002\",\"targetDrop\":false,\"syncExist\":true,\"onlyCreate\":false,\"indexCreate\":true,\"tablePrefix\":\"ods_\",\"lowercase\":true,\"uppercase\":false,\"createTableAutoIncrement\":true,\"writerEngineInsert\":true,\"changeDataSync\":true}}', '2023-07-23 11:19:21', '2023-07-23 11:19:22', null, null, null, '2023-07-23 11:19:30', '1', '0', '10000', '2023-01-21 11:51:07', '10000', '2023-07-23 11:19:13'); +INSERT INTO `data_access` VALUES ('120011', 'mysql->doris', 'mysql->doris', '10002', '5', '8', '2', '2', '', '0', '3', '{\"source\":[{\"sourceProductType\":\"MYSQL\",\"url\":\"jdbc:mysql://124.223.48.209:3306/srt_cloud_test?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"username\":\"root\",\"password\":\"root\",\"connectionTimeout\":60000,\"maxLifeTime\":3600000,\"fetchSize\":5000,\"tableType\":null,\"sourceSchema\":\"srt_cloud_test\",\"includeOrExclude\":1,\"sourceIncludes\":\"people,example_tbl,people_cdc_test\",\"sourceExcludes\":\"\",\"regexTableMapper\":[],\"regexColumnMapper\":[]}],\"target\":{\"targetProductType\":\"DORIS\",\"url\":\"jdbc:mysql://192.168.30.128:9030/test_db?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&rewriteBatchedStatements=true\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"username\":\"root\",\"password\":\"123456\",\"connectionTimeout\":60000,\"maxLifeTime\":3600000,\"targetSchema\":\"test_db\",\"targetDrop\":false,\"syncExist\":true,\"onlyCreate\":false,\"indexCreate\":false,\"tablePrefix\":null,\"lowercase\":false,\"uppercase\":false,\"createTableAutoIncrement\":true,\"writerEngineInsert\":true,\"changeDataSync\":true}}', '2023-06-26 22:37:39', '2023-06-26 22:37:48', null, null, null, null, '0', '0', '10000', '2023-06-19 17:58:05', '10000', '2023-06-26 22:37:28'); +INSERT INTO `data_access` VALUES ('120013', 'doris->mysql', 'doris->mysql', '10002', '8', '5', '1', '2', '', '0', '1', 
'{\"source\":[{\"sourceProductType\":\"DORIS\",\"url\":\"jdbc:mysql://192.168.30.128:9030/test_db?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&rewriteBatchedStatements=true\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"username\":\"root\",\"password\":\"123456\",\"connectionTimeout\":60000,\"maxLifeTime\":3600000,\"fetchSize\":50000,\"tableType\":null,\"sourceSchema\":\"test_db\",\"includeOrExclude\":1,\"sourceIncludes\":\"\",\"sourceExcludes\":\"\",\"regexTableMapper\":[],\"regexColumnMapper\":[]}],\"target\":{\"targetProductType\":\"MYSQL\",\"url\":\"jdbc:mysql://124.223.48.209:3306/srt_data_warehouse_p_10002?useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai&nullCatalogMeansCurrent=true\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"username\":\"srt_data_warehouse_p_10002\",\"password\":\"BschnvGuVatChHQ7\",\"connectionTimeout\":60000,\"maxLifeTime\":3600000,\"targetSchema\":\"srt_data_warehouse_p_10002\",\"targetDrop\":false,\"syncExist\":true,\"onlyCreate\":false,\"indexCreate\":false,\"tablePrefix\":\"ods_\",\"lowercase\":true,\"uppercase\":false,\"createTableAutoIncrement\":true,\"writerEngineInsert\":true,\"changeDataSync\":true}}', '2023-06-23 10:20:57', '2023-06-23 10:21:03', null, null, null, null, '0', '0', '10000', '2023-06-20 21:34:00', '10000', '2023-06-26 22:33:04'); + +-- ---------------------------- +-- Table structure for data_access_task +-- ---------------------------- +DROP TABLE IF EXISTS `data_access_task`; +CREATE TABLE `data_access_task` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `data_access_id` bigint NOT NULL COMMENT '数据接入任务id', + `run_status` int NOT NULL COMMENT '运行状态( 1-等待中 2-运行中 3-正常结束 4-异常结束)', + `start_time` datetime NOT NULL COMMENT '开始时间', + `end_time` datetime DEFAULT NULL COMMENT '结束时间', + `real_time_log` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin COMMENT '实时日志', + `error_info` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin COMMENT '错误信息', + `data_count` bigint DEFAULT '0' COMMENT '更新数据量', + `table_success_count` bigint DEFAULT '0' COMMENT '成功表数量', + `table_fail_count` bigint DEFAULT '0' COMMENT '失败表数量', + `byte_count` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT '0' COMMENT '更新大小', + `project_id` bigint NOT NULL COMMENT '项目id', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE, + KEY `access_id_index` (`data_access_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin ROW_FORMAT=DYNAMIC COMMENT='数据接入任务记录'; + +-- ---------------------------- +-- Records of data_access_task +-- ---------------------------- +INSERT INTO `data_access_task` VALUES ('1', '120011', '3', '2023-06-26 22:37:39', '2023-06-26 22:37:48', 
0x323032332D30362D32362032323A33373A333920537461727420746F2072756E206461746120616363657373207461736B0D0A323032332D30362D32362032323A33373A34372053796E63207461626C65205B6578616D706C655F74626C5D2D3E5B6578616D706C655F74626C5D2073756363657373203D3D3E2073796E63436F756E743A5B315D2C73796E6342797465733A5B323442205D2C73796E634D73673A5BE5908CE6ADA5E68890E58A9F5D0D0A323032332D30362D32362032323A33373A34382053796E63207461626C65205B70656F706C655F6364635F746573745D2D3E5B70656F706C655F6364635F746573745D2073756363657373203D3D3E2073796E63436F756E743A5B345D2C73796E6342797465733A5B32353642205D2C73796E634D73673A5BE5908CE6ADA5E68890E58A9F5D0D0A323032332D30362D32362032323A33373A34382053796E63207461626C65205B70656F706C655D2D3E5B70656F706C655D2073756363657373203D3D3E2073796E63436F756E743A5B355D2C73796E6342797465733A5B33363042205D2C73796E634D73673A5BE5908CE6ADA5E68890E58A9F5D0D0A323032332D30362D32362032323A33373A3438204461746120616363657373207461736B20656E6420737563636565640D0A323032332D30362D32362032323A33373A3438204461746120636F756E743A5B31305D2C7461626C6553756363657373436F756E743A5B335D2C7461626C654661696C436F756E743A5B305D2C62797465436F756E743A5B36343042205D0D0A, null, '10', '3', '0', '640B ', '10002', '0', '0', null, '2023-06-26 22:37:40', null, '2023-06-26 22:37:49'); +INSERT INTO `data_access_task` VALUES ('2', '120008', '4', '2023-07-23 11:19:21', '2023-07-23 11:19:22', 0x323032332D30372D32332031313A31393A323120537461727420746F2072756E206461746120616363657373207461736B0D0A323032332D30372D32332031313A31393A32322053796E63207461626C65205B70656F706C655F6364635F746573745D2D3E5B6F64735F70656F706C655F6364635F746573745D2073756363657373203D3D3E2073796E63436F756E743A5B345D2C73796E6342797465733A5B32353642205D2C73796E634D73673A5BE5908CE6ADA5E68890E58A9F5D0D0A6A6176612E7574696C2E636F6E63757272656E742E457865637574696F6E457863657074696F6E3A206A6176612E6C616E672E4172726179496E6465784F75744F66426F756E6473457863657074696F6E0D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E7265706F727447657428436F6D706C657461626C654675747572652E6A6176613A333537290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E67657428436F6D706C657461626C654675747572652E6A6176613A31383935290D0A096174207372742E636C6F75642E6672616D65776F726B2E64627377697463682E646174612E736572766963652E4D6967726174696F6E536572766963652E72756E284D6967726174696F6E536572766963652E6A6176613A313839290D0A096174206E65742E7372742E71756172747A2E7461736B2E446174614163636573735461736B2E72756E28446174614163636573735461736B2E6A6176613A313037290D0A0961742073756E2E7265666C6563742E4E61746976654D6574686F644163636573736F72496D706C2E696E766F6B6530284E6174697665204D6574686F64290D0A0961742073756E2E7265666C6563742E4E61746976654D6574686F644163636573736F72496D706C2E696E766F6B65284E61746976654D6574686F644163636573736F72496D706C2E6A6176613A3632290D0A0961742073756E2E7265666C6563742E44656C65676174696E674D6574686F644163636573736F72496D706C2E696E766F6B652844656C65676174696E674D6574686F644163636573736F72496D706C2E6A6176613A3433290D0A096174206A6176612E6C616E672E7265666C6563742E4D6574686F642E696E766F6B65284D6574686F642E6A6176613A343938290D0A096174206E65742E7372742E71756172747A2E7574696C732E41627374726163745363686564756C654A6F622E646F457865637574652841627374726163745363686564756C654A6F622E6A6176613A3635290D0A096174206E65742E7372742E71756172747A2E7574696C732E41627374726163745363686564756C654A6F622E657865637574652841627374726163745363686564756C654A6F622E6A6176613A3435290D0A096174206F72672E71756172747A2E636F72652E4A6F6252756E5368656C6C
2E72756E284A6F6252756E5368656C6C2E6A6176613A323032290D0A096174206F72672E71756172747A2E73696D706C2E53696D706C65546872656164506F6F6C24576F726B65725468726561642E72756E2853696D706C65546872656164506F6F6C2E6A6176613A353733290D0A4361757365642062793A206A6176612E6C616E672E4172726179496E6465784F75744F66426F756E6473457863657074696F6E0D0A096174206A6176612E6C616E672E53797374656D2E6172726179636F7079284E6174697665204D6574686F64290D0A096174206A6176612E6C616E672E537472696E672E676574436861727328537472696E672E6A6176613A383236290D0A096174206A6176612E6C616E672E4162737472616374537472696E674275696C6465722E617070656E64284162737472616374537472696E674275696C6465722E6A6176613A343439290D0A096174206A6176612E6C616E672E537472696E674275696C6465722E617070656E6428537472696E674275696C6465722E6A6176613A313336290D0A096174206E65742E7372742E71756172747A2E7461736B2E446174614163636573735461736B2E6C616D626461246275696C644D6967726174696F6E53657276696365243128446174614163636573735461736B2E6A6176613A333336290D0A096174207372742E636C6F75642E6672616D65776F726B2E64627377697463682E646174612E736572766963652E4D6967726174696F6E536572766963652E67657441636365707448616E646C6572284D6967726174696F6E536572766963652E6A6176613A323733290D0A096174207372742E636C6F75642E6672616D65776F726B2E64627377697463682E646174612E736572766963652E4D6967726174696F6E536572766963652E6C616D626461246D616B654675747572655461736B2431284D6967726174696F6E536572766963652E6A6176613A323431290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E756E6941636365707428436F6D706C657461626C654675747572652E6A6176613A363536290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C6546757475726524556E694163636570742E747279466972652424246361707475726528436F6D706C657461626C654675747572652E6A6176613A363332290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C6546757475726524556E694163636570742E7472794669726528436F6D706C657461626C654675747572652E6A617661290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E706F7374436F6D706C65746528436F6D706C657461626C654675747572652E6A6176613A343734290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C65467574757265244173796E63537570706C792E72756E2424246361707475726528436F6D706C657461626C654675747572652E6A6176613A31353935290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C65467574757265244173796E63537570706C792E72756E28436F6D706C657461626C654675747572652E6A617661290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C65467574757265244173796E63537570706C792E6578656328436F6D706C657461626C654675747572652E6A6176613A31353832290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E5461736B2E646F4578656328466F726B4A6F696E5461736B2E6A6176613A323839290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E506F6F6C24576F726B51756575652E72756E5461736B28466F726B4A6F696E506F6F6C2E6A6176613A31303536290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E506F6F6C2E72756E576F726B657228466F726B4A6F696E506F6F6C2E6A6176613A31363932290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E576F726B65725468726561642E72756E28466F726B4A6F696E576F726B65725468726561642E6A6176613A313537290D0A323032332D30372D32332031313A31393A3232204461746120616363657373207461736B20656E64206661696C65640D0A, 
0x6A6176612E7574696C2E636F6E63757272656E742E457865637574696F6E457863657074696F6E3A206A6176612E6C616E672E4172726179496E6465784F75744F66426F756E6473457863657074696F6E0D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E7265706F727447657428436F6D706C657461626C654675747572652E6A6176613A333537290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E67657428436F6D706C657461626C654675747572652E6A6176613A31383935290D0A096174207372742E636C6F75642E6672616D65776F726B2E64627377697463682E646174612E736572766963652E4D6967726174696F6E536572766963652E72756E284D6967726174696F6E536572766963652E6A6176613A313839290D0A096174206E65742E7372742E71756172747A2E7461736B2E446174614163636573735461736B2E72756E28446174614163636573735461736B2E6A6176613A313037290D0A0961742073756E2E7265666C6563742E4E61746976654D6574686F644163636573736F72496D706C2E696E766F6B6530284E6174697665204D6574686F64290D0A0961742073756E2E7265666C6563742E4E61746976654D6574686F644163636573736F72496D706C2E696E766F6B65284E61746976654D6574686F644163636573736F72496D706C2E6A6176613A3632290D0A0961742073756E2E7265666C6563742E44656C65676174696E674D6574686F644163636573736F72496D706C2E696E766F6B652844656C65676174696E674D6574686F644163636573736F72496D706C2E6A6176613A3433290D0A096174206A6176612E6C616E672E7265666C6563742E4D6574686F642E696E766F6B65284D6574686F642E6A6176613A343938290D0A096174206E65742E7372742E71756172747A2E7574696C732E41627374726163745363686564756C654A6F622E646F457865637574652841627374726163745363686564756C654A6F622E6A6176613A3635290D0A096174206E65742E7372742E71756172747A2E7574696C732E41627374726163745363686564756C654A6F622E657865637574652841627374726163745363686564756C654A6F622E6A6176613A3435290D0A096174206F72672E71756172747A2E636F72652E4A6F6252756E5368656C6C2E72756E284A6F6252756E5368656C6C2E6A6176613A323032290D0A096174206F72672E71756172747A2E73696D706C2E53696D706C65546872656164506F6F6C24576F726B65725468726561642E72756E2853696D706C65546872656164506F6F6C2E6A6176613A353733290D0A4361757365642062793A206A6176612E6C616E672E4172726179496E6465784F75744F66426F756E6473457863657074696F6E0D0A096174206A6176612E6C616E672E53797374656D2E6172726179636F7079284E6174697665204D6574686F64290D0A096174206A6176612E6C616E672E537472696E672E676574436861727328537472696E672E6A6176613A383236290D0A096174206A6176612E6C616E672E4162737472616374537472696E674275696C6465722E617070656E64284162737472616374537472696E674275696C6465722E6A6176613A343439290D0A096174206A6176612E6C616E672E537472696E674275696C6465722E617070656E6428537472696E674275696C6465722E6A6176613A313336290D0A096174206E65742E7372742E71756172747A2E7461736B2E446174614163636573735461736B2E6C616D626461246275696C644D6967726174696F6E53657276696365243128446174614163636573735461736B2E6A6176613A333336290D0A096174207372742E636C6F75642E6672616D65776F726B2E64627377697463682E646174612E736572766963652E4D6967726174696F6E536572766963652E67657441636365707448616E646C6572284D6967726174696F6E536572766963652E6A6176613A323733290D0A096174207372742E636C6F75642E6672616D65776F726B2E64627377697463682E646174612E736572766963652E4D6967726174696F6E536572766963652E6C616D626461246D616B654675747572655461736B2431284D6967726174696F6E536572766963652E6A6176613A323431290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E756E6941636365707428436F6D706C657461626C654675747572652E6A6176613A363536290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C6546757475726524556E694163636570742E747279466972652424246361707475726528436F6D706C657461626C654
675747572652E6A6176613A363332290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C6546757475726524556E694163636570742E7472794669726528436F6D706C657461626C654675747572652E6A617661290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C654675747572652E706F7374436F6D706C65746528436F6D706C657461626C654675747572652E6A6176613A343734290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C65467574757265244173796E63537570706C792E72756E2424246361707475726528436F6D706C657461626C654675747572652E6A6176613A31353935290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C65467574757265244173796E63537570706C792E72756E28436F6D706C657461626C654675747572652E6A617661290D0A096174206A6176612E7574696C2E636F6E63757272656E742E436F6D706C657461626C65467574757265244173796E63537570706C792E6578656328436F6D706C657461626C654675747572652E6A6176613A31353832290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E5461736B2E646F4578656328466F726B4A6F696E5461736B2E6A6176613A323839290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E506F6F6C24576F726B51756575652E72756E5461736B28466F726B4A6F696E506F6F6C2E6A6176613A31303536290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E506F6F6C2E72756E576F726B657228466F726B4A6F696E506F6F6C2E6A6176613A31363932290D0A096174206A6176612E7574696C2E636F6E63757272656E742E466F726B4A6F696E576F726B65725468726561642E72756E28466F726B4A6F696E576F726B65725468726561642E6A6176613A313537290D0A, '9', '2', '0', '616B ', '10002', '0', '0', null, '2023-07-23 11:19:22', null, '2023-07-23 11:19:23'); + +-- ---------------------------- +-- Table structure for data_access_task_detail +-- ---------------------------- +DROP TABLE IF EXISTS `data_access_task_detail`; +CREATE TABLE `data_access_task_detail` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `data_access_id` bigint NOT NULL COMMENT '数据接入id', + `task_id` bigint NOT NULL COMMENT '数据接入任务id', + `source_schema_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '源端库名', + `source_table_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '源端表名', + `target_schema_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '目的端库名', + `target_table_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '目的端表名', + `sync_count` bigint NOT NULL COMMENT '同步记录数', + `sync_bytes` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '同步数据量', + `if_success` int NOT NULL COMMENT '是否成功 0-否 1-是', + `error_msg` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci COMMENT '失败信息', + `project_id` bigint NOT NULL COMMENT '项目id', + `success_msg` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '成功信息', + `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', + PRIMARY KEY (`id`) USING BTREE, + KEY `target_table_name_idx` (`target_table_name`) USING BTREE, + KEY `access_id_idx` (`data_access_id`) USING BTREE, + KEY `task_id_idx` (`task_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='数据接入-同步记录详情'; + +-- ---------------------------- +-- Records of data_access_task_detail +-- ---------------------------- +INSERT INTO `data_access_task_detail` VALUES ('1', '120011', '1', 'srt_cloud_test', 'example_tbl', 'test_db', 'example_tbl', '1', '24B ', '1', null, '10002', '同步成功', 
'2023-06-26 22:37:48'); +INSERT INTO `data_access_task_detail` VALUES ('2', '120011', '1', 'srt_cloud_test', 'people_cdc_test', 'test_db', 'people_cdc_test', '4', '256B ', '1', null, '10002', '同步成功', '2023-06-26 22:37:49'); +INSERT INTO `data_access_task_detail` VALUES ('3', '120011', '1', 'srt_cloud_test', 'people', 'test_db', 'people', '5', '360B ', '1', null, '10002', '同步成功', '2023-06-26 22:37:49'); +INSERT INTO `data_access_task_detail` VALUES ('4', '120008', '2', 'srt_cloud_test', 'people', 'srt_data_warehouse_p_10002', 'ods_people', '5', '360B ', '1', null, '10002', '同步成功', '2023-07-23 11:19:23'); +INSERT INTO `data_access_task_detail` VALUES ('5', '120008', '2', 'srt_cloud_test', 'people_cdc_test', 'srt_data_warehouse_p_10002', 'ods_people_cdc_test', '4', '256B ', '1', null, '10002', '同步成功', '2023-07-23 11:19:23'); + +-- ---------------------------- +-- Table structure for data_database +-- ---------------------------- +DROP TABLE IF EXISTS `data_database`; +CREATE TABLE `data_database` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '名称', + `database_type` int NOT NULL COMMENT '数据库类型(字典 database_type)', + `database_ip` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '主机ip地址', + `database_port` varchar(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '端口', + `database_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '库名', + `status` int NOT NULL DEFAULT '0' COMMENT '状态(字典 database_status)', + `user_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '用户名', + `password` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '密码', + `is_rt_approve` int DEFAULT NULL COMMENT '是否支持实时接入(字典 yes_or_no)', + `no_rt_reason` varchar(150) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '不支持实时接入原因', + `jdbc_url` varchar(1000) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'jdbcUrl', + `project_id` bigint DEFAULT NULL COMMENT '所属项目id', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='数据集成-数据库管理'; + +-- ---------------------------- +-- Records of data_database +-- ---------------------------- +INSERT INTO `data_database` VALUES ('5', '人口测试数据', '1', '124.223.48.209', '3306', 'srt_cloud_test', '1', 'root', 'root', null, null, 'jdbc:mysql://124.223.48.209:3306/srt_cloud_test?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true', '10002', '4', '0', '10000', '2023-01-20 13:43:12', '10000', '2023-02-11 21:46:07'); +INSERT INTO `data_database` VALUES ('8', 'doris测试', '16', '192.168.30.128', '9030', 'test_db', '1', 'root', '123456', null, null, 'jdbc:mysql://192.168.30.128:9030/test_db?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&rewriteBatchedStatements=true', '10002', '1', '0', '10000', '2023-06-19 17:49:30', '10000', '2023-06-26 22:33:04'); + +-- ---------------------------- +-- 
Table structure for data_file +-- ---------------------------- +DROP TABLE IF EXISTS `data_file`; +CREATE TABLE `data_file` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '名称', + `file_category_id` bigint NOT NULL COMMENT '所属分组id', + `type` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '文件类型', + `file_url` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '文件url地址', + `description` varchar(300) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '描述', + `project_id` bigint NOT NULL COMMENT '项目id', + `size` bigint DEFAULT NULL COMMENT '文件大小', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='文件表'; + +-- ---------------------------- +-- Records of data_file +-- ---------------------------- +INSERT INTO `data_file` VALUES ('1', '642fdd9f405aa93f862d70572d88ece.png', '5', 'png', 'http://localhost:8082/sys/upload/20230718/642fdd9f405aa93f862d70572d88ece_56737.png', '', '10002', '71957', '0', '0', '10000', '2023-07-18 15:45:52', '10000', '2023-07-18 15:45:52'); + +-- ---------------------------- +-- Table structure for data_file_category +-- ---------------------------- +DROP TABLE IF EXISTS `data_file_category`; +CREATE TABLE `data_file_category` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `parent_id` bigint NOT NULL DEFAULT '0' COMMENT '父级id(顶级为0)', + `type` int NOT NULL DEFAULT '0' COMMENT '0-文件夹 1-文件目录', + `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL COMMENT '分组名称', + `order_no` int NOT NULL COMMENT '分组序号', + `description` varchar(300) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL COMMENT '描述', + `path` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL COMMENT '分组路径', + `project_id` bigint NOT NULL COMMENT '项目id', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin ROW_FORMAT=DYNAMIC COMMENT='文件分组表'; + +-- ---------------------------- +-- Records of data_file_category +-- ---------------------------- +INSERT INTO `data_file_category` VALUES ('2', '0', '0', '111', '0', '111', '111', '10002', '0', '0', '10000', '2023-07-11 11:16:44', '10000', '2023-07-11 11:16:44'); +INSERT INTO `data_file_category` VALUES ('3', '2', '1', '222', '0', '', '111/222', '10002', '0', '0', '10000', '2023-07-18 15:10:57', '10000', '2023-07-18 15:10:57'); +INSERT INTO `data_file_category` VALUES ('4', '2', '0', '测试目录', '0', '', '111/测试目录', '10002', '0', '0', '10000', '2023-07-18 15:41:21', '10000', '2023-07-18 15:41:21'); +INSERT INTO `data_file_category` VALUES ('5', '4', '1', '测试文件目录', '0', '顶顶顶', '111/测试目录/测试文件目录', '10002', '0', '0', '10000', '2023-07-18 15:42:47', '10000', '2023-07-18 15:42:56'); + +-- 
---------------------------- +-- Table structure for data_layer +-- ---------------------------- +DROP TABLE IF EXISTS `data_layer`; +CREATE TABLE `data_layer` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '分层英文名称', + `cn_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '分层中文名称', + `note` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '分层描述', + `table_prefix` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '表名前缀', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=30008 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='数仓分层'; + +-- ---------------------------- +-- Records of data_layer +-- ---------------------------- +INSERT INTO `data_layer` VALUES ('30002', 'ODS', '数据引入层', '用于接收并处理需要存储至数据仓库系统的原始数据,其数据表的结构与原始数据所在的数据系统中的表结构一致,是数据仓库的数据准备区', 'ods', '3', '0', '10000', '2022-10-08 17:16:35', '10000', '2022-10-08 17:16:41'); +INSERT INTO `data_layer` VALUES ('30003', 'DIM', '维度层', '使用维度构建数据模型', 'dim', '0', '0', '10000', '2022-10-08 17:17:40', '10000', '2022-10-08 17:17:42'); +INSERT INTO `data_layer` VALUES ('30004', 'DWD', '明细数据层', '通过企业的业务活动事件构建数据模型。基于具体业务事件的特点,构建最细粒度的明细数据事实表。', 'dwd', '0', '0', '10000', '2022-10-08 17:18:13', '10000', '2022-10-08 17:18:18'); +INSERT INTO `data_layer` VALUES ('30006', 'DWS', '汇总数据层', '通过分析的主题对象构建数据模型。基于上层的应用和产品的指标需求,构建公共粒度的汇总指标表。', 'dws', '0', '0', '10000', '2022-10-08 17:20:03', '10000', '2022-10-08 17:20:09'); +INSERT INTO `data_layer` VALUES ('30007', 'ADS', '应用数据层', '用于存放数据产品个性化的统计指标数据,输出各种报表', 'ads', '0', '0', '10000', '2022-10-08 17:20:52', '10000', '2022-10-08 17:20:57'); + +-- ---------------------------- +-- Table structure for data_ods +-- ---------------------------- +DROP TABLE IF EXISTS `data_ods`; +CREATE TABLE `data_ods` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `data_access_id` bigint NOT NULL COMMENT '数据接入id', + `table_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '表名', + `remarks` varchar(300) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '注释', + `project_id` bigint NOT NULL COMMENT '项目id', + `recently_sync_time` datetime DEFAULT NULL COMMENT '最近同步时间', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE, + UNIQUE KEY `table_name_uni` (`table_name`,`project_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='数据集成-贴源数据'; + +-- ---------------------------- +-- Records of data_ods +-- ---------------------------- +INSERT INTO `data_ods` VALUES ('1', '120008', 'ods_people', '人口信息表', '10002', '2023-07-23 11:19:22', '0', '0', null, '2023-04-07 16:39:00', null, '2023-07-23 11:19:23'); +INSERT INTO `data_ods` VALUES ('4', '120008', 'ods_people_cdc_test', '人口信息表', '10002', 
'2023-07-23 11:19:22', '0', '0', null, '2023-07-23 11:19:23', null, '2023-07-23 11:19:23'); + +-- ---------------------------- +-- Table structure for data_project +-- ---------------------------- +DROP TABLE IF EXISTS `data_project`; +CREATE TABLE `data_project` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id', + `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '项目名称', + `eng_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '英文名称', + `description` varchar(300) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '描述', + `status` int NOT NULL COMMENT '0-停用 1-启用', + `duty_person` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '负责人', + `db_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '数仓库名', + `db_url` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '数仓url', + `db_username` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '数仓用户名', + `db_password` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '数仓密码', + `db_type` int NOT NULL DEFAULT '1' COMMENT '数仓类型 字典 data_house_type', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=10010 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='数据项目'; + +-- ---------------------------- +-- Records of data_project +-- ---------------------------- +INSERT INTO `data_project` VALUES ('10002', '默认项目', 'test_project', '测试', '1', 'admin', 'srt_data_warehouse_p_10002', 'jdbc:mysql://124.223.48.209:3306/srt_data_warehouse_p_10002?useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai&nullCatalogMeansCurrent=true', 'srt_data_warehouse_p_10002', 'BschnvGuVatChHQ7', '1', '75', '0', '10000', '2022-09-27 20:59:19', '10000', '2023-06-13 16:49:33'); +INSERT INTO `data_project` VALUES ('10009', 'doris数仓测试', 'doris-test', '', '1', 'admin', 'test_db', 'jdbc:mysql://192.168.30.128:9030/test_db?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&rewriteBatchedStatements=true', 'root', '123456', '16', '2', '0', '10000', '2023-06-23 10:27:15', '10000', '2023-06-26 22:43:15'); + +-- ---------------------------- +-- Table structure for data_project_user_rel +-- ---------------------------- +DROP TABLE IF EXISTS `data_project_user_rel`; +CREATE TABLE `data_project_user_rel` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT '主键id', + `data_project_id` bigint NOT NULL COMMENT '项目id', + `user_id` bigint NOT NULL COMMENT '用户id', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE, + UNIQUE KEY `project_admin_uni` (`data_project_id`,`user_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin ROW_FORMAT=DYNAMIC COMMENT='项目用户关联表'; + +-- ---------------------------- +-- Records of 
data_project_user_rel +-- ---------------------------- +INSERT INTO `data_project_user_rel` VALUES ('7', '10002', '10001', '0', '0', '10000', '2022-10-28 16:20:58', '10000', '2022-10-28 16:20:58'); +INSERT INTO `data_project_user_rel` VALUES ('8', '10002', '10002', '0', '0', '10000', '2022-10-28 16:20:58', '10000', '2022-10-28 16:20:58'); + +-- ---------------------------- +-- Table structure for qrtz_blob_triggers +-- ---------------------------- +DROP TABLE IF EXISTS `qrtz_blob_triggers`; +CREATE TABLE `qrtz_blob_triggers` ( + `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称', + `trigger_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_name的外键', + `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_group的外键', + `blob_data` blob COMMENT '存放持久化Trigger对象', + PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`) USING BTREE, + CONSTRAINT `qrtz_blob_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`) ON DELETE RESTRICT ON UPDATE RESTRICT +) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='Blob类型的触发器表'; + +-- ---------------------------- +-- Records of qrtz_blob_triggers +-- ---------------------------- + +-- ---------------------------- +-- Table structure for qrtz_calendars +-- ---------------------------- +DROP TABLE IF EXISTS `qrtz_calendars`; +CREATE TABLE `qrtz_calendars` ( + `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称', + `calendar_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '日历名称', + `calendar` blob NOT NULL COMMENT '存放持久化calendar对象', + PRIMARY KEY (`sched_name`,`calendar_name`) USING BTREE +) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='日历信息表'; + +-- ---------------------------- +-- Records of qrtz_calendars +-- ---------------------------- + +-- ---------------------------- +-- Table structure for qrtz_cron_triggers +-- ---------------------------- +DROP TABLE IF EXISTS `qrtz_cron_triggers`; +CREATE TABLE `qrtz_cron_triggers` ( + `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称', + `trigger_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_name的外键', + `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_group的外键', + `cron_expression` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'cron表达式', + `time_zone_id` varchar(80) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '时区', + PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`) USING BTREE, + CONSTRAINT `qrtz_cron_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`) ON DELETE RESTRICT ON UPDATE RESTRICT +) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='Cron类型的触发器表'; + +-- ---------------------------- +-- Records of qrtz_cron_triggers +-- ---------------------------- +INSERT INTO `qrtz_cron_triggers` VALUES ('SrtScheduler', 'TASK_NAME_1', 'data_access', '0/30 * * * * ? 
+
+-- ----------------------------
+-- Records of qrtz_cron_triggers
+-- ----------------------------
+INSERT INTO `qrtz_cron_triggers` VALUES ('SrtScheduler', 'TASK_NAME_1', 'data_access', '0/30 * * * * ? *', 'GMT+08:00');
+INSERT INTO `qrtz_cron_triggers` VALUES ('SrtScheduler', 'TASK_NAME_10', 'data_quality', '0/30 * * * * ? *', 'GMT+08:00');
+INSERT INTO `qrtz_cron_triggers` VALUES ('SrtScheduler', 'TASK_NAME_2', 'data_production', '0/30 * * * * ? *', 'GMT+08:00');
+INSERT INTO `qrtz_cron_triggers` VALUES ('SrtScheduler', 'TASK_NAME_4', 'data_governance', '0/30 * * * * ? *', 'GMT+08:00');
+INSERT INTO `qrtz_cron_triggers` VALUES ('SrtScheduler', 'TASK_NAME_8', 'data_access', '0/30 * * * * ?', 'GMT+08:00');
+
+-- ----------------------------
+-- Table structure for qrtz_fired_triggers
+-- ----------------------------
+DROP TABLE IF EXISTS `qrtz_fired_triggers`;
+CREATE TABLE `qrtz_fired_triggers` (
+  `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称',
+  `entry_id` varchar(95) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度器实例id',
+  `trigger_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_name的外键',
+  `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_group的外键',
+  `instance_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度器实例名',
+  `fired_time` bigint NOT NULL COMMENT '触发的时间',
+  `sched_time` bigint NOT NULL COMMENT '定时器制定的时间',
+  `priority` int NOT NULL COMMENT '优先级',
+  `state` varchar(16) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '状态',
+  `job_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '任务名称',
+  `job_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '任务组名',
+  `is_nonconcurrent` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '是否禁止并发执行',
+  `requests_recovery` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '是否接受恢复执行',
+  PRIMARY KEY (`sched_name`,`entry_id`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='已触发的触发器表';
+
+-- ----------------------------
+-- Records of qrtz_fired_triggers
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for qrtz_job_details
+-- ----------------------------
+DROP TABLE IF EXISTS `qrtz_job_details`;
+CREATE TABLE `qrtz_job_details` (
+  `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称',
+  `job_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '任务名称',
+  `job_group` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '任务组名',
+  `description` varchar(250) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '相关介绍',
+  `job_class_name` varchar(250) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '执行任务类名称',
+  `is_durable` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '是否持久化',
+  `is_nonconcurrent` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '是否禁止并发执行',
+  `is_update_data` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '是否更新数据',
+  `requests_recovery` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '是否接受恢复执行',
+  `job_data` blob COMMENT '存放持久化job对象',
+  PRIMARY KEY (`sched_name`,`job_name`,`job_group`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='任务详细信息表';
+
+-- 
---------------------------- +-- Records of qrtz_job_details +-- ---------------------------- +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_1', 'data_access', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000E646174614163636573735461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007070740010302F3330202A202A202A202A203F202A71007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000174000B646174615F6163636573737400275B3132303030385DE4BABAE58FA3E6B58BE8AF95E695B0E68DAE2DE3808BE4B8ADE58FB0E5BA937371007E00100000000274000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C75657870007400063132303030387371007E001400000000000027127071007E00127371007E0014000000000001D4C87372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000187555334687870707800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_10', 'data_quality', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000F646174615175616C6974795461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000188F1A05A707870740010302F3330202A202A202A202A203F202A71007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000A74000C646174615F7175616C6974797400125B335DE6B58BE8AF95E6898BE69CBAE58FB77371007E00100000000574000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787000740001337371007E001600000000000027127071007E00127371007E001600000000000000037371007E0013770800000188F1A1F0B078707371007E0010000000017800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_11', 'data_access', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000E646174614163636573735461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000188F80B2138787074000071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000B74000B646174615F6163636573737400155B3132303031325D6F7261636C652D3E646F7269737371007E00100000000274000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C75657870017400063132303031327371007E001600000000000027127071007E00127371007E0016000000000001D4CC7371007E0013770800000188F80B213878707371007E0010000000017800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_12', 'data_access', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000E646174614163636573735461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000188F80B30D8787074000071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000C74000B646174615F6163636573737400175B3132303031305D6F7261636C652DE3808B6D7973716C7371007E00100000000274000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C75657870017400063132303031307371007E001600000000000027127071007E00127371007E0016000000000001D4CA7371007E0013770800000188F80B30D878707371007E0010000000017800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_2', 'data_production', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074001A6461746150726F64756374696F6E5363686564756C655461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000185D2B187407870740010302F3330202A202A202A202A203F202A71007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000274000F646174615F70726F64756374696F6E74000F5B315DE6B58BE8AF95E8B083E5BAA67371007E00100000000374000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787000740001317371007E001600000000000027127071007E00127371007E001600000000000000017371007E0013770800000185D2B2D73078707371007E0010000000017800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_3', 'data_access', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000E646174614163636573735461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000185D32B2410787074000071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000374000B646174615F6163636573737400275B3132303030385DE4BABAE58FA3E6B58BE8AF95E695B0E68DAE2DE3808BE4B8ADE58FB0E5BA937371007E00100000000274000873796E6344617461737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C75657870017400063132303030387371007E001600000000000027127071007E00127371007E0016000000000001D4C87371007E0013770800000185D391D13878707371007E0010000000017800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_4', 'data_governance', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074002164617461476F7665726E616E63654D65746164617461436F6C6C6563745461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B59741903000078707708000001874FA9F0B07870740010302F3330202A202A202A202A203F202A71007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000474000F646174615F676F7665726E616E636574000F5B315DE6B58BE8AF95E98787E99B867371007E00100000000474000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787000740001317371007E001600000000000027127071007E00127371007E001600000000000000017371007E00137708000001875F2E526078707371007E0010000000037800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_5', 'data_governance', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074002164617461476F7665726E616E63654D65746164617461436F6C6C6563745461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B59741903000078707708000001874FC7486878707071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000574000F646174615F676F7665726E616E636574000F5B315DE6B58BE8AF95E98787E99B867371007E00100000000474000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787001740001317371007E001500000000000027127071007E00127371007E001500000000000000017371007E001377080000018750619C3878707371007E0010000000057800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_6', 'data_governance', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074002164617461476F7665726E616E63654D65746164617461436F6C6C6563745461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000187515DE62878707071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000674000F646174615F676F7665726E616E63657400185B365DE4B8ADE58FB0E5BA93E98787E99B86E6B58BE8AF957371007E00100000000474000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787001740001367371007E001500000000000027127071007E001271007E00167371007E0013770800000187544C920878707371007E0010000000027800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_7', 'data_governance', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074002164617461476F7665726E616E63654D65746164617461436F6C6C6563745461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B597419030000787077080000018751646E0878707071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000774000F646174615F676F7665726E616E636574000F5B375D6F7261636C65E6B58BE8AF957371007E00100000000474000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787001740001377371007E001500000000000027127071007E001271007E00167371007E001377080000018751FC02B878707371007E0010000000087800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_8', 'data_access', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000E646174614163636573735461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B59741903000078707708000001875560E068787074000E302F3330202A202A202A202A203F71007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000874000B646174615F61636365737374001A5B3132303030395D6F7261636C65E695B4E5BA93E5908CE6ADA57371007E00100000000274000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C75657870007400063132303030397371007E001600000000000027127071007E00127371007E0016000000000001D4C97371007E001377080000018755772B70787071007E001A7800); +INSERT INTO `qrtz_job_details` VALUES ('SrtScheduler', 'TASK_NAME_9', 'data_quality', null, 'net.srt.quartz.utils.ScheduleDisallowConcurrentExecution', '0', '1', '0', '0', 
0xACED0005737200156F72672E71756172747A2E4A6F62446174614D61709FB083E8BFA9B0CB020000787200266F72672E71756172747A2E7574696C732E537472696E674B65794469727479466C61674D61708208E8C3FBC55D280200015A0013616C6C6F77735472616E7369656E74446174617872001D6F72672E71756172747A2E7574696C732E4469727479466C61674D617013E62EAD28760ACE0200025A000564697274794C00036D617074000F4C6A6176612F7574696C2F4D61703B787001737200116A6176612E7574696C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F724900097468726573686F6C6478703F4000000000000C7708000000100000000174000D4A4F425F504152414D5F4B4559737200276E65742E7372742E71756172747A2E656E746974792E5363686564756C654A6F62456E7469747900000000000000010200155A0007736176654C6F674C00086265616E4E616D657400124C6A6176612F6C616E672F537472696E673B4C000A636F6E63757272656E747400134C6A6176612F6C616E672F496E74656765723B4C000A63726561746554696D657400104C6A6176612F7574696C2F446174653B4C000763726561746F727400104C6A6176612F6C616E672F4C6F6E673B4C000E63726F6E45787072657373696F6E71007E00094C000764656C6574656471007E000A4C0002696471007E000C4C00086A6F6247726F757071007E00094C00076A6F624E616D6571007E00094C00076A6F625479706571007E000A4C00066D6574686F6471007E00094C00046F6E63657400134C6A6176612F6C616E672F426F6F6C65616E3B4C0006706172616D7371007E00094C000970726F6A656374496471007E000C4C000672656D61726B71007E00094C000673746174757371007E000A4C000674797065496471007E000C4C000A75706461746554696D6571007E000B4C00077570646174657271007E000C4C000776657273696F6E71007E000A78700074000F646174615175616C6974795461736B737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B0200007870000000007372000E6A6176612E7574696C2E44617465686A81014B5974190300007870770800000188ED8C02E0787074000071007E00127372000E6A6176612E6C616E672E4C6F6E673B8BE490CC8F23DF0200014A000576616C75657871007E0011000000000000000974000C646174615F7175616C6974797400125B315DE6B58BE8AF95E594AFE4B880E680A77371007E00100000000574000372756E737200116A6176612E6C616E672E426F6F6C65616ECD207280D59CFAEE0200015A000576616C7565787001740001317371007E001600000000000027127071007E00127371007E001600000000000000017371007E0013770800000188ED8C9B3878707371007E0010000000027800); + +-- ---------------------------- +-- Table structure for qrtz_locks +-- ---------------------------- +DROP TABLE IF EXISTS `qrtz_locks`; +CREATE TABLE `qrtz_locks` ( + `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称', + `lock_name` varchar(40) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '悲观锁名称', + PRIMARY KEY (`sched_name`,`lock_name`) USING BTREE +) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='存储的悲观锁信息表'; + +-- ---------------------------- +-- Records of qrtz_locks +-- ---------------------------- +INSERT INTO `qrtz_locks` VALUES ('SrtScheduler', 'STATE_ACCESS'); +INSERT INTO `qrtz_locks` VALUES ('SrtScheduler', 'TRIGGER_ACCESS'); + +-- ---------------------------- +-- Table structure for qrtz_paused_trigger_grps +-- ---------------------------- +DROP TABLE IF EXISTS `qrtz_paused_trigger_grps`; +CREATE TABLE `qrtz_paused_trigger_grps` ( + `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称', + `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_group的外键', + PRIMARY KEY (`sched_name`,`trigger_group`) USING BTREE +) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC 
COMMENT='暂停的触发器组表';
+
+-- ----------------------------
+-- Records of qrtz_paused_trigger_grps
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for qrtz_scheduler_state
+-- ----------------------------
+DROP TABLE IF EXISTS `qrtz_scheduler_state`;
+CREATE TABLE `qrtz_scheduler_state` (
+  `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称',
+  `instance_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '实例名称',
+  `last_checkin_time` bigint NOT NULL COMMENT '上次检查时间',
+  `checkin_interval` bigint NOT NULL COMMENT '检查间隔时间',
+  PRIMARY KEY (`sched_name`,`instance_name`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='调度器状态表';
+
+-- ----------------------------
+-- Records of qrtz_scheduler_state
+-- ----------------------------
+INSERT INTO `qrtz_scheduler_state` VALUES ('SrtScheduler', 'PC-20230301UOMX1689904842681', '1690086191090', '15000');
+
+-- ----------------------------
+-- Table structure for qrtz_simple_triggers
+-- ----------------------------
+DROP TABLE IF EXISTS `qrtz_simple_triggers`;
+CREATE TABLE `qrtz_simple_triggers` (
+  `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称',
+  `trigger_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_name的外键',
+  `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_group的外键',
+  `repeat_count` bigint NOT NULL COMMENT '重复的次数统计',
+  `repeat_interval` bigint NOT NULL COMMENT '重复的间隔时间',
+  `times_triggered` bigint NOT NULL COMMENT '已经触发的次数',
+  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`) USING BTREE,
+  CONSTRAINT `qrtz_simple_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`) ON DELETE RESTRICT ON UPDATE RESTRICT
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='简单触发器的信息表';
+
+-- ----------------------------
+-- Records of qrtz_simple_triggers
+-- ----------------------------
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_11', 'data_access', '0', '0', '0');
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_12', 'data_access', '0', '0', '0');
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_3', 'data_access', '0', '0', '0');
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_5', 'data_governance', '0', '0', '0');
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_6', 'data_governance', '0', '0', '0');
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_7', 'data_governance', '0', '0', '0');
+INSERT INTO `qrtz_simple_triggers` VALUES ('SrtScheduler', 'TASK_NAME_9', 'data_quality', '0', '0', '0');
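+
+-- NOTE (editorial): the rows above line up with schedule_job.once — one-off jobs
+-- (once = 1) are backed by SIMPLE triggers in this table, while the recurring jobs
+-- (once = 0, with a cron_expression) are backed by CRON triggers in qrtz_cron_triggers;
+-- qrtz_triggers.trigger_type records which kind each trigger is ('SIMPLE' or 'CRON').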
+
+-- ----------------------------
+-- Table structure for qrtz_simprop_triggers
+-- ----------------------------
+DROP TABLE IF EXISTS `qrtz_simprop_triggers`;
+CREATE TABLE `qrtz_simprop_triggers` (
+  `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称',
+  `trigger_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_name的外键',
+  `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_triggers表trigger_group的外键',
+  `str_prop_1` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'String类型的trigger的第一个参数',
+  `str_prop_2` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'String类型的trigger的第二个参数',
+  `str_prop_3` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'String类型的trigger的第三个参数',
+  `int_prop_1` int DEFAULT NULL COMMENT 'int类型的trigger的第一个参数',
+  `int_prop_2` int DEFAULT NULL COMMENT 'int类型的trigger的第二个参数',
+  `long_prop_1` bigint DEFAULT NULL COMMENT 'long类型的trigger的第一个参数',
+  `long_prop_2` bigint DEFAULT NULL COMMENT 'long类型的trigger的第二个参数',
+  `dec_prop_1` decimal(13,4) DEFAULT NULL COMMENT 'decimal类型的trigger的第一个参数',
+  `dec_prop_2` decimal(13,4) DEFAULT NULL COMMENT 'decimal类型的trigger的第二个参数',
+  `bool_prop_1` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'Boolean类型的trigger的第一个参数',
+  `bool_prop_2` varchar(1) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT 'Boolean类型的trigger的第二个参数',
+  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`) USING BTREE,
+  CONSTRAINT `qrtz_simprop_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `trigger_name`, `trigger_group`) REFERENCES `qrtz_triggers` (`sched_name`, `trigger_name`, `trigger_group`) ON DELETE RESTRICT ON UPDATE RESTRICT
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='简单属性触发器的信息表';
+
+-- ----------------------------
+-- Records of qrtz_simprop_triggers
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for qrtz_triggers
+-- ----------------------------
+DROP TABLE IF EXISTS `qrtz_triggers`;
+CREATE TABLE `qrtz_triggers` (
+  `sched_name` varchar(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '调度名称',
+  `trigger_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '触发器的名字',
+  `trigger_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '触发器所属组的名字',
+  `job_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_job_details表job_name的外键',
+  `job_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'qrtz_job_details表job_group的外键',
+  `description` varchar(250) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '相关介绍',
+  `next_fire_time` bigint DEFAULT NULL COMMENT '下一次触发时间(毫秒)',
+  `prev_fire_time` bigint DEFAULT NULL COMMENT '上一次触发时间(毫秒,默认为-1表示尚未触发)',
+  `priority` int DEFAULT NULL COMMENT '优先级',
+  `trigger_state` varchar(16) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '触发器状态',
+  `trigger_type` varchar(8) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '触发器的类型',
+  `start_time` bigint NOT NULL COMMENT '开始时间',
+  `end_time` bigint DEFAULT NULL COMMENT '结束时间',
+  `calendar_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '日程表名称',
+  `misfire_instr` smallint DEFAULT NULL COMMENT '补偿执行的策略',
+  `job_data` blob COMMENT '存放持久化job对象',
+  PRIMARY KEY (`sched_name`,`trigger_name`,`trigger_group`) USING BTREE,
+  KEY `sched_name` (`sched_name`,`job_name`,`job_group`) USING BTREE,
+  CONSTRAINT `qrtz_triggers_ibfk_1` FOREIGN KEY (`sched_name`, `job_name`, `job_group`) REFERENCES `qrtz_job_details` (`sched_name`, `job_name`, `job_group`) ON DELETE RESTRICT ON UPDATE RESTRICT
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=DYNAMIC COMMENT='触发器详细信息表';
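+
+-- NOTE (editorial): next_fire_time, prev_fire_time and start_time are epoch milliseconds,
+-- and every trigger in this dump is stored as PAUSED (matching schedule_job.status = 0),
+-- so nothing fires until the jobs are resumed. An illustrative way to read the fire times
+-- back as datetimes (not part of the dump):
+--   SELECT trigger_name, trigger_state, FROM_UNIXTIME(next_fire_time / 1000) AS next_fire
+--   FROM qrtz_triggers WHERE sched_name = 'SrtScheduler';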
+
+-- ----------------------------
+-- Records of qrtz_triggers
+-- ----------------------------
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_1', 'data_access', 'TASK_NAME_1', 'data_access', null, '1689904860000', '-1', '5', 'PAUSED', 'CRON', '1689904854000', '0', null, '2', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_10', 'data_quality', 'TASK_NAME_10', 'data_quality', null, '1689904860000', '-1', '5', 'PAUSED', 'CRON', '1689904855000', '0', null, '2', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_11', 'data_access', 'TASK_NAME_11', 'data_access', null, '1689904855176', '-1', '5', 'PAUSED', 'SIMPLE', '1689904855176', '0', null, '0', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_12', 'data_access', 'TASK_NAME_12', 'data_access', null, '1689904855196', '-1', '5', 'PAUSED', 'SIMPLE', '1689904855196', '0', null, '0', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_2', 'data_production', 'TASK_NAME_2', 'data_production', null, '1689904860000', '-1', '5', 'PAUSED', 'CRON', '1689904854000', '0', null, '2', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_3', 'data_access', 'TASK_NAME_3', 'data_access', null, '1689904854985', '-1', '5', 'PAUSED', 'SIMPLE', '1689904854985', '0', null, '0', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_4', 'data_governance', 'TASK_NAME_4', 'data_governance', null, '1689904860000', '-1', '5', 'PAUSED', 'CRON', '1689904855000', '0', null, '2', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_5', 'data_governance', 'TASK_NAME_5', 'data_governance', null, '1689904855045', '-1', '5', 'PAUSED', 'SIMPLE', '1689904855045', '0', null, '0', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_6', 'data_governance', 'TASK_NAME_6', 'data_governance', null, '1689904855066', '-1', '5', 'PAUSED', 'SIMPLE', '1689904855066', '0', null, '0', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_7', 'data_governance', 'TASK_NAME_7', 'data_governance', null, '1689904855087', '-1', '5', 'PAUSED', 'SIMPLE', '1689904855087', '0', null, '0', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_8', 'data_access', 'TASK_NAME_8', 'data_access', null, '1689904860000', '-1', '5', 'PAUSED', 'CRON', '1689904855000', '0', null, '2', '');
+INSERT INTO `qrtz_triggers` VALUES ('SrtScheduler', 'TASK_NAME_9', 'data_quality', 'TASK_NAME_9', 'data_quality', null, '1689904855133', '-1', '5', 'PAUSED', 'SIMPLE', '1689904855133', '0', null, '0', '');
+
+-- ----------------------------
+-- Table structure for schedule_job
+-- ----------------------------
+DROP TABLE IF EXISTS `schedule_job`;
+CREATE TABLE `schedule_job` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `project_id` bigint NOT NULL COMMENT '项目(租户)id',
+  `type_id` bigint DEFAULT NULL COMMENT '类型id',
+  `job_type` int NOT NULL DEFAULT '1' COMMENT '任务类型 1-自定义 2-数据接入 3-数据生产 4-元数据采集 5-数据质量',
+  `job_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '名称',
+  `job_group` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '分组',
+  `bean_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'spring bean名称',
+  `method` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '执行方法',
+  `params` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '参数',
+  `once` 
tinyint(1) NOT NULL DEFAULT '0' COMMENT '只一次', + `cron_expression` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'cron表达式', + `status` tinyint unsigned DEFAULT NULL COMMENT '状态 0:暂停 1:正常', + `concurrent` tinyint unsigned DEFAULT NULL COMMENT '是否并发 0:禁止 1:允许', + `remark` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '备注', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='定时任务'; + +-- ---------------------------- +-- Records of schedule_job +-- ---------------------------- +INSERT INTO `schedule_job` VALUES ('1', '10002', '120008', '2', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'run', '120008', '0', '0/30 * * * * ? *', '0', '0', null, null, '0', null, null, null, '2023-04-06 14:48:49'); +INSERT INTO `schedule_job` VALUES ('2', '10002', '1', '3', '[1]测试调度', 'data_production', 'dataProductionScheduleTask', 'run', '1', '0', '0/30 * * * * ? *', '0', '0', null, '1', '0', null, '2023-01-21 12:58:48', null, '2023-01-21 13:00:14'); +INSERT INTO `schedule_job` VALUES ('3', '10002', '120008', '2', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', '', '0', '0', null, '1', '0', null, '2023-01-21 15:11:38', null, '2023-01-21 17:03:47'); +INSERT INTO `schedule_job` VALUES ('4', '10002', '1', '4', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '0', '0/30 * * * * ? *', '0', '0', null, '3', '0', null, '2023-04-05 12:25:50', null, '2023-04-08 12:44:44'); +INSERT INTO `schedule_job` VALUES ('5', '10002', '1', '4', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '0', '0', null, '5', '0', null, '2023-04-05 12:57:53', null, '2023-04-05 15:46:27'); +INSERT INTO `schedule_job` VALUES ('6', '10002', '6', '4', '[6]中台库采集测试', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '6', '1', null, '0', '0', null, '2', '0', null, '2023-04-05 20:22:01', null, '2023-04-06 10:01:57'); +INSERT INTO `schedule_job` VALUES ('7', '10002', '7', '4', '[7]oracle测试', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '7', '1', null, '0', '0', null, '8', '0', null, '2023-04-05 20:29:09', null, '2023-04-05 23:14:43'); +INSERT INTO `schedule_job` VALUES ('8', '10002', '120009', '2', '[120009]oracle整库同步', 'data_access', 'dataAccessTask', 'run', '120009', '0', '0/30 * * * * ?', '0', '0', null, '2', '0', null, '2023-04-06 15:03:45', null, '2023-04-06 15:28:06'); +INSERT INTO `schedule_job` VALUES ('9', '10002', '1', '5', '[1]测试唯一性', 'data_quality', 'dataQualityTask', 'run', '1', '1', '', '0', '0', null, '2', '0', null, '2023-06-24 21:15:56', null, '2023-06-24 21:16:35'); +INSERT INTO `schedule_job` VALUES ('10', '10002', '3', '5', '[3]测试手机号', 'data_quality', 'dataQualityTask', 'run', '3', '0', '0/30 * * * * ? 
*', '0', '0', null, '1', '0', null, '2023-06-25 16:16:38', null, '2023-06-25 16:18:22'); +INSERT INTO `schedule_job` VALUES ('11', '10002', '120012', '2', '[120012]oracle->doris', 'data_access', 'dataAccessTask', 'run', '120012', '1', '', '0', '0', null, '1', '0', null, '2023-06-26 22:10:59', null, '2023-06-26 22:10:59'); +INSERT INTO `schedule_job` VALUES ('12', '10002', '120010', '2', '[120010]oracle-》mysql', 'data_access', 'dataAccessTask', 'run', '120010', '1', '', '0', '0', null, '1', '0', null, '2023-06-26 22:11:03', null, '2023-06-26 22:11:03'); + +-- ---------------------------- +-- Table structure for schedule_job_log +-- ---------------------------- +DROP TABLE IF EXISTS `schedule_job_log`; +CREATE TABLE `schedule_job_log` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id', + `project_id` bigint NOT NULL COMMENT '项目(租户)id', + `type_id` bigint DEFAULT NULL COMMENT '类型id', + `job_type` int NOT NULL COMMENT '任务类型 1-自定义 2-数据接入 3-数据生产调度', + `job_id` bigint NOT NULL COMMENT '任务id', + `job_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '任务名称', + `job_group` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '任务组名', + `bean_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'spring bean名称', + `method` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '执行方法', + `params` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '参数', + `status` tinyint unsigned NOT NULL COMMENT '任务状态 0:失败 1:成功', + `error` text CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci COMMENT '异常信息', + `times` bigint NOT NULL COMMENT '耗时(单位:毫秒)', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + PRIMARY KEY (`id`) USING BTREE, + KEY `idx_job_id` (`job_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=52 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='定时任务日志'; + +-- ---------------------------- +-- Records of schedule_job_log +-- ---------------------------- +INSERT INTO `schedule_job_log` VALUES ('1', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '1806', '2023-01-20 14:10:32'); +INSERT INTO `schedule_job_log` VALUES ('2', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '354', '2023-01-20 14:11:00'); +INSERT INTO `schedule_job_log` VALUES ('3', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '1397', '2023-01-20 14:11:31'); +INSERT INTO `schedule_job_log` VALUES ('4', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '1949', '2023-01-20 16:06:18'); +INSERT INTO `schedule_job_log` VALUES ('5', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '553', '2023-01-20 16:10:42'); +INSERT INTO `schedule_job_log` VALUES ('6', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '513', '2023-01-20 16:11:01'); +INSERT INTO `schedule_job_log` VALUES ('7', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '571', '2023-01-20 16:11:41'); +INSERT INTO `schedule_job_log` VALUES ('8', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 
'dataAccessTask', 'syncData', '120008', '1', null, '794', '2023-01-20 16:35:34');
+INSERT INTO `schedule_job_log` VALUES ('9', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '658', '2023-01-20 16:36:44');
+INSERT INTO `schedule_job_log` VALUES ('10', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '2172', '2023-01-21 12:26:40');
+INSERT INTO `schedule_job_log` VALUES ('11', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '1371', '2023-01-21 12:28:02');
+INSERT INTO `schedule_job_log` VALUES ('12', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '737', '2023-01-21 12:28:31');
+INSERT INTO `schedule_job_log` VALUES ('13', '10002', '1', '3', '2', '[1]测试调度', 'data_production', 'dataProductionScheduleTask', 'run', '1', '1', null, '15125', '2023-01-21 12:59:15');
+INSERT INTO `schedule_job_log` VALUES ('14', '10002', '1', '3', '2', '[1]测试调度', 'data_production', 'dataProductionScheduleTask', 'run', '1', '1', null, '5062', '2023-01-21 12:59:35');
+INSERT INTO `schedule_job_log` VALUES ('15', '10002', '1', '3', '2', '[1]测试调度', 'data_production', 'dataProductionScheduleTask', 'run', '1', '1', null, '5070', '2023-01-21 13:00:05');
+INSERT INTO `schedule_job_log` VALUES ('16', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '991', '2023-01-21 14:51:01');
+INSERT INTO `schedule_job_log` VALUES ('17', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '538', '2023-01-21 14:51:31');
+INSERT INTO `schedule_job_log` VALUES ('18', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '560', '2023-01-21 14:52:01');
+INSERT INTO `schedule_job_log` VALUES ('19', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '775', '2023-01-21 14:53:01');
+INSERT INTO `schedule_job_log` VALUES ('20', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '524', '2023-01-21 14:53:31');
+INSERT INTO `schedule_job_log` VALUES ('21', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '779', '2023-01-21 14:54:01');
+INSERT INTO `schedule_job_log` VALUES ('22', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '819', '2023-01-21 14:54:31');
+INSERT INTO `schedule_job_log` VALUES ('23', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '439', '2023-01-21 14:55:00');
+INSERT INTO `schedule_job_log` VALUES ('24', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '479', '2023-01-21 14:56:31');
+INSERT INTO `schedule_job_log` VALUES ('25', '10002', '120008', '2', '3', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '682', '2023-01-21 15:11:39');
+INSERT INTO `schedule_job_log` VALUES ('26', '10002', '120008', '2', '3', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '542', '2023-01-21 17:03:00');
+INSERT INTO `schedule_job_log` VALUES ('27', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '537', '2023-01-21 17:04:31');
+INSERT INTO `schedule_job_log` VALUES ('28', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '1721', '2023-01-22 20:47:02');
+INSERT INTO `schedule_job_log` VALUES ('29', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '1023', '2023-01-22 20:47:31');
+INSERT INTO `schedule_job_log` VALUES ('30', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'syncData', '120008', '1', null, '848', '2023-01-22 20:48:01');
+INSERT INTO `schedule_job_log` VALUES ('31', '10002', '1', '4', '5', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '4069', '2023-04-05 12:57:57');
+INSERT INTO `schedule_job_log` VALUES ('32', '10002', '1', '4', '5', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '4221', '2023-04-05 13:11:34');
+INSERT INTO `schedule_job_log` VALUES ('33', '10002', '1', '4', '5', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '3640', '2023-04-05 13:16:59');
+INSERT INTO `schedule_job_log` VALUES ('34', '10002', '1', '4', '5', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '3214', '2023-04-05 13:18:20');
+INSERT INTO `schedule_job_log` VALUES ('35', '10002', '1', '4', '5', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '4004', '2023-04-05 15:45:26');
+INSERT INTO `schedule_job_log` VALUES ('36', '10002', '6', '4', '6', '[6]中台库采集测试', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '6', '1', null, '2554', '2023-04-05 20:22:03');
+INSERT INTO `schedule_job_log` VALUES ('37', '10002', '7', '4', '7', '[7]oracle测试', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '7', '1', null, '4613576', '2023-04-05 21:46:02');
+INSERT INTO `schedule_job_log` VALUES ('38', '10002', '6', '4', '6', '[6]中台库采集测试', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '6', '1', null, '2357', '2023-04-06 10:01:48');
+INSERT INTO `schedule_job_log` VALUES ('39', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'run', '120008', '1', null, '668', '2023-04-06 14:48:01');
+INSERT INTO `schedule_job_log` VALUES ('40', '10002', '120008', '2', '1', '[120008]人口测试数据-》中台库', 'data_access', 'dataAccessTask', 'run', '120008', '1', null, '242', '2023-04-06 14:48:30');
+INSERT INTO `schedule_job_log` VALUES ('41', '10002', '120009', '2', '8', '[120009]oracle整库同步', 'data_access', 'dataAccessTask', 'run', '120009', '1', null, '756177', '2023-04-06 15:16:36');
+INSERT INTO `schedule_job_log` VALUES ('42', '10002', '1', '4', '4', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '4127', '2023-04-06 16:30:04');
+INSERT INTO `schedule_job_log` VALUES ('43', '10002', '1', '4', '4', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '3934', '2023-04-06 16:30:34');
+INSERT INTO `schedule_job_log` VALUES ('44', '10002', '1', '4', '4', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '4633', '2023-04-08 12:44:05');
+INSERT INTO `schedule_job_log` VALUES ('45', '10002', '1', '4', '4', '[1]测试采集', 'data_governance', 'dataGovernanceMetadataCollectTask', 'run', '1', '1', null, '4590', '2023-04-08 12:44:35');
+INSERT INTO `schedule_job_log` VALUES ('46', '10002', '1', '5', '9', '[1]测试唯一性', 'data_quality', 'dataQualityTask', 'run', '1', '1', null, '429', '2023-06-24 21:15:56');
+INSERT INTO `schedule_job_log` VALUES ('47', '10002', '1', '5', '9', '[1]测试唯一性', 'data_quality', 'dataQualityTask', 'run', '1', '1', null, '361', '2023-06-24 21:16:31');
+INSERT INTO `schedule_job_log` VALUES ('48', '10002', '3', '5', '10', '[3]测试手机号', 'data_quality', 'dataQualityTask', 'run', '3', '1', null, '174', '2023-06-25 16:17:00');
+INSERT INTO `schedule_job_log` VALUES ('49', '10002', '3', '5', '10', '[3]测试手机号', 'data_quality', 'dataQualityTask', 'run', '3', '1', null, '136', '2023-06-25 16:17:30');
+INSERT INTO `schedule_job_log` VALUES ('50', '10002', '3', '5', '10', '[3]测试手机号', 'data_quality', 'dataQualityTask', 'run', '3', '1', null, '159', '2023-06-25 16:18:00');
+INSERT INTO `schedule_job_log` VALUES ('51', '10002', '120010', '2', '12', '[120010]oracle-》mysql', 'data_access', 'dataAccessTask', 'run', '120010', '0', 'java.lang.reflect.InvocationTargetException\r\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n at java.lang.reflect.Method.invoke(Method.java:498)\r\n at net.srt.quartz.utils.AbstractScheduleJob.doExecute(AbstractScheduleJob.java:65)\r\n at net.srt.quartz.utils.AbstractScheduleJob.execute(AbstractScheduleJob.java:45)\r\n at org.quartz.core.JobRunShell.run(JobRunShell.java:202)\r\n at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)\r\nCaused by: java.lang.NullPointerException\r\n at net.srt.quartz.task.DataAccessTask.run(DataAccessTask.java:82)\r\n ... 8 more\r\n', '406', '2023-07-21 10:02:02');
+
+-- ----------------------------
+-- Table structure for sms_log
+-- ----------------------------
+DROP TABLE IF EXISTS `sms_log`;
+CREATE TABLE `sms_log` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `platform_id` bigint DEFAULT NULL COMMENT '平台ID',
+  `platform` tinyint DEFAULT NULL COMMENT '平台类型',
+  `mobile` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '手机号',
+  `params` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '参数',
+  `status` tinyint DEFAULT NULL COMMENT '状态 0:失败 1:成功',
+  `error` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '异常信息',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='短信日志';
+
+-- ----------------------------
+-- Records of sms_log
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for sms_platform
+-- ----------------------------
+DROP TABLE IF EXISTS `sms_platform`;
+CREATE TABLE `sms_platform` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `platform` tinyint DEFAULT NULL COMMENT '平台类型 0:阿里云 1:腾讯云 2:七牛云 3:华为云',
+  `sign_name` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '短信签名',
+  `template_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '短信模板',
+  `app_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '短信应用ID,如:腾讯云等',
+  `sender_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '腾讯云国际短信、华为云等需要',
+  `url` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '接入地址,如:华为云',
+  `access_key` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'AccessKey',
+  `secret_key` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'SecretKey',
+  `status` tinyint DEFAULT NULL COMMENT '状态 0:禁用 1:启用',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='短信平台';
+
+-- ----------------------------
+-- Records of sms_platform
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for sys_attachment
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_attachment`;
+CREATE TABLE `sys_attachment` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '附件名称',
+  `url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '附件地址',
+  `size` bigint DEFAULT NULL COMMENT '附件大小',
+  `platform` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '存储平台',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='附件管理';
+
+-- ----------------------------
+-- Records of sys_attachment
+-- ----------------------------
+INSERT INTO `sys_attachment` VALUES ('1', '调度配置.png', 'http://localhost:8080/sys/upload/20221028/调度配置_59019.png', '68077', 'LOCAL', '0', '1', '10000', '2022-10-28 16:23:40', '10000', '2022-10-28 16:29:22');
+
+-- ----------------------------
+-- Table structure for sys_dict_data
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_dict_data`;
+CREATE TABLE `sys_dict_data` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `dict_type_id` bigint NOT NULL COMMENT '字典类型ID',
+  `dict_label` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '字典标签',
+  `dict_value` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '字典值',
+  `remark` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '备注',
+  `sort` int DEFAULT NULL COMMENT '排序',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=154 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='字典数据';
+
+-- ----------------------------
+-- Records of sys_dict_data
+-- ----------------------------
+INSERT INTO `sys_dict_data` VALUES ('1', '1', '停用', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('2', '1', '正常', '1', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('3', '2', '男', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('4', '2', '女', '1', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('5', '2', '未知', '2', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('6', '3', '正常', '1', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('7', '3', '停用', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('8', '4', '全部数据', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('9', '4', '本部门及子部门数据', '1', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('10', '4', '本部门数据', '2', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('11', '4', '本人数据', '3', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('12', '4', '自定义数据', '4', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('13', '5', '禁用', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('14', '5', '启用', '1', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('15', '6', '失败', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('16', '6', '成功', '1', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('17', '7', '登录成功', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('18', '7', '退出成功', '1', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('19', '7', '验证码错误', '2', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('20', '7', '账号密码错误', '3', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_data` VALUES ('21', '8', '阿里云', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47');
+INSERT INTO `sys_dict_data` VALUES ('22', '8', '腾讯云', '1', '', '1', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47');
+INSERT INTO `sys_dict_data` VALUES ('23', '8', '七牛云', '2', '', '2', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47');
+INSERT INTO `sys_dict_data` VALUES ('24', '8', '华为云', '3', '', '3', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47');
+INSERT INTO `sys_dict_data` VALUES ('25', '9', '默认', 'default', '', '0', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02');
+INSERT INTO `sys_dict_data` VALUES ('26', '9', '数据生产', 'data_production', '', '2', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2023-01-19 16:51:31');
+INSERT INTO `sys_dict_data` VALUES ('27', '10', '暂停', '0', '', '0', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02');
+INSERT INTO `sys_dict_data` VALUES ('28', '10', '正常', '1', '', '1', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02');
+INSERT INTO `sys_dict_data` VALUES ('29', '11', '停用', '0', '', '0', '0', '0', '10000', '2022-09-27 12:00:10', '10000', '2022-09-27 12:00:10');
+INSERT INTO `sys_dict_data` VALUES ('30', '11', '启用', '1', '', '0', '0', '0', '10000', '2022-09-27 12:00:17', '10000', '2022-09-27 12:00:17');
+INSERT INTO `sys_dict_data` VALUES ('31', '12', 'MYSQL', '1', '', '1', '0', '0', '10000', '2022-10-09 17:17:20', '10000', '2022-10-09 17:17:20');
+INSERT INTO `sys_dict_data` VALUES ('32', '12', 'ORACLE', '2', '', '2', '0', '0', '10000', '2022-10-09 17:17:30', '10000', '2022-10-09 17:17:30');
+INSERT INTO `sys_dict_data` VALUES ('33', '12', 'SQLSERVER2000', '3', '', '5', '0', '0', '10000', '2022-10-09 17:17:46', '10000', '2022-10-09 17:17:46');
+INSERT INTO `sys_dict_data` VALUES ('34', '12', 'SQLSERVER', '4', '', '4', '0', '0', '10000', '2022-10-09 17:17:55', '10000', '2022-10-09 17:17:55');
+INSERT INTO `sys_dict_data` VALUES ('35', '12', 'POSTGRESQL', '5', '', '3', '0', '0', '10000', '2022-10-09 17:18:06', '10000', '2022-10-09 17:18:06');
+INSERT INTO `sys_dict_data` VALUES ('36', '12', 'GREENPLUM', '6', '', '12', '0', '0', '10000', '2022-10-09 17:18:20', '10000', '2022-10-09 17:18:20');
+INSERT INTO `sys_dict_data` VALUES ('37', '12', 'MARIADB', '7', '', '7', '0', '0', '10000', '2022-10-09 17:18:31', '10000', '2022-10-09 17:18:31');
+INSERT INTO `sys_dict_data` VALUES ('38', '12', 'DB2', '8', '', '6', '0', '0', '10000', '2022-10-09 17:18:40', '10000', '2022-10-09 17:18:40');
+INSERT INTO `sys_dict_data` VALUES ('39', '12', 'DM', '9', '', '8', '0', '0', '10000', '2022-10-09 17:19:00', '10000', '2022-10-09 17:19:00');
+INSERT INTO `sys_dict_data` VALUES ('40', '12', 'KINGBASE', '10', '', '11', '0', '0', '10000', '2022-10-09 17:19:16', '10000', '2022-10-09 17:19:16');
+INSERT INTO `sys_dict_data` VALUES ('41', '12', 'OSCAR', '11', '', '15', '0', '0', '10000', '2022-10-09 17:19:25', '10000', '2022-10-09 17:19:25');
+INSERT INTO `sys_dict_data` VALUES ('42', '12', 'GBASE8A', '12', '', '13', '0', '0', '10000', '2022-10-09 17:19:37', '10000', '2022-10-09 17:19:37');
+INSERT INTO `sys_dict_data` VALUES ('43', '12', 'HIVE', '13', '', '9', '0', '0', '10000', '2022-10-09 17:19:57', '10000', '2022-10-09 17:19:57');
+INSERT INTO `sys_dict_data` VALUES ('44', '12', 'SQLITE3', '14', '', '10', '0', '0', '10000', '2022-10-09 17:20:09', '10000', '2022-10-09 17:20:09');
+INSERT INTO `sys_dict_data` VALUES ('45', '12', 'SYBASE', '15', '', '14', '0', '0', '10000', '2022-10-09 17:20:19', '10000', '2022-10-09 17:20:19');
+INSERT INTO `sys_dict_data` VALUES ('46', '11', '断开', '0', '', '0', '0', '1', '10000', '2022-10-09 17:23:42', '10000', '2022-10-09 17:23:52');
+INSERT INTO `sys_dict_data` VALUES ('47', '13', '断开', '0', '', '0', '0', '0', '10000', '2022-10-09 17:24:31', '10000', '2022-10-09 17:24:31');
+INSERT INTO `sys_dict_data` VALUES ('48', '13', '正常', '1', '', '0', '0', '0', '10000', '2022-10-09 17:24:37', '10000', '2022-10-09 17:24:37');
+INSERT INTO `sys_dict_data` VALUES ('49', '14', '否', '0', '', '0', '0', '0', '10000', '2022-10-09 17:25:13', '10000', '2022-10-09 17:25:13');
+INSERT INTO `sys_dict_data` VALUES ('50', '14', '是', '1', '', '0', '0', '0', '10000', '2022-10-09 17:25:18', '10000', '2022-10-09 17:25:18');
+INSERT INTO `sys_dict_data` VALUES ('51', '15', '实时同步', '1', '', '1', '0', '1', '10000', '2022-10-24 17:47:15', '10000', '2023-01-21 14:50:00');
+INSERT INTO `sys_dict_data` VALUES ('52', '15', '一次性全量同步', '2', '', '2', '0', '0', '10000', '2022-10-24 17:47:26', '10000', '2022-10-24 17:47:26');
+INSERT INTO `sys_dict_data` VALUES ('53', '15', '周期性增量(全量比对)', '3', '', '3', '0', '0', '10000', '2022-10-24 17:47:41', '10000', '2023-06-25 16:36:05');
+INSERT INTO `sys_dict_data` VALUES ('54', '16', '等待中', '1', '', '1', '0', '0', '10000', '2022-10-24 17:49:04', '10000', '2022-10-24 17:49:04');
+INSERT INTO `sys_dict_data` VALUES ('55', '16', '运行中', '2', '', '2', '0', '0', '10000', '2022-10-24 17:49:13', '10000', '2022-10-24 17:49:13');
+INSERT INTO `sys_dict_data` VALUES ('56', '16', '正常结束', '3', '', '3', '0', '0', '10000', '2022-10-24 17:49:23', '10000', '2022-10-24 17:49:23');
+INSERT INTO `sys_dict_data` VALUES ('57', '16', '异常结束', '4', '', '4', '0', '0', '10000', '2022-10-24 17:49:33', '10000', '2022-10-24 17:49:33');
+INSERT INTO `sys_dict_data` VALUES ('58', '17', '未发布', '0', '', '0', '0', '0', '10000', '2022-10-24 17:50:23', '10000', '2022-10-24 17:50:23');
+INSERT INTO `sys_dict_data` VALUES ('59', '17', '已发布', '1', '', '1', '0', '0', '10000', '2022-10-24 17:50:31', '10000', '2022-10-24 17:50:31');
+INSERT INTO `sys_dict_data` VALUES ('60', '18', 'Sql', '1', '', '1', '0', '0', '10000', '2022-11-26 20:29:36', '10000', '2023-01-10 17:09:02');
+INSERT INTO `sys_dict_data` VALUES ('61', '18', 'FlinkSql', '2', '', '2', '0', '0', '10000', '2022-11-26 20:29:49', '10000', '2023-01-10 17:09:08');
+INSERT INTO `sys_dict_data` VALUES ('62', '19', 'Standalone', '1', null, '1', '0', '0', '10000', '2022-12-03 10:41:18', '10000', '2022-12-03 10:41:23');
+INSERT INTO `sys_dict_data` VALUES ('63', '19', 'Yarn Session', '2', null, '2', '0', '0', '10000', '2022-12-03 10:41:55', '10000', '2022-12-03 10:42:00');
+INSERT INTO `sys_dict_data` VALUES ('64', '19', 'Yarn Per-Job', '3', null, '3', '0', '0', '10000', '2022-12-03 10:42:29', '10000', '2022-12-03 10:42:35');
+INSERT INTO `sys_dict_data` VALUES ('65', '19', 'Yarn Application', '4', null, '4', '0', '0', '10000', '2022-12-03 10:43:02', '10000', '2022-12-03 10:43:08');
+INSERT INTO `sys_dict_data` VALUES ('66', '19', 'K8s Session', '5', null, '5', '0', '1', '10000', '2022-12-03 10:43:41', '10000', '2023-01-03 21:37:16');
+INSERT INTO `sys_dict_data` VALUES ('67', '19', 'K8s Application', '6', null, '6', '0', '1', '10000', '2022-12-03 10:44:24', '10000', '2023-01-03 21:37:19');
+INSERT INTO `sys_dict_data` VALUES ('68', '19', 'Local', '0', '', '0', '0', '0', '10000', '2022-12-21 11:35:54', '10000', '2022-12-21 11:35:54');
+INSERT INTO `sys_dict_data` VALUES ('69', '20', 'Flink On Yarn', 'Yarn', '', '0', '0', '0', '10000', '2022-12-22 09:31:02', '10000', '2022-12-22 09:31:02');
+INSERT INTO `sys_dict_data` VALUES ('70', '21', 'INITIALIZE', '0', '', '0', '0', '0', '10000', '2023-01-03 20:47:20', '10000', '2023-01-03 20:47:20');
+INSERT INTO `sys_dict_data` VALUES ('71', '21', 'RUNNING', '1', '', '1', '0', '0', '10000', '2023-01-03 20:47:27', '10000', '2023-01-03 20:47:27');
+INSERT INTO `sys_dict_data` VALUES ('72', '21', 'SUCCESS', '2', '', '2', '0', '0', '10000', '2023-01-03 20:47:36', '10000', '2023-01-03 20:47:36');
+INSERT INTO `sys_dict_data` VALUES ('73', '21', 'FAILED', '3', '', '3', '0', '0', '10000', '2023-01-03 20:47:43', '10000', '2023-01-03 20:47:43');
+INSERT INTO `sys_dict_data` VALUES ('74', '22', 'INITIALIZING', 'INITIALIZING', '', '0', '0', '0', '10000', '2023-01-03 20:50:05', '10000', '2023-01-03 20:50:05');
+INSERT INTO `sys_dict_data` VALUES ('75', '22', 'CREATED', 'CREATED', '', '1', '0', '0', '10000', '2023-01-03 20:50:15', '10000', '2023-01-03 20:50:15');
+INSERT INTO `sys_dict_data` VALUES ('76', '22', 'RUNNING', 'RUNNING', '', '2', '0', '0', '10000', '2023-01-03 20:50:22', '10000', '2023-01-03 20:50:22');
+INSERT INTO `sys_dict_data` VALUES ('77', '22', 'FAILING', 'FAILING', '', '3', '0', '0', '10000', '2023-01-03 20:50:29', '10000', '2023-01-03 20:50:29');
+INSERT INTO `sys_dict_data` VALUES ('78', '22', 'FAILED', 'FAILED', '', '4', '0', '0', '10000', '2023-01-03 20:50:36', '10000', '2023-01-03 20:50:36');
+INSERT INTO `sys_dict_data` VALUES ('79', '22', 'CANCELLING', 'CANCELLING', '', '5', '0', '0', '10000', '2023-01-03 20:50:44', '10000', '2023-01-03 20:50:44');
+INSERT INTO `sys_dict_data` VALUES ('80', '22', 'CANCELED', 'CANCELED', '', '6', '0', '0', '10000', '2023-01-03 20:50:51', '10000', '2023-01-03 20:50:51');
+INSERT INTO `sys_dict_data` VALUES ('81', '22', 'FINISHED', 'FINISHED', '', '7', '0', '0', '10000', '2023-01-03 20:50:58', '10000', '2023-01-03 20:50:58');
+INSERT INTO `sys_dict_data` VALUES ('82', '22', 'RESTARTING', 'RESTARTING', '', '8', '0', '0', '10000', '2023-01-03 20:51:05', '10000', '2023-01-03 20:51:05');
+INSERT INTO `sys_dict_data` VALUES ('83', '22', 'SUSPENDED', 'SUSPENDED', '', '9', '0', '0', '10000', '2023-01-03 20:51:11', '10000', '2023-01-03 20:51:11');
+INSERT INTO `sys_dict_data` VALUES ('84', '22', 'RECONCILING', 'RECONCILING', '', '10', '0', '0', '10000', '2023-01-03 20:51:18', '10000', '2023-01-03 20:51:18');
+INSERT INTO `sys_dict_data` VALUES ('85', '22', 'UNKNOWN', 'UNKNOWN', '', '11', '0', '0', '10000', '2023-01-03 20:51:25', '10000', '2023-01-03 20:51:25');
+INSERT INTO `sys_dict_data` VALUES ('86', '23', '禁用', '0', '', '0', '0', '0', '10000', '2023-01-09 21:23:38', '10000', '2023-01-09 21:23:38');
+INSERT INTO `sys_dict_data` VALUES ('87', '23', '最近一次', '1', '', '1', '0', '0', '10000', '2023-01-09 21:23:49', '10000', '2023-01-09 21:23:53');
+INSERT INTO `sys_dict_data` VALUES ('88', '23', '最早一次', '2', '', '2', '0', '0', '10000', '2023-01-09 21:24:02', '10000', '2023-01-09 21:24:02');
+INSERT INTO `sys_dict_data` VALUES ('89', '23', '指定一次', '3', '', '3', '0', '0', '10000', '2023-01-09 21:24:12', '10000', '2023-01-09 21:24:17');
+INSERT INTO `sys_dict_data` VALUES ('90', '18', 'FlinkSqlCommon', '4', 'FlinkSql的公共代码块,例如一些初始化的ddl等', '4', '0', '0', '10000', '2023-01-10 17:07:45', '10000', '2023-01-10 17:09:12');
+INSERT INTO `sys_dict_data` VALUES ('91', '24', '手动', '1', '', '0', '0', '0', '10000', '2023-01-18 15:13:22', '10000', '2023-01-18 15:13:22');
+INSERT INTO `sys_dict_data` VALUES ('92', '24', '调度', '2', '', '1', '0', '0', '10000', '2023-01-18 15:13:30', '10000', '2023-01-18 15:13:30');
+INSERT INTO `sys_dict_data` VALUES ('93', '9', '数据接入', 'data_access', '', '1', '0', '0', '10000', '2023-01-19 16:51:55', '10000', '2023-01-19 16:51:55');
+INSERT INTO `sys_dict_data` VALUES ('94', '25', '文件夹', '1', '', '0', '0', '0', '10000', '2023-01-30 11:34:40', '10000', '2023-01-30 11:34:40');
+INSERT INTO `sys_dict_data` VALUES ('95', '25', 'API 目录', '2', '', '0', '0', '0', '10000', '2023-01-30 11:34:46', '10000', '2023-01-30 11:34:55');
+INSERT INTO `sys_dict_data` VALUES ('96', '26', 'GET', 'GET', '', '0', '0', '0', '10000', '2023-02-12 11:26:50', '10000', '2023-02-12 11:26:50');
+INSERT INTO `sys_dict_data` VALUES ('97', '26', 'POST', 'POST', '', '1', '0', '0', '10000', '2023-02-12 11:26:58', '10000', '2023-02-12 11:26:58');
+INSERT INTO `sys_dict_data` VALUES ('98', '26', 'PUT', 'PUT', '', '2', '0', '0', '10000', '2023-02-12 11:27:05', '10000', '2023-02-12 11:27:05');
+INSERT INTO `sys_dict_data` VALUES ('99', '26', 'DELETE', 'DELETE', '', '3', '0', '0', '10000', '2023-02-12 11:27:15', '10000', '2023-02-12 11:27:15');
+INSERT INTO `sys_dict_data` VALUES ('100', '27', 'application/json', 'application/json', '', '0', '0', '0', '10000', '2023-02-12 11:33:16', '10000', '2023-02-12 11:33:16');
+INSERT INTO `sys_dict_data` VALUES ('101', '28', '10分钟', '10min', '', '0', '0', '0', '10000', '2023-02-16 14:44:06', '10000', '2023-02-16 14:44:22');
+INSERT INTO `sys_dict_data` VALUES ('102', '28', '1小时', '1hour', '', '1', '0', '0', '10000', '2023-02-16 14:44:36', '10000', '2023-02-16 14:44:36');
+INSERT INTO `sys_dict_data` VALUES ('103', '28', '1天', '1day', '', '2', '0', '0', '10000', '2023-02-16 14:44:52', '10000', '2023-02-16 14:44:57');
+INSERT INTO `sys_dict_data` VALUES ('104', '28', '30天', '30day', '', '3', '0', '0', '10000', '2023-02-16 14:45:59', '10000', '2023-02-16 14:46:03');
+INSERT INTO `sys_dict_data` VALUES ('105', '28', '仅一次', 'once', '', '4', '0', '0', '10000', '2023-02-16 14:46:19', '10000', '2023-02-16 14:46:25');
+INSERT INTO `sys_dict_data` VALUES ('106', '28', '永不失效', 'forever', '', '5', '0', '0', '10000', '2023-02-16 14:46:40', '10000', '2023-02-16 14:46:40');
+INSERT INTO `sys_dict_data` VALUES ('107', '29', '数字', '1', '', '0', '0', '0', '10000', '2023-03-28 16:02:20', '10000', '2023-03-28 16:02:20');
+INSERT INTO `sys_dict_data` VALUES ('108', '29', '字符串', '2', '', '1', '0', '0', '10000', '2023-03-28 16:02:27', '10000', '2023-03-28 16:02:31');
+INSERT INTO `sys_dict_data` VALUES ('109', '30', '文本框', '1', '', '0', '0', '0', '10000', '2023-03-28 16:03:10', '10000', '2023-03-28 16:03:10');
+INSERT INTO `sys_dict_data` VALUES ('110', '31', '全量', '0', '', '0', '0', '0', '10000', '2023-04-01 09:53:29', '10000', '2023-04-01 09:53:29');
+INSERT INTO `sys_dict_data` VALUES ('111', '31', '增量', '1', '', '1', '0', '0', '10000', '2023-04-01 09:53:35', '10000', '2023-04-01 09:53:35');
+INSERT INTO `sys_dict_data` VALUES ('112', '32', '一次性', '1', '', '1', '0', '0', '10000', '2023-04-01 09:54:05', '10000', '2023-04-01 12:26:03');
+INSERT INTO `sys_dict_data` VALUES ('113', '32', '周期性', '2', '', '2', '0', '0', '10000', '2023-04-01 09:54:11', '10000', '2023-04-01 09:54:14');
+INSERT INTO `sys_dict_data` VALUES ('114', '33', '数据库', '1', '', '1', '0', '0', '10000', '2023-04-01 09:54:45', '10000', '2023-04-01 09:54:48');
+INSERT INTO `sys_dict_data` VALUES ('115', '33', '中台库', '2', '', '2', '0', '0', '10000', '2023-04-01 09:54:55', '10000', '2023-04-01 09:54:55');
+INSERT INTO `sys_dict_data` VALUES ('116', '9', '数据治理', 'data_governance', '', '3', '0', '0', '10000', '2023-04-03 17:12:43', '10000', '2023-04-03 17:12:58');
+INSERT INTO `sys_dict_data` VALUES ('117', '34', '运行中', '2', '', '0', '0', '0', '10000', '2023-04-05 16:20:05', '10000', '2023-04-05 16:20:05');
+INSERT INTO `sys_dict_data` VALUES ('118', '34', '成功', '1', '', '1', '0', '0', '10000', '2023-04-05 16:20:11', '10000', '2023-04-05 16:20:11');
+INSERT INTO `sys_dict_data` VALUES ('119', '34', '失败', '0', '', '2', '0', '0', '10000', '2023-04-05 16:20:23', '10000', '2023-04-05 16:20:23');
+INSERT INTO `sys_dict_data` VALUES ('120', '35', '标准字段', '1', '', '1', '0', '0', '10000', '2023-05-08 15:52:49', '10000', '2023-05-08 15:52:49');
+INSERT INTO `sys_dict_data` VALUES ('121', '35', '标准码表', '2', '', '2', '0', '0', '10000', '2023-05-08 15:52:57', '10000', '2023-05-08 15:52:57');
+INSERT INTO `sys_dict_data` VALUES ('122', '36', '数字', '1', '', '1', '0', '0', '10000', '2023-05-18 16:24:32', '10000', '2023-05-18 16:24:32');
+INSERT INTO `sys_dict_data` VALUES ('123', '36', '字符串', '2', '', '2', '0', '0', '10000', '2023-05-18 16:24:39', '10000', '2023-05-18 16:24:39');
+INSERT INTO `sys_dict_data` VALUES ('124', '36', '日期', '3', '', '3', '0', '0', '10000', '2023-05-18 16:24:47', '10000', '2023-05-18 16:24:47');
+INSERT INTO `sys_dict_data` VALUES ('125', '36', '小数', '4', '', '4', '0', '0', '10000', '2023-05-18 16:24:56', '10000', '2023-05-18 16:24:56');
+INSERT INTO `sys_dict_data` VALUES ('126', '37', '唯一性', '1', '', '1', '0', '0', '10000', '2023-05-28 09:49:18', '10000', '2023-05-28 09:49:18');
+INSERT INTO `sys_dict_data` VALUES ('127', '37', '规范性', '2', '', '2', '0', '0', '10000', '2023-05-28 09:49:24', '10000', '2023-05-28 09:49:24');
+INSERT INTO `sys_dict_data` VALUES ('128', '37', '有效性', '3', '', '3', '0', '0', '10000', '2023-05-28 09:49:32', '10000', '2023-05-28 09:49:32');
+INSERT INTO `sys_dict_data` VALUES ('129', '37', '完整性', '4', '', '4', '0', '0', '10000', '2023-05-28 09:49:40', '10000', '2023-05-28 09:49:40');
+INSERT INTO `sys_dict_data` VALUES ('130', '37', '一致性', '5', '', '5', '0', '0', '10000', '2023-05-28 09:49:47', '10000', '2023-05-28 09:49:47');
+INSERT INTO `sys_dict_data` VALUES ('131', '37', '及时性', '6', '', '6', '0', '0', '10000', '2023-05-28 09:49:57', '10000', '2023-05-28 09:49:57');
+INSERT INTO `sys_dict_data` VALUES ('132', '37', '准确性', '7', '', '7', '0', '0', '10000', '2023-05-28 09:50:08', '10000', '2023-05-28 09:50:08');
+INSERT INTO `sys_dict_data` VALUES ('133', '38', '一次性', '1', '', '1', '0', '0', '10000', '2023-05-29 11:52:19', '10000', '2023-05-29 11:52:51');
+INSERT INTO `sys_dict_data` VALUES ('134', '38', '周期性', '2', '', '2', '0', '0', '10000', '2023-05-29 11:52:27', '10000', '2023-05-29 11:52:57');
+INSERT INTO `sys_dict_data` VALUES ('135', '39', '单字段唯一', '1', '', '1', '0', '0', '10000', '2023-05-30 12:43:31', '10000', '2023-05-30 12:43:31');
+INSERT INTO `sys_dict_data` VALUES ('136', '39', '组合字段唯一', '2', '', '2', '0', '0', '10000', '2023-05-30 12:43:40', '10000', '2023-05-30 12:43:40');
+INSERT INTO `sys_dict_data` VALUES ('137', '40', 'TiDB(MySql)', '1', '', '1', '0', '0', '10000', '2023-06-03 11:07:47', '10000', '2023-06-03 11:07:47');
+INSERT INTO `sys_dict_data` VALUES ('138', '40', 'ORACLE', '2', '', '2', '0', '0', '10000', '2023-06-13 16:57:00', '10000', '2023-06-13 16:57:00');
+INSERT INTO `sys_dict_data` VALUES ('139', '40', 'POSTGRESQL', '3', '', '3', '0', '0', '10000', '2023-06-13 17:12:18', '10000', '2023-06-13 17:12:18');
+INSERT INTO `sys_dict_data` VALUES ('140', '12', 'DORIS', '16', '', '16', '0', '0', '10000', '2023-06-19 17:46:51', '10000', '2023-06-19 17:46:51');
+INSERT INTO `sys_dict_data` VALUES ('141', '40', 'DORIS', '16', '', '16', '0', '0', '10000', '2023-06-23 10:24:52', '10000', '2023-06-23 10:24:52');
+INSERT INTO `sys_dict_data` VALUES ('142', '40', 'GREENPLUM', '6', '', '6', '0', '0', '10000', '2023-06-23 10:25:53', '10000', '2023-06-23 10:25:53');
+INSERT INTO `sys_dict_data` VALUES ('143', '41', '未上架', '0', '', '0', '0', '0', '10000', '2023-07-07 10:51:35', '10000', '2023-07-07 10:51:35');
+INSERT INTO `sys_dict_data` VALUES ('144', '41', '已上架', '1', '', '1', '0', '0', '10000', '2023-07-07 10:51:42', '10000', '2023-07-07 10:51:42');
+INSERT INTO `sys_dict_data` VALUES ('145', '42', '未挂载', '0', '', '0', '0', '0', '10000', '2023-07-07 10:52:09', '10000', '2023-07-07 10:52:09');
+INSERT INTO `sys_dict_data` VALUES ('146', '42', '已挂载', '1', '', '1', '0', '0', '10000', '2023-07-07 10:52:17', '10000', '2023-07-07 10:52:24');
+INSERT INTO `sys_dict_data` VALUES ('147', '43', '全部', '1', '', '1', '0', '0', '10000', '2023-07-07 15:09:02', '10000', '2023-07-07 15:09:02');
+INSERT INTO `sys_dict_data` VALUES ('148', '43', '角色', '2', '', '2', '0', '0', '10000', '2023-07-07 15:09:10', '10000', '2023-07-07 15:09:10');
+INSERT INTO `sys_dict_data` VALUES ('149', '43', '用户', '3', '', '3', '0', '0', '10000', '2023-07-07 15:09:16', '10000', '2023-07-07 15:09:16');
+INSERT INTO `sys_dict_data` VALUES ('150', '44', '数据库表', '1', '', '1', '0', '0', '10000', '2023-07-14 14:00:48', '10000', '2023-07-14 14:00:48');
+INSERT INTO `sys_dict_data` VALUES ('151', '44', 'API', '2', '', '2', '0', '0', '10000', '2023-07-14 14:00:52', '10000', '2023-07-14 14:00:57');
+INSERT INTO `sys_dict_data` VALUES ('152', '44', '文件', '3', '', '3', '0', '0', '10000', '2023-07-14 14:01:05', '10000', '2023-07-14 14:01:05');
+INSERT INTO `sys_dict_data` VALUES ('153', '9', '数据质量', 'data_quality', '', '4', '0', '0', '10000', '2023-07-21 10:03:30', '10000', '2023-07-21 10:03:38');
+
+-- ----------------------------
+-- Table structure for sys_dict_type
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_dict_type`;
+CREATE TABLE `sys_dict_type` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `dict_type` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '字典类型',
+  `dict_name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '字典名称',
+  `remark` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '备注',
+  `sort` int DEFAULT NULL COMMENT '排序',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=45 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='字典类型';
+
+-- ----------------------------
+-- Records of sys_dict_type
+-- ----------------------------
+INSERT INTO `sys_dict_type` VALUES ('1', 'post_status', '状态', '岗位管理', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('2', 'user_gender', '性别', '用户管理', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('3', 'user_status', '状态', '用户管理', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('4', 'role_data_scope', '数据范围', '角色管理', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('5', 'enable_disable', '状态', '功能状态:启用 | 禁用 ', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('6', 'success_fail', '状态', '操作状态:成功 | 失败', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('7', 'login_operation', '操作信息', '登录管理', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('8', 'sms_platform', '平台类型', '短信管理', '0', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47');
+INSERT INTO `sys_dict_type` VALUES ('9', 'schedule_group', '任务组名', '定时任务', '0', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02');
+INSERT INTO `sys_dict_type` VALUES ('10', 'schedule_status', '状态', '定时任务', '0', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02');
+INSERT INTO `sys_dict_type` VALUES ('11', 'project_status', '项目状态', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_dict_type` VALUES ('12', 'database_type', '数据库类型', '', '0', '0', '0', '10000', '2022-10-09 16:07:56', '10000', '2022-10-09 16:07:56');
+INSERT INTO `sys_dict_type` VALUES ('13', 'database_status', '数据库状态', '', '0', '0', '0', '10000', '2022-10-09 17:24:20', '10000', '2022-10-09 17:24:20');
+INSERT INTO `sys_dict_type` VALUES ('14', 'yes_or_no', '是否', '', '0', '0', '0', '10000', '2022-10-09 17:25:01', '10000', '2022-10-09 17:25:01');
+INSERT INTO `sys_dict_type` VALUES ('15', 'task_type', '任务类型', '', '0', '0', '0', '10000', '2022-10-24 17:46:19', '10000', '2022-10-24 17:46:19');
+INSERT INTO `sys_dict_type` VALUES ('16', 'run_status', '运行状态', '', '0', '0', '0', '10000', '2022-10-24 17:48:48', '10000', '2022-10-24 17:48:48');
+INSERT INTO `sys_dict_type` VALUES ('17', 'release_status', '发布状态', '', '0', '0', '0', '10000', '2022-10-24 17:50:07', '10000', '2022-10-24 17:50:07');
+INSERT INTO `sys_dict_type` VALUES ('18', 'production_task_type', '数据生产作业类型', '数据生产作业类型', '0', '0', '0', '10000', '2022-11-26 20:28:20', '10000', '2022-11-26 20:28:20');
+INSERT INTO `sys_dict_type` VALUES ('19', 'production_cluster_type', '数据生产集群实例类型', '数据生产集群实例类型', '0', '0', '0', '10000', '2022-12-03 10:40:37', '10000', '2022-12-03 10:40:43');
+INSERT INTO `sys_dict_type` VALUES ('20', 'production_cluster_configuration_type', '集群配置类型', '', '0', '0', '0', '10000', '2022-12-22 09:30:02', '10000', '2022-12-22 09:30:02');
+INSERT INTO `sys_dict_type` VALUES ('21', 'task_status', '作业状态', '', '0', '0', '0', '10000', '2023-01-03 20:46:59', '10000', '2023-01-03 20:46:59');
+INSERT INTO `sys_dict_type` VALUES ('22', 'instance_status', '作业实例状态', '', '0', '0', '0', '10000', '2023-01-03 20:49:16', '10000', '2023-01-03 20:49:16');
+INSERT INTO `sys_dict_type` VALUES ('23', 'savepoint_strategy', 'flink的savepoint策略', '', '0', '0', '0', '10000', '2023-01-09 21:22:38', '10000', '2023-01-09 21:22:38');
+INSERT INTO `sys_dict_type` VALUES ('24', 'execute_type', '执行类型', '', '0', '0', '0', '10000', '2023-01-18 15:13:05', '10000', '2023-01-18 15:13:05');
+INSERT INTO `sys_dict_type` VALUES ('25', 'api_group_type', 'api分组类型', '', '0', '0', '0', '10000', '2023-01-30 11:34:24', '10000', '2023-01-30 11:34:24');
+INSERT INTO `sys_dict_type` VALUES ('26', 'api_type', 'api请求方式', '', '0', '0', '0', '10000', '2023-02-12 11:26:37', '10000', '2023-02-12 11:26:37');
+INSERT INTO `sys_dict_type` VALUES ('27', 'content_type', '请求参数类型', '', '0', '0', '0', '10000', '2023-02-12 11:32:28', '10000', '2023-02-12 11:32:28');
+INSERT INTO `sys_dict_type` VALUES ('28', 'api_expire_desc', 'api有效期', '', '0', '0', '0', '10000', '2023-02-16 14:43:43', '10000', '2023-02-16 14:43:43');
+INSERT INTO `sys_dict_type` VALUES ('29', 'model_property_data_type', '元模型属性数据类型', '', '0', '0', '0', '10000', '2023-03-28 16:02:07', '10000', '2023-03-28 16:02:07');
+INSERT INTO `sys_dict_type` VALUES ('30', 'model_property_input_type', '元模型属性输入类型', '', '0', '0', '0', '10000', '2023-03-28 16:02:59', '10000', '2023-03-28 16:02:59');
+INSERT INTO `sys_dict_type` VALUES ('31', 'metadata_collect_strategy', '元数据采集策略', '', '0', '0', '0', '10000', '2023-04-01 09:53:13', '10000', '2023-04-01 09:53:13');
+INSERT INTO `sys_dict_type` VALUES ('32', 'metadata_collect_type', '元数据采集类型', '', '0', '0', '0', '10000', '2023-04-01 09:53:55', '10000', '2023-04-01 09:53:55');
+INSERT INTO `sys_dict_type` VALUES ('33', 'db_type', '数据源类型', '', '0', '0', '0', '10000', '2023-04-01 09:54:35', '10000', '2023-04-01 09:54:35');
+INSERT INTO `sys_dict_type` VALUES ('34', 'metadata_collect_status', '元数据采集运行状态', '', '0', '0', '0', '10000', '2023-04-05 16:19:14', '10000', '2023-04-05 16:19:14');
+INSERT INTO `sys_dict_type` VALUES ('35', 'standard_type', '标准类型', '', '0', '0', '0', '10000', '2023-05-08 15:52:35', '10000', '2023-05-08 15:52:35');
+INSERT INTO `sys_dict_type` VALUES ('36', 'standard_data_type', '标准字段-数据类型', '', '0', '0', '0', '10000', '2023-05-18 16:23:44', '10000', '2023-05-18 16:23:44');
+INSERT INTO `sys_dict_type` VALUES ('37', 'quality_rule_type', '质量规则类型', '', '0', '0', '0', '10000', '2023-05-28 09:49:04', '10000', '2023-05-28 09:49:04');
+INSERT INTO `sys_dict_type` VALUES ('38', 'quality_config_task_type', '质量规则任务类型', '', '0', '0', '0', '10000', '2023-05-29 11:52:07', '10000', '2023-05-29 11:52:07');
+INSERT INTO `sys_dict_type` VALUES ('39', 'quality_unique_type', '质量唯一类型', '', '0', '0', '0', '10000', '2023-05-30 12:43:05', '10000', '2023-05-30 12:43:05');
+INSERT INTO `sys_dict_type` VALUES ('40', 'data_house_type', '数仓类型', '', '0', '0', '0', '10000', '2023-06-03 11:07:31', '10000', '2023-06-03 11:07:31');
+INSERT INTO `sys_dict_type` VALUES ('41', 'ground_status', '上架状态', '', '0', '0', '0', '10000', '2023-07-07 10:51:19', '10000', '2023-07-07 10:51:19');
+INSERT INTO `sys_dict_type` VALUES ('42', 'mount_status', '挂载状态', '', '0', '0', '0', '10000', '2023-07-07 10:51:56', '10000', '2023-07-07 10:51:56');
+INSERT INTO `sys_dict_type` VALUES ('43', 'open_type', '开放类型', '', '0', '0', '0', '10000', '2023-07-07 15:08:42', '10000', '2023-07-07 15:08:42');
+INSERT INTO `sys_dict_type` VALUES ('44', 'mount_type', '挂载资源类型', '', '0', '0', '0', '10000', '2023-07-14 14:00:25', '10000', '2023-07-14 14:00:25');
+
+-- ----------------------------
+-- Table structure for sys_log_login
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_log_login`;
+CREATE TABLE `sys_log_login` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `username` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '用户名',
+  `ip` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '登录IP',
+  `address` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '登录地点',
+  `user_agent` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'User Agent',
+  `status` tinyint DEFAULT NULL COMMENT '登录状态 0:失败 1:成功',
+  `operation` tinyint unsigned DEFAULT NULL COMMENT '操作信息 0:登录成功 1:退出成功 2:验证码错误 3:账号密码错误',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=399 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='登录日志';
+
+-- ----------------------------
+-- Records of sys_log_login
+-- ----------------------------
+INSERT INTO `sys_log_login` VALUES ('1', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-09-27 17:11:24');
+INSERT INTO `sys_log_login` VALUES ('2', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-09-27 17:11:36');
+INSERT INTO `sys_log_login` VALUES ('3', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-09-27 20:48:32');
+INSERT INTO `sys_log_login` VALUES ('4', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-09-27 20:48:42');
+INSERT INTO `sys_log_login` VALUES ('5', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-04 14:09:31');
+INSERT INTO `sys_log_login` VALUES ('6', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-07 11:23:48');
+INSERT INTO `sys_log_login` VALUES ('7', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-07 12:00:53');
+INSERT INTO `sys_log_login` VALUES ('8', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-07 12:01:05');
+INSERT INTO `sys_log_login` VALUES ('9', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-08 12:42:10');
+INSERT INTO `sys_log_login` VALUES ('10', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-08 16:57:53');
+INSERT INTO `sys_log_login` VALUES ('11', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-08 16:58:11');
+INSERT INTO `sys_log_login` VALUES ('12', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-08 17:30:44');
+INSERT INTO `sys_log_login` VALUES ('13', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-08 17:30:52');
+INSERT INTO `sys_log_login` VALUES ('14', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-09 12:40:22');
+INSERT INTO `sys_log_login` VALUES ('15', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-09 12:40:31');
+INSERT INTO `sys_log_login` VALUES ('16', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-10-11 17:47:52');
+INSERT INTO `sys_log_login` VALUES ('17', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-11 17:48:06');
+INSERT INTO `sys_log_login` VALUES ('18', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-17 11:03:13');
+INSERT INTO `sys_log_login` VALUES ('19', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-17 13:48:23');
+INSERT INTO `sys_log_login` VALUES ('20', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-17 13:48:37');
+INSERT INTO `sys_log_login` VALUES ('21', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-17 13:49:08');
+INSERT INTO `sys_log_login` VALUES ('22', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '3', '2022-10-17 13:49:21');
+INSERT INTO `sys_log_login` VALUES ('23', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-10-17 13:49:32');
+INSERT INTO `sys_log_login` VALUES ('24', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-17 13:49:44');
+INSERT INTO `sys_log_login` VALUES ('25', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-17 13:51:26');
+INSERT INTO `sys_log_login` VALUES ('26', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-10-17 13:51:42');
+INSERT INTO `sys_log_login` VALUES ('27', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-17 13:51:49');
+INSERT INTO `sys_log_login` VALUES ('28', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-17 13:59:53');
+INSERT INTO `sys_log_login` VALUES ('29', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-10-17 14:00:05');
+INSERT INTO `sys_log_login` VALUES ('30', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-17 14:00:11');
+INSERT INTO `sys_log_login` VALUES ('31', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-19 10:10:25');
+INSERT INTO `sys_log_login` VALUES ('32', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-20 10:48:03');
+INSERT INTO `sys_log_login` VALUES ('33', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-20 12:01:44');
+INSERT INTO `sys_log_login` VALUES ('34', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-20 12:01:51');
+INSERT INTO `sys_log_login` VALUES ('35', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-20 12:55:48');
+INSERT INTO `sys_log_login` VALUES ('36', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-20 12:55:48');
+INSERT INTO `sys_log_login` VALUES ('37', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-20 12:56:10');
+INSERT INTO `sys_log_login` VALUES ('38', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 11:46:48');
+INSERT INTO `sys_log_login` VALUES ('39', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-22 13:15:51');
+INSERT INTO `sys_log_login` VALUES ('40', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 13:16:05');
+INSERT INTO `sys_log_login` VALUES ('41', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-22 13:34:19');
+INSERT INTO `sys_log_login` VALUES ('42', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-10-22 13:34:27');
+INSERT INTO `sys_log_login` VALUES ('43', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 13:34:33');
+INSERT INTO `sys_log_login` VALUES ('44', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-22 16:39:23');
+INSERT INTO `sys_log_login` VALUES ('45', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 16:39:30');
+INSERT INTO `sys_log_login` VALUES ('46', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-22 17:00:00');
+INSERT INTO `sys_log_login` VALUES ('47', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 17:00:09');
+INSERT INTO `sys_log_login` VALUES ('48', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-22 17:17:08');
+INSERT INTO `sys_log_login` VALUES ('49', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 17:17:24');
+INSERT INTO `sys_log_login` VALUES ('50', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-22 20:04:14');
+INSERT INTO `sys_log_login` VALUES ('51', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-22 20:04:23');
+INSERT INTO `sys_log_login` VALUES ('52', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-23 20:10:49');
+INSERT INTO `sys_log_login` VALUES ('53', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-24 21:45:07');
+INSERT INTO `sys_log_login` VALUES ('54', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-25 15:27:30');
+INSERT INTO `sys_log_login` VALUES ('55', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-25 15:27:39');
+INSERT INTO `sys_log_login` VALUES ('56', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-26 22:37:16');
+INSERT INTO `sys_log_login` VALUES ('57', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-27 14:59:36');
+INSERT INTO `sys_log_login` VALUES ('58', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-27 14:59:50');
+INSERT INTO `sys_log_login` VALUES ('59', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-27 15:35:46');
+INSERT INTO `sys_log_login` VALUES ('60', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-27 16:14:35');
+INSERT INTO `sys_log_login` VALUES ('61', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-27 22:19:45');
+INSERT INTO `sys_log_login` VALUES ('62', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-27 22:19:56');
+INSERT INTO `sys_log_login` VALUES ('63', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-28 17:00:33');
+INSERT INTO `sys_log_login` VALUES ('64', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-28 17:00:41');
+INSERT INTO `sys_log_login` VALUES ('65', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-29 12:49:32');
+INSERT INTO `sys_log_login` VALUES ('66', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-29 12:55:40');
+INSERT INTO `sys_log_login` VALUES ('67', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-29 12:57:00');
+INSERT INTO `sys_log_login` VALUES ('68', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-29 12:57:10');
+INSERT INTO `sys_log_login` VALUES ('69', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-10-29 14:30:10');
+INSERT INTO `sys_log_login` VALUES ('70', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-10-29 14:30:18');
+INSERT INTO `sys_log_login` VALUES ('71', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-01 23:14:43');
+INSERT INTO `sys_log_login` VALUES ('72', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-01 23:15:01');
+INSERT INTO `sys_log_login` VALUES ('73', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-10 16:51:17');
+INSERT INTO `sys_log_login` VALUES ('74', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-11 17:38:02');
+INSERT INTO `sys_log_login` VALUES ('75', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-11 17:38:10');
+INSERT INTO `sys_log_login` VALUES ('76', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-12 21:23:47');
+INSERT INTO `sys_log_login` VALUES ('77', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-14 15:04:59');
+INSERT INTO `sys_log_login` VALUES ('78', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-16 11:15:56');
+INSERT INTO `sys_log_login` VALUES ('79', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-16 11:53:52');
+INSERT INTO `sys_log_login` VALUES ('80', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-16 11:54:04');
+INSERT INTO `sys_log_login` VALUES ('81', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-16 11:54:12');
+INSERT INTO `sys_log_login` VALUES ('82', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-18 12:59:49');
+INSERT INTO `sys_log_login` VALUES ('83', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-18 14:27:20');
+INSERT INTO `sys_log_login` VALUES ('84', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-18 14:27:31');
+INSERT INTO `sys_log_login` VALUES ('85', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-21 09:21:12');
+INSERT INTO `sys_log_login` VALUES ('86', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-22 10:36:54');
+INSERT INTO `sys_log_login` VALUES ('87', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-22 10:36:50');
+INSERT INTO `sys_log_login` VALUES ('88', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-22 10:37:04');
+INSERT INTO `sys_log_login` VALUES ('89', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-23 10:40:40');
+INSERT INTO `sys_log_login` VALUES ('90', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-23 10:40:49');
+INSERT INTO `sys_log_login` VALUES ('91', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-24 10:43:19');
+INSERT INTO `sys_log_login` VALUES ('92', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 11:40:20');
+INSERT INTO `sys_log_login` VALUES ('93', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-25 21:46:41');
+INSERT INTO `sys_log_login` VALUES ('94', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 21:47:14');
+INSERT INTO `sys_log_login` VALUES ('95', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-25 22:03:34');
+INSERT INTO `sys_log_login` VALUES ('96', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 22:03:44');
+INSERT INTO `sys_log_login` VALUES ('97', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-25 23:20:46');
+INSERT INTO `sys_log_login` VALUES ('98', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 23:20:53');
+INSERT INTO `sys_log_login` VALUES ('99', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-25 23:25:09');
+INSERT INTO `sys_log_login` VALUES ('100', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 23:25:16');
+INSERT INTO `sys_log_login` VALUES ('101', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 23:35:55');
+INSERT INTO `sys_log_login` VALUES ('102', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-25 23:37:24');
+INSERT INTO `sys_log_login` VALUES ('103', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 23:37:33');
+INSERT INTO `sys_log_login` VALUES ('104', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-25 23:38:27');
+INSERT INTO `sys_log_login` VALUES ('105', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-25 23:43:47');
+INSERT INTO `sys_log_login` VALUES ('106', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-26 00:03:39');
+INSERT INTO `sys_log_login` VALUES ('107', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-26 00:03:51');
+INSERT INTO `sys_log_login` VALUES ('108', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-26 12:03:03');
+INSERT INTO `sys_log_login` VALUES ('109', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-26 12:03:17');
+INSERT INTO `sys_log_login` VALUES ('110', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-26 12:05:30');
+INSERT INTO `sys_log_login` VALUES ('111', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-26 12:05:47');
+INSERT INTO `sys_log_login` VALUES ('112', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-26 12:10:04');
+INSERT INTO `sys_log_login` VALUES ('113', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-26 12:10:15');
+INSERT INTO `sys_log_login` VALUES ('114', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-26 16:35:40');
+INSERT INTO `sys_log_login` VALUES ('115', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-26 16:35:50');
+INSERT INTO `sys_log_login` VALUES ('116', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-27 18:38:34');
+INSERT INTO `sys_log_login` VALUES ('117', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99
Safari/537.36', '0', '2', '2022-11-27 18:38:49'); +INSERT INTO `sys_log_login` VALUES ('118', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-27 18:38:55'); +INSERT INTO `sys_log_login` VALUES ('119', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-11-28 15:34:17'); +INSERT INTO `sys_log_login` VALUES ('120', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-28 15:34:39'); +INSERT INTO `sys_log_login` VALUES ('121', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-29 15:40:57'); +INSERT INTO `sys_log_login` VALUES ('122', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-30 16:10:59'); +INSERT INTO `sys_log_login` VALUES ('123', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-30 16:11:15'); +INSERT INTO `sys_log_login` VALUES ('124', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-11-30 16:11:19'); +INSERT INTO `sys_log_login` VALUES ('125', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-11-30 16:11:26'); +INSERT INTO `sys_log_login` VALUES ('126', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-01 16:15:28'); +INSERT INTO `sys_log_login` VALUES ('127', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-03 11:13:35'); +INSERT INTO `sys_log_login` VALUES ('128', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-12-03 11:25:18'); +INSERT INTO `sys_log_login` VALUES ('129', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-03 11:25:28'); +INSERT INTO `sys_log_login` VALUES ('130', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-04 11:25:38'); +INSERT INTO `sys_log_login` VALUES ('131', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-05 16:40:26'); +INSERT INTO `sys_log_login` VALUES ('132', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-06 17:23:45'); +INSERT INTO `sys_log_login` VALUES 
('133', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-12-07 09:26:16'); +INSERT INTO `sys_log_login` VALUES ('134', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-07 09:26:26'); +INSERT INTO `sys_log_login` VALUES ('135', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-08 13:01:55'); +INSERT INTO `sys_log_login` VALUES ('136', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-09 15:45:04'); +INSERT INTO `sys_log_login` VALUES ('137', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-09 15:45:11'); +INSERT INTO `sys_log_login` VALUES ('138', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-09 15:45:12'); +INSERT INTO `sys_log_login` VALUES ('139', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-10 18:19:41'); +INSERT INTO `sys_log_login` VALUES ('140', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-14 12:13:21'); +INSERT INTO `sys_log_login` VALUES ('141', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-15 16:15:03'); +INSERT INTO `sys_log_login` VALUES ('142', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-19 10:29:52'); +INSERT INTO `sys_log_login` VALUES ('143', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-20 16:14:50'); +INSERT INTO `sys_log_login` VALUES ('144', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-21 16:15:22'); +INSERT INTO `sys_log_login` VALUES ('145', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-12-21 20:44:20'); +INSERT INTO `sys_log_login` VALUES ('146', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-22 09:29:02'); +INSERT INTO `sys_log_login` VALUES ('147', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-22 09:29:10'); +INSERT INTO `sys_log_login` VALUES ('148', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-23 11:11:15'); +INSERT INTO `sys_log_login` VALUES ('149', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-24 16:18:17'); +INSERT INTO `sys_log_login` VALUES ('150', 'admin', '192.168.40.1', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-25 04:02:09'); +INSERT INTO `sys_log_login` VALUES ('151', 'admin', '192.168.40.1', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-25 04:02:29'); +INSERT INTO `sys_log_login` VALUES ('152', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2022-12-25 16:53:04'); +INSERT INTO `sys_log_login` VALUES ('153', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-25 16:53:14'); +INSERT INTO `sys_log_login` VALUES ('154', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-27 12:46:14'); +INSERT INTO `sys_log_login` VALUES ('155', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:01:30'); +INSERT INTO `sys_log_login` VALUES ('156', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-28 17:01:49'); +INSERT INTO `sys_log_login` VALUES ('157', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:01:55'); +INSERT INTO `sys_log_login` VALUES ('158', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-28 17:02:05'); +INSERT INTO `sys_log_login` VALUES ('159', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:02:11'); +INSERT INTO `sys_log_login` VALUES ('160', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:02:27'); +INSERT INTO `sys_log_login` VALUES ('161', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-28 17:03:28'); +INSERT INTO `sys_log_login` VALUES ('162', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-28 17:03:35'); +INSERT INTO `sys_log_login` VALUES ('163', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', 
'2022-12-28 17:03:42'); +INSERT INTO `sys_log_login` VALUES ('164', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:07:12'); +INSERT INTO `sys_log_login` VALUES ('165', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:07:40'); +INSERT INTO `sys_log_login` VALUES ('166', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:08:30'); +INSERT INTO `sys_log_login` VALUES ('167', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:08:42'); +INSERT INTO `sys_log_login` VALUES ('168', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:12:53'); +INSERT INTO `sys_log_login` VALUES ('169', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-28 17:44:39'); +INSERT INTO `sys_log_login` VALUES ('170', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-28 17:44:49'); +INSERT INTO `sys_log_login` VALUES ('171', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-30 09:53:45'); +INSERT INTO `sys_log_login` VALUES ('172', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2022-12-30 09:53:45'); +INSERT INTO `sys_log_login` VALUES ('173', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-30 09:53:52'); +INSERT INTO `sys_log_login` VALUES ('174', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2022-12-31 11:56:53'); +INSERT INTO `sys_log_login` VALUES ('175', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-01 12:18:29'); +INSERT INTO `sys_log_login` VALUES ('176', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-02 13:08:57'); +INSERT INTO `sys_log_login` VALUES ('177', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-03 20:45:34'); +INSERT INTO `sys_log_login` VALUES ('178', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-04 21:08:34'); +INSERT INTO `sys_log_login` VALUES ('179', 'admin', 
'124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-04 21:08:42'); +INSERT INTO `sys_log_login` VALUES ('180', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-04 21:08:48'); +INSERT INTO `sys_log_login` VALUES ('181', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-05 21:13:59'); +INSERT INTO `sys_log_login` VALUES ('182', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-05 21:14:07'); +INSERT INTO `sys_log_login` VALUES ('183', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-07 15:55:06'); +INSERT INTO `sys_log_login` VALUES ('184', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-08 17:28:55'); +INSERT INTO `sys_log_login` VALUES ('185', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-08 17:29:22'); +INSERT INTO `sys_log_login` VALUES ('186', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-09 20:02:14'); +INSERT INTO `sys_log_login` VALUES ('187', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-10 22:21:51'); +INSERT INTO `sys_log_login` VALUES ('188', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-12 10:26:37'); +INSERT INTO `sys_log_login` VALUES ('189', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-12 10:26:47'); +INSERT INTO `sys_log_login` VALUES ('190', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-12 10:26:53'); +INSERT INTO `sys_log_login` VALUES ('191', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-13 10:28:17'); +INSERT INTO `sys_log_login` VALUES ('192', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-14 11:41:45'); +INSERT INTO `sys_log_login` VALUES ('193', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-01-14 19:22:05'); +INSERT INTO `sys_log_login` VALUES ('194', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-14 19:22:14'); +INSERT INTO `sys_log_login` VALUES ('195', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-15 20:52:51'); +INSERT INTO `sys_log_login` VALUES ('196', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-17 16:29:17'); +INSERT INTO `sys_log_login` VALUES ('197', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-01-17 17:06:30'); +INSERT INTO `sys_log_login` VALUES ('198', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-17 17:06:41'); +INSERT INTO `sys_log_login` VALUES ('199', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-01-18 16:00:40'); +INSERT INTO `sys_log_login` VALUES ('200', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-18 16:04:57'); +INSERT INTO `sys_log_login` VALUES ('201', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-18 16:05:04'); +INSERT INTO `sys_log_login` VALUES ('202', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-19 16:50:06'); +INSERT INTO `sys_log_login` VALUES ('203', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-01-19 21:47:25'); +INSERT INTO `sys_log_login` VALUES ('204', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-19 21:47:36'); +INSERT INTO `sys_log_login` VALUES ('205', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-20 22:07:37'); +INSERT INTO `sys_log_login` VALUES ('206', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-21 14:19:09'); +INSERT INTO `sys_log_login` VALUES ('207', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-22 15:47:10'); +INSERT INTO `sys_log_login` VALUES ('208', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-22 15:47:15'); +INSERT INTO `sys_log_login` VALUES ('209', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-22 15:47:20'); 
+INSERT INTO `sys_log_login` VALUES ('210', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-22 16:01:06'); +INSERT INTO `sys_log_login` VALUES ('211', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-22 16:01:13'); +INSERT INTO `sys_log_login` VALUES ('212', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-01-22 20:35:22'); +INSERT INTO `sys_log_login` VALUES ('213', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-22 20:37:34'); +INSERT INTO `sys_log_login` VALUES ('214', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-30 11:21:20'); +INSERT INTO `sys_log_login` VALUES ('215', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-01-30 11:21:21'); +INSERT INTO `sys_log_login` VALUES ('216', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-01-30 11:43:26'); +INSERT INTO `sys_log_login` VALUES ('217', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-01-30 11:44:19'); +INSERT INTO `sys_log_login` VALUES ('218', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '0', '2', '2023-02-06 15:37:19'); +INSERT INTO `sys_log_login` VALUES ('219', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-02-06 15:37:28'); +INSERT INTO `sys_log_login` VALUES ('220', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-02-06 16:05:00'); +INSERT INTO `sys_log_login` VALUES ('221', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-02-06 16:05:11'); +INSERT INTO `sys_log_login` VALUES ('222', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '1', '2023-02-06 16:13:05'); +INSERT INTO `sys_log_login` VALUES ('223', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-02-06 16:13:17'); +INSERT INTO `sys_log_login` VALUES ('224', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-02-08 09:56:59'); +INSERT INTO `sys_log_login` VALUES ('225', 'admin', '124.223.48.209', '内网IP', 
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', '1', '0', '2023-02-09 14:55:13'); +INSERT INTO `sys_log_login` VALUES ('226', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-11 21:39:31'); +INSERT INTO `sys_log_login` VALUES ('227', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-13 14:53:06'); +INSERT INTO `sys_log_login` VALUES ('228', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-14 09:56:41'); +INSERT INTO `sys_log_login` VALUES ('229', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-14 09:56:48'); +INSERT INTO `sys_log_login` VALUES ('230', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-14 22:43:51'); +INSERT INTO `sys_log_login` VALUES ('231', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-14 22:43:59'); +INSERT INTO `sys_log_login` VALUES ('232', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-15 11:17:02'); +INSERT INTO `sys_log_login` VALUES ('233', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-15 11:17:08'); +INSERT INTO `sys_log_login` VALUES ('234', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-16 12:50:31'); +INSERT INTO `sys_log_login` VALUES ('235', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-16 14:51:51'); +INSERT INTO `sys_log_login` VALUES ('236', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-02-16 14:54:55'); +INSERT INTO `sys_log_login` VALUES ('237', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-16 14:55:02'); +INSERT INTO `sys_log_login` VALUES ('238', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-17 11:37:42'); +INSERT INTO `sys_log_login` VALUES ('239', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-17 11:37:50'); +INSERT INTO `sys_log_login` VALUES ('240', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', 
'2023-02-20 11:04:14'); +INSERT INTO `sys_log_login` VALUES ('241', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-02-20 11:04:14'); +INSERT INTO `sys_log_login` VALUES ('242', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-20 11:04:21'); +INSERT INTO `sys_log_login` VALUES ('243', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-20 14:11:57'); +INSERT INTO `sys_log_login` VALUES ('244', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-02-20 14:12:07'); +INSERT INTO `sys_log_login` VALUES ('245', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-20 14:12:13'); +INSERT INTO `sys_log_login` VALUES ('246', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-20 14:15:04'); +INSERT INTO `sys_log_login` VALUES ('247', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-20 14:15:11'); +INSERT INTO `sys_log_login` VALUES ('248', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-22 08:58:09'); +INSERT INTO `sys_log_login` VALUES ('249', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-22 14:49:33'); +INSERT INTO `sys_log_login` VALUES ('250', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-22 14:49:40'); +INSERT INTO `sys_log_login` VALUES ('251', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-23 18:28:58'); +INSERT INTO `sys_log_login` VALUES ('252', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-25 20:57:48'); +INSERT INTO `sys_log_login` VALUES ('253', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 11:55:13'); +INSERT INTO `sys_log_login` VALUES ('254', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 12:06:15'); +INSERT INTO `sys_log_login` VALUES ('255', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 14:54:42'); +INSERT INTO `sys_log_login` VALUES ('256', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; 
Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 15:18:35'); +INSERT INTO `sys_log_login` VALUES ('257', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-02-26 15:18:43'); +INSERT INTO `sys_log_login` VALUES ('258', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 15:18:51'); +INSERT INTO `sys_log_login` VALUES ('259', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 21:00:09'); +INSERT INTO `sys_log_login` VALUES ('260', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 21:10:08'); +INSERT INTO `sys_log_login` VALUES ('261', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 21:10:16'); +INSERT INTO `sys_log_login` VALUES ('262', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 21:18:16'); +INSERT INTO `sys_log_login` VALUES ('263', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 21:20:30'); +INSERT INTO `sys_log_login` VALUES ('264', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 21:31:09'); +INSERT INTO `sys_log_login` VALUES ('265', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 21:31:14'); +INSERT INTO `sys_log_login` VALUES ('266', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 21:33:11'); +INSERT INTO `sys_log_login` VALUES ('267', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 21:35:29'); +INSERT INTO `sys_log_login` VALUES ('268', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 21:59:01'); +INSERT INTO `sys_log_login` VALUES ('269', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 21:59:09'); +INSERT INTO `sys_log_login` VALUES ('270', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 22:01:52'); +INSERT INTO `sys_log_login` VALUES ('271', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 22:02:00'); +INSERT INTO 
`sys_log_login` VALUES ('272', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-02-26 22:04:18'); +INSERT INTO `sys_log_login` VALUES ('273', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-02-26 22:04:25'); +INSERT INTO `sys_log_login` VALUES ('274', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-03 15:15:31'); +INSERT INTO `sys_log_login` VALUES ('275', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-05 18:37:02'); +INSERT INTO `sys_log_login` VALUES ('276', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-07 17:07:54'); +INSERT INTO `sys_log_login` VALUES ('277', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-14 12:31:29'); +INSERT INTO `sys_log_login` VALUES ('278', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-28 10:48:49'); +INSERT INTO `sys_log_login` VALUES ('279', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-28 13:40:53'); +INSERT INTO `sys_log_login` VALUES ('280', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-29 12:10:54'); +INSERT INTO `sys_log_login` VALUES ('281', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-29 15:03:12'); +INSERT INTO `sys_log_login` VALUES ('282', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-03-30 11:13:39'); +INSERT INTO `sys_log_login` VALUES ('283', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-30 11:14:16'); +INSERT INTO `sys_log_login` VALUES ('284', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-03-31 08:58:46'); +INSERT INTO `sys_log_login` VALUES ('285', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-31 08:58:53'); +INSERT INTO `sys_log_login` VALUES ('286', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-03-31 08:59:16'); +INSERT INTO `sys_log_login` VALUES ('287', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-03-31 10:39:14'); +INSERT INTO `sys_log_login` VALUES ('288', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-03-31 10:39:15'); +INSERT INTO `sys_log_login` VALUES ('289', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-03-31 10:39:21'); +INSERT INTO `sys_log_login` VALUES ('290', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-31 10:39:27'); +INSERT INTO `sys_log_login` VALUES ('291', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-03-31 11:12:06'); +INSERT INTO `sys_log_login` VALUES ('292', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-01 12:06:39'); +INSERT INTO `sys_log_login` VALUES ('293', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-03 10:36:26'); +INSERT INTO `sys_log_login` VALUES ('294', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-04-03 15:30:31'); +INSERT INTO `sys_log_login` VALUES ('295', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-03 15:30:36'); +INSERT INTO `sys_log_login` VALUES ('296', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-04-05 12:19:44'); +INSERT INTO `sys_log_login` VALUES ('297', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-04-05 12:19:54'); +INSERT INTO `sys_log_login` VALUES ('298', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-04-05 12:20:05'); +INSERT INTO `sys_log_login` VALUES ('299', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-05 12:20:11'); +INSERT INTO `sys_log_login` VALUES ('300', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-04-05 12:22:29'); +INSERT INTO `sys_log_login` VALUES ('301', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-05 12:22:35'); +INSERT INTO `sys_log_login` VALUES ('302', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-04-05 12:22:35'); +INSERT INTO `sys_log_login` VALUES ('303', 
'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-04-06 09:59:56'); +INSERT INTO `sys_log_login` VALUES ('304', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '0', '2', '2023-04-06 10:00:01'); +INSERT INTO `sys_log_login` VALUES ('305', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-06 10:00:08'); +INSERT INTO `sys_log_login` VALUES ('306', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-07 10:05:47'); +INSERT INTO `sys_log_login` VALUES ('307', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-07 11:34:24'); +INSERT INTO `sys_log_login` VALUES ('308', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-08 12:10:45'); +INSERT INTO `sys_log_login` VALUES ('309', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-04-08 12:27:14'); +INSERT INTO `sys_log_login` VALUES ('310', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-08 12:30:57'); +INSERT INTO `sys_log_login` VALUES ('311', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '1', '2023-04-08 12:34:08'); +INSERT INTO `sys_log_login` VALUES ('312', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-08 12:36:23'); +INSERT INTO `sys_log_login` VALUES ('313', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-20 11:53:27'); +INSERT INTO `sys_log_login` VALUES ('314', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-04-23 14:17:59'); +INSERT INTO `sys_log_login` VALUES ('315', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36', '1', '0', '2023-05-08 09:38:18'); +INSERT INTO `sys_log_login` VALUES ('316', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-18 15:34:36'); +INSERT INTO `sys_log_login` VALUES ('317', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-05-18 15:34:38'); +INSERT INTO `sys_log_login` VALUES ('318', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 
Safari/537.36', '1', '0', '2023-05-19 16:06:02'); +INSERT INTO `sys_log_login` VALUES ('319', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-23 09:19:02'); +INSERT INTO `sys_log_login` VALUES ('320', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-05-23 09:33:56'); +INSERT INTO `sys_log_login` VALUES ('321', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-05-23 09:44:03'); +INSERT INTO `sys_log_login` VALUES ('322', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-23 09:44:09'); +INSERT INTO `sys_log_login` VALUES ('323', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-05-24 15:02:29'); +INSERT INTO `sys_log_login` VALUES ('324', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-24 15:02:38'); +INSERT INTO `sys_log_login` VALUES ('325', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-25 17:04:24'); +INSERT INTO `sys_log_login` VALUES ('326', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-27 09:57:33'); +INSERT INTO `sys_log_login` VALUES ('327', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-28 09:17:01'); +INSERT INTO `sys_log_login` VALUES ('328', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-29 10:39:30'); +INSERT INTO `sys_log_login` VALUES ('329', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-30 12:30:43'); +INSERT INTO `sys_log_login` VALUES ('330', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-30 14:29:57'); +INSERT INTO `sys_log_login` VALUES ('331', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-05-30 20:42:15'); +INSERT INTO `sys_log_login` VALUES ('332', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-05-30 20:42:21'); +INSERT INTO `sys_log_login` VALUES ('333', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-30 20:42:25'); +INSERT INTO `sys_log_login` VALUES ('334', 'admin', '124.223.48.209', '内网IP', 
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-05-30 20:43:45'); +INSERT INTO `sys_log_login` VALUES ('335', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-05-30 20:47:41'); +INSERT INTO `sys_log_login` VALUES ('336', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-06-01 16:15:03'); +INSERT INTO `sys_log_login` VALUES ('337', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-01 16:15:03'); +INSERT INTO `sys_log_login` VALUES ('338', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-03 10:52:01'); +INSERT INTO `sys_log_login` VALUES ('339', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-08 17:29:40'); +INSERT INTO `sys_log_login` VALUES ('340', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-06-08 17:29:51'); +INSERT INTO `sys_log_login` VALUES ('341', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-08 17:34:35'); +INSERT INTO `sys_log_login` VALUES ('342', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-13 15:16:36'); +INSERT INTO `sys_log_login` VALUES ('343', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-06-19 17:46:21'); +INSERT INTO `sys_log_login` VALUES ('344', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-19 17:46:25'); +INSERT INTO `sys_log_login` VALUES ('345', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-20 20:54:31'); +INSERT INTO `sys_log_login` VALUES ('346', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-22 12:30:51'); +INSERT INTO `sys_log_login` VALUES ('347', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-06-24 20:18:27'); +INSERT INTO `sys_log_login` VALUES ('348', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-24 20:18:33'); +INSERT INTO `sys_log_login` VALUES ('349', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', 
'2023-06-25 20:50:46'); +INSERT INTO `sys_log_login` VALUES ('350', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-26 21:46:53'); +INSERT INTO `sys_log_login` VALUES ('351', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-06-26 22:23:18'); +INSERT INTO `sys_log_login` VALUES ('352', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-06-26 22:31:31'); +INSERT INTO `sys_log_login` VALUES ('353', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-04 15:27:04'); +INSERT INTO `sys_log_login` VALUES ('354', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-06 17:25:43'); +INSERT INTO `sys_log_login` VALUES ('355', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-07 15:41:41'); +INSERT INTO `sys_log_login` VALUES ('356', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-07 15:41:50'); +INSERT INTO `sys_log_login` VALUES ('357', '测试用户', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-07 15:45:27'); +INSERT INTO `sys_log_login` VALUES ('358', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-07 15:45:32'); +INSERT INTO `sys_log_login` VALUES ('359', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-10 09:22:12'); +INSERT INTO `sys_log_login` VALUES ('360', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-10 09:22:16'); +INSERT INTO `sys_log_login` VALUES ('361', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-11 10:46:18'); +INSERT INTO `sys_log_login` VALUES ('362', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-11 11:07:49'); +INSERT INTO `sys_log_login` VALUES ('363', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-11 11:07:54'); +INSERT INTO `sys_log_login` VALUES ('364', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-13 16:28:25'); +INSERT INTO `sys_log_login` VALUES ('365', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; 
Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-14 16:59:21'); +INSERT INTO `sys_log_login` VALUES ('366', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-14 16:59:27'); +INSERT INTO `sys_log_login` VALUES ('367', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-14 16:59:46'); +INSERT INTO `sys_log_login` VALUES ('368', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-14 16:59:51'); +INSERT INTO `sys_log_login` VALUES ('369', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-17 15:38:30'); +INSERT INTO `sys_log_login` VALUES ('370', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-17 15:38:39'); +INSERT INTO `sys_log_login` VALUES ('371', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-18 15:39:22'); +INSERT INTO `sys_log_login` VALUES ('372', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-18 15:39:28'); +INSERT INTO `sys_log_login` VALUES ('373', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-19 16:16:52'); +INSERT INTO `sys_log_login` VALUES ('374', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-19 16:16:57'); +INSERT INTO `sys_log_login` VALUES ('375', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-20 16:21:50'); +INSERT INTO `sys_log_login` VALUES ('376', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-21 14:58:54'); +INSERT INTO `sys_log_login` VALUES ('377', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-21 14:59:04'); +INSERT INTO `sys_log_login` VALUES ('378', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 17:54:15'); +INSERT INTO `sys_log_login` VALUES ('379', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:44:14'); +INSERT INTO `sys_log_login` VALUES ('380', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:44:25'); +INSERT INTO 
`sys_log_login` VALUES ('381', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:44:49'); +INSERT INTO `sys_log_login` VALUES ('382', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '3', '2023-07-22 22:45:05'); +INSERT INTO `sys_log_login` VALUES ('383', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-22 22:45:08'); +INSERT INTO `sys_log_login` VALUES ('384', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '3', '2023-07-22 22:45:13'); +INSERT INTO `sys_log_login` VALUES ('385', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '3', '2023-07-22 22:45:21'); +INSERT INTO `sys_log_login` VALUES ('386', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '0', '2', '2023-07-22 22:45:29'); +INSERT INTO `sys_log_login` VALUES ('387', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:45:35'); +INSERT INTO `sys_log_login` VALUES ('388', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:45:48'); +INSERT INTO `sys_log_login` VALUES ('389', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:45:54'); +INSERT INTO `sys_log_login` VALUES ('390', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:46:13'); +INSERT INTO `sys_log_login` VALUES ('391', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:46:24'); +INSERT INTO `sys_log_login` VALUES ('392', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:52:58'); +INSERT INTO `sys_log_login` VALUES ('393', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:53:03'); +INSERT INTO `sys_log_login` VALUES ('394', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:54:34'); +INSERT INTO `sys_log_login` VALUES ('395', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:54:43'); +INSERT INTO `sys_log_login` VALUES ('396', '测试用户2', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '1', '2023-07-22 22:55:01');
+INSERT INTO `sys_log_login` VALUES ('397', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '1', '0', '2023-07-22 22:55:08');
+INSERT INTO `sys_log_login` VALUES ('398', 'admin', '124.223.48.209', '内网IP', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36', '1', '0', '2023-09-11 13:54:07');
+
+-- ----------------------------
+-- Table structure for sys_menu
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_menu`;
+CREATE TABLE `sys_menu` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `pid` bigint DEFAULT NULL COMMENT '上级ID,一级菜单为0',
+  `name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '菜单名称',
+  `url` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '菜单URL',
+  `authority` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '授权标识(多个用逗号分隔,如:sys:menu:list,sys:menu:save)',
+  `type` tinyint DEFAULT NULL COMMENT '类型 0:菜单 1:按钮 2:接口',
+  `open_style` tinyint DEFAULT NULL COMMENT '打开方式 0:内部 1:外部',
+  `icon` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '菜单图标',
+  `sort` int DEFAULT NULL COMMENT '排序',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE,
+  KEY `idx_pid` (`pid`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=181 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='菜单管理';
+
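The `sys_menu` schema above drives the dynamic menu mentioned in the README: `pid` links each entry to its parent (0 marks a top-level menu), `type` separates menus (0) from buttons (1) and interfaces (2), and `authority` packs one or more permission keys into a comma-separated string such as `sys:menu:list,sys:menu:save`. A minimal read-only sketch of walking that tree, assuming the MySQL 8.0 the README calls for (recursive CTEs require 8.0; the `menu_tree` and `path` names are hypothetical, not part of the dump):

```sql
-- Sketch only: flatten the pid tree of sys_menu for display.
-- Children are ordered under their parent via a zero-padded sort path.
WITH RECURSIVE menu_tree AS (
    SELECT id, pid, name, type, sort, 0 AS depth,
           CAST(LPAD(sort, 4, '0') AS CHAR(200)) AS path   -- anchor fixes the column width
    FROM sys_menu
    WHERE pid = 0 AND deleted = 0                          -- top-level, non-deleted entries
    UNION ALL
    SELECT m.id, m.pid, m.name, m.type, m.sort, t.depth + 1,
           CONCAT(t.path, '/', LPAD(m.sort, 4, '0'))
    FROM sys_menu m
    JOIN menu_tree t ON m.pid = t.id
    WHERE m.deleted = 0
)
SELECT CONCAT(REPEAT('  ', depth), name) AS menu, type
FROM menu_tree
ORDER BY path;
```

The zero-padded `path` keeps siblings in `sort` order and children directly under their parent without a second query.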
+-- ----------------------------
+-- Records of sys_menu
+-- ----------------------------
+INSERT INTO `sys_menu` VALUES ('1', '0', '系统管理', null, null, '0', '0', 'icon-setting', '21', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2023-01-12 10:28:59');
+INSERT INTO `sys_menu` VALUES ('2', '1', '菜单管理', 'sys/menu/index', null, '0', '0', 'icon-menu', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('3', '2', '查看', '', 'sys:menu:list', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('4', '2', '新增', '', 'sys:menu:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('5', '2', '修改', '', 'sys:menu:update,sys:menu:info', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('6', '2', '删除', '', 'sys:menu:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('7', '1', '数据字典', 'sys/dict/type', '', '0', '0', 'icon-insertrowabove', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('8', '7', '查询', '', 'sys:dict:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('9', '7', '新增', '', 'sys:dict:save', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('10', '7', '修改', '', 'sys:dict:update,sys:dict:info', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('11', '7', '删除', '', 'sys:dict:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('13', '1', '岗位管理', 'sys/post/index', '', '0', '0', 'icon-solution', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:06:32');
+INSERT INTO `sys_menu` VALUES ('14', '13', '查询', '', 'sys:post:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('15', '13', '新增', '', 'sys:post:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('16', '13', '修改', '', 'sys:post:update,sys:post:info', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('17', '13', '删除', '', 'sys:post:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('18', '1', '机构管理', 'sys/org/index', '', '0', '0', 'icon-cluster', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:06:25');
+INSERT INTO `sys_menu` VALUES ('19', '18', '查询', '', 'sys:org:list', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('20', '18', '新增', '', 'sys:org:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('21', '18', '修改', '', 'sys:org:update,sys:org:info', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('22', '18', '删除', '', 'sys:org:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('23', '1', '角色管理', 'sys/role/index', '', '0', '0', 'icon-team', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:06:39');
+INSERT INTO `sys_menu` VALUES ('24', '23', '查询', '', 'sys:role:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('25', '23', '新增', '', 'sys:role:save,sys:role:menu,sys:org:list', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('26', '23', '修改', '', 'sys:role:update,sys:role:info,sys:role:menu,sys:org:list,sys:user:page', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('27', '23', '删除', '', 'sys:role:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('28', '1', '用户管理', 'sys/user/index', '', '0', '0', 'icon-user', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:06:16');
+INSERT INTO `sys_menu` VALUES ('29', '28', '查询', '', 'sys:user:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('30', '28', '新增', '', 'sys:user:save,sys:role:list', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26');
+INSERT INTO `sys_menu` VALUES ('31', '28', '修改', '', 
'sys:user:update,sys:user:info,sys:role:list', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('32', '28', '删除', '', 'sys:user:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('33', '0', '应用管理', '', '', '0', '0', 'icon-appstore', '18', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:08:19'); +INSERT INTO `sys_menu` VALUES ('34', '1', '附件管理', 'sys/attachment/index', null, '0', '0', 'icon-folder-fill', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('35', '34', '查看', '', 'sys:attachment:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('36', '34', '上传', '', 'sys:attachment:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('37', '34', '删除', '', 'sys:attachment:delete', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('38', '0', '日志管理', '', '', '0', '0', 'icon-filedone', '19', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:08:14'); +INSERT INTO `sys_menu` VALUES ('39', '38', '登录日志', 'sys/log/login', 'sys:log:login', '0', '0', 'icon-solution', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_menu` VALUES ('40', '33', '消息管理', '', '', '0', '0', 'icon-message', '2', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('41', '40', '短信日志', 'message/sms/log/index', 'sms:log', '0', '0', 'icon-detail', '1', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('42', '40', '短信平台', 'message/sms/platform/index', null, '0', '0', 'icon-whatsapp', '0', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('43', '42', '查看', '', 'sms:platform:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('44', '42', '新增', '', 'sms:platform:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('45', '42', '修改', '', 'sms:platform:update,sms:platform:info', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('46', '42', '删除', '', 'sms:platform:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:01:47', '10000', '2022-09-27 11:01:47'); +INSERT INTO `sys_menu` VALUES ('47', '1', '定时任务', 'quartz/schedule/index', null, '0', '0', 'icon-reloadtime', '0', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('48', '47', '查看', '', 'schedule:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('49', '47', '新增', '', 'schedule:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('50', '47', '修改', '', 'schedule:update,schedule:info', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('51', '47', '删除', '', 
'schedule:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('52', '47', '立即运行', '', 'schedule:run', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('53', '47', '日志', '', 'schedule:log', '1', '0', '', '4', '0', '0', '10000', '2022-09-27 11:02:02', '10000', '2022-09-27 11:02:02'); +INSERT INTO `sys_menu` VALUES ('54', '0', '全局管理', '', '', '0', '0', 'icon-wallet', '20', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2023-01-12 10:29:04'); +INSERT INTO `sys_menu` VALUES ('55', '54', '数据项目管理', 'global-manage/project/index', '', '0', '0', 'icon-detail', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-10-09 12:41:16'); +INSERT INTO `sys_menu` VALUES ('56', '54', '数仓分层展示', 'global-manage/layer/index', '', '0', '0', 'icon-table1', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 17:12:21'); +INSERT INTO `sys_menu` VALUES ('57', '55', '查看', '', 'data-integrate:project:page', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 20:46:51', '10000', '2022-09-27 20:46:51'); +INSERT INTO `sys_menu` VALUES ('58', '55', '新增', '', 'data-integrate:project:save', '1', '0', '', '1', '0', '0', '10000', '2022-09-27 20:46:51', '10000', '2022-09-27 20:46:51'); +INSERT INTO `sys_menu` VALUES ('59', '55', '修改', '', 'data-integrate:project:update,data-integrate:project:info', '1', '0', '', '2', '0', '0', '10000', '2022-09-27 20:46:51', '10000', '2022-09-27 20:46:51'); +INSERT INTO `sys_menu` VALUES ('60', '55', '删除', '', 'data-integrate:project:delete', '1', '0', '', '3', '0', '0', '10000', '2022-09-27 20:46:51', '10000', '2022-09-27 20:46:51'); +INSERT INTO `sys_menu` VALUES ('61', '55', '项目成员', '', 'data-integrate:project:users', '1', '0', '', '0', '0', '0', '10000', '2022-09-27 21:28:39', '10000', '2022-09-27 21:28:39'); +INSERT INTO `sys_menu` VALUES ('62', '55', '添加成员', '', 'data-integrate:project:adduser', '1', '0', '', '4', '0', '0', '10000', '2022-10-07 12:00:15', '10000', '2022-10-07 12:00:25'); +INSERT INTO `sys_menu` VALUES ('63', '56', '查看', '', 'data-integrate:layer:page', '1', '0', '', '0', '0', '0', '10000', '2022-10-08 16:55:11', '10000', '2022-10-08 16:55:11'); +INSERT INTO `sys_menu` VALUES ('66', '56', '修改', '', 'data-integrate:layer:update,data-integrate:layer:info', '1', '0', '', '1', '0', '0', '10000', '2022-10-08 17:30:36', '10000', '2022-10-08 17:30:36'); +INSERT INTO `sys_menu` VALUES ('67', '0', '数据集成', '', '', '0', '0', 'icon-control', '1', '0', '0', '10000', '2022-10-09 12:40:06', '10000', '2022-10-09 12:40:06'); +INSERT INTO `sys_menu` VALUES ('68', '67', '数据库管理', 'data-integrate/database/index', '', '0', '0', 'icon-insertrowright', '0', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-10-09 12:49:18'); +INSERT INTO `sys_menu` VALUES ('69', '67', '文件管理', 'data-integrate/file-category/index', '', '0', '0', 'icon-layout', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-10-28 16:50:34'); +INSERT INTO `sys_menu` VALUES ('70', '67', '数据接入', 'data-integrate/access/index', '', '0', '0', 'icon-rotate-right', '2', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-10-09 12:51:30'); +INSERT INTO `sys_menu` VALUES ('71', '67', '贴源数据', 'data-integrate/ods/index', '', '0', '0', 'icon-border', '3', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-10-09 12:53:18'); +INSERT INTO `sys_menu` VALUES ('72', '0', '数据开发', '', '', '0', '0', 'icon-Function', '2', '0', '0', '10000', 
'2022-10-09 12:56:37', '10000', '2022-10-09 12:56:37'); +INSERT INTO `sys_menu` VALUES ('74', '72', '数据建模', '', '', '0', '0', 'icon-calculator-fill', '0', '0', '1', '10000', '2022-10-09 13:02:24', '10000', '2023-04-07 10:06:18'); +INSERT INTO `sys_menu` VALUES ('75', '74', '维度表', 'data-development/dim/index', '', '0', '0', 'icon-file-markdown', '0', '0', '1', '10000', '2022-10-09 13:02:24', '10000', '2023-04-07 10:06:09'); +INSERT INTO `sys_menu` VALUES ('76', '74', '明细表', 'data-development/dwd/index', '', '0', '0', 'icon-folder-fill', '1', '0', '1', '10000', '2022-10-09 13:02:24', '10000', '2023-04-07 10:06:12'); +INSERT INTO `sys_menu` VALUES ('77', '74', '汇总表', 'data-development/dws/index', '', '0', '0', 'icon-folder-open-fill', '2', '0', '1', '10000', '2022-10-09 13:02:24', '10000', '2023-04-07 10:06:15'); +INSERT INTO `sys_menu` VALUES ('78', '72', '数据生产', 'data-development/production/index', '', '0', '0', 'icon-Console-SQL', '1', '0', '0', '10000', '2022-10-09 13:02:24', '10000', '2023-01-03 21:32:34'); +INSERT INTO `sys_menu` VALUES ('80', '68', '查看', '', 'data-integrate:database:page', '1', '0', '', '0', '0', '0', '10000', '2022-10-09 17:36:31', '10000', '2022-10-09 17:36:31'); +INSERT INTO `sys_menu` VALUES ('81', '68', '新增', '', 'data-integrate:database:save', '1', '0', '', '1', '0', '0', '10000', '2022-10-09 17:36:56', '10000', '2022-10-09 17:38:02'); +INSERT INTO `sys_menu` VALUES ('82', '68', '修改', '', 'data-integrate:database:info,data-integrate:database:update', '1', '0', '', '2', '0', '0', '10000', '2022-10-09 17:37:29', '10000', '2022-10-09 17:38:10'); +INSERT INTO `sys_menu` VALUES ('83', '68', '删除', '', 'data-integrate:database:delete', '1', '0', '', '3', '0', '0', '10000', '2022-10-09 17:37:54', '10000', '2022-10-09 17:37:54'); +INSERT INTO `sys_menu` VALUES ('84', '70', '新增', '', 'data-integrate:access:save', '1', '0', '', '1', '0', '0', '10000', '2022-10-24 22:09:49', '10000', '2022-10-24 22:10:06'); +INSERT INTO `sys_menu` VALUES ('85', '70', '查看', '', 'data-integrate:access:page', '1', '0', '', '0', '0', '0', '10000', '2022-10-24 22:09:49', '10000', '2022-10-24 22:10:38'); +INSERT INTO `sys_menu` VALUES ('86', '70', '修改', '', 'data-integrate:access:update,data-integrate:access:info', '1', '0', '', '2', '0', '0', '10000', '2022-10-24 22:09:49', '10000', '2022-10-24 22:12:14'); +INSERT INTO `sys_menu` VALUES ('87', '70', '删除', '', 'data-integrate:access:delete', '1', '0', '', '3', '0', '0', '10000', '2022-10-24 22:09:49', '10000', '2022-10-24 22:12:19'); +INSERT INTO `sys_menu` VALUES ('88', '70', '发布', '', 'data-integrate:access:release', '1', '0', '', '5', '0', '0', '10000', '2022-10-27 14:32:34', '10000', '2022-10-27 14:32:34'); +INSERT INTO `sys_menu` VALUES ('89', '70', '取消发布', '', 'data-integrate:access:cancle', '1', '0', '', '6', '0', '0', '10000', '2022-10-27 14:33:06', '10000', '2022-10-27 14:33:06'); +INSERT INTO `sys_menu` VALUES ('90', '70', '手动执行', '', 'data-integrate:access:selfhandler', '1', '0', '', '7', '0', '0', '10000', '2022-10-27 22:13:07', '10000', '2022-10-27 22:13:07'); +INSERT INTO `sys_menu` VALUES ('91', '0', '数据治理', '', '', '0', '0', 'icon-insertrowbelow', '3', '0', '0', '10000', '2022-10-29 12:59:30', '10000', '2023-01-20 13:38:57'); +INSERT INTO `sys_menu` VALUES ('92', '91', '元数据', '', '', '0', '0', 'icon-file-exception', '0', '0', '0', '10000', '2022-10-29 13:01:36', '10000', '2023-01-20 13:38:53'); +INSERT INTO `sys_menu` VALUES ('93', '92', '元模型', 'data-governance/metamodel/index', '', '0', '0', 'icon-database', '0', '0', '0', 
'10000', '2022-10-29 13:05:35', '10000', '2023-03-28 11:35:56'); +INSERT INTO `sys_menu` VALUES ('94', '92', '元数据采集', 'data-governance/metadata-collect/index', '', '0', '0', 'icon-right-square', '1', '0', '0', '10000', '2022-10-29 13:05:35', '10000', '2023-01-20 13:38:45'); +INSERT INTO `sys_menu` VALUES ('97', '92', '元数据管理', 'data-governance/metadata-manage/index', '', '0', '0', 'icon-reconciliation', '2', '0', '0', '10000', '2022-10-29 13:05:35', '10000', '2023-01-20 13:38:42'); +INSERT INTO `sys_menu` VALUES ('98', '91', '数据血缘', 'data-governance/data-blood/index', '', '0', '0', 'icon-deleterow', '1', '0', '0', '10000', '2022-10-29 13:13:23', '10000', '2023-01-20 13:38:38'); +INSERT INTO `sys_menu` VALUES ('99', '0', '数据资产', '', '', '0', '0', 'icon-codelibrary-fill', '5', '0', '0', '10000', '2022-10-29 13:48:15', '10000', '2023-05-28 10:39:59'); +INSERT INTO `sys_menu` VALUES ('100', '99', '资产目录', 'data-assets/catalog/index', '', '0', '0', 'icon-minus-square-fill', '0', '0', '0', '10000', '2022-10-29 13:48:53', '10000', '2023-07-19 10:45:35'); +INSERT INTO `sys_menu` VALUES ('101', '99', '资产总览', 'data-assets/resource-overview/index', '', '0', '0', 'icon-aim', '1', '0', '0', '10000', '2022-10-29 13:50:30', '10000', '2023-01-20 13:39:06'); +INSERT INTO `sys_menu` VALUES ('102', '0', '数据服务', '', '', '0', '0', 'icon-transaction', '4', '0', '0', '10000', '2022-10-29 13:52:16', '10000', '2023-05-28 10:39:53'); +INSERT INTO `sys_menu` VALUES ('103', '102', 'API 目录', 'data-service/api-group/index', '', '0', '0', 'icon-filesearch', '0', '0', '0', '10000', '2022-10-29 13:57:03', '10000', '2023-02-16 14:48:41'); +INSERT INTO `sys_menu` VALUES ('104', '102', '数据可视化', 'data-service/data-visualization/index', '', '0', '0', 'icon-areachart', '1', '0', '1', '10000', '2022-10-29 13:57:03', '10000', '2023-02-16 14:48:35'); +INSERT INTO `sys_menu` VALUES ('105', '0', '数据集市', '', '', '0', '0', 'icon-reconciliation', '6', '0', '0', '10000', '2022-10-29 13:57:03', '10000', '2023-01-20 13:39:39'); +INSERT INTO `sys_menu` VALUES ('106', '105', '资产查阅', 'data-market/resource/index', '', '0', '0', 'icon-sever', '0', '0', '0', '10000', '2022-10-29 13:57:03', '10000', '2023-07-22 22:57:50'); +INSERT INTO `sys_menu` VALUES ('108', '105', '我的申请', 'data-market/my-apply/index', '', '0', '0', 'icon-user', '2', '0', '0', '10000', '2022-10-29 13:57:03', '10000', '2023-01-20 13:39:33'); +INSERT INTO `sys_menu` VALUES ('109', '105', '服务审批', 'data-market/service-check/index', '', '0', '0', 'icon-book', '3', '0', '0', '10000', '2022-10-29 13:57:03', '10000', '2023-01-20 13:39:36'); +INSERT INTO `sys_menu` VALUES ('110', '69', '分组新增', '', 'data-integrate:fileCategory:save', '1', '0', '', '0', '0', '0', '10000', '2022-11-14 15:17:40', '10000', '2022-11-14 15:17:55'); +INSERT INTO `sys_menu` VALUES ('111', '69', '分组编辑', '', 'data-integrate:fileCategory:update', '1', '0', '', '1', '0', '0', '10000', '2022-11-14 15:17:40', '10000', '2022-11-14 15:18:20'); +INSERT INTO `sys_menu` VALUES ('112', '69', '分组删除', '', 'data-integrate:fileCategory:delete', '1', '0', '', '2', '0', '0', '10000', '2022-11-14 15:17:40', '10000', '2022-11-14 15:18:44'); +INSERT INTO `sys_menu` VALUES ('113', '69', '分页查询', '', 'data-integrate:file:page', '2', '0', '', '3', '0', '0', '10000', '2022-11-18 14:22:42', '10000', '2022-11-18 14:23:04'); +INSERT INTO `sys_menu` VALUES ('114', '69', '新增', '', 'data-integrate:file:save', '1', '0', '', '4', '0', '0', '10000', '2022-11-18 14:22:42', '10000', '2022-11-18 14:25:48'); +INSERT INTO `sys_menu` VALUES ('115', 
'69', '修改', '', 'data-integrate:file:info,data-integrate:file:update', '1', '0', '', '5', '0', '0', '10000', '2022-11-18 14:22:42', '10000', '2022-11-18 14:26:27'); +INSERT INTO `sys_menu` VALUES ('116', '69', '删除', '', 'data-integrate:file:delete', '1', '0', '', '6', '0', '0', '10000', '2022-11-18 14:22:42', '10000', '2022-11-18 14:27:04'); +INSERT INTO `sys_menu` VALUES ('122', '143', 'Flink 集群实例', 'data-development/cluster/index', '', '0', '0', 'icon-appstore-fill', '0', '0', '0', '10000', '2022-12-03 11:21:39', '10000', '2023-01-18 13:53:44'); +INSERT INTO `sys_menu` VALUES ('123', '122', '查询', '', 'data-development:cluster:page', '2', '0', '', '0', '0', '0', '10000', '2022-12-03 11:22:35', '10000', '2022-12-03 11:22:35'); +INSERT INTO `sys_menu` VALUES ('124', '122', '添加', '', 'data-development:cluster:save', '1', '0', '', '1', '0', '0', '10000', '2022-12-03 11:23:09', '10000', '2022-12-03 11:23:09'); +INSERT INTO `sys_menu` VALUES ('125', '122', '修改', '', 'data-development:cluster:info,data-development:cluster:update', '1', '0', '', '2', '0', '0', '10000', '2022-12-03 11:24:47', '10000', '2022-12-03 11:24:47'); +INSERT INTO `sys_menu` VALUES ('126', '122', '删除', '', 'data-development:cluster:delete', '1', '0', '', '3', '0', '0', '10000', '2022-12-03 11:25:10', '10000', '2022-12-03 11:25:10'); +INSERT INTO `sys_menu` VALUES ('127', '143', 'Hadoop 集群配置', 'data-development/cluster-configuration/index', '', '0', '0', 'icon-insertrowabove', '1', '0', '0', '10000', '2022-12-21 20:39:34', '10000', '2023-01-18 13:53:50'); +INSERT INTO `sys_menu` VALUES ('128', '127', '查询', '', 'data-development:cluster-configuration:page', '1', '0', '', '0', '0', '0', '10000', '2022-12-21 20:42:02', '10000', '2022-12-21 20:42:02'); +INSERT INTO `sys_menu` VALUES ('129', '127', '添加', '', 'data-development:cluster-configuration:save', '1', '0', '', '1', '0', '0', '10000', '2022-12-21 20:42:39', '10000', '2022-12-21 20:42:39'); +INSERT INTO `sys_menu` VALUES ('130', '127', '修改', '', 'data-development:cluster-configuration:update,data-development:cluster-configuration:info', '1', '0', '', '2', '0', '0', '10000', '2022-12-21 20:43:11', '10000', '2022-12-21 20:43:11'); +INSERT INTO `sys_menu` VALUES ('131', '127', '删除', '', 'data-development:cluster-configuration:delete', '1', '0', '', '3', '0', '0', '10000', '2022-12-21 20:43:35', '10000', '2022-12-21 20:43:35'); +INSERT INTO `sys_menu` VALUES ('132', '72', '配置中心', 'data-development/sys-config/index', '', '0', '0', 'icon-project', '7', '0', '0', '10000', '2022-12-28 17:45:56', '10000', '2023-01-14 19:13:59'); +INSERT INTO `sys_menu` VALUES ('133', '72', '运维中心', 'data-development/task-history/index', '', '0', '0', 'icon-send', '4', '0', '0', '10000', '2023-01-03 21:30:58', '10000', '2023-01-14 19:13:11'); +INSERT INTO `sys_menu` VALUES ('135', '142', '调度管理', 'data-development/schedule/index', '', '0', '0', 'icon-calendar-check', '0', '0', '0', '10000', '2023-01-14 19:11:46', '10000', '2023-01-18 13:52:27'); +INSERT INTO `sys_menu` VALUES ('136', '135', '查询', '', 'data-development:schedule:page', '2', '0', '', '0', '0', '0', '10000', '2023-01-14 19:17:04', '10000', '2023-01-14 19:17:04'); +INSERT INTO `sys_menu` VALUES ('137', '135', '新增', '', 'data-development:schedule:save', '1', '0', '', '1', '0', '0', '10000', '2023-01-14 19:17:28', '10000', '2023-01-14 19:17:28'); +INSERT INTO `sys_menu` VALUES ('138', '135', '编辑', '', 'data-development:schedule:info,data-development:schedule:update', '1', '0', '', '2', '0', '0', '10000', '2023-01-14 19:17:54', '10000', 
'2023-01-14 19:17:54'); +INSERT INTO `sys_menu` VALUES ('139', '135', '删除', '', 'data-development:schedule:delete', '1', '0', '', '3', '0', '0', '10000', '2023-01-14 19:18:13', '10000', '2023-01-14 19:18:13'); +INSERT INTO `sys_menu` VALUES ('141', '135', '执行', '', 'data-development:schedule:run', '1', '0', '', '4', '0', '0', '10000', '2023-01-17 17:05:56', '10000', '2023-01-17 17:06:26'); +INSERT INTO `sys_menu` VALUES ('142', '72', '调度中心', '', '', '0', '0', 'icon-calendar', '3', '0', '0', '10000', '2023-01-18 13:49:14', '10000', '2023-01-18 13:51:10'); +INSERT INTO `sys_menu` VALUES ('143', '72', '资源中心', '', '', '0', '0', 'icon-Partition', '6', '0', '0', '10000', '2023-01-18 13:52:46', '10000', '2023-01-18 13:53:37'); +INSERT INTO `sys_menu` VALUES ('144', '142', '调度记录', 'data-development/schedule-record/index', '', '0', '0', 'icon-insertrowabove', '1', '0', '0', '10000', '2023-01-18 15:59:03', '10000', '2023-01-18 15:59:22'); +INSERT INTO `sys_menu` VALUES ('145', '144', '查询', '', 'data-development:schedule:record:page', '2', '0', '', '0', '0', '0', '10000', '2023-01-18 16:00:04', '10000', '2023-01-18 16:00:04'); +INSERT INTO `sys_menu` VALUES ('146', '144', '删除', '', 'data-development:schedule:record:delete', '1', '0', '', '1', '0', '0', '10000', '2023-01-18 16:00:30', '10000', '2023-01-18 16:00:30'); +INSERT INTO `sys_menu` VALUES ('147', '135', '发布', '', 'data-development:schedule:release', '1', '0', '', '5', '0', '0', '10000', '2023-01-19 21:45:38', '10000', '2023-01-19 21:46:34'); +INSERT INTO `sys_menu` VALUES ('148', '135', '取消发布', '', 'data-development:schedule:cancle', '1', '0', '', '6', '0', '0', '10000', '2023-01-19 21:47:00', '10000', '2023-01-19 21:47:00'); +INSERT INTO `sys_menu` VALUES ('149', '103', '修改', '', 'data-service:api-group:info,data-service:api-group:update', '2', '0', '', '1', '0', '0', '10000', '2023-01-30 11:41:31', '10000', '2023-02-06 16:04:35'); +INSERT INTO `sys_menu` VALUES ('150', '103', '删除', '', 'data-service:api-group:delete', '2', '0', '', '3', '0', '0', '10000', '2023-01-30 11:42:01', '10000', '2023-02-06 16:04:50'); +INSERT INTO `sys_menu` VALUES ('151', '103', '添加', '', 'data-service:api-group:save', '2', '0', '', '2', '0', '0', '10000', '2023-01-30 11:43:09', '10000', '2023-01-30 11:43:09'); +INSERT INTO `sys_menu` VALUES ('152', '103', '查看API', '', 'data-service:api-config:page', '2', '0', '', '0', '0', '0', '10000', '2023-02-06 16:04:24', '10000', '2023-02-06 16:11:16'); +INSERT INTO `sys_menu` VALUES ('153', '103', '新增API', '', 'data-service:api-config:save', '1', '0', '', '0', '0', '0', '10000', '2023-02-06 16:12:02', '10000', '2023-02-06 16:12:02'); +INSERT INTO `sys_menu` VALUES ('154', '103', '修改API', '', 'data-service:api-config:update,data-service:api-config:info', '1', '0', '', '0', '0', '0', '10000', '2023-02-06 16:12:33', '10000', '2023-02-06 16:12:33'); +INSERT INTO `sys_menu` VALUES ('155', '103', '删除API', '', 'data-service:api-config:delete', '1', '0', '', '0', '0', '0', '10000', '2023-02-06 16:12:58', '10000', '2023-02-14 09:56:38'); +INSERT INTO `sys_menu` VALUES ('156', '103', '上线', '', 'data-service:api-config:online', '1', '0', '', '0', '0', '0', '10000', '2023-02-15 11:15:52', '10000', '2023-02-15 11:16:23'); +INSERT INTO `sys_menu` VALUES ('157', '103', '下线', '', 'data-service:api-config:offline', '1', '0', '', '0', '0', '0', '10000', '2023-02-15 11:16:37', '10000', '2023-02-15 11:16:37'); +INSERT INTO `sys_menu` VALUES ('158', '102', 'API 权限', 'data-service/app/index', '', '0', '0', 'icon-propertysafety', '1', '0', '0', 
'10000', '2023-02-16 14:48:26', '10000', '2023-02-16 14:56:47'); +INSERT INTO `sys_menu` VALUES ('159', '158', '查询', '', 'data-service:app:page', '2', '0', '', '0', '0', '0', '10000', '2023-02-16 14:50:15', '10000', '2023-02-16 14:50:15'); +INSERT INTO `sys_menu` VALUES ('160', '158', '保存', '', 'data-service:app:save', '1', '0', '', '1', '0', '0', '10000', '2023-02-16 14:50:39', '10000', '2023-02-16 14:50:39'); +INSERT INTO `sys_menu` VALUES ('161', '158', '更新', '', 'data-service:app:update,data-service:app:info', '1', '0', '', '2', '0', '0', '10000', '2023-02-16 14:51:10', '10000', '2023-02-16 14:51:10'); +INSERT INTO `sys_menu` VALUES ('162', '158', '删除', '', 'data-service:app:delete', '1', '0', '', '3', '0', '0', '10000', '2023-02-16 14:51:28', '10000', '2023-02-16 14:51:39'); +INSERT INTO `sys_menu` VALUES ('163', '158', '授权', '', 'data-service:app:auth', '1', '0', '', '4', '0', '0', '10000', '2023-02-17 11:37:39', '10000', '2023-02-17 11:37:39'); +INSERT INTO `sys_menu` VALUES ('164', '158', '取消授权', '', 'data-service:app:cancel-auth', '1', '0', '', '5', '0', '0', '10000', '2023-02-20 14:11:42', '10000', '2023-02-20 14:13:31'); +INSERT INTO `sys_menu` VALUES ('165', '102', 'API 日志', 'data-service/log/index', '', '0', '0', 'icon-detail', '2', '0', '0', '10000', '2023-02-22 14:47:37', '10000', '2023-02-22 14:48:31'); +INSERT INTO `sys_menu` VALUES ('166', '165', '查询', '', 'data-service:log:page', '2', '0', '', '0', '0', '0', '10000', '2023-02-22 14:49:07', '10000', '2023-02-22 14:49:07'); +INSERT INTO `sys_menu` VALUES ('167', '165', '删除', '', 'data-service:log:delete', '1', '0', '', '1', '0', '0', '10000', '2023-02-22 14:49:25', '10000', '2023-02-22 14:49:25'); +INSERT INTO `sys_menu` VALUES ('168', '94', '查询', '', 'data-governance:metadata-collect:page', '2', '0', '', '0', '0', '0', '10000', '2023-04-03 10:39:26', '10000', '2023-04-03 10:39:26'); +INSERT INTO `sys_menu` VALUES ('169', '94', '编辑', '', 'data-governance:metadata-collect:info,data-governance:metadata-collect:update', '1', '0', '', '1', '0', '0', '10000', '2023-04-03 10:40:06', '10000', '2023-04-03 10:40:06'); +INSERT INTO `sys_menu` VALUES ('170', '94', '保存', '', 'data-governance:metadata-collect:save', '1', '0', '', '2', '0', '0', '10000', '2023-04-03 10:40:25', '10000', '2023-04-03 10:40:42'); +INSERT INTO `sys_menu` VALUES ('171', '94', '删除', '', 'data-governance:metadata-collect:delete', '1', '0', '', '3', '0', '0', '10000', '2023-04-03 10:41:05', '10000', '2023-04-03 10:41:05'); +INSERT INTO `sys_menu` VALUES ('172', '94', '发布', '', 'data-governance:metadata-collect:release', '1', '0', '', '4', '0', '0', '10000', '2023-04-05 12:21:56', '10000', '2023-04-05 12:21:56'); +INSERT INTO `sys_menu` VALUES ('173', '94', '取消发布', '', 'data-governance:metadata-collect:cancel', '1', '0', '', '5', '0', '0', '10000', '2023-04-05 12:22:19', '10000', '2023-04-05 12:22:19'); +INSERT INTO `sys_menu` VALUES ('174', '94', '执行', '', 'data-governance:metadata-collect:hand-run', '1', '0', '', '6', '0', '0', '10000', '2023-04-06 09:59:53', '10000', '2023-04-06 09:59:53'); +INSERT INTO `sys_menu` VALUES ('175', '91', '数据标准', '', '', '0', '0', 'icon-calculator', '2', '0', '0', '10000', '2023-05-08 09:39:12', '10000', '2023-05-08 09:39:25'); +INSERT INTO `sys_menu` VALUES ('176', '175', '标准管理', 'data-governance/data-standard/index', '', '0', '0', 'icon-wallet', '0', '0', '0', '10000', '2023-05-08 09:41:37', '10000', '2023-05-08 09:42:17'); +INSERT INTO `sys_menu` VALUES ('177', '91', '数据质量', '', '', '0', '0', 'icon-creditcard', '3', '0', '0', 
'10000', '2023-05-28 09:46:54', '10000', '2023-05-28 09:46:54');
+INSERT INTO `sys_menu` VALUES ('178', '177', '质量规则', 'data-governance/quality-rule/index', '', '0', '0', 'icon-USB-fill', '0', '0', '0', '10000', '2023-05-28 09:47:52', '10000', '2023-05-28 09:47:52');
+INSERT INTO `sys_menu` VALUES ('179', '177', '规则配置', 'data-governance/quality-rule/rule-category', '', '0', '0', 'icon-formatpainter-fill', '1', '0', '0', '10000', '2023-05-29 10:40:34', '10000', '2023-05-29 10:40:56');
+INSERT INTO `sys_menu` VALUES ('180', '177', '质量任务', 'data-governance/quality-task/index', '', '0', '0', 'icon-database-fill', '2', '0', '0', '10000', '2023-06-24 21:43:46', '10000', '2023-06-24 21:44:14');
+
+-- ----------------------------
+-- Table structure for sys_org
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_org`;
+CREATE TABLE `sys_org` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `pid` bigint DEFAULT NULL COMMENT '上级ID',
+  `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '机构名称',
+  `sort` int DEFAULT NULL COMMENT '排序',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `project_id` bigint NOT NULL COMMENT '项目id',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE,
+  KEY `idx_pid` (`pid`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='机构管理';
+
+-- ----------------------------
+-- Records of sys_org
+-- ----------------------------
+INSERT INTO `sys_org` VALUES ('1', '0', '测试机构', '0', '0', '0', '10002', '10000', '2022-10-08 11:16:10', '10000', '2022-10-08 11:16:10');
+INSERT INTO `sys_org` VALUES ('2', '0', '测试项目2机构', '0', '0', '0', '10004', '10000', '2022-11-25 19:59:21', '10000', '2022-11-25 19:59:21');
+INSERT INTO `sys_org` VALUES ('3', '2', '测试项目2子机构', '0', '0', '0', '10004', '10000', '2022-11-25 19:59:38', '10000', '2022-11-25 19:59:38');
+
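`sys_org` repeats the bookkeeping columns used throughout this dump: `version` for optimistic locking, `deleted` for soft deletes, `creator`/`updater` plus timestamps for auditing, and `project_id` to pin each row to a tenant project. A minimal sketch of how such columns are conventionally exercised (MyBatis-Plus, listed in the README's stack, applies the same compare-and-bump pattern through its optimistic-locker plugin; the literal values below are illustrative only, taken from the seed rows above):

```sql
-- Sketch only: optimistic-lock update -- succeeds while `version` still
-- holds the value the client read, then bumps it by one.
UPDATE sys_org
SET sort = 1, version = version + 1,
    updater = 10000, update_time = NOW()
WHERE id = 1 AND version = 0 AND deleted = 0;

-- Sketch only: soft delete -- rows are flagged rather than removed,
-- which is why every read in this schema filters on deleted = 0.
UPDATE sys_org
SET deleted = 1, version = version + 1,
    updater = 10000, update_time = NOW()
WHERE id = 3 AND version = 0;
```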
+-- ----------------------------
+-- Table structure for sys_post
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_post`;
+CREATE TABLE `sys_post` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `post_code` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '岗位编码',
+  `post_name` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '岗位名称',
+  `sort` int DEFAULT NULL COMMENT '排序',
+  `status` tinyint DEFAULT NULL COMMENT '状态 0:停用 1:正常',
+  `project_id` bigint NOT NULL COMMENT '项目id',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='岗位管理';
+
+-- ----------------------------
+-- Records of sys_post
+-- ----------------------------
+INSERT INTO `sys_post` VALUES ('1', 'test', '测试岗位', '0', '1', '10002', '0', '0', '10000', '2022-10-08 11:16:25', '10000', '2022-10-08 11:16:25');
+INSERT INTO `sys_post` VALUES ('2', '测试项目2岗位', '测试项目2', '0', '1', '10004', '0', '0', '10000', '2022-11-25 19:59:59', '10000', '2022-11-25 20:00:09');
+
+-- ----------------------------
+-- Table structure for sys_role
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_role`;
+CREATE TABLE `sys_role` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '角色名称',
+  `remark` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '备注',
+  `data_scope` tinyint DEFAULT NULL COMMENT '数据范围 0:全部数据 1:本部门及子部门数据 2:本部门数据 3:本人数据 4:自定义数据',
+  `org_id` bigint DEFAULT NULL COMMENT '机构ID',
+  `project_id` int NOT NULL COMMENT '项目id',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE,
+  KEY `idx_org_id` (`org_id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='角色管理';
+
+-- ----------------------------
+-- Records of sys_role
+-- ----------------------------
+INSERT INTO `sys_role` VALUES ('1', '测试角色', '', '3', null, '10002', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-11-25 20:01:43');
+INSERT INTO `sys_role` VALUES ('2', '测试项目2管理员', '测试项目2管理员', '0', null, '10004', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-26 12:09:12');
+INSERT INTO `sys_role` VALUES ('3', '测试项目角色', '测试项目角色', '0', null, '10002', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2023-09-11 14:01:04');
+
+-- ----------------------------
+-- Table structure for sys_role_data_scope
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_role_data_scope`;
+CREATE TABLE `sys_role_data_scope` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `role_id` bigint DEFAULT NULL COMMENT '角色ID',
+  `org_id` bigint DEFAULT NULL COMMENT '机构ID',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE,
+  KEY `idx_role_id` (`role_id`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='角色数据权限';
+
+-- ----------------------------
+-- Records of sys_role_data_scope
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for sys_role_menu
+-- ----------------------------
+DROP TABLE IF EXISTS `sys_role_menu`;
+CREATE TABLE `sys_role_menu` (
+  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
+  `role_id` bigint DEFAULT NULL COMMENT '角色ID',
+  `menu_id` bigint DEFAULT NULL COMMENT '菜单ID',
+  `version` int DEFAULT NULL COMMENT '版本号',
+  `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除',
+  `creator` bigint DEFAULT NULL COMMENT '创建者',
+  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
+  `updater` bigint DEFAULT NULL COMMENT '更新者',
+  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
+  PRIMARY KEY (`id`) USING BTREE,
+  KEY `idx_role_id` (`role_id`) USING BTREE,
+  KEY `idx_menu_id` (`menu_id`) USING BTREE
+) ENGINE=InnoDB AUTO_INCREMENT=189 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='角色菜单关系';
+
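`sys_role_menu` is the join table of the role model: each row grants one menu (and, through that menu's comma-separated `authority` column, its permission keys) to one role. A minimal sketch of the grant lookup, again assuming MySQL and using only columns defined above; role 2 is the seeded 测试项目2管理员 (test project 2 administrator):

```sql
-- Sketch only: list the live menus granted to role 2, i.e. the authority
-- keys the application layer would split and attach to that role's users.
SELECT m.id, m.name, m.authority
FROM sys_role_menu rm
JOIN sys_menu m ON m.id = rm.menu_id AND m.deleted = 0
WHERE rm.role_id = 2
  AND rm.deleted = 0          -- revoked grants are soft-deleted, not removed
ORDER BY m.pid, m.sort;
```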
+-- ----------------------------
+-- Records of sys_role_menu
+-- ----------------------------
+INSERT INTO `sys_role_menu` VALUES ('1', '1', '54', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('2', '1', '55', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('3', '1', '57', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('4', '1', '61', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('5', '1', '58', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('6', '1', '59', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('7', '1', '60', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('8', '1', '62', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('9', '1', '56', '0', '1', '10000', '2022-10-08 11:16:41', '10000', '2022-10-08 11:16:41');
+INSERT INTO `sys_role_menu` VALUES ('10', '1', '63', '0', '1', '10000', '2022-10-17 13:49:02', '10000', '2022-10-17 13:49:02');
+INSERT INTO `sys_role_menu` VALUES ('11', '2', '1', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('12', '2', '2', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('13', '2', '67', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('14', '2', '68', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('15', '2', '80', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('16', '2', '81', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('17', '2', '82', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('18', '2', '83', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('19', '2', '69', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('20', '2', '110', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('21', '2', '111', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('22', '2', '112', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('23', '2', '113', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('24', '2', '114', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('25', '2', '115', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('26', '2', '116', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('27', '2', '70', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22');
+INSERT INTO `sys_role_menu` VALUES ('28', 
'2', '85', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('29', '2', '84', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('30', '2', '86', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('31', '2', '87', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('32', '2', '88', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('33', '2', '89', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('34', '2', '90', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('35', '2', '71', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('36', '2', '72', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('37', '2', '73', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('38', '2', '74', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('39', '2', '75', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('40', '2', '76', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('41', '2', '77', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('42', '2', '78', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('43', '2', '79', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('44', '2', '91', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('45', '2', '92', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('46', '2', '93', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('47', '2', '94', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('48', '2', '97', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('49', '2', '98', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('50', '2', '99', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('51', '2', '100', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('52', '2', '101', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('53', '2', '102', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('54', '2', '103', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('55', '2', '104', '0', '1', '10000', '2022-11-25 20:01:22', '10000', 
'2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('56', '2', '105', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('57', '2', '106', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('58', '2', '107', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('59', '2', '108', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('60', '2', '109', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('61', '2', '54', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('62', '2', '55', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('63', '2', '57', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('64', '2', '61', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('65', '2', '58', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('66', '2', '59', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('67', '2', '60', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('68', '2', '62', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('69', '2', '56', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('70', '2', '63', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('71', '2', '66', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('72', '2', '33', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('73', '2', '40', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('74', '2', '42', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('75', '2', '43', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('76', '2', '44', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('77', '2', '45', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('78', '2', '46', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('79', '2', '41', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('80', '2', '38', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('81', '2', '39', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('82', '2', '3', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('83', 
'2', '28', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('84', '2', '29', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('85', '2', '30', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('86', '2', '31', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('87', '2', '32', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('88', '2', '18', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('89', '2', '19', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('90', '2', '20', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('91', '2', '21', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('92', '2', '22', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('93', '2', '13', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('94', '2', '14', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('95', '2', '15', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('96', '2', '16', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('97', '2', '17', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('98', '2', '23', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('99', '2', '24', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('100', '2', '25', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('101', '2', '26', '0', '1', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('102', '2', '27', '0', '0', '10000', '2022-11-25 20:01:22', '10000', '2022-11-25 20:01:22'); +INSERT INTO `sys_role_menu` VALUES ('103', '3', '54', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('104', '3', '55', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('105', '3', '56', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('106', '3', '1', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('107', '3', '2', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('108', '3', '67', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('109', '3', '68', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('110', '3', '80', '0', '0', '10000', '2022-11-25 20:10:26', '10000', 
'2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('111', '3', '81', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('112', '3', '82', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('113', '3', '83', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('114', '3', '69', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('115', '3', '110', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('116', '3', '111', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('117', '3', '112', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('118', '3', '113', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('119', '3', '114', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('120', '3', '115', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('121', '3', '116', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('122', '3', '70', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('123', '3', '85', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('124', '3', '84', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('125', '3', '86', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('126', '3', '87', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('127', '3', '88', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('128', '3', '89', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('129', '3', '90', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('130', '3', '71', '0', '0', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('131', '3', '72', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('132', '3', '73', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('133', '3', '74', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('134', '3', '75', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('135', '3', '76', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('136', '3', '77', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('137', '3', '78', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO 
`sys_role_menu` VALUES ('138', '3', '79', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('139', '3', '91', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('140', '3', '92', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('141', '3', '93', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('142', '3', '94', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('143', '3', '97', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('144', '3', '98', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('145', '3', '99', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('146', '3', '100', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('147', '3', '101', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('148', '3', '102', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('149', '3', '103', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('150', '3', '104', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('151', '3', '105', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('152', '3', '106', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('153', '3', '107', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('154', '3', '108', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('155', '3', '109', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('156', '3', '57', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('157', '3', '61', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('158', '3', '62', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('159', '3', '63', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('160', '3', '3', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('161', '3', '28', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('162', '3', '29', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('163', '3', '30', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('164', '3', '31', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('165', '3', 
'32', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('166', '3', '18', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('167', '3', '19', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('168', '3', '20', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('169', '3', '21', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('170', '3', '22', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('171', '3', '13', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('172', '3', '14', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('173', '3', '15', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('174', '3', '16', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('175', '3', '17', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('176', '3', '23', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('177', '3', '24', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('178', '3', '25', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('179', '3', '26', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('180', '3', '27', '0', '1', '10000', '2022-11-25 20:10:26', '10000', '2022-11-25 20:10:26'); +INSERT INTO `sys_role_menu` VALUES ('181', '3', '26', '0', '1', '10000', '2022-11-26 12:07:15', '10000', '2022-11-26 12:07:15'); +INSERT INTO `sys_role_menu` VALUES ('182', '3', '58', '0', '1', '10000', '2022-11-26 12:10:00', '10000', '2022-11-26 12:10:00'); +INSERT INTO `sys_role_menu` VALUES ('183', '3', '59', '0', '1', '10000', '2022-11-26 12:10:00', '10000', '2022-11-26 12:10:00'); +INSERT INTO `sys_role_menu` VALUES ('184', '3', '60', '0', '1', '10000', '2022-11-26 12:10:00', '10000', '2022-11-26 12:10:00'); +INSERT INTO `sys_role_menu` VALUES ('185', '3', '105', '0', '1', '10000', '2023-07-22 22:44:46', '10000', '2023-07-22 22:44:46'); +INSERT INTO `sys_role_menu` VALUES ('186', '3', '106', '0', '1', '10000', '2023-07-22 22:44:46', '10000', '2023-07-22 22:44:46'); +INSERT INTO `sys_role_menu` VALUES ('187', '3', '108', '0', '1', '10000', '2023-07-22 22:44:46', '10000', '2023-07-22 22:44:46'); +INSERT INTO `sys_role_menu` VALUES ('188', '3', '109', '0', '1', '10000', '2023-07-22 22:44:46', '10000', '2023-07-22 22:44:46'); + +-- ---------------------------- +-- Table structure for sys_user +-- ---------------------------- +DROP TABLE IF EXISTS `sys_user`; +CREATE TABLE `sys_user` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id', + `username` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL COMMENT '用户名', + `password` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '密码', + `real_name` varchar(50) CHARACTER SET utf8mb4 COLLATE 
utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '姓名', + `avatar` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '头像', + `gender` tinyint DEFAULT NULL COMMENT '性别 0:男 1:女 2:未知', + `email` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '邮箱', + `mobile` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '手机号', + `org_id` bigint DEFAULT NULL COMMENT '机构ID', + `super_admin` tinyint DEFAULT NULL COMMENT '超级管理员 0:否 1:是', + `status` tinyint DEFAULT NULL COMMENT '状态 0:停用 1:正常', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=10003 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='用户管理'; + +-- ---------------------------- +-- Records of sys_user +-- ---------------------------- +INSERT INTO `sys_user` VALUES ('10000', 'admin', '{bcrypt}$2a$10$mW/yJPHjyueQ1g26WNBz0uxVPa0GQdJO1fFZmqdkqgMTGnyszlXxu', 'admin', 'https://www.zrxlh.top/wp-content/uploads/2020/03/cropped-20150917091836_dFNP4-1.jpeg', '0', '985134801@qq.com', '13042238929', null, '1', '1', '0', '0', '10000', '2022-09-27 11:01:26', '10000', '2022-09-27 11:01:26'); +INSERT INTO `sys_user` VALUES ('10001', '测试用户', '{bcrypt}$2a$10$fbYBagGEe60fcaRzKHKDSOOdiQPlHgHGsDzE2WmnoRD5bR8oEiUxy', '测试用户', null, '0', '', '13677677652', '1', '0', '1', '0', '0', '10000', '2022-10-08 11:17:16', '10000', '2023-07-22 22:46:11'); +INSERT INTO `sys_user` VALUES ('10002', '测试用户2', '{bcrypt}$2a$10$Ven/NC9X.Ii7mlcxpH2wceVTviAn9228vIu6R1oXwHI4Ac3IaB7c2', '测试用户2', null, '0', '', '13566566565', '1', '0', '1', '0', '0', '10000', '2022-10-08 16:37:20', '10000', '2023-07-22 22:46:08'); +
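The seeded passwords above use Spring Security's DelegatingPasswordEncoder storage format: the `{bcrypt}` prefix records which encoder produced the hash. A minimal sketch of producing and checking such a hash (the class name and the raw password here are illustrative placeholders, not the actual seeded credentials):

```java
import org.springframework.security.crypto.factory.PasswordEncoderFactories;
import org.springframework.security.crypto.password.PasswordEncoder;

// Illustrative only: shows the "{bcrypt}$2a$10$..." format used in the sys_user rows above.
public class SeedPasswordSketch {
	public static void main(String[] args) {
		PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
		String hash = encoder.encode("change-me");               // placeholder raw password
		System.out.println(hash);                                // e.g. {bcrypt}$2a$10$...
		System.out.println(encoder.matches("change-me", hash));  // true
	}
}
```

+-- ---------------------------- +-- Table structure for sys_user_post +-- ---------------------------- +DROP TABLE IF EXISTS `sys_user_post`; +CREATE TABLE `sys_user_post` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id', + `user_id` bigint DEFAULT NULL COMMENT '用户ID', + `post_id` bigint DEFAULT NULL COMMENT '岗位ID', + `version` int DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE, + KEY `idx_user_id` (`user_id`) USING BTREE, + KEY `idx_post_id` (`post_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='用户岗位关系'; + +-- ---------------------------- +-- Records of sys_user_post +-- ---------------------------- +INSERT INTO `sys_user_post` VALUES ('1', '10001', '1', '0', '0', '10000', '2022-10-08 11:17:16', '10000', '2022-10-08 11:17:16'); +INSERT INTO `sys_user_post` VALUES ('2', '10002', '1', '0', '0', '10000', '2022-10-08 16:37:20', '10000', '2022-10-08 16:37:20'); + +-- ---------------------------- +-- Table structure for sys_user_role +-- ---------------------------- +DROP TABLE IF EXISTS `sys_user_role`; +CREATE TABLE `sys_user_role` ( + `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id', + `role_id` bigint DEFAULT NULL COMMENT '角色ID', + `user_id` bigint DEFAULT NULL COMMENT '用户ID', + `version` int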
DEFAULT NULL COMMENT '版本号', + `deleted` tinyint DEFAULT NULL COMMENT '删除标识 0:正常 1:已删除', + `creator` bigint DEFAULT NULL COMMENT '创建者', + `create_time` datetime DEFAULT NULL COMMENT '创建时间', + `updater` bigint DEFAULT NULL COMMENT '更新者', + `update_time` datetime DEFAULT NULL COMMENT '更新时间', + PRIMARY KEY (`id`) USING BTREE, + KEY `idx_role_id` (`role_id`) USING BTREE, + KEY `idx_user_id` (`user_id`) USING BTREE +) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC COMMENT='用户角色关系'; + +-- ---------------------------- +-- Records of sys_user_role +-- ---------------------------- +INSERT INTO `sys_user_role` VALUES ('1', '1', '10001', '0', '1', '10000', '2022-10-08 11:17:16', '10000', '2022-10-08 11:17:16'); +INSERT INTO `sys_user_role` VALUES ('2', '1', '10002', '0', '1', '10000', '2022-10-08 16:37:20', '10000', '2022-10-08 16:37:20'); +INSERT INTO `sys_user_role` VALUES ('3', '3', '10001', '0', '1', '10000', '2022-11-25 21:43:23', '10000', '2022-11-25 21:43:23'); +INSERT INTO `sys_user_role` VALUES ('4', '3', '10001', '0', '1', '10001', '2022-11-26 13:04:21', '10001', '2022-11-26 13:04:21'); +INSERT INTO `sys_user_role` VALUES ('5', '3', '10001', '0', '0', '10001', '2022-11-26 13:04:40', '10001', '2022-11-26 13:04:40'); +INSERT INTO `sys_user_role` VALUES ('6', '3', '10002', '0', '0', '10000', '2023-07-22 22:46:08', '10000', '2023-07-22 22:46:08'); diff --git a/srt-cloud-api/pom.xml b/srt-cloud-api/pom.xml new file mode 100644 index 0000000..cd08cbf --- /dev/null +++ b/srt-cloud-api/pom.xml @@ -0,0 +1,132 @@ + + + net.srt + srt-cloud + 2.0.0 + + 4.0.0 + srt-cloud-api + jar + + + + net.srt + srt-cloud-common + 2.0.0 + + + net.srt + srt-cloud-dbswitch + 2.0.0 + + + + org.springframework.boot + spring-boot-starter-jdbc + + + jsqlparser + com.github.jsqlparser + + + mysql + mysql-connector-java + + + mysql + mysql-connector-java + + + org.postgresql + postgresql + + + org.postgresql + postgresql + + + com.oracle.ojdbc + ojdbc8 + + + com.oracle.ojdbc + orai18n + + + com.microsoft.sqlserver + sqljdbc6.0 + + + com.microsoft.sqlserver + sqljdbc6.0 + + + com.microsoft.sqlserver + msbase + + + com.microsoft.sqlserver + msutil + + + com.microsoft.sqlserver + mssqlserver + + + com.pivotal + greenplum-jdbc + + + com.dameng + dm-jdbc + + + com.kingbase + kingbase-jdbc + + + org.mariadb.jdbc + mariadb-java-client + + + com.ibm.db2.jcc + db2jcc + + + org.xerial + sqlite-jdbc + + + org.apache.hive + hive-jdbc + + + com.sybase + jconn4 + + + com.oscar + oscar-jdbc + + + com.gbase.jdbc + gbase-connector-java + + + + + org.springframework.cloud + spring-cloud-starter-openfeign + + + org.springframework.cloud + spring-cloud-starter-loadbalancer + + + com.github.ben-manes.caffeine + caffeine + + + + diff --git a/srt-cloud-api/src/main/java/net/srt/api/ServerNames.java b/srt-cloud-api/src/main/java/net/srt/api/ServerNames.java new file mode 100644 index 0000000..1f4d7e2 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/ServerNames.java @@ -0,0 +1,41 @@ +package net.srt.api; + +/** + * 服务名称集合 + * + * @author 阿沐 babamu@126.com + */ +public interface ServerNames { + /** + * srt-cloud-gateway 服务名 + */ + String GATEWAY_SERVER_NAME = "srt-cloud-gateway"; + /** + * srt-cloud-system 服务名 + */ + String SYSTEM_SERVER_NAME = "srt-cloud-system"; + /** + * srt-cloud-message 服务名 + */ + String MESSAGE_SERVER_NAME = "srt-cloud-message"; + + /** + * srt-cloud-quartz 服务名 + */ + String QUARTZ_SERVER_NAME = "srt-cloud-quartz"; + + /** + * srt-cloud-data-integrate 服务名 + */ + 
String DATA_INTEGRATE_NAME = "srt-cloud-data-integrate"; + + /** + * srt-cloud-data-development 服务名 + */ + String DATA_DEVELOPMENT_NAME = "srt-cloud-data-development"; + + /** + * srt-cloud-data-governance 服务名 + */ + String DATA_GOVERNANCE_NAME = "srt-cloud-data-governance"; +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/assets/constant/ResourceOpenType.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/assets/constant/ResourceOpenType.java new file mode 100644 index 0000000..a067255 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/assets/constant/ResourceOpenType.java @@ -0,0 +1,26 @@ +package net.srt.api.module.data.assets.constant; + +import lombok.AllArgsConstructor; +import lombok.Getter; + +/** + * 资源开放类型 + */ +@Getter +@AllArgsConstructor +public enum ResourceOpenType { + /** + * 全部 + */ + ALL(1), + /** + * 角色 + */ + ROLE(2), + /** + * 用户 + */ + USER(3); + + private final Integer value; +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataAccessApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataAccessApi.java new file mode 100644 index 0000000..b10a460 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataAccessApi.java @@ -0,0 +1,53 @@ +package net.srt.api.module.data.integrate; + +import net.srt.api.ServerNames; +import net.srt.api.module.data.integrate.dto.DataAccessDto; +import net.srt.api.module.data.integrate.dto.DataAccessTaskDto; +import net.srt.api.module.data.integrate.dto.PreviewNameMapperDto; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.PutMapping; +import org.springframework.web.bind.annotation.RequestBody; +import org.springframework.web.bind.annotation.RequestParam; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchResult; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult; + +import java.util.List; + +/** + * @ClassName DataAccessApi + * @Author zrx + * @Date 2022/10/26 11:39 + */ +@FeignClient(name = ServerNames.DATA_INTEGRATE_NAME, contextId = "data-integrate-access") +public interface DataAccessApi { + /** + * 根据id获取 + */ + @GetMapping(value = "api/data/integrate/access/{id}") + Result<DataAccessDto> getById(@PathVariable Long id); + + @PostMapping(value = "api/data/integrate/access/task") + Result<Long> addTask(@RequestBody DataAccessTaskDto dataAccessTaskDto); + + @PutMapping(value = "api/data/integrate/access/task") + void updateTask(@RequestBody DataAccessTaskDto dataAccessTaskDto); + + @PostMapping(value = "api/data/integrate/access/task-detail/{projectId}/{taskId}/{dataAccessId}") + void addTaskDetail(@PathVariable Long projectId, @PathVariable Long taskId, @PathVariable Long dataAccessId, @RequestBody DbSwitchTableResult tableResult); + + /** + * 根据任务id获取任务 + */ + @GetMapping(value = "api/quartz/access-task/{id}") + Result<DataAccessTaskDto> getTaskById(@PathVariable Long id); + + @GetMapping(value = "api/quartz/access-task/table-map/{id}") + Result<List<PreviewNameMapperDto>> getTableMap(@PathVariable Long id); + + @GetMapping(value = "api/quartz/access-task/column-map/{id}") + Result<List<PreviewNameMapperDto>> getColumnMap(@PathVariable Long id, @RequestParam String tableName); +}
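For orientation, a minimal sketch of how another service consumes one of these Feign clients once the API module is on its classpath (the caller class is hypothetical, and the `getData()` accessor on `Result` is an assumption; adjust to the framework's real API):

```java
import lombok.RequiredArgsConstructor;
import net.srt.api.module.data.integrate.DataAccessApi;
import net.srt.api.module.data.integrate.dto.DataAccessDto;
import net.srt.framework.common.utils.Result;
import org.springframework.stereotype.Service;

// Hypothetical consumer, not part of this commit: Feign routes the call through
// Nacos service discovery to the srt-cloud-data-integrate instance.
@Service
@RequiredArgsConstructor
public class DataAccessClientSketch {

	private final DataAccessApi dataAccessApi;

	public String taskNameOf(Long dataAccessId) {
		Result<DataAccessDto> result = dataAccessApi.getById(dataAccessId);
		DataAccessDto dto = result.getData(); // assumption: Result exposes getData()
		return dto != null ? dto.getTaskName() : null;
	}
}
```

diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataDatabaseApi.java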
b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataDatabaseApi.java new file mode 100644 index 0000000..59a7590 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataDatabaseApi.java @@ -0,0 +1,22 @@ +package net.srt.api.module.data.integrate; + +import net.srt.api.ServerNames; +import net.srt.api.module.data.integrate.dto.DataDatabaseDto; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PathVariable; + +/** + * @ClassName DataDatabaseApi + * @Author zrx + * @Date 2022/10/26 11:39 + */ +@FeignClient(name = ServerNames.DATA_INTEGRATE_NAME, contextId = "data-integrate-database") +public interface DataDatabaseApi { + /** + * 根据id获取 + */ + @GetMapping(value = "api/data/integrate/database/{id}") + Result<DataDatabaseDto> getById(@PathVariable Long id); +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataOdsApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataOdsApi.java new file mode 100644 index 0000000..f10a099 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataOdsApi.java @@ -0,0 +1,22 @@ +package net.srt.api.module.data.integrate; + +import net.srt.api.ServerNames; +import net.srt.api.module.data.integrate.dto.DataOdsDto; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.RequestBody; + +/** + * @ClassName DataOdsApi + * @Author zrx + * @Date 2022/10/26 11:39 + */ +@FeignClient(name = ServerNames.DATA_INTEGRATE_NAME, contextId = "data-integrate-ods") +public interface DataOdsApi { + /** + * 添加ods + */ + @PostMapping(value = "api/data/integrate/ods") + Result addOds(@RequestBody DataOdsDto dataOdsDto); +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataProjectApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataProjectApi.java new file mode 100644 index 0000000..48afac2 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/DataProjectApi.java @@ -0,0 +1,31 @@ +package net.srt.api.module.data.integrate; + +import net.srt.api.ServerNames; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PathVariable; + +import java.util.List; + +/** + * @ClassName DataProjectApi + * @Author zrx + * @Date 2022/10/26 11:39 + */ +@FeignClient(name = ServerNames.DATA_INTEGRATE_NAME, contextId = "data-integrate-project") +public interface DataProjectApi { + /** + * 获取所有项目列表 + */ + @GetMapping(value = "api/data/integrate/project/list-all") + Result<List<DataProjectCacheBean>> getProjectList(); + + /** + * 根据id获取 + */ + @GetMapping(value = "api/data/integrate/project/{id}") + Result<DataProjectCacheBean> getById(@PathVariable Long id); + +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/AccessMode.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/AccessMode.java new file mode 100644 index 0000000..4b2de85 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/AccessMode.java @@ -0,0 +1,22 @@ +package
net.srt.api.module.data.integrate.constant; + +import lombok.AllArgsConstructor; +import lombok.Getter; + +/** + * 接入方式 + */ +@Getter +@AllArgsConstructor +public enum AccessMode { + /** + * ods接入 + */ + ODS(1), + /** + * 自定义接入 + */ + CUSTOM(2); + + private final Integer value; +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/CommonRunStatus.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/CommonRunStatus.java new file mode 100644 index 0000000..75b3a4f --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/CommonRunStatus.java @@ -0,0 +1,44 @@ +package net.srt.api.module.data.integrate.constant; + +/** + * @ClassName CommonRunStatus + * @Author zrx + * @Date 2022/5/22 16:15 + */ +public enum CommonRunStatus { + + /** + * 等待中 + */ + WAITING(1,"等待中"), + /** + * 运行中 + */ + RUNNING(2,"运行中"), + /** + * 正常 + */ + SUCCESS(3,"正常结束"), + + /** + * 异常 + */ + FAILED(4,"异常结束"); + + + private Integer code; + private String name; + + CommonRunStatus(Integer code, String name) { + this.code = code; + this.name = name; + } + + public Integer getCode() { + return code; + } + + public String getName() { + return name; + } +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/TaskType.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/TaskType.java new file mode 100644 index 0000000..de25df4 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/constant/TaskType.java @@ -0,0 +1,38 @@ +package net.srt.api.module.data.integrate.constant; + +/** + * @ClassName TaskType + * @Author zrx + */ +public enum TaskType { + + /** + * 实时同步 + */ + REAL_TIME_SYNC(1,"实时同步"), + /** + * 一次性全量同步 + */ + ONE_TIME_FULL_SYNC(2,"一次性全量同步"), + /** + * 一次性全量周期性增量 + */ + ONE_TIME_FULL_PERIODIC_INCR_SYNC(3,"一次性全量周期性增量"); + + + private Integer code; + private String name; + + TaskType(Integer code, String name) { + this.code = code; + this.name = name; + } + + public Integer getCode() { + return code; + } + + public String getName() { + return name; + } +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataAccessDto.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataAccessDto.java new file mode 100644 index 0000000..57988c0 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataAccessDto.java @@ -0,0 +1,99 @@ +package net.srt.api.module.data.integrate.dto; + + +import com.fasterxml.jackson.annotation.JsonFormat; +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; +import srt.cloud.framework.dbswitch.data.config.DbswichProperties; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据集成-数据接入 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Data +@Schema(description = "数据集成-数据接入") +public class DataAccessDto implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "任务名称") + private String taskName; + + @Schema(description = "描述") + private String description; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "源端数据库id") + private Long sourceDatabaseId; + + @Schema(description = "目的端数据库id") + private Long targetDatabaseId; + + @Schema(description = "接入方式 1-ods接入 2-自定义接入") + private Integer accessMode; + + @Schema(description = 
"任务类型") + private Integer taskType; + + @Schema(description = "cron表达式") + private String cron; + + @Schema(description = "发布状态") + private Integer status; + + @Schema(description = "最新运行状态") + private Integer runStatus; + + @Schema(description = "数据接入基础配置json") + private DbswichProperties dataAccessJson; + + @Schema(description = "最近开始时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date startTime; + + @Schema(description = "最近结束时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date endTime; + + @Schema(description = "发布时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date releaseTime; + + @Schema(description = "备注") + private String note; + + @Schema(description = "发布人id") + private Long releaseUserId; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataAccessTaskDto.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataAccessTaskDto.java new file mode 100644 index 0000000..f74fec6 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataAccessTaskDto.java @@ -0,0 +1,83 @@ +package net.srt.api.module.data.integrate.dto; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据接入任务记录 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-26 +*/ +@Data +@Schema(description = "数据接入任务记录") +public class DataAccessTaskDto implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "数据接入任务id") + private Long dataAccessId; + + @Schema(description = "运行状态( 1-等待中 2-运行中 3-正常结束 4-异常结束)") + private Integer runStatus; + + @Schema(description = "开始时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date startTime; + + @Schema(description = "结束时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date endTime; + + private String realTimeLog; + @Schema(description = "错误信息") + private String errorInfo; + + @Schema(description = "更新数据量") + private Long dataCount; + + @Schema(description = "成功表数量") + private Long tableSuccessCount; + + @Schema(description = "失败表数量") + private Long tableFailCount; + + @Schema(description = "更新大小") + private String byteCount; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + private Date nextRunTime; + + private Boolean updateTaskAccess; + + +} diff --git 
a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataDatabaseDto.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataDatabaseDto.java new file mode 100644 index 0000000..da9aec9 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataDatabaseDto.java @@ -0,0 +1,82 @@ +package net.srt.api.module.data.integrate.dto; + +import com.fasterxml.jackson.annotation.JsonFormat; +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据集成-数据库管理 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-09 +*/ +@Data +@Schema(description = "数据集成-数据库管理") +public class DataDatabaseDto implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "名称") + private String name; + + @Schema(description = "数据库类型") + private Integer databaseType; + + @Schema(description = "主机ip") + private String databaseIp; + + @Schema(description = "端口") + private String databasePort; + + @Schema(description = "库名(服务名)") + private String databaseName; + + @Schema(description = "状态") + private Integer status; + + @Schema(description = "用户名") + private String userName; + + @Schema(description = "密码") + private String password; + + @Schema(description = "是否支持实时接入") + private Integer isRtApprove; + + @Schema(description = "不支持实时接入原因") + private String noRtReason; + + @Schema(description = "jdbcUrl") + private String jdbcUrl; + + @Schema(description = "所属项目") + private Long projectId; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataOdsDto.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataOdsDto.java new file mode 100644 index 0000000..c415d07 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/DataOdsDto.java @@ -0,0 +1,62 @@ +package net.srt.api.module.data.integrate.dto; + +import com.fasterxml.jackson.annotation.JsonFormat; +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据集成-贴源数据 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-07 +*/ +@Data +@Schema(description = "数据集成-贴源数据") +public class DataOdsDto implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "数据接入id") + private Long dataAccessId; + + @Schema(description = "表名") + private String tableName; + + @Schema(description = "注释") + private String remarks; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "最近同步时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date recentlySyncTime; + + @Schema(description = "版本号") + private Integer version; + + 
@Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/PreviewNameMapperDto.java b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/PreviewNameMapperDto.java new file mode 100644 index 0000000..4c54114 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/data/integrate/dto/PreviewNameMapperDto.java @@ -0,0 +1,17 @@ +package net.srt.api.module.data.integrate.dto; + +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.NoArgsConstructor; + +@Data +@AllArgsConstructor +@NoArgsConstructor +@Builder +public class PreviewNameMapperDto { + + private String originalName; + private String targetName; + private String remarks; +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/message/SmsApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/message/SmsApi.java new file mode 100644 index 0000000..d3d5752 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/message/SmsApi.java @@ -0,0 +1,49 @@ +package net.srt.api.module.message; + +import net.srt.api.ServerNames; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.RequestParam; + +import java.util.Map; + +/** + * 短信服务API + * + * @author 阿沐 babamu@126.com + */ +@FeignClient(name = ServerNames.MESSAGE_SERVER_NAME) +public interface SmsApi { + + /** + * 发送短信 + * + * @param mobile 手机号 + * @param params 参数 + * @return 是否发送成功 + */ + @PostMapping(value = "api/message/sms/send") + Result send(@RequestParam("mobile") String mobile, @RequestParam("params") Map params); + + /** + * 发送短信 + * + * @param mobile 手机号 + * @param key 参数KEY + * @param value 参数Value + * @return 是否发送成功 + */ + @PostMapping(value = "api/message/sms/sendCode") + Result sendCode(@RequestParam("mobile") String mobile, @RequestParam("key") String key, @RequestParam("value") String value); + + /** + * 效验短信验证码 + * + * @param mobile 手机号 + * @param code 验证码 + * @return 是否效验成功 + */ + @PostMapping(value = "api/message/sms/verifyCode") + Result verifyCode(@RequestParam("mobile") String mobile, @RequestParam("code") String code); +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/package-info.java b/srt-cloud-api/src/main/java/net/srt/api/module/package-info.java new file mode 100644 index 0000000..00360b4 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/package-info.java @@ -0,0 +1,6 @@ +/** + * RPC 接口声明,如:Feign接口 + * + * @author 阿沐 babamu@126.com + */ +package net.srt.api.module; diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataAccessApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataAccessApi.java new file mode 100644 index 0000000..76edc98 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataAccessApi.java @@ -0,0 +1,35 @@ +package net.srt.api.module.quartz; + +import net.srt.api.ServerNames; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; 
+import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; + +/** + * 数据集成-数据接入 定时api + * + * @author 阿沐 babamu@126.com + */ +@FeignClient(name = ServerNames.QUARTZ_SERVER_NAME, contextId = "quartz-data-integrate-access") +public interface QuartzDataAccessApi { + + /** + * 发布数据接入任务 + */ + @PostMapping(value = "api/quartz/access/release/{id}") + Result releaseAccess(@PathVariable Long id); + + /** + * 取消数据接入任务 + */ + @PostMapping(value = "api/quartz/access/cancle/{id}") + Result cancleAccess(@PathVariable Long id); + + /** + * 手动执行 + */ + @PostMapping(value = "api/quartz/access/hand-run/{id}") + Result handRun(@PathVariable Long id); + +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataGovernanceMetadataCollectApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataGovernanceMetadataCollectApi.java new file mode 100644 index 0000000..8e5e648 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataGovernanceMetadataCollectApi.java @@ -0,0 +1,34 @@ +package net.srt.api.module.quartz; + +import net.srt.api.ServerNames; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; + +/** + * 数据治理-元数据采集 定时api + * + * @author 阿沐 babamu@126.com + */ +@FeignClient(name = ServerNames.QUARTZ_SERVER_NAME, contextId = "quartz-data-metadata-collect") +public interface QuartzDataGovernanceMetadataCollectApi { + + /** + * 发布元数据采集任务 + */ + @PostMapping(value = "api/quartz/metadata-collect/release/{id}") + Result release(@PathVariable Long id); + + /** + * 取消元数据采集任务 + */ + @PostMapping(value = "api/quartz/metadata-collect/cancel/{id}") + Result cancel(@PathVariable Long id); + + /** + * 手动执行 + */ + @PostMapping(value = "api/quartz/metadata-collect/hand-run/{id}") + Result handRun(@PathVariable Long id); +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataGovernanceQualityApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataGovernanceQualityApi.java new file mode 100644 index 0000000..0c33c05 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataGovernanceQualityApi.java @@ -0,0 +1,34 @@ +package net.srt.api.module.quartz; + +import net.srt.api.ServerNames; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; + +/** + * 数据治理-数据质量 定时api + * + * @author 阿沐 babamu@126.com + */ +@FeignClient(name = ServerNames.QUARTZ_SERVER_NAME, contextId = "quartz-data-quality") +public interface QuartzDataGovernanceQualityApi { + + /** + * 发布数据质量任务 + */ + @PostMapping(value = "api/quartz/quality/release/{id}") + Result release(@PathVariable Long id); + + /** + * 取消数据质量任务 + */ + @PostMapping(value = "api/quartz/quality/cancel/{id}") + Result cancel(@PathVariable Long id); + + /** + * 手动执行 + */ + @PostMapping(value = "api/quartz/quality/hand-run/{id}") + Result handRun(@PathVariable Long id); +}
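All three Quartz-facing clients above expose the same release / cancel / hand-run lifecycle against jobs scheduled in srt-cloud-quartz. A sketch of a typical publisher (the service class is hypothetical, and the Result success check is an assumption about the framework type):

```java
import lombok.RequiredArgsConstructor;
import net.srt.api.module.quartz.QuartzDataGovernanceQualityApi;
import net.srt.framework.common.utils.Result;
import org.springframework.stereotype.Service;

// Hypothetical caller, not part of this commit: releases a quality job, then
// triggers it manually once so the first run does not wait for the cron fire.
@Service
@RequiredArgsConstructor
public class QualityJobPublisherSketch {

	private final QuartzDataGovernanceQualityApi qualityApi;

	public void publishAndRunNow(Long qualityTaskId) {
		Result released = qualityApi.release(qualityTaskId); // registers the cron trigger
		if (released != null) { // assumption: inspect the real Result status here
			qualityApi.handRun(qualityTaskId); // fire once immediately
		}
	}
}
```

diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataProductionScheduleApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataProductionScheduleApi.java new file mode 100644 index 0000000..740c3f3 --- /dev/null +++ 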
b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/QuartzDataProductionScheduleApi.java @@ -0,0 +1,29 @@ +package net.srt.api.module.quartz; + +import net.srt.api.ServerNames; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; + +/** + * 数据开发-作业调度 定时api + * + * @author 阿沐 babamu@126.com + */ +@FeignClient(name = ServerNames.QUARTZ_SERVER_NAME, contextId = "quartz-data-development-production-schedule") +public interface QuartzDataProductionScheduleApi { + + /** + * 发布作业调度任务 + */ + @PostMapping(value = "api/quartz/development-schedule/release/{id}") + Result release(@PathVariable Long id); + + /** + * 取消作业调度任务 + */ + @PostMapping(value = "api/quartz/development-schedule/cancle/{id}") + Result cancle(@PathVariable Long id); + +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/quartz/constant/QuartzJobType.java b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/constant/QuartzJobType.java new file mode 100644 index 0000000..a6bb44f --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/quartz/constant/QuartzJobType.java @@ -0,0 +1,38 @@ +package net.srt.api.module.quartz.constant; + +import lombok.AllArgsConstructor; +import lombok.Getter; + +/** + * @ClassName QuartzJobType + * @Author zrx + * @Date 2023/1/19 15:24 + */ +@Getter +@AllArgsConstructor +public enum QuartzJobType { + /** + * 自定义 + */ + CUSTOM(1), + /** + * 数据接入 + */ + DATA_ACCESS(2), + /** + * 数据生产 + */ + DATA_PRODUCTION(3), + + /** + * 数据治理 + */ + DATA_GOVERNANCE(4), + + /** + * 数据质量 + */ + DATA_QUALITY(5); + + private final Integer value; +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/system/StorageApi.java b/srt-cloud-api/src/main/java/net/srt/api/module/system/StorageApi.java new file mode 100644 index 0000000..3ff2027 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/system/StorageApi.java @@ -0,0 +1,38 @@ +package net.srt.api.module.system; + +import feign.codec.Encoder; +import feign.form.spring.SpringFormEncoder; +import net.srt.api.ServerNames; +import net.srt.api.module.system.dto.StorageDTO; +import net.srt.framework.common.utils.Result; +import org.springframework.cloud.openfeign.FeignClient; +import org.springframework.context.annotation.Bean; +import org.springframework.http.MediaType; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.RequestPart; +import org.springframework.web.multipart.MultipartFile; + +import java.io.IOException; + +/** + * 文件上传 + * + * @author 阿沐 babamu@126.com + */ +@FeignClient(name = ServerNames.SYSTEM_SERVER_NAME) +public interface StorageApi { + + /** + * 文件上传 + */ + @PostMapping(value = "api/storage/upload", produces = {MediaType.APPLICATION_JSON_VALUE}, + consumes = MediaType.MULTIPART_FORM_DATA_VALUE) + Result<StorageDTO> upload(@RequestPart("file") MultipartFile file) throws IOException; + + class MultipartSupportConfig { + @Bean + public Encoder feignFormEncoder() { + return new SpringFormEncoder(); + } + } +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/system/constant/SuperAdminEnum.java b/srt-cloud-api/src/main/java/net/srt/api/module/system/constant/SuperAdminEnum.java new file mode 100644 index 0000000..921666c --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/system/constant/SuperAdminEnum.java @@ -0,0 +1,16 @@ +package net.srt.api.module.system.constant; + 
+import lombok.AllArgsConstructor; +import lombok.Getter; + +/** + * 超级管理员枚举 + */ +@Getter +@AllArgsConstructor +public enum SuperAdminEnum { + YES(1), + NO(0); + + private final Integer value; +} diff --git a/srt-cloud-api/src/main/java/net/srt/api/module/system/dto/StorageDTO.java b/srt-cloud-api/src/main/java/net/srt/api/module/system/dto/StorageDTO.java new file mode 100644 index 0000000..aae0268 --- /dev/null +++ b/srt-cloud-api/src/main/java/net/srt/api/module/system/dto/StorageDTO.java @@ -0,0 +1,20 @@ +package net.srt.api.module.system.dto; + +import io.swagger.v3.oas.annotations.media.Schema; +import io.swagger.v3.oas.annotations.tags.Tag; +import lombok.Data; + +/** + * 文件上传 + * + * @author 阿沐 babamu@126.com + */ +@Data +@Tag(name="文件上传") +public class StorageDTO { + @Schema(description = "URL") + private String url; + @Schema(description = "文件大小") + private Long size; + +} diff --git a/srt-cloud-data-integrate/pom.xml b/srt-cloud-data-integrate/pom.xml new file mode 100644 index 0000000..e5b0656 --- /dev/null +++ b/srt-cloud-data-integrate/pom.xml @@ -0,0 +1,199 @@ + + + net.srt + srt-cloud + 2.0.0 + + 4.0.0 + srt-cloud-data-integrate + jar + + + + net.srt + srt-cloud-api + 2.0.0 + + + + org.springframework.boot + spring-boot-starter-log4j2 + + + net.srt + srt-cloud-mybatis + 2.0.0 + + + net.srt + srt-cloud-dbswitch + 2.0.0 + + + jsqlparser + com.github.jsqlparser + + + spring-boot-starter-logging + org.springframework.boot + + + + + org.springframework.cloud + spring-cloud-starter-bootstrap + + + com.alibaba.cloud + spring-cloud-starter-alibaba-nacos-discovery + + + com.alibaba.cloud + spring-cloud-starter-alibaba-nacos-config + + + com.github.xiaoymin + knife4j-springdoc-ui + + + org.quartz-scheduler + quartz + + + + + + + + + org.codehaus.mojo + appassembler-maven-plugin + 2.1.0 + + + + + generate-jsw-scripts + package + + generate-daemons + + + + + + + flat + + src/main/resources + true + + true + + conf + + lib + + bin + UTF-8 + logs + + + + ${project.artifactId} + net.srt.DataIntegrateApplication + + jsw + + + + jsw + + linux-x86-32 + linux-x86-64 + windows-x86-32 + windows-x86-64 + + + + configuration.directory.in.classpath.first + conf + + + wrapper.ping.timeout + 120 + + + set.default.REPO_DIR + lib + + + wrapper.logfile + logs/wrapper.log + + + + + + + + + -server + -Dfile.encoding=utf-8 + -Xms128m + -Xmx1024m + -XX:+PrintGCDetails + -XX:+PrintGCDateStamps + -Xloggc:logs/gc.log + + + + + + + net.srt.DataIntegrateApplication + ${project.artifactId} + + + + + + + + maven-assembly-plugin + + + ${project.parent.basedir}/assembly/assembly-win.xml + ${project.parent.basedir}/assembly/assembly-linux.xml + + + + + make-assembly + package + + single + + + + + + + org.apache.maven.plugins + maven-surefire-plugin + + true + + + + + diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/DataIntegrateApplication.java b/srt-cloud-data-integrate/src/main/java/net/srt/DataIntegrateApplication.java new file mode 100644 index 0000000..fa5e071 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/DataIntegrateApplication.java @@ -0,0 +1,22 @@ +package net.srt; + +import org.springframework.boot.SpringApplication; +import org.springframework.boot.autoconfigure.SpringBootApplication; +import org.springframework.cloud.client.discovery.EnableDiscoveryClient; +import org.springframework.cloud.openfeign.EnableFeignClients; + +/** + * 数据集成微服务 + * + * @author zrx 985134801@qq.com + */ +@EnableFeignClients +@EnableDiscoveryClient +@SpringBootApplication +public class 
DataIntegrateApplication { + + public static void main(String[] args) { + SpringApplication.run(DataIntegrateApplication.class, args); + } + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/api/DataAccessApiImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataAccessApiImpl.java new file mode 100644 index 0000000..fd038e6 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataAccessApiImpl.java @@ -0,0 +1,90 @@ +package net.srt.api; + +import lombok.RequiredArgsConstructor; +import net.srt.api.module.data.integrate.DataAccessApi; +import net.srt.api.module.data.integrate.dto.DataAccessDto; +import net.srt.api.module.data.integrate.dto.DataAccessTaskDto; +import net.srt.api.module.data.integrate.dto.PreviewNameMapperDto; +import net.srt.constants.YesOrNo; +import net.srt.convert.DataAccessConvert; +import net.srt.convert.DataAccessTaskConvert; +import net.srt.entity.DataAccessEntity; +import net.srt.entity.DataAccessTaskDetailEntity; +import net.srt.entity.DataAccessTaskEntity; +import net.srt.framework.common.utils.Result; +import net.srt.service.DataAccessService; +import net.srt.service.DataAccessTaskDetailService; +import net.srt.service.DataAccessTaskService; +import org.springframework.web.bind.annotation.RestController; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchResult; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult; +import srt.cloud.framework.dbswitch.data.util.BytesUnitUtils; + +import java.util.Date; +import java.util.List; + +/** + * @ClassName DataAccessApiImpl + * @Author zrx + * @Date 2022/10/26 11:50 + */ +@RestController +@RequiredArgsConstructor +public class DataAccessApiImpl implements DataAccessApi { + + private final DataAccessService dataAccessService; + private final DataAccessTaskService dataAccessTaskService; + private final DataAccessTaskDetailService dataAccessTaskDetailService; + + @Override + public Result<DataAccessDto> getById(Long id) { + DataAccessEntity dataAccessEntity = dataAccessService.loadById(id); + return Result.ok(DataAccessConvert.INSTANCE.convertDto(dataAccessEntity)); + } + + @Override + public Result<Long> addTask(DataAccessTaskDto dataAccessTaskDto) { + DataAccessTaskEntity dataAccessTaskEntity = DataAccessTaskConvert.INSTANCE.convertByDto(dataAccessTaskDto); + dataAccessTaskService.save(dataAccessTaskEntity); + //更新任务的最新开始时间和状态 + dataAccessService.updateStartInfo(dataAccessTaskDto.getDataAccessId()); + return Result.ok(dataAccessTaskEntity.getId()); + } + + @Override + public void updateTask(DataAccessTaskDto dataAccessTaskDto) { + DataAccessTaskEntity dataAccessTaskEntity = DataAccessTaskConvert.INSTANCE.convertByDto(dataAccessTaskDto); + dataAccessTaskService.updateById(dataAccessTaskEntity); + //更新任务的最新结束时间和状态 + if (dataAccessTaskDto.getUpdateTaskAccess()) { + dataAccessService.updateEndInfo(dataAccessTaskDto.getDataAccessId(), dataAccessTaskDto.getRunStatus(), dataAccessTaskDto.getNextRunTime()); + } + } + + @Override + public void addTaskDetail(Long projectId, Long taskId, Long dataAccessId, DbSwitchTableResult tableResult) { + dataAccessTaskDetailService.save(DataAccessTaskDetailEntity.builder().projectId(projectId).taskId(taskId).dataAccessId(dataAccessId) + .sourceSchemaName(tableResult.getSourceSchemaName()).sourceTableName(tableResult.getSourceTableName()) + .targetSchemaName(tableResult.getTargetSchemaName()).targetTableName(tableResult.getTargetTableName()) + .ifSuccess(tableResult.getIfSuccess().get() ? 
YesOrNo.YES.getValue() : YesOrNo.NO.getValue()) + .syncCount(tableResult.getSyncCount().get()).syncBytes(BytesUnitUtils.bytesSizeToHuman(tableResult.getSyncBytes().get())) + .createTime(new Date()).errorMsg(tableResult.getErrorMsg()).successMsg(tableResult.getSuccessMsg()).build()); + } + + @Override + public Result<DataAccessTaskDto> getTaskById(Long id) { + return Result.ok(DataAccessTaskConvert.INSTANCE.convertDto(dataAccessTaskService.getById(id))); + } + + @Override + public Result<List<PreviewNameMapperDto>> getTableMap(Long id) { + List<PreviewNameMapperDto> previewNameMapperDtos = dataAccessService.getTableMap(id); + return Result.ok(previewNameMapperDtos); + } + + @Override + public Result<List<PreviewNameMapperDto>> getColumnMap(Long id, String tableName) { + List<PreviewNameMapperDto> previewNameMapperDtos = dataAccessService.getColumnMap(id, tableName); + return Result.ok(previewNameMapperDtos); + } +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/api/DataDatabaseApiImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataDatabaseApiImpl.java new file mode 100644 index 0000000..2f3228d --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataDatabaseApiImpl.java @@ -0,0 +1,31 @@ +package net.srt.api; + +import lombok.RequiredArgsConstructor; +import net.srt.api.module.data.integrate.DataDatabaseApi; +import net.srt.api.module.data.integrate.DataOdsApi; +import net.srt.api.module.data.integrate.dto.DataDatabaseDto; +import net.srt.api.module.data.integrate.dto.DataOdsDto; +import net.srt.convert.DataDatabaseConvert; +import net.srt.convert.DataOdsConvert; +import net.srt.entity.DataOdsEntity; +import net.srt.framework.common.utils.Result; +import net.srt.service.DataDatabaseService; +import net.srt.service.DataOdsService; +import org.springframework.web.bind.annotation.RestController; + +/** + * @ClassName DataDatabaseApiImpl + * @Author zrx + * @Date 2022/10/26 11:50 + */ +@RestController +@RequiredArgsConstructor +public class DataDatabaseApiImpl implements DataDatabaseApi { + + private final DataDatabaseService databaseService; + + @Override + public Result<DataDatabaseDto> getById(Long id) { + return Result.ok(DataDatabaseConvert.INSTANCE.convertDto(databaseService.getById(id))); + } +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/api/DataOdsApiImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataOdsApiImpl.java new file mode 100644 index 0000000..bd5932c --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataOdsApiImpl.java @@ -0,0 +1,52 @@ +package net.srt.api; + +import lombok.RequiredArgsConstructor; +import net.srt.api.module.data.integrate.DataAccessApi; +import net.srt.api.module.data.integrate.DataOdsApi; +import net.srt.api.module.data.integrate.dto.DataAccessDto; +import net.srt.api.module.data.integrate.dto.DataAccessTaskDto; +import net.srt.api.module.data.integrate.dto.DataOdsDto; +import net.srt.constants.YesOrNo; +import net.srt.convert.DataAccessConvert; +import net.srt.convert.DataAccessTaskConvert; +import net.srt.convert.DataOdsConvert; +import net.srt.entity.DataAccessEntity; +import net.srt.entity.DataAccessTaskDetailEntity; +import net.srt.entity.DataAccessTaskEntity; +import net.srt.entity.DataOdsEntity; +import net.srt.framework.common.utils.Result; +import net.srt.service.DataAccessService; +import net.srt.service.DataAccessTaskDetailService; +import net.srt.service.DataAccessTaskService; +import net.srt.service.DataOdsService; +import org.springframework.web.bind.annotation.RestController; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchResult; +import 
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/api/DataOdsApiImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataOdsApiImpl.java
new file mode 100644
index 0000000..bd5932c
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataOdsApiImpl.java
@@ -0,0 +1,52 @@
+package net.srt.api;
+
+import lombok.RequiredArgsConstructor;
+import net.srt.api.module.data.integrate.DataAccessApi;
+import net.srt.api.module.data.integrate.DataOdsApi;
+import net.srt.api.module.data.integrate.dto.DataAccessDto;
+import net.srt.api.module.data.integrate.dto.DataAccessTaskDto;
+import net.srt.api.module.data.integrate.dto.DataOdsDto;
+import net.srt.constants.YesOrNo;
+import net.srt.convert.DataAccessConvert;
+import net.srt.convert.DataAccessTaskConvert;
+import net.srt.convert.DataOdsConvert;
+import net.srt.entity.DataAccessEntity;
+import net.srt.entity.DataAccessTaskDetailEntity;
+import net.srt.entity.DataAccessTaskEntity;
+import net.srt.entity.DataOdsEntity;
+import net.srt.framework.common.utils.Result;
+import net.srt.service.DataAccessService;
+import net.srt.service.DataAccessTaskDetailService;
+import net.srt.service.DataAccessTaskService;
+import net.srt.service.DataOdsService;
+import org.springframework.web.bind.annotation.RestController;
+import srt.cloud.framework.dbswitch.data.domain.DbSwitchResult;
+import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult;
+import srt.cloud.framework.dbswitch.data.util.BytesUnitUtils;
+
+import java.util.Date;
+import java.util.List;
+
+/**
+ * @ClassName DataOdsApiImpl
+ * @Author zrx
+ * @Date 2022/10/26 11:50
+ */
+@RestController
+@RequiredArgsConstructor
+public class DataOdsApiImpl implements DataOdsApi {
+
+	private final DataOdsService dataOdsService;
+
+	@Override
+	public Result addOds(DataOdsDto dataOdsDto) {
+		// upsert by table name: insert on first sight, update afterwards
+		DataOdsEntity dataOdsEntity = dataOdsService.getByTableName(dataOdsDto.getProjectId(), dataOdsDto.getTableName());
+		if (dataOdsEntity == null) {
+			dataOdsService.save(DataOdsConvert.INSTANCE.convertByDto(dataOdsDto));
+		} else {
+			dataOdsDto.setId(dataOdsEntity.getId());
+			dataOdsService.updateById(DataOdsConvert.INSTANCE.convertByDto(dataOdsDto));
+		}
+		return Result.ok();
+	}
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/api/DataProjectApiImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataProjectApiImpl.java
new file mode 100644
index 0000000..3f7ad95
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/api/DataProjectApiImpl.java
@@ -0,0 +1,36 @@
+package net.srt.api;
+
+import lombok.RequiredArgsConstructor;
+import net.srt.api.module.data.integrate.DataProjectApi;
+import net.srt.entity.DataProjectEntity;
+import net.srt.framework.common.cache.bean.DataProjectCacheBean;
+import net.srt.framework.common.utils.BeanUtil;
+import net.srt.framework.common.utils.Result;
+import net.srt.service.DataProjectService;
+import org.springframework.web.bind.annotation.RestController;
+
+import java.util.List;
+
+/**
+ * @ClassName DataProjectApiImpl
+ * @Author zrx
+ * @Date 2022/10/26 11:50
+ */
+@RestController
+@RequiredArgsConstructor
+public class DataProjectApiImpl implements DataProjectApi {
+
+	private final DataProjectService dataProjectService;
+
+	@Override
+	public Result<List<DataProjectCacheBean>> getProjectList() {
+		List<DataProjectEntity> list = dataProjectService.list();
+		return Result.ok(BeanUtil.copyListProperties(list, DataProjectCacheBean::new));
+	}
+
+	@Override
+	public Result<DataProjectCacheBean> getById(Long id) {
+		return Result.ok(BeanUtil.copyProperties(dataProjectService.getById(id), DataProjectCacheBean::new));
+	}
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/constants/AccessMode.java b/srt-cloud-data-integrate/src/main/java/net/srt/constants/AccessMode.java
new file mode 100644
index 0000000..2d02a5e
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/constants/AccessMode.java
@@ -0,0 +1,22 @@
+package net.srt.constants;
+
+import lombok.AllArgsConstructor;
+import lombok.Getter;
+
+/**
+ * Access mode
+ */
+@Getter
+@AllArgsConstructor
+public enum AccessMode {
+	/**
+	 * ODS access
+	 */
+	ODS(1),
+	/**
+	 * custom access
+	 */
+	CUSTOM(2);
+
+	private final Integer value;
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/constants/CommonRunStatus.java b/srt-cloud-data-integrate/src/main/java/net/srt/constants/CommonRunStatus.java
new file mode 100644
index 0000000..fc81c99
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/constants/CommonRunStatus.java
@@ -0,0 +1,44 @@
+package net.srt.constants;
+
+/**
+ * @ClassName CommonRunStatus
+ * @Author zrx
+ * @Date 2022/5/22 16:15
+ */
+public enum CommonRunStatus {
+
+	/**
+	 * waiting
+	 */
+	WAITING(1, "等待中"),
+	/**
+	 * running
+	 */
+	RUNNING(2, "运行中"),
+	/**
+	 * finished normally
+	 */
+	SUCCESS(3, "正常结束"),
+	/**
+	 * finished abnormally
+	 */
+	FAILED(4, "异常结束");
+
+	private Integer code;
+	private String name;
+
+	CommonRunStatus(Integer code, String name) {
+		this.code = code;
+		this.name = name;
+	}
+
+	public Integer getCode() {
+		return code;
+	}
+
+	public String getName() {
+		return name;
+	}
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/constants/DataHouseLayer.java b/srt-cloud-data-integrate/src/main/java/net/srt/constants/DataHouseLayer.java
new file mode 100644
index 0000000..04c0adf
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/constants/DataHouseLayer.java
@@ -0,0 +1,52 @@
+package net.srt.constants;
+
+/**
+ * @ClassName DataHouseLayer
+ * @Author zrx
+ * @Date 2022/5/20 14:58
+ */
+public enum DataHouseLayer {
+
+	/**
+	 * ODS - data ingestion layer
+	 */
+	ODS("数据引入层", "ods_"),
+	/**
+	 * DWD - detail data layer
+	 */
+	DWD("明细数据层", "dwd_"),
+	/**
+	 * DIM - dimension layer
+	 */
+	DIM("维度层", "dim_"),
+	/**
+	 * DWS - summary data layer
+	 */
+	DWS("汇总数据层", "dws_"),
+	/**
+	 * ADS - application data layer
+	 */
+	ADS("应用数据层", "ads_"),
+	/**
+	 * OTHER - everything else, no table prefix
+	 */
+	OTHER("其他数据", "");
+
+	private String name;
+	private String tablePrefix;
+
+	DataHouseLayer(String name, String tablePrefix) {
+		this.name = name;
+		this.tablePrefix = tablePrefix;
+	}
+
+	public String getName() {
+		return name;
+	}
+
+	public String getTablePrefix() {
+		return tablePrefix;
+	}
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/constants/MiddleTreeNodeType.java b/srt-cloud-data-integrate/src/main/java/net/srt/constants/MiddleTreeNodeType.java
new file mode 100644
index 0000000..26d70e4
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/constants/MiddleTreeNodeType.java
@@ -0,0 +1,26 @@
+package net.srt.constants;
+
+import lombok.AllArgsConstructor;
+import lombok.Getter;
+
+/**
+ * Node type in the middle-platform database tree
+ */
+@Getter
+@AllArgsConstructor
+public enum MiddleTreeNodeType {
+	/**
+	 * database node
+	 */
+	DB(1),
+	/**
+	 * warehouse layer node
+	 */
+	LAYER(2),
+	/**
+	 * table node
+	 */
+	TABLE(3);
+
+	private final Integer value;
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/constants/SuperAdminEnum.java b/srt-cloud-data-integrate/src/main/java/net/srt/constants/SuperAdminEnum.java
new file mode 100644
index 0000000..080e5df
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/constants/SuperAdminEnum.java
@@ -0,0 +1,16 @@
+package net.srt.constants;
+
+import lombok.AllArgsConstructor;
+import lombok.Getter;
+
+/**
+ * Super admin flag
+ */
+@Getter
+@AllArgsConstructor
+public enum SuperAdminEnum {
+	YES(1),
+	NO(0);
+
+	private final Integer value;
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/constants/YesOrNo.java b/srt-cloud-data-integrate/src/main/java/net/srt/constants/YesOrNo.java
new file mode 100644
index 0000000..06d1d72
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/constants/YesOrNo.java
@@ -0,0 +1,16 @@
+package net.srt.constants;
+
+import lombok.AllArgsConstructor;
+import lombok.Getter;
+
+/**
+ * Generic yes/no flag
+ */
+@Getter
+@AllArgsConstructor
+public enum YesOrNo {
+	YES(1),
+	NO(0);
+
+	private final Integer value;
+}
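All of these enums persist stable Integer codes rather than relying on ordinal(), so reordering constants can never corrupt stored rows. A small sketch of the intended reads and writes (the EnumUsageSketch class and the column values are illustrative, everything else comes from the enums above):

```java
package net.srt.constants;

// Sketch: how the Integer-code enums above are meant to be used.
public class EnumUsageSketch {
	public static void main(String[] args) {
		boolean success = true;
		// value written to the ifSuccess column of a task detail row
		Integer flag = success ? YesOrNo.YES.getValue() : YesOrNo.NO.getValue();
		System.out.println("ifSuccess = " + flag); // 1
		// reading an access mode back from a persisted Integer column
		Integer stored = 1; // illustrative column value
		boolean odsAccess = AccessMode.ODS.getValue().equals(stored);
		System.out.println("ODS access? " + odsAccess); // true
	}
}
```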
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataAccessController.java b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataAccessController.java
new file mode 100644
index 0000000..91a7dbd
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataAccessController.java
@@ -0,0 +1,148 @@
+package net.srt.controller;
+
+import io.swagger.v3.oas.annotations.Operation;
+import io.swagger.v3.oas.annotations.tags.Tag;
+import lombok.AllArgsConstructor;
+import net.srt.dto.DataAccessClientDto;
+import net.srt.dto.PreviewMapDto;
+import net.srt.framework.common.page.PageResult;
+import net.srt.framework.common.utils.Result;
+import net.srt.query.DataAccessQuery;
+import net.srt.query.DataAccessTaskDetailQuery;
+import net.srt.query.DataAccessTaskQuery;
+import net.srt.service.DataAccessService;
+import net.srt.vo.DataAccessTaskDetailVO;
+import net.srt.vo.DataAccessTaskVO;
+import net.srt.vo.DataAccessVO;
+import net.srt.vo.PreviewNameMapperVo;
+import org.springframework.security.access.prepost.PreAuthorize;
+import org.springframework.web.bind.annotation.DeleteMapping;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.PathVariable;
+import org.springframework.web.bind.annotation.PostMapping;
+import org.springframework.web.bind.annotation.PutMapping;
+import org.springframework.web.bind.annotation.RequestBody;
+import org.springframework.web.bind.annotation.RequestMapping;
+import org.springframework.web.bind.annotation.RestController;
+
+import javax.validation.Valid;
+import java.util.List;
+
+/**
+ * Data integration - data access
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-10-24
+ */
+@RestController
+@RequestMapping("access")
+@Tag(name = "数据集成-数据接入")
+@AllArgsConstructor
+public class DataAccessController {
+	private final DataAccessService dataAccessService;
+
+	@GetMapping("page")
+	@Operation(summary = "分页")
+	@PreAuthorize("hasAuthority('data-integrate:access:page')")
+	public Result<PageResult<DataAccessVO>> page(@Valid DataAccessQuery query) {
+		PageResult<DataAccessVO> page = dataAccessService.page(query);
+		return Result.ok(page);
+	}
+
+	@GetMapping("{id}")
+	@Operation(summary = "信息")
+	@PreAuthorize("hasAuthority('data-integrate:access:info')")
+	public Result get(@PathVariable("id") Long id) {
+		return Result.ok(dataAccessService.getById(id));
+	}
+
+	@PostMapping
+	@Operation(summary = "保存")
+	@PreAuthorize("hasAuthority('data-integrate:access:save')")
+	public Result save(@RequestBody DataAccessClientDto dto) {
+		dataAccessService.save(dto);
+		return Result.ok();
+	}
+
+	@PutMapping
+	@Operation(summary = "修改")
+	@PreAuthorize("hasAuthority('data-integrate:access:update')")
+	public Result update(@RequestBody DataAccessClientDto dto) {
+		dataAccessService.update(dto);
+		return Result.ok();
+	}
+
+	@DeleteMapping
+	@Operation(summary = "删除")
+	@PreAuthorize("hasAuthority('data-integrate:access:delete')")
+	public Result delete(@RequestBody List<Long> idList) {
+		dataAccessService.delete(idList);
+		return Result.ok();
+	}
+
+	@PostMapping("preview-table-name-map")
+	@Operation(summary = "预览表名映射")
+	public Result<List<PreviewNameMapperVo>> previewTableMap(@RequestBody PreviewMapDto previewMapDto) {
+		return Result.ok(dataAccessService.previewTableMap(previewMapDto));
+	}
+
+	@PostMapping("preview-column-name-map")
+	@Operation(summary = "预览字段名映射")
+	public Result<List<PreviewNameMapperVo>> previewColumnMap(@RequestBody PreviewMapDto previewMapDto) {
+		return Result.ok(dataAccessService.previewColumnMap(previewMapDto));
+	}
+
+	@PostMapping("release/{id}")
+	@Operation(summary = "发布任务")
+	@PreAuthorize("hasAuthority('data-integrate:access:release')")
+	public Result release(@PathVariable Long id) {
+		dataAccessService.release(id);
+		return Result.ok();
+	}
+
+	@PostMapping("cancle/{id}")
+	@Operation(summary = "取消任务")
+	@PreAuthorize("hasAuthority('data-integrate:access:cancle')")
+	public Result cancle(@PathVariable Long id) {
+		dataAccessService.cancle(id);
+		return Result.ok();
+	}
+
+	@PostMapping("hand-run/{id}")
+	@Operation(summary = "手动调度执行")
+	@PreAuthorize("hasAuthority('data-integrate:access:selfhandler')")
+	public Result handRun(@PathVariable Long id) {
+		dataAccessService.handRun(id);
+		return Result.ok();
+	}
+
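	// Usage sketch (hedged; assumes the gateway mounts this service under
	// /data-integrate on port 8080, which this commit does not show):
	// release publishes the task to the scheduler, cancle (typo preserved
	// from the source, it is baked into the route and the authority string)
	// withdraws it, and hand-run triggers one immediate execution, e.g.
	//
	//   curl -X POST http://localhost:8080/data-integrate/access/hand-run/1 \
	//        -H "Authorization: Bearer <token>"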
@GetMapping("task-page") + @Operation(summary = "获取调度记录") + public Result> taskPage(DataAccessTaskQuery taskQuery) { + PageResult pageResult = dataAccessService.taskPage(taskQuery); + return Result.ok(pageResult); + } + + @GetMapping("task/{id}") + @Operation(summary = "获取调度任务") + public Result getTaskById(@PathVariable Long id) { + return Result.ok(dataAccessService.getTaskById(id)); + } + + @DeleteMapping("task") + @Operation(summary = "删除调度记录") + public Result deleteTask(@RequestBody List idList) { + dataAccessService.deleteTask(idList); + return Result.ok(); + } + + @GetMapping("task-detail-page") + @Operation(summary = "获取同步结果") + public Result> taskDetailPage(DataAccessTaskDetailQuery detailQuery) { + PageResult pageResult = dataAccessService.taskDetailPage(detailQuery); + return Result.ok(pageResult); + } + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataDatabaseController.java b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataDatabaseController.java new file mode 100644 index 0000000..db2d0c2 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataDatabaseController.java @@ -0,0 +1,160 @@ +package net.srt.controller; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import io.swagger.v3.oas.annotations.Operation; +import io.swagger.v3.oas.annotations.tags.Tag; +import lombok.AllArgsConstructor; +import net.srt.convert.DataDatabaseConvert; +import net.srt.dto.SqlConsole; +import net.srt.entity.DataAccessEntity; +import net.srt.entity.DataDatabaseEntity; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.Result; +import net.srt.framework.common.utils.TreeNodeVo; +import net.srt.query.DataDatabaseQuery; +import net.srt.service.DataAccessService; +import net.srt.service.DataDatabaseService; +import net.srt.vo.ColumnDescriptionVo; +import net.srt.vo.DataDatabaseVO; +import net.srt.vo.SchemaTableDataVo; +import net.srt.vo.SqlGenerationVo; +import net.srt.vo.TableVo; +import org.springframework.security.access.prepost.PreAuthorize; +import org.springframework.web.bind.annotation.DeleteMapping; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.PutMapping; +import org.springframework.web.bind.annotation.RequestBody; +import org.springframework.web.bind.annotation.RequestMapping; +import org.springframework.web.bind.annotation.RequestParam; +import org.springframework.web.bind.annotation.RestController; + +import javax.validation.Valid; +import java.util.List; + +/** + * 数据集成-数据库管理 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-09 + */ +@RestController +@RequestMapping("database") +@Tag(name = "数据集成-数据库管理") +@AllArgsConstructor +public class DataDatabaseController { + private final DataDatabaseService dataDatabaseService; + + @GetMapping("page") + @Operation(summary = "分页") + @PreAuthorize("hasAuthority('data-integrate:database:page')") + public Result> page(@Valid DataDatabaseQuery query) { + PageResult page = dataDatabaseService.page(query); + + return Result.ok(page); + } + + @GetMapping("{id}") + @Operation(summary = "信息") + @PreAuthorize("hasAuthority('data-integrate:database:info')") + public Result get(@PathVariable("id") Long id) { + DataDatabaseEntity entity = dataDatabaseService.getById(id); + + return 
+
+	@PostMapping
+	@Operation(summary = "保存")
+	@PreAuthorize("hasAuthority('data-integrate:database:save')")
+	public Result save(@RequestBody DataDatabaseVO vo) {
+		dataDatabaseService.save(vo);
+		return Result.ok();
+	}
+
+	@PutMapping
+	@Operation(summary = "修改")
+	@PreAuthorize("hasAuthority('data-integrate:database:update')")
+	public Result update(@RequestBody @Valid DataDatabaseVO vo) {
+		dataDatabaseService.update(vo);
+		return Result.ok();
+	}
+
+	@DeleteMapping
+	@Operation(summary = "删除")
+	@PreAuthorize("hasAuthority('data-integrate:database:delete')")
+	public Result delete(@RequestBody List<Long> idList) {
+		dataDatabaseService.delete(idList);
+		return Result.ok();
+	}
+
+	@PostMapping("/test-online")
+	@Operation(summary = "测试连接")
+	public Result testOnline(@RequestBody @Valid DataDatabaseVO vo) {
+		dataDatabaseService.testOnline(vo);
+		return Result.ok();
+	}
+
+	@GetMapping("/tables/{id}")
+	@Operation(summary = "根据数据库id获取表相关信息")
+	public Result<List<TableVo>> getTablesById(@PathVariable Long id) {
+		List<TableVo> tableVos = dataDatabaseService.getTablesById(id);
+		return Result.ok(tableVos);
+	}
+
+	@PostMapping("/table-data/{id}")
+	@Operation(summary = "根据sql获取数据")
+	public Result<SchemaTableDataVo> getTableDataBySql(@PathVariable Integer id, @RequestBody SqlConsole sqlConsole) {
+		SchemaTableDataVo schemaTableDataVo = dataDatabaseService.getTableDataBySql(id, sqlConsole);
+		return Result.ok(schemaTableDataVo);
+	}
+
+	@GetMapping("/list-all")
+	@Operation(summary = "获取当前用户所能看到的数据库")
+	public Result<List<DataDatabaseVO>> listAll() {
+		List<DataDatabaseVO> list = dataDatabaseService.listAll();
+		return Result.ok(list);
+	}
+
+	@GetMapping("/list-tree/{id}")
+	@Operation(summary = "获取库目录树")
+	public Result<List<TreeNodeVo>> listTree(@PathVariable Long id) {
+		List<TreeNodeVo> list = dataDatabaseService.listTree(id);
+		return Result.ok(list);
+	}
+
+	@GetMapping("/middle-db/list-tree")
+	@Operation(summary = "获取中台库(当前项目)目录树")
+	public Result<List<TreeNodeVo>> listMiddleDbTree() {
+		List<TreeNodeVo> list = dataDatabaseService.listMiddleDbTree();
+		return Result.ok(list);
+	}
+
+	@GetMapping("/{id}/{tableName}/columns")
+	@Operation(summary = "获取字段信息")
+	public Result<List<ColumnDescriptionVo>> columnInfo(@PathVariable Long id, @PathVariable String tableName) {
+		return Result.ok(dataDatabaseService.getColumnInfo(id, tableName));
+	}
+
+	@GetMapping("/middle-db/{tableName}/columns")
+	@Operation(summary = "获取中台库字段信息")
+	public Result<List<ColumnDescriptionVo>> middleDbClumnInfo(@PathVariable String tableName) {
+		return Result.ok(dataDatabaseService.middleDbClumnInfo(tableName));
+	}
+
+	@GetMapping("/{id}/{tableName}/sql-generation")
+	@Operation(summary = "获取sql信息")
+	public Result<SqlGenerationVo> getSqlGeneration(@PathVariable Long id, @PathVariable String tableName, String tableRemarks) {
+		return Result.ok(dataDatabaseService.getSqlGeneration(id, tableName, tableRemarks));
+	}
+
+	@GetMapping("/middle-db/{tableName}/sql-generation")
+	@Operation(summary = "获取中台库sql信息")
+	public Result<SqlGenerationVo> getMiddleDbSqlGeneration(@PathVariable String tableName, String tableRemarks) {
+		return Result.ok(dataDatabaseService.getMiddleDbSqlGeneration(tableName, tableRemarks));
+	}
+
+}
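getTableDataBySql above runs an ad-hoc statement from the web SQL console; its body is the SqlConsole DTO added later in this commit (a sql string plus a client-side sqlKey). A hedged sketch of driving the same service method directly (the SqlConsoleSketch class, the database id, and the SQL text are illustrative, not from the commit):

```java
package net.srt.controller;

import net.srt.dto.SqlConsole;
import net.srt.service.DataDatabaseService;
import net.srt.vo.SchemaTableDataVo;

// Sketch: exercising the SQL-console endpoint's service method directly.
public class SqlConsoleSketch {

	private final DataDatabaseService dataDatabaseService;

	public SqlConsoleSketch(DataDatabaseService dataDatabaseService) {
		this.dataDatabaseService = dataDatabaseService;
	}

	public SchemaTableDataVo preview() {
		SqlConsole console = new SqlConsole();              // @Data generates the setters
		console.setSql("select * from ods_user limit 10");  // illustrative query
		console.setSqlKey(System.currentTimeMillis());      // client-side key for the console tab
		return dataDatabaseService.getTableDataBySql(12, console); // 12 = illustrative database id
	}
}
```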
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataFileCategoryController.java b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataFileCategoryController.java
new file mode 100644
index 0000000..0ea9ee9
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataFileCategoryController.java
@@ -0,0 +1,78 @@
+package net.srt.controller;
+
+import io.swagger.v3.oas.annotations.Operation;
+import io.swagger.v3.oas.annotations.tags.Tag;
+import lombok.AllArgsConstructor;
+import net.srt.entity.DataFileCategoryEntity;
+import net.srt.framework.common.utils.BeanUtil;
+import net.srt.framework.common.utils.Result;
+import net.srt.framework.common.utils.TreeNodeVo;
+import net.srt.service.DataFileCategoryService;
+import net.srt.vo.DataFileCategoryVO;
+import org.springframework.security.access.prepost.PreAuthorize;
+import org.springframework.web.bind.annotation.DeleteMapping;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.PathVariable;
+import org.springframework.web.bind.annotation.PostMapping;
+import org.springframework.web.bind.annotation.PutMapping;
+import org.springframework.web.bind.annotation.RequestBody;
+import org.springframework.web.bind.annotation.RequestMapping;
+import org.springframework.web.bind.annotation.RestController;
+
+import javax.validation.Valid;
+import java.util.List;
+
+/**
+ * File category (group) table
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-11-12
+ */
+@RestController
+@RequestMapping("fileCategory")
+@Tag(name = "文件分组表")
+@AllArgsConstructor
+public class DataFileCategoryController {
+	private final DataFileCategoryService dataFileCategoryService;
+
+	@GetMapping
+	@Operation(summary = "查询文件分组树")
+	public Result<List<TreeNodeVo>> listTree() {
+		return Result.ok(dataFileCategoryService.listTree());
+	}
+
+	@GetMapping("/{id}")
+	@Operation(summary = "根据id获取")
+	public Result<TreeNodeVo> getById(@PathVariable Integer id) {
+		DataFileCategoryEntity entity = dataFileCategoryService.getById(id);
+		TreeNodeVo nodeVo = BeanUtil.copyProperties(entity, TreeNodeVo::new);
+		nodeVo.setLabel(entity.getName());
+		// parentPath = path with the last "/segment" stripped; null for root nodes
+		nodeVo.setParentPath(entity.getPath().contains("/") ? entity.getPath().substring(0, entity.getPath().lastIndexOf("/")) : null);
+		return Result.ok(nodeVo);
+	}
+
+	@PostMapping
+	@Operation(summary = "保存")
+	@PreAuthorize("hasAuthority('data-integrate:fileCategory:save')")
+	public Result save(@RequestBody DataFileCategoryVO vo) {
+		dataFileCategoryService.save(vo);
+		return Result.ok();
+	}
+
+	@PutMapping
+	@Operation(summary = "修改")
+	@PreAuthorize("hasAuthority('data-integrate:fileCategory:update')")
+	public Result update(@RequestBody @Valid DataFileCategoryVO vo) {
+		dataFileCategoryService.update(vo);
+		return Result.ok();
+	}
+
+	@DeleteMapping
+	@Operation(summary = "删除")
+	@PreAuthorize("hasAuthority('data-integrate:fileCategory:delete')")
+	public Result delete(Long id) {
+		dataFileCategoryService.delete(id);
+		return Result.ok();
+	}
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataFileController.java b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataFileController.java
new file mode 100644
index 0000000..b2c78fa
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataFileController.java
@@ -0,0 +1,84 @@
+package net.srt.controller;
+
+import io.swagger.v3.oas.annotations.Operation;
+import io.swagger.v3.oas.annotations.tags.Tag;
+import lombok.AllArgsConstructor;
+import net.srt.convert.DataFileConvert;
+import net.srt.entity.DataFileEntity;
+import net.srt.framework.common.page.PageResult;
+import net.srt.framework.common.utils.Result;
+import net.srt.service.DataFileService;
+import net.srt.query.DataFileQuery;
+import net.srt.vo.DataFileVO;
+import org.springframework.security.access.prepost.PreAuthorize;
+import org.springframework.web.bind.annotation.*;
+
+import javax.validation.Valid;
+import java.util.List;
+
+/**
+ * File table
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-11-16
+ */
+@RestController
+@RequestMapping("file")
+@Tag(name = "文件表")
+@AllArgsConstructor
+public class DataFileController {
+	private final DataFileService dataFileService;
+
+	@GetMapping("page")
+	@Operation(summary = "分页")
+	@PreAuthorize("hasAuthority('data-integrate:file:page')")
+	public Result<PageResult<DataFileVO>> page(@Valid DataFileQuery query) {
+		PageResult<DataFileVO> page = dataFileService.page(query);
+		return Result.ok(page);
+	}
+
+	@GetMapping("page-resource")
+	@Operation(summary = "根据resourceId分页获取")
+	public Result<PageResult<DataFileVO>> pageResource(@Valid DataFileQuery query) {
+		PageResult<DataFileVO> page = dataFileService.pageResource(query);
+		return Result.ok(page);
+	}
+
+	@GetMapping("{id}")
+	@Operation(summary = "信息")
+	@PreAuthorize("hasAuthority('data-integrate:file:info')")
+	public Result<DataFileVO> get(@PathVariable("id") Long id) {
+		DataFileEntity entity = dataFileService.getById(id);
+		return Result.ok(DataFileConvert.INSTANCE.convert(entity));
+	}
+
+	@PostMapping
+	@Operation(summary = "保存")
+	@PreAuthorize("hasAuthority('data-integrate:file:save')")
+	public Result save(@RequestBody DataFileVO vo) {
+		dataFileService.save(vo);
+		return Result.ok();
+	}
+
+	@PutMapping
+	@Operation(summary = "修改")
+	@PreAuthorize("hasAuthority('data-integrate:file:update')")
+	public Result update(@RequestBody @Valid DataFileVO vo) {
+		dataFileService.update(vo);
+		return Result.ok();
+	}
+
+	@DeleteMapping
+	@Operation(summary = "删除")
+	@PreAuthorize("hasAuthority('data-integrate:file:delete')")
+	public Result delete(@RequestBody List<Long> idList) {
+		dataFileService.delete(idList);
+		return Result.ok();
+	}
+}
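DataLayerController below only exposes page/info/update: the layer set itself (ods/dwd/dim/dws/ads) is fixed, and each row carries the table prefix applied when ODS tables are generated. A small sketch of that naming convention (the LayerNamingSketch class and the source table name are illustrative; DataHouseLayer is the constants enum added above):

```java
package net.srt.constants;

// Sketch: composing a warehouse table name from a layer's table prefix.
public class LayerNamingSketch {
	public static void main(String[] args) {
		String sourceTable = "user_info"; // illustrative source table
		String odsTable = DataHouseLayer.ODS.getTablePrefix() + sourceTable;
		System.out.println(odsTable);     // ods_user_info
	}
}
```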
get(@PathVariable("id") Long id){ + DataLayerEntity entity = dataLayerService.getById(id); + + return Result.ok(DataLayerConvert.INSTANCE.convert(entity)); + } + + /*@PostMapping + @Operation(summary = "保存") + @PreAuthorize("hasAuthority('data-integrate:layer:save')") + public Result save(@RequestBody DataLayerVO vo){ + dataLayerService.save(vo); + + return Result.ok(); + }*/ + + @PutMapping + @Operation(summary = "修改") + @PreAuthorize("hasAuthority('data-integrate:layer:update')") + public Result update(@RequestBody @Valid DataLayerVO vo){ + dataLayerService.update(vo); + + return Result.ok(); + } + + /*@DeleteMapping + @Operation(summary = "删除") + @PreAuthorize("hasAuthority('data-integrate:layer:delete')") + public Result delete(@RequestBody List idList){ + dataLayerService.delete(idList); + + return Result.ok(); + }*/ +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataOdsController.java b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataOdsController.java new file mode 100644 index 0000000..87ea9ee --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataOdsController.java @@ -0,0 +1,74 @@ +package net.srt.controller; + +import io.swagger.v3.oas.annotations.Operation; +import io.swagger.v3.oas.annotations.tags.Tag; +import lombok.AllArgsConstructor; +import net.srt.convert.DataOdsConvert; +import net.srt.dto.SqlConsole; +import net.srt.entity.DataOdsEntity; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.Result; +import net.srt.query.DataOdsQuery; +import net.srt.service.DataOdsService; +import net.srt.vo.ColumnDescriptionVo; +import net.srt.vo.DataOdsVO; +import net.srt.vo.SchemaTableDataVo; +import org.springframework.web.bind.annotation.DeleteMapping; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.RequestBody; +import org.springframework.web.bind.annotation.RequestMapping; +import org.springframework.web.bind.annotation.RestController; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; + +import javax.validation.Valid; +import java.util.List; + +/** + * 数据集成-贴源数据 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-07 + */ +@RestController +@RequestMapping("ods") +@Tag(name = "数据集成-贴源数据") +@AllArgsConstructor +public class DataOdsController { + private final DataOdsService dataOdsService; + + @GetMapping("page") + @Operation(summary = "分页") + public Result> page(@Valid DataOdsQuery query) { + PageResult page = dataOdsService.page(query); + return Result.ok(page); + } + + @GetMapping("{id}") + @Operation(summary = "信息") + public Result get(@PathVariable("id") Long id) { + DataOdsEntity entity = dataOdsService.getById(id); + + return Result.ok(DataOdsConvert.INSTANCE.convert(entity)); + } + + @GetMapping("/{id}/{tableName}/column-info") + @Operation(summary = "获取字段信息") + public Result> columnInfo(@PathVariable Long id, @PathVariable String tableName) { + return Result.ok(dataOdsService.getColumnInfo(id, tableName)); + } + + @GetMapping("/{id}/{tableName}/table-data") + @Operation(summary = "获取表数据") + public Result getTableData(@PathVariable Long id, @PathVariable String tableName) { + return Result.ok(dataOdsService.getTableData(id, tableName)); + } + + @DeleteMapping + @Operation(summary = "删除") + public Result delete(@RequestBody List idList) { + 
dataOdsService.delete(idList); + return Result.ok(); + } +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataProjectController.java b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataProjectController.java new file mode 100644 index 0000000..a891304 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/controller/DataProjectController.java @@ -0,0 +1,120 @@ +package net.srt.controller; + +import io.swagger.v3.oas.annotations.Operation; +import io.swagger.v3.oas.annotations.tags.Tag; +import lombok.AllArgsConstructor; +import net.srt.convert.DataProjectConvert; +import net.srt.entity.DataProjectEntity; +import net.srt.framework.common.cache.RedisCache; +import net.srt.framework.common.cache.RedisKeys; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.Result; +import net.srt.framework.security.cache.TokenStoreCache; +import net.srt.framework.security.utils.TokenUtils; +import net.srt.query.DataProjectQuery; +import net.srt.service.DataProjectService; +import net.srt.vo.DataDatabaseVO; +import net.srt.vo.DataProjectVO; +import org.springframework.security.access.prepost.PreAuthorize; +import org.springframework.web.bind.annotation.DeleteMapping; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PathVariable; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.PutMapping; +import org.springframework.web.bind.annotation.RequestBody; +import org.springframework.web.bind.annotation.RequestMapping; +import org.springframework.web.bind.annotation.RestController; + +import javax.servlet.http.HttpServletRequest; +import javax.validation.Valid; +import java.util.List; + +/** + * 数据项目 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-09-27 + */ +@RestController +@RequestMapping("project") +@Tag(name = "数据项目") +@AllArgsConstructor +public class DataProjectController { + private final DataProjectService dataProjectService; + private final TokenStoreCache storeCache; + + @GetMapping("page") + @Operation(summary = "分页") + @PreAuthorize("hasAuthority('data-integrate:project:page')") + public Result> page(@Valid DataProjectQuery query) { + PageResult page = dataProjectService.page(query); + + return Result.ok(page); + } + + @GetMapping("{id}") + @Operation(summary = "信息") + @PreAuthorize("hasAuthority('data-integrate:project:info')") + public Result get(@PathVariable("id") Long id) { + DataProjectEntity entity = dataProjectService.getById(id); + + return Result.ok(DataProjectConvert.INSTANCE.convert(entity)); + } + + @PostMapping + @Operation(summary = "保存") + @PreAuthorize("hasAuthority('data-integrate:project:save')") + public Result save(@RequestBody DataProjectVO vo) { + dataProjectService.save(vo); + + return Result.ok(); + } + + @PutMapping + @Operation(summary = "修改") + @PreAuthorize("hasAuthority('data-integrate:project:update')") + public Result update(@RequestBody @Valid DataProjectVO vo) { + dataProjectService.update(vo); + + return Result.ok(); + } + + @DeleteMapping + @Operation(summary = "删除") + @PreAuthorize("hasAuthority('data-integrate:project:delete')") + public Result delete(@RequestBody List idList) { + dataProjectService.delete(idList); + + return Result.ok(); + } + + @PostMapping("adduser/{projectId}") + @Operation(summary = "添加成员") + @PreAuthorize("hasAuthority('data-integrate:project:adduser')") + public Result addUser(@PathVariable Long projectId, @RequestBody List 
userIds) { + dataProjectService.addUser(projectId, userIds); + return Result.ok(); + } + + @GetMapping("/current-user-projects") + @Operation(summary = "获取当前用户拥有的项目") + public Result> listProjects() { + return Result.ok(dataProjectService.listProjects()); + } + + @PutMapping("/change-project/{projectId}") + @Operation(summary = "切换项目(租户)") + public Result changeProject(@PathVariable Long projectId, HttpServletRequest request) { + String accessToken = TokenUtils.getAccessToken(request); + //把当前用户的租户id存储到redis缓存中,24小时过期 + storeCache.saveProjectId(accessToken, projectId); + return Result.ok(); + } + + @PostMapping("/test-online") + @Operation(summary = "测试连接") + public Result testOnline(@RequestBody @Valid DataProjectVO vo) { + dataProjectService.testOnline(vo); + return Result.ok(); + } +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessConvert.java new file mode 100644 index 0000000..f443247 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessConvert.java @@ -0,0 +1,29 @@ +package net.srt.convert; + +import net.srt.api.module.data.integrate.dto.DataAccessDto; +import net.srt.entity.DataAccessEntity; +import net.srt.vo.DataAccessVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** + * 数据集成-数据接入 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-24 + */ +@Mapper +public interface DataAccessConvert { + DataAccessConvert INSTANCE = Mappers.getMapper(DataAccessConvert.class); + + DataAccessEntity convert(DataAccessVO vo); + + DataAccessVO convert(DataAccessEntity entity); + + DataAccessDto convertDto(DataAccessEntity entity); + + List convertList(List list); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessTaskConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessTaskConvert.java new file mode 100644 index 0000000..fc7f263 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessTaskConvert.java @@ -0,0 +1,31 @@ +package net.srt.convert; + +import net.srt.api.module.data.integrate.dto.DataAccessTaskDto; +import net.srt.entity.DataAccessTaskEntity; +import net.srt.vo.DataAccessTaskVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 数据接入任务记录 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Mapper +public interface DataAccessTaskConvert { + DataAccessTaskConvert INSTANCE = Mappers.getMapper(DataAccessTaskConvert.class); + + DataAccessTaskEntity convert(DataAccessTaskVO vo); + + DataAccessTaskEntity convertByDto(DataAccessTaskDto dto); + + DataAccessTaskVO convert(DataAccessTaskEntity entity); + + DataAccessTaskDto convertDto(DataAccessTaskEntity entity); + + List convertList(List list); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessTaskDetailConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessTaskDetailConvert.java new file mode 100644 index 0000000..e874439 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataAccessTaskDetailConvert.java @@ -0,0 +1,26 @@ +package net.srt.convert; + +import net.srt.entity.DataAccessTaskDetailEntity; +import net.srt.vo.DataAccessTaskDetailVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 数据接入-同步记录详情 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 
2022-10-28 +*/ +@Mapper +public interface DataAccessTaskDetailConvert { + DataAccessTaskDetailConvert INSTANCE = Mappers.getMapper(DataAccessTaskDetailConvert.class); + + DataAccessTaskDetailEntity convert(DataAccessTaskDetailVO vo); + + DataAccessTaskDetailVO convert(DataAccessTaskDetailEntity entity); + + List convertList(List list); + +} \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataDatabaseConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataDatabaseConvert.java new file mode 100644 index 0000000..b4836b2 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataDatabaseConvert.java @@ -0,0 +1,29 @@ +package net.srt.convert; + +import net.srt.api.module.data.integrate.dto.DataDatabaseDto; +import net.srt.entity.DataDatabaseEntity; +import net.srt.vo.DataDatabaseVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 数据集成-数据库管理 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-09 +*/ +@Mapper +public interface DataDatabaseConvert { + DataDatabaseConvert INSTANCE = Mappers.getMapper(DataDatabaseConvert.class); + + DataDatabaseEntity convert(DataDatabaseVO vo); + + DataDatabaseVO convert(DataDatabaseEntity entity); + + DataDatabaseDto convertDto(DataDatabaseEntity entity); + + List convertList(List list); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataFileCategoryConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataFileCategoryConvert.java new file mode 100644 index 0000000..3c7abf6 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataFileCategoryConvert.java @@ -0,0 +1,26 @@ +package net.srt.convert; + +import net.srt.entity.DataFileCategoryEntity; +import net.srt.vo.DataFileCategoryVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 文件分组表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-12 +*/ +@Mapper +public interface DataFileCategoryConvert { + DataFileCategoryConvert INSTANCE = Mappers.getMapper(DataFileCategoryConvert.class); + + DataFileCategoryEntity convert(DataFileCategoryVO vo); + + DataFileCategoryVO convert(DataFileCategoryEntity entity); + + List convertList(List list); + +} \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataFileConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataFileConvert.java new file mode 100644 index 0000000..f2f22ef --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataFileConvert.java @@ -0,0 +1,26 @@ +package net.srt.convert; + +import net.srt.entity.DataFileEntity; +import net.srt.vo.DataFileVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 文件表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-16 +*/ +@Mapper +public interface DataFileConvert { + DataFileConvert INSTANCE = Mappers.getMapper(DataFileConvert.class); + + DataFileEntity convert(DataFileVO vo); + + DataFileVO convert(DataFileEntity entity); + + List convertList(List list); + +} \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataLayerConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataLayerConvert.java new file mode 100644 index 0000000..61f1a6c --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataLayerConvert.java @@ -0,0 +1,26 @@ 
+package net.srt.convert; + +import net.srt.entity.DataLayerEntity; +import net.srt.vo.DataLayerVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 数仓分层 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Mapper +public interface DataLayerConvert { + DataLayerConvert INSTANCE = Mappers.getMapper(DataLayerConvert.class); + + DataLayerEntity convert(DataLayerVO vo); + + DataLayerVO convert(DataLayerEntity entity); + + List convertList(List list); + +} \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataOdsConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataOdsConvert.java new file mode 100644 index 0000000..0886c97 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataOdsConvert.java @@ -0,0 +1,29 @@ +package net.srt.convert; + +import net.srt.api.module.data.integrate.dto.DataOdsDto; +import net.srt.entity.DataOdsEntity; +import net.srt.vo.DataOdsVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** + * 数据集成-贴源数据 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-07 + */ +@Mapper +public interface DataOdsConvert { + DataOdsConvert INSTANCE = Mappers.getMapper(DataOdsConvert.class); + + DataOdsEntity convert(DataOdsVO vo); + + DataOdsEntity convertByDto(DataOdsDto dto); + + DataOdsVO convert(DataOdsEntity entity); + + List convertList(List list); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataProjectConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataProjectConvert.java new file mode 100644 index 0000000..601d361 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataProjectConvert.java @@ -0,0 +1,26 @@ +package net.srt.convert; + +import net.srt.entity.DataProjectEntity; +import net.srt.vo.DataProjectVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 数据项目 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-09-27 +*/ +@Mapper +public interface DataProjectConvert { + DataProjectConvert INSTANCE = Mappers.getMapper(DataProjectConvert.class); + + DataProjectEntity convert(DataProjectVO vo); + + DataProjectVO convert(DataProjectEntity entity); + + List convertList(List list); + +} \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataProjectUserRelConvert.java b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataProjectUserRelConvert.java new file mode 100644 index 0000000..4f4780b --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/convert/DataProjectUserRelConvert.java @@ -0,0 +1,26 @@ +package net.srt.convert; + +import net.srt.entity.DataProjectUserRelEntity; +import net.srt.vo.DataProjectUserRelVO; +import org.mapstruct.Mapper; +import org.mapstruct.factory.Mappers; + +import java.util.List; + +/** +* 项目用户关联表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Mapper +public interface DataProjectUserRelConvert { + DataProjectUserRelConvert INSTANCE = Mappers.getMapper(DataProjectUserRelConvert.class); + + DataProjectUserRelEntity convert(DataProjectUserRelVO vo); + + DataProjectUserRelVO convert(DataProjectUserRelEntity entity); + + List convertList(List list); + +} \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessDao.java 
new file mode 100644 index 0000000..cab61c5 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessDao.java @@ -0,0 +1,24 @@ +package net.srt.dao; + +import net.srt.entity.DataAccessEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; +import org.apache.ibatis.annotations.Param; + +import java.util.Date; + +/** +* 数据集成-数据接入 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Mapper +public interface DataAccessDao extends BaseDao { + + void updateStartInfo(@Param("dataAccessId") Long dataAccessId); + + void updateEndInfo(@Param("dataAccessId")Long dataAccessId, @Param("runStatus") Integer runStatus, @Param("nextRunTime") Date nextRunTime); + + void changeStatus(@Param("id") Long id, @Param("status") Integer status, @Param("releaseTime") Date releaseTime, @Param("releaseUserId") Long releaseUserId); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessTaskDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessTaskDao.java new file mode 100644 index 0000000..526a09f --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessTaskDao.java @@ -0,0 +1,16 @@ +package net.srt.dao; + +import net.srt.entity.DataAccessTaskEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; + +/** +* 数据接入任务记录 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Mapper +public interface DataAccessTaskDao extends BaseDao { + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessTaskDetailDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessTaskDetailDao.java new file mode 100644 index 0000000..ff70829 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataAccessTaskDetailDao.java @@ -0,0 +1,16 @@ +package net.srt.dao; + +import net.srt.entity.DataAccessTaskDetailEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; + +/** +* 数据接入-同步记录详情 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-28 +*/ +@Mapper +public interface DataAccessTaskDetailDao extends BaseDao { + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataDatabaseDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataDatabaseDao.java new file mode 100644 index 0000000..f9f71b4 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataDatabaseDao.java @@ -0,0 +1,18 @@ +package net.srt.dao; + +import net.srt.entity.DataDatabaseEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; +import org.apache.ibatis.annotations.Param; + +/** +* 数据集成-数据库管理 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-09 +*/ +@Mapper +public interface DataDatabaseDao extends BaseDao { + + void changeStatusById(@Param("id") Long id, @Param("status") Integer status); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataFileCategoryDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataFileCategoryDao.java new file mode 100644 index 0000000..7483c71 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataFileCategoryDao.java @@ -0,0 +1,16 @@ +package net.srt.dao; + +import net.srt.entity.DataFileCategoryEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; + +/** +* 文件分组表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-12 +*/ +@Mapper +public interface 
DataFileCategoryDao extends BaseDao { + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataFileDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataFileDao.java new file mode 100644 index 0000000..01ac0ca --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataFileDao.java @@ -0,0 +1,20 @@ +package net.srt.dao; + +import net.srt.entity.DataFileEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; + +import java.util.List; +import java.util.Map; + +/** +* 文件表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-16 +*/ +@Mapper +public interface DataFileDao extends BaseDao { + + List getResourceList(Map params); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataLayerDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataLayerDao.java new file mode 100644 index 0000000..cf46258 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataLayerDao.java @@ -0,0 +1,16 @@ +package net.srt.dao; + +import net.srt.entity.DataLayerEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; + +/** +* 数仓分层 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Mapper +public interface DataLayerDao extends BaseDao { + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataOdsDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataOdsDao.java new file mode 100644 index 0000000..33678d5 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataOdsDao.java @@ -0,0 +1,16 @@ +package net.srt.dao; + +import net.srt.entity.DataOdsEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; + +/** +* 数据集成-贴源数据 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-07 +*/ +@Mapper +public interface DataOdsDao extends BaseDao { + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataProjectDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataProjectDao.java new file mode 100644 index 0000000..1770c81 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataProjectDao.java @@ -0,0 +1,27 @@ +package net.srt.dao; + +import net.srt.entity.DataProjectEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; +import org.apache.ibatis.annotations.Param; + +import java.util.List; +import java.util.Map; + +/** +* 数据项目 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-09-27 +*/ +@Mapper +public interface DataProjectDao extends BaseDao { + + /** + * 查看当前用户拥有的项目 + * @param userId + * @return + */ + List listProjects(@Param("userId") Long userId); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataProjectUserRelDao.java b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataProjectUserRelDao.java new file mode 100644 index 0000000..cd7289b --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dao/DataProjectUserRelDao.java @@ -0,0 +1,18 @@ +package net.srt.dao; + +import net.srt.entity.DataProjectUserRelEntity; +import net.srt.framework.mybatis.dao.BaseDao; +import org.apache.ibatis.annotations.Mapper; +import org.apache.ibatis.annotations.Param; + +/** +* 项目用户关联表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Mapper +public interface DataProjectUserRelDao extends BaseDao { + + DataProjectUserRelEntity getByProjectIdAndUserId(@Param("projectId") Long projectId, @Param("userId") Long 
userId); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dto/DataAccessClientDto.java b/srt-cloud-data-integrate/src/main/java/net/srt/dto/DataAccessClientDto.java new file mode 100644 index 0000000..1c2e507 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dto/DataAccessClientDto.java @@ -0,0 +1,67 @@ +package net.srt.dto; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.NoArgsConstructor; +import srt.cloud.framework.dbswitch.common.entity.PatternMapper; + +import java.util.List; + +/** + * @ClassName DataAccessDto + * @Author zrx + * @Date 2022/10/25 16:26 + */ +@Data +@Builder +@AllArgsConstructor +@NoArgsConstructor +@Schema(description = "数据接入任务前端dto") +public class DataAccessClientDto { + @Schema(description = "主键id") + private Long id; + @Schema(description = "任务名称") + private String taskName; + @Schema(description = "描述") + private String description; + @Schema(description = "项目id") + private Long projectId; + @Schema(description = "源端数据库id") + private Long sourceDatabaseId; + @Schema(description = "目的端数据库id") + private Long targetDatabaseId; + @Schema(description = "接入方式 1-ods接入 2-自定义接入") + private Integer accessMode; + @Schema(description = "任务类型") + private Integer taskType; + @Schema(description = "cron表达式") + private String cron; + @Schema(description = "包含表或排除表") + private Integer includeOrExclude; + @Schema(description = "源端选择的表") + private List sourceSelectedTables; + @Schema(description = "只创建表") + private boolean targetOnlyCreate; + @Schema(description = "同步已存在的表") + private boolean targetSyncExit; + @Schema(description = "同步前是否删除表") + private boolean targetDropTable; + @Schema(description = "开启增量变更同步") + private boolean targetDataSync; + @Schema(description = "同步索引") + private boolean targetIndexCreate; + @Schema(description = "表名字段名转小写") + private boolean targetLowerCase; + @Schema(description = "表名字段名转大写") + private boolean targetUpperCase; + @Schema(description = "主键递增") + private boolean targetAutoIncrement; + @Schema(description = "批处理量") + private Integer batchSize; + @Schema(description = "表名映射") + private List tableNameMapper; + @Schema(description = "字段名名映射") + private List columnNameMapper; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dto/PreviewMapDto.java b/srt-cloud-data-integrate/src/main/java/net/srt/dto/PreviewMapDto.java new file mode 100644 index 0000000..30b4837 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/dto/PreviewMapDto.java @@ -0,0 +1,31 @@ +package net.srt.dto; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import srt.cloud.framework.dbswitch.common.entity.PatternMapper; + +import java.util.List; + +/** + * @ClassName PreviewMapDto + * @Author zrx + * @Date 2022/10/27 9:46 + */ +@Data +public class PreviewMapDto { + + private Long sourceDatabaseId; + private Integer includeOrExclude; + + private List sourceSelectedTables; + @Schema(description = "表名映射") + private List tableNameMapper; + + private String preiveTableName; + @Schema(description = "字段名映射") + private List columnNameMapper; + + private boolean targetLowerCase; + private boolean targetUpperCase; + private String tablePrefix; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/dto/SqlConsole.java b/srt-cloud-data-integrate/src/main/java/net/srt/dto/SqlConsole.java new file mode 100644 index 0000000..b39d193 --- /dev/null +++ 
b/srt-cloud-data-integrate/src/main/java/net/srt/dto/SqlConsole.java @@ -0,0 +1,14 @@ +package net.srt.dto; + +import lombok.Data; + +/** + * @ClassName sqlConsole + * @Author zrx + * @Date 2022/10/24 9:50 + */ +@Data +public class SqlConsole { + private String sql; + private Long sqlKey; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessEntity.java new file mode 100644 index 0000000..a1be9d2 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessEntity.java @@ -0,0 +1,119 @@ +package net.srt.entity; + + +import com.baomidou.mybatisplus.annotation.TableField; +import com.baomidou.mybatisplus.annotation.TableName; +import com.baomidou.mybatisplus.extension.handlers.JacksonTypeHandler; +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.EqualsAndHashCode; +import lombok.NoArgsConstructor; +import lombok.experimental.SuperBuilder; +import net.srt.framework.mybatis.entity.BaseEntity; +import srt.cloud.framework.dbswitch.data.config.DbswichProperties; + +import java.util.Date; + +/** + * 数据集成-数据接入 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-24 + */ +@EqualsAndHashCode(callSuper = true) +@Data +@SuperBuilder +@AllArgsConstructor +@NoArgsConstructor +@TableName(value = "data_access", autoResultMap = true) +public class DataAccessEntity extends BaseEntity { + + /** + * 任务名称 + */ + private String taskName; + + /** + * 描述 + */ + private String description; + + /** + * 项目id + */ + private Long projectId; + + /** + * 源端数据库id + */ + private Long sourceDatabaseId; + + /** + * 目的端数据库id + */ + private Long targetDatabaseId; + + /** + * 接入方式 1-ods接入 2-自定义接入 + */ + private Integer accessMode; + + /** + * 任务类型 + */ + private Integer taskType; + + /** + * cron表达式 + */ + private String cron; + + /** + * 发布状态 + */ + private Integer status; + + /** + * 最新运行状态 + */ + private Integer runStatus; + + /** + * 数据接入基础配置json + */ + @TableField(typeHandler = JacksonTypeHandler.class) + private DbswichProperties dataAccessJson; + + /** + * 最近开始时间 + */ + private Date startTime; + + /** + * 最近结束时间 + */ + private Date endTime; + + /** + * 发布时间 + */ + private Date releaseTime; + + /** + * 备注 + */ + private String note; + + /** + * 发布人id + */ + private Long releaseUserId; + + /** + * 下次执行时间 + */ + private Date nextRunTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessTaskDetailEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessTaskDetailEntity.java new file mode 100644 index 0000000..339e591 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessTaskDetailEntity.java @@ -0,0 +1,96 @@ +package net.srt.entity; + +import com.baomidou.mybatisplus.annotation.TableId; +import com.baomidou.mybatisplus.annotation.TableName; +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.NoArgsConstructor; + +import java.util.Date; + +/** + * 数据接入-同步记录详情 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-28 + */ + +@Data +@AllArgsConstructor +@NoArgsConstructor +@Builder +@TableName("data_access_task_detail") +public class DataAccessTaskDetailEntity { + /** + * 主键id + */ + @TableId + private Long id; + + /** + * 数据接入id + */ + private Long dataAccessId; + + /** + * 数据接入任务id + */ + private Long taskId; + + /** + * 源端库名 + */ + private String sourceSchemaName; + + /** + * 源端表名 + */ + private String 
sourceTableName; + + /** + * 目的端库名 + */ + private String targetSchemaName; + + /** + * 目的端表名 + */ + private String targetTableName; + + /** + * 同步记录数 + */ + private Long syncCount; + + /** + * 同步数据量 + */ + private String syncBytes; + + /** + * 是否成功 0-否 1-是 + */ + private Integer ifSuccess; + + /** + * 失败信息 + */ + private String errorMsg; + + /** + * 成功信息 + */ + private String successMsg; + + /** + * 项目id + */ + private Long projectId; + + /** + * 创建时间 + */ + private Date createTime; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessTaskEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessTaskEntity.java new file mode 100644 index 0000000..27b4320 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataAccessTaskEntity.java @@ -0,0 +1,79 @@ +package net.srt.entity; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.EqualsAndHashCode; +import com.baomidou.mybatisplus.annotation.*; +import lombok.NoArgsConstructor; +import lombok.experimental.SuperBuilder; +import net.srt.framework.mybatis.entity.BaseEntity; + +import java.util.Date; + +/** + * 数据接入任务记录 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-26 + */ +@EqualsAndHashCode(callSuper=true) +@Data +@SuperBuilder +@TableName("data_access_task") +@AllArgsConstructor +@NoArgsConstructor +public class DataAccessTaskEntity extends BaseEntity { + + /** + * 数据接入任务id + */ + private Integer dataAccessId; + + /** + * 运行状态( 1-等待中 2-运行中 3-正常结束 4-异常结束) + */ + private Integer runStatus; + + /** + * 开始时间 + */ + private Date startTime; + + /** + * 结束时间 + */ + private Date endTime; + + private String realTimeLog; + /** + * 错误信息 + */ + private String errorInfo; + + /** + * 更新数据量 + */ + private Long dataCount; + + /** + * 成功表数量 + */ + private Long tableSuccessCount; + + /** + * 失败表数量 + */ + private Long tableFailCount; + + /** + * 更新大小 + */ + private String byteCount; + + /** + * 项目id + */ + private Long projectId; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataDatabaseEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataDatabaseEntity.java new file mode 100644 index 0000000..543c949 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataDatabaseEntity.java @@ -0,0 +1,82 @@ +package net.srt.entity; + +import lombok.Data; +import lombok.EqualsAndHashCode; +import com.baomidou.mybatisplus.annotation.*; +import net.srt.framework.mybatis.entity.BaseEntity; + +import java.util.Date; + +/** + * 数据集成-数据库管理 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-09 + */ +@EqualsAndHashCode(callSuper = false) +@Data +@TableName(value = "data_database", autoResultMap = true) +public class DataDatabaseEntity extends BaseEntity { + + /** + * 名称 + */ + private String name; + + /** + * 数据库类型 + */ + private Integer databaseType; + + /** + * 主机ip + */ + private String databaseIp; + + /** + * 端口 + */ + private String databasePort; + + /** + * 库名 + */ + private String databaseName; + + /** + * 状态 + */ + private Integer status; + + /** + * 用户名 + */ + private String userName; + + /** + * 密码 + */ + private String password; + + /** + * 是否支持实时接入 + */ + private Integer isRtApprove; + + /** + * 不支持实时接入原因 + */ + private String noRtReason; + + /** + * jdbcUrl + */ + private String jdbcUrl; + + /** + * 所属项目 + */ + private Long projectId; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataFileCategoryEntity.java 
b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataFileCategoryEntity.java new file mode 100644 index 0000000..f99bd34 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataFileCategoryEntity.java @@ -0,0 +1,59 @@ +package net.srt.entity; + +import com.baomidou.mybatisplus.annotation.TableName; +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.EqualsAndHashCode; +import lombok.NoArgsConstructor; +import lombok.experimental.SuperBuilder; +import net.srt.framework.mybatis.entity.BaseEntity; + +/** + * 文件分组表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-12 + */ +@EqualsAndHashCode(callSuper = true) +@SuperBuilder +@Data +@AllArgsConstructor +@NoArgsConstructor +@TableName("data_file_category") +public class DataFileCategoryEntity extends BaseEntity { + + /** + * 父级id(顶级为0) + */ + private Long parentId; + + /** + * 分组名称 + */ + private String name; + + /** + * 分组序号 + */ + private Integer orderNo; + + /** + * 描述 + */ + private String description; + + /** + * 分组路径 + */ + private String path; + + private Integer type; + + /** + * 项目id + */ + private Long projectId; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataFileEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataFileEntity.java new file mode 100644 index 0000000..028d42c --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataFileEntity.java @@ -0,0 +1,56 @@ +package net.srt.entity; + +import com.baomidou.mybatisplus.annotation.TableName; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.mybatis.entity.BaseEntity; + +/** + * 文件表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-16 + */ +@EqualsAndHashCode(callSuper = false) +@Data +@TableName("data_file") +public class DataFileEntity extends BaseEntity { + + /** + * 名称 + */ + private String name; + + /** + * 所属分组id + */ + private Integer fileCategoryId; + + /** + * 文件类型 + */ + private String type; + + /** + * 文件url地址 + */ + private String fileUrl; + + /** + * 描述 + */ + private String description; + + /** + * 大小 + */ + private Long size; + + + /** + * 项目id + */ + private Long projectId; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataLayerEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataLayerEntity.java new file mode 100644 index 0000000..0f5a36c --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataLayerEntity.java @@ -0,0 +1,47 @@ +package net.srt.entity; + +import lombok.Data; +import lombok.EqualsAndHashCode; +import com.baomidou.mybatisplus.annotation.*; +import net.srt.framework.mybatis.entity.BaseEntity; + +import java.util.Date; + +/** + * 数仓分层 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-08 + */ +@EqualsAndHashCode(callSuper=false) +@Data +@TableName("data_layer") +public class DataLayerEntity extends BaseEntity { + + /** + * 分层英文名称 + */ + private String name; + + /** + * 分层中文名称 + */ + private String cnName; + + /** + * 分层描述 + */ + private String note; + + /** + * 表名前缀 + */ + private String tablePrefix; + + + + + + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataOdsEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataOdsEntity.java new file mode 100644 index 0000000..df06e43 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataOdsEntity.java @@ -0,0 +1,52 @@ +package 
net.srt.entity;
+
+import lombok.Data;
+import lombok.EqualsAndHashCode;
+import com.baomidou.mybatisplus.annotation.*;
+import net.srt.framework.mybatis.entity.BaseEntity;
+
+import java.util.Date;
+
+/**
+ * Data integration - source-aligned (ODS) data
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-11-07
+ */
+@EqualsAndHashCode(callSuper=false)
+@Data
+@TableName("data_ods")
+public class DataOdsEntity extends BaseEntity {
+
+	/**
+	 * Data access (ingestion task) id
+	 */
+	private Long dataAccessId;
+
+	/**
+	 * Table name
+	 */
+	private String tableName;
+
+	/**
+	 * Remarks
+	 */
+	private String remarks;
+
+	/**
+	 * Project id
+	 */
+	private Long projectId;
+
+	/**
+	 * Last sync time
+	 */
+	private Date recentlySyncTime;
+
+
+
+
+
+
+
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataProjectEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataProjectEntity.java
new file mode 100644
index 0000000..3df8117
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataProjectEntity.java
@@ -0,0 +1,67 @@
+package net.srt.entity;
+
+import com.baomidou.mybatisplus.annotation.TableName;
+import lombok.Data;
+import lombok.EqualsAndHashCode;
+import net.srt.framework.mybatis.entity.BaseEntity;
+
+
+/**
+ * Data project (tenant)
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-09-27
+ */
+@EqualsAndHashCode(callSuper = false)
+@Data
+@TableName(value = "data_project", autoResultMap = true)
+public class DataProjectEntity extends BaseEntity {
+
+	/**
+	 * Project name
+	 */
+	private String name;
+
+	/**
+	 * English name
+	 */
+	private String engName;
+
+	/**
+	 * Description
+	 */
+	private String description;
+
+	/**
+	 * Status
+	 */
+	private Integer status;
+
+	/**
+	 * Person in charge
+	 */
+	private String dutyPerson;
+
+	/**
+	 * Database name
+	 */
+	private String dbName;
+
+	/**
+	 * Database URL
+	 */
+	private String dbUrl;
+
+	/**
+	 * Database username
+	 */
+	private String dbUsername;
+
+	/**
+	 * Database password
+	 */
+	private String dbPassword;
+
+	private Integer dbType;
+
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataProjectUserRelEntity.java b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataProjectUserRelEntity.java
new file mode 100644
index 0000000..25d227a
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/entity/DataProjectUserRelEntity.java
@@ -0,0 +1,34 @@
+package net.srt.entity;
+
+import lombok.Data;
+import lombok.EqualsAndHashCode;
+import com.baomidou.mybatisplus.annotation.*;
+import net.srt.framework.mybatis.entity.BaseEntity;
+
+import java.util.Date;
+
+/**
+ * Project-user relation table
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-10-08
+ */
+@EqualsAndHashCode(callSuper=false)
+@Data
+@TableName("data_project_user_rel")
+public class DataProjectUserRelEntity extends BaseEntity {
+
+	/**
+	 * Project id
+	 */
+	private Long dataProjectId;
+
+	/**
+	 * User id
+	 */
+	private Long userId;
+
+
+
+
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/init/BusinessInitializer.java b/srt-cloud-data-integrate/src/main/java/net/srt/init/BusinessInitializer.java
new file mode 100644
index 0000000..e6a13ee
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/init/BusinessInitializer.java
@@ -0,0 +1,51 @@
+package net.srt.init;
+
+import lombok.RequiredArgsConstructor;
+import lombok.extern.slf4j.Slf4j;
+import net.srt.entity.DataProjectEntity;
+import net.srt.service.DataAccessTaskService;
+import net.srt.service.DataProjectService;
+import org.springframework.boot.ApplicationArguments;
+import org.springframework.boot.ApplicationRunner;
+import org.springframework.stereotype.Component;
+
+import java.util.List;
+
+/**
* @ClassName BusinessInitializer + * @Author zrx + * @Date 2022/11/27 12:14 + */ +@Slf4j +@RequiredArgsConstructor +@Component +public class BusinessInitializer implements ApplicationRunner { + + private final DataProjectService dataProjectService; + private final DataAccessTaskService accessTaskService; + + @Override + public void run(ApplicationArguments args) { + initProject(); + initScheduleMonitor(); + } + + private void initProject() { + log.info("init project cache"); + List projectEntities = dataProjectService.list(); + //把所有项目放入本地缓存 + for (DataProjectEntity project : projectEntities) { + dataProjectService.initDb(project); + } + log.info("init project cache end"); + } + + /** + * init task monitor + */ + private void initScheduleMonitor() { + //处理没执行完的同步任务 + accessTaskService.dealNotFinished(); + } + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessQuery.java new file mode 100644 index 0000000..6c1389e --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessQuery.java @@ -0,0 +1,33 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +/** +* 数据集成-数据接入查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "数据集成-数据接入查询") +public class DataAccessQuery extends Query { + @Schema(description = "任务名称") + private String taskName; + + @Schema(description = "项目id") + private Integer projectId; + + @Schema(description = "数据库id") + private Integer dataDatabaseId; + + @Schema(description = "发布状态") + private Integer status; + + @Schema(description = "最新运行状态") + private Integer runStatus; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessTaskDetailQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessTaskDetailQuery.java new file mode 100644 index 0000000..cce9cd3 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessTaskDetailQuery.java @@ -0,0 +1,24 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +/** +* 数据接入-同步记录详情查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-28 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "数据接入-同步记录详情查询") +public class DataAccessTaskDetailQuery extends Query { + @Schema(description = "是否成功 0-否 1-是") + private Integer ifSuccess; + private Long taskId; + private Long projectId; + private String tableName; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessTaskQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessTaskQuery.java new file mode 100644 index 0000000..f7f0c42 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataAccessTaskQuery.java @@ -0,0 +1,22 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +import java.util.Date; + +/** +* 数据接入任务记录查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "数据接入任务记录查询") +public class DataAccessTaskQuery extends Query { + private Long dataAccessId; 
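`BusinessInitializer` above warms the per-project datasource cache and fails over any sync tasks left unfinished by a crash, once the application context is fully up. When several `ApplicationRunner` beans coexist, Spring Boot executes them in `@Order` sequence; a hedged sketch in case ordering ever matters (the runner name below is hypothetical):

```java
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

// Hypothetical runner: @Order(1) makes it execute before runners with a
// higher order value, e.g. before BusinessInitializer if that were @Order(2).
@Component
@Order(1)
public class CacheWarmupRunner implements ApplicationRunner {

	@Override
	public void run(ApplicationArguments args) {
		// Warm caches here. Note that an exception thrown from run() aborts
		// application startup, so catch and log anything merely best-effort.
	}
}
```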
+ private Integer runStatus; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataDatabaseQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataDatabaseQuery.java new file mode 100644 index 0000000..d870086 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataDatabaseQuery.java @@ -0,0 +1,36 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +/** +* 数据集成-数据库管理查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-09 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "数据集成-数据库管理查询") +public class DataDatabaseQuery extends Query { + @Schema(description = "名称") + private String name; + + @Schema(description = "数据库类型") + private Integer databaseType; + + @Schema(description = "库名") + private String databaseName; + + @Schema(description = "状态") + private Integer status; + + @Schema(description = "是否支持实时接入") + private Integer isRtApprove; + + @Schema(description = "所属项目") + private Long projectId; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataFileQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataFileQuery.java new file mode 100644 index 0000000..e14bad2 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataFileQuery.java @@ -0,0 +1,28 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +import java.util.Date; + +/** +* 文件表查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-16 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "文件表查询") +public class DataFileQuery extends Query { + @Schema(description = "名称") + private String name; + @Schema(description = "文件类型") + private String type; + @Schema(description = "文件分组id") + private Long fileCategoryId; + private Long resourceId; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataLayerQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataLayerQuery.java new file mode 100644 index 0000000..2c186fa --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataLayerQuery.java @@ -0,0 +1,25 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +import java.util.Date; + +/** +* 数仓分层查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "数仓分层查询") +public class DataLayerQuery extends Query { + + @Schema(description = "分层英文名称") + private String name; + @Schema(description = "分层中文名称") + private String cnName; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataOdsQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataOdsQuery.java new file mode 100644 index 0000000..443a855 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataOdsQuery.java @@ -0,0 +1,29 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +import java.util.Date; + +/** +* 数据集成-贴源数据查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-07 +*/ +@Data +@EqualsAndHashCode(callSuper = false) 
+@Schema(description = "数据集成-贴源数据查询") +public class DataOdsQuery extends Query { + @Schema(description = "表名") + private String tableName; + + @Schema(description = "注释") + private String remarks; + + @Schema(description = "项目id") + private Long projectId; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/query/DataProjectQuery.java b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataProjectQuery.java new file mode 100644 index 0000000..e5bc2e1 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/query/DataProjectQuery.java @@ -0,0 +1,32 @@ +package net.srt.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.EqualsAndHashCode; +import net.srt.framework.common.query.Query; + +import java.util.Date; + +/** +* 数据项目查询 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-09-27 +*/ +@Data +@EqualsAndHashCode(callSuper = false) +@Schema(description = "数据项目查询") +public class DataProjectQuery extends Query { + @Schema(description = "项目名称") + private String name; + + @Schema(description = "英文名称") + private String engName; + + @Schema(description = "状态") + private Integer status; + + @Schema(description = "负责人") + private String dutyPerson; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessService.java new file mode 100644 index 0000000..4bbd187 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessService.java @@ -0,0 +1,66 @@ +package net.srt.service; + +import net.srt.api.module.data.integrate.dto.PreviewNameMapperDto; +import net.srt.dto.DataAccessClientDto; +import net.srt.dto.PreviewMapDto; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.query.DataAccessTaskDetailQuery; +import net.srt.query.DataAccessTaskQuery; +import net.srt.vo.DataAccessTaskDetailVO; +import net.srt.vo.DataAccessTaskVO; +import net.srt.vo.DataAccessVO; +import net.srt.query.DataAccessQuery; +import net.srt.entity.DataAccessEntity; +import net.srt.vo.PreviewNameMapperVo; + +import java.util.Date; +import java.util.List; + +/** + * 数据集成-数据接入 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-24 + */ +public interface DataAccessService extends BaseService { + + PageResult page(DataAccessQuery query); + + DataAccessClientDto getById(Long id); + + void save(DataAccessClientDto dto); + + void update(DataAccessClientDto dto); + + void delete(List idList); + + DataAccessEntity loadById(Long id); + + void updateStartInfo(Long dataAccessId); + + void updateEndInfo(Long dataAccessId, Integer runStatus, Date nextRunTime); + + List getTableMap(Long id); + + List previewTableMap(PreviewMapDto previewMapDto); + + List getColumnMap(Long id, String tableName); + + List previewColumnMap(PreviewMapDto previewMapDto); + + void release(Long id); + + void cancle(Long id); + + void handRun(Long id); + + PageResult taskPage(DataAccessTaskQuery taskQuery); + + void deleteTask(List idList); + + PageResult taskDetailPage(DataAccessTaskDetailQuery detailQuery); + + DataAccessTaskVO getTaskById(Long id); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessTaskDetailService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessTaskDetailService.java new file mode 100644 index 0000000..1e76f0b --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessTaskDetailService.java 
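`DataAccessService` above pairs each paged listing with a dedicated query object. A hedged caller-side sketch of that convention, using only types defined in this commit; the status code comment relies on the run-status legend in `DataAccessTaskEntity` (1 waiting, 2 running, 3 normal end, 4 abnormal end):

```java
import net.srt.framework.common.page.PageResult;
import net.srt.query.DataAccessQuery;
import net.srt.service.DataAccessService;
import net.srt.vo.DataAccessVO;

// Hypothetical caller showing the paged-query convention: populate the
// @Data-generated setters, hand the query to the service, get a PageResult.
public class DataAccessQueryExample {

	public PageResult<DataAccessVO> listFailed(DataAccessService service) {
		DataAccessQuery query = new DataAccessQuery();
		query.setTaskName("ods_");  // LIKE-matched only when non-blank
		query.setRunStatus(4);      // 4 = abnormal end, per the run-status legend
		return service.page(query); // records plus total count
	}
}
```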
@@ -0,0 +1,31 @@ +package net.srt.service; + +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataAccessTaskDetailVO; +import net.srt.query.DataAccessTaskDetailQuery; +import net.srt.entity.DataAccessTaskDetailEntity; + +import java.util.List; + +/** + * 数据接入-同步记录详情 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-28 + */ +public interface DataAccessTaskDetailService extends BaseService { + + PageResult page(DataAccessTaskDetailQuery query); + + void save(DataAccessTaskDetailVO vo); + + void update(DataAccessTaskDetailVO vo); + + void delete(List idList); + + void deleteByTaskId(List idList); + + void deleteByAccessId(Long id); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessTaskService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessTaskService.java new file mode 100644 index 0000000..ffe5398 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataAccessTaskService.java @@ -0,0 +1,30 @@ +package net.srt.service; + +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataAccessTaskVO; +import net.srt.query.DataAccessTaskQuery; +import net.srt.entity.DataAccessTaskEntity; + +import java.util.List; + +/** + * 数据接入任务记录 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-24 + */ +public interface DataAccessTaskService extends BaseService { + + PageResult page(DataAccessTaskQuery query); + + void save(DataAccessTaskVO vo); + + void update(DataAccessTaskVO vo); + + void delete(List idList); + + void deleteByAccessId(Long id); + + void dealNotFinished(); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataDatabaseService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataDatabaseService.java new file mode 100644 index 0000000..9ad6767 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataDatabaseService.java @@ -0,0 +1,52 @@ +package net.srt.service; + +import net.srt.dto.SqlConsole; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.TreeNodeVo; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.ColumnDescriptionVo; +import net.srt.vo.DataDatabaseVO; +import net.srt.query.DataDatabaseQuery; +import net.srt.entity.DataDatabaseEntity; +import net.srt.vo.SchemaTableDataVo; +import net.srt.vo.SqlGenerationVo; +import net.srt.vo.TableVo; + +import java.util.List; + +/** + * 数据集成-数据库管理 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-09 + */ +public interface DataDatabaseService extends BaseService { + + PageResult page(DataDatabaseQuery query); + + void save(DataDatabaseVO vo); + + void update(DataDatabaseVO vo); + + void delete(List idList); + + void testOnline(DataDatabaseVO vo); + + List getTablesById(Long id); + + SchemaTableDataVo getTableDataBySql(Integer id, SqlConsole sqlConsole); + + List listAll(); + + List listTree(Long id); + + List getColumnInfo(Long id, String tableName); + + SqlGenerationVo getSqlGeneration(Long id, String tableName, String tableRemarks); + + List listMiddleDbTree(); + + List middleDbClumnInfo(String tableName); + + SqlGenerationVo getMiddleDbSqlGeneration(String tableName, String tableRemarks); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataFileCategoryService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataFileCategoryService.java new file 
mode 100644 index 0000000..5699dcf --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataFileCategoryService.java @@ -0,0 +1,25 @@ +package net.srt.service; + +import net.srt.entity.DataFileCategoryEntity; +import net.srt.framework.common.utils.TreeNodeVo; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataFileCategoryVO; + +import java.util.List; + +/** + * 文件分组表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-12 + */ +public interface DataFileCategoryService extends BaseService { + + void save(DataFileCategoryVO vo); + + void update(DataFileCategoryVO vo); + + void delete(Long id); + + List listTree(); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataFileService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataFileService.java new file mode 100644 index 0000000..06e1ff2 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataFileService.java @@ -0,0 +1,29 @@ +package net.srt.service; + +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataFileVO; +import net.srt.query.DataFileQuery; +import net.srt.entity.DataFileEntity; + +import java.util.List; + +/** + * 文件表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-16 + */ +public interface DataFileService extends BaseService { + + PageResult page(DataFileQuery query); + + PageResult pageResource(DataFileQuery query); + + void save(DataFileVO vo); + + void update(DataFileVO vo); + + void delete(List idList); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataLayerService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataLayerService.java new file mode 100644 index 0000000..0745490 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataLayerService.java @@ -0,0 +1,26 @@ +package net.srt.service; + +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataLayerVO; +import net.srt.query.DataLayerQuery; +import net.srt.entity.DataLayerEntity; + +import java.util.List; + +/** + * 数仓分层 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-08 + */ +public interface DataLayerService extends BaseService { + + PageResult page(DataLayerQuery query); + + void save(DataLayerVO vo); + + void update(DataLayerVO vo); + + void delete(List idList); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataOdsService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataOdsService.java new file mode 100644 index 0000000..4c7cbc2 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataOdsService.java @@ -0,0 +1,34 @@ +package net.srt.service; + +import net.srt.entity.DataOdsEntity; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.query.DataOdsQuery; +import net.srt.vo.ColumnDescriptionVo; +import net.srt.vo.DataOdsVO; +import net.srt.vo.SchemaTableDataVo; + +import java.util.List; + +/** + * 数据集成-贴源数据 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-07 + */ +public interface DataOdsService extends BaseService { + + PageResult page(DataOdsQuery query); + + void save(DataOdsVO vo); + + void update(DataOdsVO vo); + + void delete(List idList); + + DataOdsEntity getByTableName(Long projectId, String tableName); + + List getColumnInfo(Long id, String tableName); + + 
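Several services in this module (`DataFileCategoryService`, `DataDatabaseService`) expose a `listTree()` that returns `TreeNodeVo` hierarchies built from flat rows carrying a `parentId` (0 meaning root, per `DataFileCategoryEntity`). A hedged sketch of the usual single-pass assembly; `Node` is a stand-in, the real `TreeNodeVo` lives in the common framework module:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of flat-list-to-tree assembly behind a listTree()-style API.
public class TreeBuilder {

	public static class Node {
		public long id;
		public long parentId; // 0 = root, matching the parentId convention above
		public List<Node> children = new ArrayList<>();

		public Node(long id, long parentId) {
			this.id = id;
			this.parentId = parentId;
		}
	}

	public static List<Node> build(List<Node> flat) {
		Map<Long, Node> byId = new HashMap<>();
		flat.forEach(n -> byId.put(n.id, n));
		List<Node> roots = new ArrayList<>();
		for (Node n : flat) {
			Node parent = byId.get(n.parentId);
			if (parent == null) {
				roots.add(n); // parentId 0 (or dangling) becomes a root
			} else {
				parent.children.add(n);
			}
		}
		return roots;
	}
}
```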
SchemaTableDataVo getTableData(Long id, String tableName); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataProjectService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataProjectService.java new file mode 100644 index 0000000..e95b84f --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataProjectService.java @@ -0,0 +1,34 @@ +package net.srt.service; + +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataProjectVO; +import net.srt.query.DataProjectQuery; +import net.srt.entity.DataProjectEntity; + +import java.util.List; + +/** + * 数据项目 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-09-27 + */ +public interface DataProjectService extends BaseService { + + PageResult page(DataProjectQuery query); + + void save(DataProjectVO vo); + + void update(DataProjectVO vo); + + void initDb(DataProjectEntity entity); + + void delete(List idList); + + void addUser(Long projectId, List userIds); + + List listProjects(); + + void testOnline(DataProjectVO vo); +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/DataProjectUserRelService.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataProjectUserRelService.java new file mode 100644 index 0000000..75ff1d6 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/DataProjectUserRelService.java @@ -0,0 +1,25 @@ +package net.srt.service; + +import net.srt.entity.DataProjectUserRelEntity; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.vo.DataProjectUserRelVO; + +import java.util.List; + +/** + * 项目用户关联表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-08 + */ +public interface DataProjectUserRelService extends BaseService { + + void save(DataProjectUserRelVO vo); + + void update(DataProjectUserRelVO vo); + + void delete(List idList); + + DataProjectUserRelEntity getByProjectIdAndUserId(Long projectId, Long userId); + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessServiceImpl.java new file mode 100644 index 0000000..ef4a4a5 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessServiceImpl.java @@ -0,0 +1,447 @@ +package net.srt.service.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import com.baomidou.mybatisplus.core.toolkit.Wrappers; +import lombok.AllArgsConstructor; +import net.srt.api.module.data.integrate.constant.TaskType; +import net.srt.api.module.data.integrate.dto.PreviewNameMapperDto; +import net.srt.api.module.quartz.QuartzDataAccessApi; +import net.srt.constants.AccessMode; +import net.srt.constants.CommonRunStatus; +import net.srt.constants.DataHouseLayer; +import net.srt.constants.YesOrNo; +import net.srt.convert.DataAccessConvert; +import net.srt.convert.DataAccessTaskConvert; +import net.srt.dao.DataAccessDao; +import net.srt.dao.DataDatabaseDao; +import net.srt.dto.DataAccessClientDto; +import net.srt.dto.PreviewMapDto; +import net.srt.entity.DataAccessEntity; +import net.srt.entity.DataDatabaseEntity; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; +import net.srt.framework.common.config.Config; +import net.srt.framework.common.exception.ServerException; +import net.srt.framework.common.page.PageResult; +import 
net.srt.framework.common.utils.BeanUtil; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.framework.security.user.SecurityUser; +import net.srt.query.DataAccessQuery; +import net.srt.query.DataAccessTaskDetailQuery; +import net.srt.query.DataAccessTaskQuery; +import net.srt.service.DataAccessService; +import net.srt.service.DataAccessTaskDetailService; +import net.srt.service.DataAccessTaskService; +import net.srt.vo.DataAccessTaskDetailVO; +import net.srt.vo.DataAccessTaskVO; +import net.srt.vo.DataAccessVO; +import net.srt.vo.PreviewNameMapperVo; +import org.apache.commons.lang3.StringUtils; +import org.quartz.CronExpression; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import org.springframework.util.CollectionUtils; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.DbswitchStrUtils; +import srt.cloud.framework.dbswitch.common.util.PatterNameUtils; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByJdbcService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByJdbcServiceImpl; +import srt.cloud.framework.dbswitch.data.config.DbswichProperties; +import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties; +import srt.cloud.framework.dbswitch.data.entity.TargetDataSourceProperties; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Date; +import java.util.List; +import java.util.stream.Collectors; + +/** + * 数据集成-数据接入 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-24 + */ +@Service +@AllArgsConstructor +public class DataAccessServiceImpl extends BaseServiceImpl implements DataAccessService { + + private final DataDatabaseDao dataDatabaseDao; + private final QuartzDataAccessApi quartzDataAccessApi; + private final DataAccessTaskService dataAccessTaskService; + private final DataAccessTaskDetailService dataAccessTaskDetailService; + private final Config config; + + private final static String STRING_EMPTY = ""; + private final static String STRING_DELETE = ""; + + @Override + public PageResult page(DataAccessQuery query) { + IPage page = baseMapper.selectPage(getPage(query), getWrapper(query)); + + return new PageResult<>(DataAccessConvert.INSTANCE.convertList(page.getRecords()), page.getTotal()); + } + + @Override + public DataAccessClientDto getById(Long id) { + DataAccessEntity dataAccessEntity = baseMapper.selectById(id); + if (dataAccessEntity == null) { + return null; + } + DbswichProperties dataAccessJson = dataAccessEntity.getDataAccessJson(); + SourceDataSourceProperties source = dataAccessJson.getSource().get(0); + TargetDataSourceProperties target = dataAccessJson.getTarget(); + return DataAccessClientDto.builder().id(dataAccessEntity.getId()).accessMode(dataAccessEntity.getAccessMode()).taskName(dataAccessEntity.getTaskName()) + .cron(dataAccessEntity.getCron()).description(dataAccessEntity.getDescription()).projectId(dataAccessEntity.getProjectId()) + .sourceDatabaseId(dataAccessEntity.getSourceDatabaseId()).targetDatabaseId(dataAccessEntity.getTargetDatabaseId()) + .taskType(dataAccessEntity.getTaskType()).batchSize(source.getFetchSize()).tableNameMapper(source.getRegexTableMapper()) + 
.columnNameMapper(source.getRegexColumnMapper()).includeOrExclude(source.getIncludeOrExclude()) + .sourceSelectedTables(YesOrNo.YES.getValue().equals(source.getIncludeOrExclude()) ? DbswitchStrUtils.stringToList(source.getSourceIncludes()) : DbswitchStrUtils.stringToList(source.getSourceExcludes())) + .targetAutoIncrement(target.getCreateTableAutoIncrement()).targetDataSync(target.getChangeDataSync()).targetDropTable(target.getTargetDrop()) + .targetIndexCreate(target.getIndexCreate()).targetLowerCase(target.getLowercase()).targetOnlyCreate(target.getOnlyCreate()) + .targetSyncExit(target.getSyncExist()).targetUpperCase(target.getUppercase()).build(); + } + + private LambdaQueryWrapper getWrapper(DataAccessQuery query) { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.like(StringUtil.isNotBlank(query.getTaskName()), DataAccessEntity::getTaskName, query.getTaskName()); + wrapper.eq(query.getProjectId() != null, DataAccessEntity::getProjectId, query.getProjectId()); + wrapper.eq(query.getDataDatabaseId() != null, DataAccessEntity::getSourceDatabaseId, query.getDataDatabaseId()); + wrapper.eq(query.getStatus() != null, DataAccessEntity::getStatus, query.getStatus()); + wrapper.eq(query.getRunStatus() != null, DataAccessEntity::getRunStatus, query.getRunStatus()); + dataScopeWithoutOrgId(wrapper); + wrapper.orderByDesc(DataAccessEntity::getCreateTime); + wrapper.orderByDesc(DataAccessEntity::getId); + return wrapper; + } + + @Override + public void save(DataAccessClientDto dto) { + dto.setProjectId(getProjectId()); + DataAccessEntity dataAccessEntity = buildDataAccessEntity(dto); + dataAccessEntity.setProjectId(dto.getProjectId()); + baseMapper.insert(dataAccessEntity); + + } + + @Override + public void update(DataAccessClientDto dto) { + dto.setProjectId(getProjectId()); + DataAccessEntity entity = buildDataAccessEntity(dto); + entity.setProjectId(dto.getProjectId()); + updateById(entity); + } + + private DataAccessEntity buildDataAccessEntity(DataAccessClientDto dto) { + if (TaskType.ONE_TIME_FULL_PERIODIC_INCR_SYNC.getCode().equals(dto.getTaskType()) && !CronExpression.isValidExpression(dto.getCron())) { + throw new ServerException("cron表达式有误,请检查!"); + } + DbswichProperties dbswichProperties = new DbswichProperties(); + List source = new ArrayList<>(1); + SourceDataSourceProperties sourceDataSourceProperties = new SourceDataSourceProperties(); + DataDatabaseEntity sourceDatabase = dataDatabaseDao.selectById(dto.getSourceDatabaseId()); + //构建源端 + ProductTypeEnum sourceProductType = ProductTypeEnum.getByIndex(sourceDatabase.getDatabaseType()); + sourceDataSourceProperties.setSourceProductType(sourceProductType); + sourceDataSourceProperties.setUrl(StringUtil.isBlank(sourceDatabase.getJdbcUrl()) ? sourceProductType.getUrl() + .replace("{host}", sourceDatabase.getDatabaseIp()) + .replace("{port}", sourceDatabase.getDatabasePort()) + .replace("{database}", sourceDatabase.getDatabaseName()) : sourceDatabase.getJdbcUrl()); + sourceDataSourceProperties.setDriverClassName(sourceProductType.getDriveClassName()); + sourceDataSourceProperties.setUsername(sourceDatabase.getUserName()); + sourceDataSourceProperties.setPassword(sourceDatabase.getPassword()); + sourceDataSourceProperties.setFetchSize(dto.getBatchSize()); + sourceDataSourceProperties.setSourceSchema(ProductTypeEnum.ORACLE.getIndex().equals(sourceDatabase.getDatabaseType()) ? 
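`buildDataAccessEntity` rejects periodic tasks whose cron string is invalid using Quartz's `CronExpression.isValidExpression`. Worth noting for callers: Quartz cron has six or seven fields with seconds first, so Unix-style five-field expressions fail the check. A minimal runnable illustration:

```java
import org.quartz.CronExpression;

// Minimal check mirroring the guard in buildDataAccessEntity.
public class CronCheck {
	public static void main(String[] args) {
		System.out.println(CronExpression.isValidExpression("0 0/30 * * * ?")); // true: every 30 minutes
		System.out.println(CronExpression.isValidExpression("*/5 * * * *"));    // false: Unix-style, only 5 fields
	}
}
```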
sourceDatabase.getUserName() : sourceDatabase.getDatabaseName()); + Integer includeOrExclude = dto.getIncludeOrExclude(); + sourceDataSourceProperties.setIncludeOrExclude(includeOrExclude); + //如果是包含表 + if (YesOrNo.YES.getValue().equals(includeOrExclude)) { + sourceDataSourceProperties.setSourceIncludes(StringUtils.join(dto.getSourceSelectedTables(), ",")); + } else { + sourceDataSourceProperties.setSourceExcludes(StringUtils.join(dto.getSourceSelectedTables(), ",")); + } + sourceDataSourceProperties.setRegexTableMapper(dto.getTableNameMapper()); + sourceDataSourceProperties.setRegexColumnMapper(dto.getColumnNameMapper()); + source.add(sourceDataSourceProperties); + //构建目标端 + TargetDataSourceProperties target = new TargetDataSourceProperties(); + if (AccessMode.ODS.getValue().equals(dto.getAccessMode())) { + DataProjectCacheBean project = getProject(dto.getProjectId()); + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(project.getDbType()); + target.setTargetProductType(productTypeEnum); + target.setDriverClassName(productTypeEnum.getDriveClassName()); + target.setUrl(project.getDbUrl()); + target.setUsername(project.getDbUsername()); + target.setPassword(project.getDbPassword()); + target.setTargetSchema(project.getDbName()); + target.setTablePrefix(DataHouseLayer.ODS.getTablePrefix()); + } else { + DataDatabaseEntity targetDatabase = dataDatabaseDao.selectById(dto.getTargetDatabaseId()); + ProductTypeEnum targetProductType = ProductTypeEnum.getByIndex(targetDatabase.getDatabaseType()); + target.setTargetProductType(targetProductType); + target.setUrl(StringUtil.isBlank(targetDatabase.getJdbcUrl()) ? targetProductType.getUrl() + .replace("{host}", targetDatabase.getDatabaseIp()) + .replace("{port}", targetDatabase.getDatabasePort()) + .replace("{database}", targetDatabase.getDatabaseName()) : targetDatabase.getJdbcUrl()); + target.setDriverClassName(targetProductType.getDriveClassName()); + target.setUsername(targetDatabase.getUserName()); + target.setPassword(targetDatabase.getPassword()); + target.setTargetSchema(ProductTypeEnum.ORACLE.getIndex().equals(targetDatabase.getDatabaseType()) ? targetDatabase.getUserName() : targetDatabase.getDatabaseName()); + } + target.setTargetDrop(dto.isTargetDropTable()); + target.setSyncExist(dto.isTargetSyncExit()); + target.setOnlyCreate(dto.isTargetOnlyCreate()); + target.setIndexCreate(dto.isTargetIndexCreate()); + target.setLowercase(dto.isTargetLowerCase()); + target.setUppercase(dto.isTargetUpperCase()); + target.setCreateTableAutoIncrement(dto.isTargetAutoIncrement()); + target.setChangeDataSync(dto.isTargetDataSync()); + dbswichProperties.setSource(source); + dbswichProperties.setTarget(target); + + return DataAccessEntity.builder().id(dto.getId()).taskName(dto.getTaskName()).taskType(dto.getTaskType()).description(dto.getDescription()) + .accessMode(dto.getAccessMode()).cron(dto.getCron()).projectId(dto.getProjectId()).status(YesOrNo.NO.getValue()) + .targetDatabaseId(AccessMode.CUSTOM.getValue().equals(dto.getAccessMode()) ? 
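The source side stores the user's table picks either in `sourceIncludes` or `sourceExcludes`, switched by `includeOrExclude`: in include mode only the picked tables sync, in exclude mode everything except them does. A small self-contained sketch of that selection semantics (names hypothetical):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the include/exclude semantics stored on the source properties.
public class TableSelection {

	public static List<String> select(List<String> allTables,
									  Set<String> picked,
									  boolean includeMode) {
		// includeMode: keep exactly the picked tables; otherwise keep the rest
		return allTables.stream()
				.filter(t -> includeMode == picked.contains(t))
				.collect(Collectors.toList());
	}

	public static void main(String[] args) {
		List<String> all = Arrays.asList("user", "order", "audit_log");
		Set<String> picked = new HashSet<>(Arrays.asList("audit_log"));
		System.out.println(select(all, picked, true));  // [audit_log]
		System.out.println(select(all, picked, false)); // [user, order]
	}
}
```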
dto.getTargetDatabaseId() : null).sourceDatabaseId(dto.getSourceDatabaseId()).runStatus(CommonRunStatus.WAITING.getCode()) + .dataAccessJson(dbswichProperties).build(); + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + removeByIds(idList); + for (Long id : idList) { + quartzDataAccessApi.cancleAccess(id); + //删除记录 + dataAccessTaskService.deleteByAccessId(id); + dataAccessTaskDetailService.deleteByAccessId(id); + } + } + + @Override + public DataAccessEntity loadById(Long id) { + return baseMapper.selectById(id); + } + + @Override + public void updateStartInfo(Long dataAccessId) { + baseMapper.updateStartInfo(dataAccessId); + } + + @Override + public void updateEndInfo(Long dataAccessId, Integer runStatus, Date nextRunTime) { + baseMapper.updateEndInfo(dataAccessId, runStatus, nextRunTime); + } + + @Override + public List getTableMap(Long id) { + PreviewMapDto previewMapDto = getPreviewMapDto(id); + return BeanUtil.copyListProperties(previewTableMap(previewMapDto).stream().filter(item -> StringUtil.isNotBlank(item.getTargetName())).collect(Collectors.toList()), PreviewNameMapperDto::new); + } + + + @Override + public List getColumnMap(Long id, String tableName) { + PreviewMapDto previewMapDto = getPreviewMapDto(id); + previewMapDto.setPreiveTableName(tableName); + return BeanUtil.copyListProperties(previewColumnMap(previewMapDto).stream().filter(item -> !STRING_DELETE.equals(item.getTargetName())).collect(Collectors.toList()), PreviewNameMapperDto::new); + } + + private PreviewMapDto getPreviewMapDto(Long id) { + DataAccessEntity dataAccessEntity = baseMapper.selectById(id); + DbswichProperties dbswichProperties = dataAccessEntity.getDataAccessJson(); + PreviewMapDto previewMapDto = new PreviewMapDto(); + previewMapDto.setSourceDatabaseId(dataAccessEntity.getSourceDatabaseId()); + TargetDataSourceProperties targetDataSourceProperties = dbswichProperties.getTarget(); + SourceDataSourceProperties sourceDataSourceProperties = dbswichProperties.getSource().get(0); + previewMapDto.setIncludeOrExclude(sourceDataSourceProperties.getIncludeOrExclude()); + List sourceSelectedTables = new ArrayList<>(); + if (StringUtil.isNotBlank(sourceDataSourceProperties.getSourceIncludes())) { + sourceSelectedTables.addAll(Arrays.asList(sourceDataSourceProperties.getSourceIncludes().split(","))); + } + if (StringUtil.isNotBlank(sourceDataSourceProperties.getSourceExcludes())) { + sourceSelectedTables.addAll(Arrays.asList(sourceDataSourceProperties.getSourceExcludes().split(","))); + } + previewMapDto.setSourceSelectedTables(sourceSelectedTables); + previewMapDto.setTableNameMapper(sourceDataSourceProperties.getRegexTableMapper()); + previewMapDto.setColumnNameMapper(sourceDataSourceProperties.getRegexColumnMapper()); + previewMapDto.setTablePrefix(targetDataSourceProperties.getTablePrefix()); + previewMapDto.setTargetLowerCase(targetDataSourceProperties.getLowercase()); + previewMapDto.setTargetUpperCase(targetDataSourceProperties.getUppercase()); + return previewMapDto; + } + + @Override + public List previewTableMap(PreviewMapDto previewMapDto) { + boolean include = YesOrNo.YES.getValue().equals(previewMapDto.getIncludeOrExclude()); + List result = new ArrayList<>(10); + List allTableNames = getAllTableNames(previewMapDto); + //如果选择的表名为空,则预览全部 + if (CollectionUtils.isEmpty(previewMapDto.getSourceSelectedTables())) { + for (TableDescription td : allTableNames) { + String targetName = PatterNameUtils.getFinalName( + td.getTableName(), 
previewMapDto.getTableNameMapper()); + if (previewMapDto.isTargetLowerCase()) { + targetName = targetName.toLowerCase(); + } else if (previewMapDto.isTargetUpperCase()) { + targetName = targetName.toUpperCase(); + } + if (StringUtils.isNotBlank(previewMapDto.getTablePrefix()) && !targetName.startsWith(previewMapDto.getTablePrefix())) { + targetName = previewMapDto.getTablePrefix() + targetName; + } + result.add(PreviewNameMapperVo.builder() + .originalName(td.getTableName()) + .targetName(StringUtils.isNotBlank(targetName) ? targetName : STRING_EMPTY) + .remarks(StringUtil.isNotBlank(td.getRemarks()) ? td.getRemarks() : td.getTableName()) + .build()); + } + } else { + if (include) { + for (String name : previewMapDto.getSourceSelectedTables()) { + if (StringUtils.isNotBlank(name)) { + String targetName = PatterNameUtils.getFinalName( + name, previewMapDto.getTableNameMapper()); + if (previewMapDto.isTargetLowerCase()) { + targetName = targetName.toLowerCase(); + } else if (previewMapDto.isTargetUpperCase()) { + targetName = targetName.toUpperCase(); + } + if (StringUtils.isNotBlank(previewMapDto.getTablePrefix()) && !targetName.startsWith(previewMapDto.getTablePrefix())) { + targetName = previewMapDto.getTablePrefix() + targetName; + } + TableDescription tableDescription = allTableNames.stream().filter(item -> name.equals(item.getTableName())).findFirst().orElse(null); + result.add(PreviewNameMapperVo.builder() + .originalName(name) + .targetName(StringUtils.isNotBlank(targetName) ? targetName : STRING_EMPTY) + .remarks(tableDescription != null ? tableDescription.getRemarks() : null) + .build()); + } + } + } else { + for (TableDescription td : allTableNames) { + if (!previewMapDto.getSourceSelectedTables().contains(td.getTableName())) { + String targetName = PatterNameUtils.getFinalName(td.getTableName(), previewMapDto.getTableNameMapper()); + if (previewMapDto.isTargetLowerCase()) { + targetName = targetName.toLowerCase(); + } else if (previewMapDto.isTargetUpperCase()) { + targetName = targetName.toUpperCase(); + } + if (StringUtils.isNotBlank(previewMapDto.getTablePrefix()) && !targetName.startsWith(previewMapDto.getTablePrefix())) { + targetName = previewMapDto.getTablePrefix() + targetName; + } + result.add(PreviewNameMapperVo.builder() + .originalName(td.getTableName()) + .targetName(StringUtils.isNotBlank(targetName) ? targetName : STRING_EMPTY) + .remarks(td.getRemarks()) + .build()); + } + } + } + } + return result; + } + + + @Override + public List previewColumnMap(PreviewMapDto previewMapDto) { + if (previewMapDto.getSourceDatabaseId() == null || StringUtils.isBlank(previewMapDto.getPreiveTableName())) { + throw new ServerException("请选择源端数据库,数据表"); + } + List result = new ArrayList<>(10); + DataDatabaseEntity databaseEntity = dataDatabaseDao.selectById(previewMapDto.getSourceDatabaseId()); + if (databaseEntity == null) { + throw new ServerException("选择的源端数据库已不存在!"); + } + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(databaseEntity.getDatabaseType()); + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(productTypeEnum); + List columns = service.queryTableColumnMetaOnly(StringUtil.isBlank(databaseEntity.getJdbcUrl()) ? productTypeEnum.getUrl() + .replace("{host}", databaseEntity.getDatabaseIp()) + .replace("{port}", databaseEntity.getDatabasePort()) + .replace("{database}", databaseEntity.getDatabaseName()) : databaseEntity.getJdbcUrl(), databaseEntity.getUserName(), databaseEntity.getPassword(), ProductTypeEnum.ORACLE.equals(productTypeEnum) ? 
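`previewTableMap` derives each target table name by applying, in order, the regex name mapper, the lower/upper case fold, and finally the warehouse-layer prefix (e.g. `ods_`), skipping the prefix when it is already present. A pure-function reduction of that pipeline; `String.replaceAll` stands in here for the project's `PatterNameUtils`, which supports richer mapping rules:

```java
// Illustrative reduction of the target-name pipeline in previewTableMap:
// regex rename -> case folding -> layer prefix.
public class TargetNameMapper {

	public static String map(String source, String fromRegex, String toPattern,
							 boolean lower, boolean upper, String prefix) {
		String name = source.replaceAll(fromRegex, toPattern);
		if (lower) {
			name = name.toLowerCase();
		} else if (upper) {
			name = name.toUpperCase();
		}
		if (prefix != null && !prefix.isEmpty() && !name.startsWith(prefix)) {
			name = prefix + name; // prefix added once, never doubled
		}
		return name;
	}

	public static void main(String[] args) {
		// t_user -> ods_user: strip a legacy t_ prefix, then add the ODS layer prefix
		System.out.println(map("t_user", "^t_", "", true, false, "ods_"));
	}
}
```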
databaseEntity.getUserName() : databaseEntity.getDatabaseName(), + previewMapDto.getPreiveTableName()); + for (ColumnDescription cd : columns) { + String targetName = PatterNameUtils.getFinalName(cd.getFieldName(), previewMapDto.getColumnNameMapper()); + if (previewMapDto.isTargetLowerCase()) { + targetName = targetName.toLowerCase(); + } else if (previewMapDto.isTargetUpperCase()) { + targetName = targetName.toUpperCase(); + } + result.add(PreviewNameMapperVo.builder() + .originalName(cd.getFieldName()) + .targetName(StringUtils.isNotBlank(targetName) ? targetName : STRING_DELETE) + .remarks(StringUtil.isNotBlank(cd.getRemarks()) ? cd.getRemarks() : cd.getFieldName()) + .build()); + } + return result; + } + + @Override + public void release(Long id) { + DataAccessEntity dataAccessEntity = baseMapper.selectById(id); + if (TaskType.REAL_TIME_SYNC.getCode().equals(dataAccessEntity.getTaskType())) { + throw new ServerException("暂不支持实时同步!"); + } + quartzDataAccessApi.releaseAccess(id); + //更新状态,发布时间和发布人 + baseMapper.changeStatus(id, YesOrNo.YES.getValue(), new Date(), SecurityUser.getUserId()); + } + + @Override + public void cancle(Long id) { + quartzDataAccessApi.cancleAccess(id); + //更新状态 + baseMapper.changeStatus(id, YesOrNo.NO.getValue(), null, null); + } + + @Override + public void handRun(Long id) { + quartzDataAccessApi.handRun(id); + } + + @Override + public PageResult taskPage(DataAccessTaskQuery taskQuery) { + return dataAccessTaskService.page(taskQuery); + } + + @Override + public void deleteTask(List idList) { + dataAccessTaskService.delete(idList); + //删除对应的同步结果 + dataAccessTaskDetailService.deleteByTaskId(idList); + } + + @Override + public PageResult taskDetailPage(DataAccessTaskDetailQuery detailQuery) { + detailQuery.setProjectId(getProjectId()); + return dataAccessTaskDetailService.page(detailQuery); + } + + @Override + public DataAccessTaskVO getTaskById(Long id) { + return DataAccessTaskConvert.INSTANCE.convert(dataAccessTaskService.getById(id)); + } + + + private List getAllTableNames(PreviewMapDto previewMapDto) { + if (previewMapDto.getSourceDatabaseId() == null) { + throw new ServerException("请选择源端数据库"); + } + DataDatabaseEntity databaseEntity = dataDatabaseDao.selectById(previewMapDto.getSourceDatabaseId()); + + if (databaseEntity == null) { + throw new ServerException("选择的源端数据库已不存在!"); + } + + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(databaseEntity.getDatabaseType()); + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(productTypeEnum); + + return service.queryTableList(StringUtil.isBlank(databaseEntity.getJdbcUrl()) ? productTypeEnum.getUrl() + .replace("{host}", databaseEntity.getDatabaseIp()) + .replace("{port}", databaseEntity.getDatabasePort()) + .replace("{database}", databaseEntity.getDatabaseName()) : databaseEntity.getJdbcUrl(), databaseEntity.getUserName(), databaseEntity.getPassword(), + ProductTypeEnum.ORACLE.equals(productTypeEnum) ? 
databaseEntity.getUserName() : databaseEntity.getDatabaseName()).stream().filter(td -> !td.isViewTable()) + .collect(Collectors.toList()); + } +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessTaskDetailServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessTaskDetailServiceImpl.java new file mode 100644 index 0000000..0456c9a --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessTaskDetailServiceImpl.java @@ -0,0 +1,85 @@ +package net.srt.service.impl; + +import com.baomidou.mybatisplus.core.toolkit.Wrappers; +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import lombok.AllArgsConstructor; +import net.srt.convert.DataAccessTaskDetailConvert; +import net.srt.entity.DataAccessTaskDetailEntity; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.query.DataAccessTaskDetailQuery; +import net.srt.vo.DataAccessTaskDetailVO; +import net.srt.dao.DataAccessTaskDetailDao; +import net.srt.service.DataAccessTaskDetailService; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import srt.cloud.framework.dbswitch.common.util.StringUtil; + +import java.util.List; + +/** + * 数据接入-同步记录详情 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-28 + */ +@Service +@AllArgsConstructor +public class DataAccessTaskDetailServiceImpl extends BaseServiceImpl implements DataAccessTaskDetailService { + + @Override + public PageResult page(DataAccessTaskDetailQuery query) { + IPage page = baseMapper.selectPage(getPage(query), getWrapper(query)); + + return new PageResult<>(DataAccessTaskDetailConvert.INSTANCE.convertList(page.getRecords()), page.getTotal()); + } + + private LambdaQueryWrapper getWrapper(DataAccessTaskDetailQuery query) { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.eq(query.getIfSuccess() != null, DataAccessTaskDetailEntity::getIfSuccess, query.getIfSuccess()); + wrapper.eq(query.getProjectId() != null, DataAccessTaskDetailEntity::getProjectId, query.getProjectId()); + wrapper.eq(query.getTaskId() != null, DataAccessTaskDetailEntity::getTaskId, query.getTaskId()); + wrapper.eq(StringUtil.isNotBlank(query.getTableName()), DataAccessTaskDetailEntity::getTargetTableName, query.getTableName()); + wrapper.orderByDesc(DataAccessTaskDetailEntity::getCreateTime); + wrapper.orderByDesc(DataAccessTaskDetailEntity::getId); + return wrapper; + } + + @Override + public void save(DataAccessTaskDetailVO vo) { + DataAccessTaskDetailEntity entity = DataAccessTaskDetailConvert.INSTANCE.convert(vo); + + baseMapper.insert(entity); + } + + @Override + public void update(DataAccessTaskDetailVO vo) { + DataAccessTaskDetailEntity entity = DataAccessTaskDetailConvert.INSTANCE.convert(vo); + + updateById(entity); + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + removeByIds(idList); + } + + @Override + public void deleteByTaskId(List idList) { + idList.forEach(taskId -> { + LambdaQueryWrapper wrapper = new LambdaQueryWrapper<>(); + wrapper.eq(DataAccessTaskDetailEntity::getTaskId, taskId); + remove(wrapper); + }); + } + + @Override + public void deleteByAccessId(Long id) { + LambdaQueryWrapper wrapper = new LambdaQueryWrapper<>(); + wrapper.eq(DataAccessTaskDetailEntity::getDataAccessId, id); + remove(wrapper); + } + +} diff 
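Every `getWrapper` in these implementations builds its filter the same way: a `LambdaQueryWrapper` whose `eq`/`like` calls take a leading boolean guard, so null or blank filters simply drop out of the generated SQL. A minimal sketch of the idiom; `DemoEntity` is hypothetical:

```java
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import com.baomidou.mybatisplus.core.toolkit.Wrappers;

// Minimal sketch of the conditional-wrapper idiom used by the getWrapper
// methods above.
public class WrapperExample {

	public static class DemoEntity {
		private Long id;
		private String name;
		private Integer status;
		public Long getId() { return id; }
		public String getName() { return name; }
		public Integer getStatus() { return status; }
	}

	public static LambdaQueryWrapper<DemoEntity> build(String name, Integer status) {
		LambdaQueryWrapper<DemoEntity> wrapper = Wrappers.lambdaQuery();
		// Each condition is emitted only when its boolean guard is true.
		wrapper.like(name != null && !name.isEmpty(), DemoEntity::getName, name);
		wrapper.eq(status != null, DemoEntity::getStatus, status);
		wrapper.orderByDesc(DemoEntity::getId);
		return wrapper;
	}
}
```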
--git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessTaskServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessTaskServiceImpl.java new file mode 100644 index 0000000..052e788 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataAccessTaskServiceImpl.java @@ -0,0 +1,90 @@ +package net.srt.service.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import com.baomidou.mybatisplus.core.toolkit.Wrappers; +import lombok.AllArgsConstructor; +import net.srt.constants.CommonRunStatus; +import net.srt.convert.DataAccessTaskConvert; +import net.srt.dao.DataAccessTaskDao; +import net.srt.entity.DataAccessTaskEntity; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.DateUtils; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.query.DataAccessTaskQuery; +import net.srt.service.DataAccessTaskService; +import net.srt.vo.DataAccessTaskVO; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; + +import java.util.Date; +import java.util.List; + +/** + * 数据接入任务记录 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-24 + */ +@Service +@AllArgsConstructor +public class DataAccessTaskServiceImpl extends BaseServiceImpl implements DataAccessTaskService { + + @Override + public PageResult page(DataAccessTaskQuery query) { + IPage page = baseMapper.selectPage(getPage(query), getWrapper(query)); + + return new PageResult<>(DataAccessTaskConvert.INSTANCE.convertList(page.getRecords()), page.getTotal()); + } + + private LambdaQueryWrapper getWrapper(DataAccessTaskQuery query) { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.eq(DataAccessTaskEntity::getDataAccessId, query.getDataAccessId()); + wrapper.eq(query.getRunStatus() != null, DataAccessTaskEntity::getRunStatus, query.getRunStatus()); + wrapper.orderByDesc(DataAccessTaskEntity::getCreateTime); + wrapper.orderByDesc(DataAccessTaskEntity::getId); + return wrapper; + } + + @Override + public void save(DataAccessTaskVO vo) { + DataAccessTaskEntity entity = DataAccessTaskConvert.INSTANCE.convert(vo); + + baseMapper.insert(entity); + } + + @Override + public void update(DataAccessTaskVO vo) { + DataAccessTaskEntity entity = DataAccessTaskConvert.INSTANCE.convert(vo); + + updateById(entity); + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + removeByIds(idList); + } + + @Override + public void deleteByAccessId(Long id) { + LambdaQueryWrapper wrapper = new LambdaQueryWrapper<>(); + wrapper.eq(DataAccessTaskEntity::getDataAccessId, id); + remove(wrapper); + } + + @Override + public void dealNotFinished() { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.in(DataAccessTaskEntity::getRunStatus, CommonRunStatus.WAITING.getCode(), CommonRunStatus.RUNNING.getCode()); + List accessTaskEntities = baseMapper.selectList(wrapper); + for (DataAccessTaskEntity accessTaskEntity : accessTaskEntities) { + accessTaskEntity.setEndTime(new Date()); + accessTaskEntity.setRunStatus(CommonRunStatus.FAILED.getCode()); + String errorLog = DateUtils.formatDateTime(new Date()) + " The sync task has unexpected stop,you can try run again"; + accessTaskEntity.setErrorInfo(accessTaskEntity.getErrorInfo() == null ? 
errorLog : accessTaskEntity.getErrorInfo() + "\r\n" + errorLog); + baseMapper.updateById(accessTaskEntity); + } + } + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataDatabaseServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataDatabaseServiceImpl.java new file mode 100644 index 0000000..4d10450 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataDatabaseServiceImpl.java @@ -0,0 +1,351 @@ +package net.srt.service.impl; + +import cn.hutool.core.util.StrUtil; +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import com.baomidou.mybatisplus.core.toolkit.Wrappers; +import lombok.AllArgsConstructor; +import lombok.SneakyThrows; +import net.sf.jsqlparser.parser.CCJSqlParserUtil; +import net.sf.jsqlparser.statement.Statement; +import net.sf.jsqlparser.statement.select.Select; +import net.srt.constants.DataHouseLayer; +import net.srt.constants.MiddleTreeNodeType; +import net.srt.constants.YesOrNo; +import net.srt.convert.DataDatabaseConvert; +import net.srt.dao.DataAccessDao; +import net.srt.dao.DataDatabaseDao; +import net.srt.dto.SqlConsole; +import net.srt.entity.DataAccessEntity; +import net.srt.entity.DataDatabaseEntity; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; +import net.srt.framework.common.exception.ServerException; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.BeanUtil; +import net.srt.framework.common.utils.SqlUtils; +import net.srt.framework.common.utils.TreeNodeVo; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.query.DataDatabaseQuery; +import net.srt.service.DataAccessService; +import net.srt.service.DataDatabaseService; +import net.srt.vo.ColumnDescriptionVo; +import net.srt.vo.DataDatabaseVO; +import net.srt.vo.SchemaTableDataVo; +import net.srt.vo.SqlGenerationVo; +import net.srt.vo.TableVo; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByJdbcService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByJdbcServiceImpl; + +import java.util.ArrayList; +import java.util.List; +import java.util.stream.Collectors; + +/** + * 数据集成-数据库管理 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-10-09 + */ +@Service +@AllArgsConstructor +public class DataDatabaseServiceImpl extends BaseServiceImpl implements DataDatabaseService { + + private final DataAccessDao dataAccessDao; + private final DataAccessService dataAccessService; + + @Override + public PageResult page(DataDatabaseQuery query) { + IPage page = baseMapper.selectPage(getPage(query), getWrapper(query)); + + return new PageResult<>(DataDatabaseConvert.INSTANCE.convertList(page.getRecords()), page.getTotal()); + } + + private LambdaQueryWrapper getWrapper(DataDatabaseQuery query) { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.like(StrUtil.isNotBlank(query.getName()), DataDatabaseEntity::getName, query.getName()); + wrapper.like(StrUtil.isNotBlank(query.getDatabaseName()), 
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataDatabaseServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataDatabaseServiceImpl.java
new file mode 100644
index 0000000..4d10450
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataDatabaseServiceImpl.java
@@ -0,0 +1,351 @@
+package net.srt.service.impl;
+
+import cn.hutool.core.util.StrUtil;
+import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
+import com.baomidou.mybatisplus.core.metadata.IPage;
+import com.baomidou.mybatisplus.core.toolkit.Wrappers;
+import lombok.AllArgsConstructor;
+import lombok.SneakyThrows;
+import net.sf.jsqlparser.parser.CCJSqlParserUtil;
+import net.sf.jsqlparser.statement.Statement;
+import net.sf.jsqlparser.statement.select.Select;
+import net.srt.constants.DataHouseLayer;
+import net.srt.constants.MiddleTreeNodeType;
+import net.srt.constants.YesOrNo;
+import net.srt.convert.DataDatabaseConvert;
+import net.srt.dao.DataAccessDao;
+import net.srt.dao.DataDatabaseDao;
+import net.srt.dto.SqlConsole;
+import net.srt.entity.DataAccessEntity;
+import net.srt.entity.DataDatabaseEntity;
+import net.srt.framework.common.cache.bean.DataProjectCacheBean;
+import net.srt.framework.common.exception.ServerException;
+import net.srt.framework.common.page.PageResult;
+import net.srt.framework.common.utils.BeanUtil;
+import net.srt.framework.common.utils.SqlUtils;
+import net.srt.framework.common.utils.TreeNodeVo;
+import net.srt.framework.mybatis.service.impl.BaseServiceImpl;
+import net.srt.query.DataDatabaseQuery;
+import net.srt.service.DataAccessService;
+import net.srt.service.DataDatabaseService;
+import net.srt.vo.ColumnDescriptionVo;
+import net.srt.vo.DataDatabaseVO;
+import net.srt.vo.SchemaTableDataVo;
+import net.srt.vo.SqlGenerationVo;
+import net.srt.vo.TableVo;
+import org.springframework.stereotype.Service;
+import org.springframework.transaction.annotation.Transactional;
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+import srt.cloud.framework.dbswitch.common.util.StringUtil;
+import srt.cloud.framework.dbswitch.core.model.ColumnDescription;
+import srt.cloud.framework.dbswitch.core.model.SchemaTableData;
+import srt.cloud.framework.dbswitch.core.model.TableDescription;
+import srt.cloud.framework.dbswitch.core.service.IMetaDataByJdbcService;
+import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByJdbcServiceImpl;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Data integration - database management (数据集成-数据库管理)
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-10-09
+ */
+@Service
+@AllArgsConstructor
+public class DataDatabaseServiceImpl extends BaseServiceImpl<DataDatabaseDao, DataDatabaseEntity> implements DataDatabaseService {
+
+    private final DataAccessDao dataAccessDao;
+    private final DataAccessService dataAccessService;
+
+    @Override
+    public PageResult<DataDatabaseVO> page(DataDatabaseQuery query) {
+        IPage<DataDatabaseEntity> page = baseMapper.selectPage(getPage(query), getWrapper(query));
+        return new PageResult<>(DataDatabaseConvert.INSTANCE.convertList(page.getRecords()), page.getTotal());
+    }
+
+    private LambdaQueryWrapper<DataDatabaseEntity> getWrapper(DataDatabaseQuery query) {
+        LambdaQueryWrapper<DataDatabaseEntity> wrapper = Wrappers.lambdaQuery();
+        wrapper.like(StrUtil.isNotBlank(query.getName()), DataDatabaseEntity::getName, query.getName());
+        wrapper.like(StrUtil.isNotBlank(query.getDatabaseName()), DataDatabaseEntity::getDatabaseName, query.getDatabaseName());
+        wrapper.eq(query.getDatabaseType() != null, DataDatabaseEntity::getDatabaseType, query.getDatabaseType());
+        wrapper.eq(query.getStatus() != null, DataDatabaseEntity::getStatus, query.getStatus());
+        wrapper.eq(query.getIsRtApprove() != null, DataDatabaseEntity::getIsRtApprove, query.getIsRtApprove());
+        wrapper.eq(query.getProjectId() != null, DataDatabaseEntity::getProjectId, query.getProjectId());
+        dataScopeWithoutOrgId(wrapper);
+        return wrapper;
+    }
+
+    @Override
+    public void save(DataDatabaseVO vo) {
+        DataDatabaseEntity entity = DataDatabaseConvert.INSTANCE.convert(vo);
+        entity.setProjectId(getProjectId());
+        setJdbcUrlByEntity(entity);
+        baseMapper.insert(entity);
+        try {
+            // probe the new connection immediately so the status flag is up to date
+            testOnline(DataDatabaseConvert.INSTANCE.convert(entity));
+        } catch (Exception ignored) {
+        }
+    }
+
+    @Override
+    public void update(DataDatabaseVO vo) {
+        DataDatabaseEntity entity = DataDatabaseConvert.INSTANCE.convert(vo);
+        LambdaQueryWrapper<DataAccessEntity> dataAccessEntityWrapper = new LambdaQueryWrapper<>();
+        dataAccessEntityWrapper.eq(DataAccessEntity::getSourceDatabaseId, vo.getId()).or().eq(DataAccessEntity::getTargetDatabaseId, vo.getId());
+        setJdbcUrlByEntity(entity);
+        entity.setProjectId(getProjectId());
+        List<DataAccessEntity> dataAccessEntities = dataAccessDao.selectList(dataAccessEntityWrapper);
+        for (DataAccessEntity dataAccessEntity : dataAccessEntities) {
+            // when the database is modified, also refresh the data access tasks that reference it
+            dataAccessService.update(dataAccessService.getById(dataAccessEntity.getId()));
+        }
+        updateById(entity);
+        try {
+            testOnline(DataDatabaseConvert.INSTANCE.convert(entity));
+        } catch (Exception ignored) {
+        }
+    }
+
+    private void setJdbcUrlByEntity(DataDatabaseEntity entity) {
+        ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(entity.getDatabaseType());
+        entity.setJdbcUrl(StringUtil.isBlank(entity.getJdbcUrl()) ?
productTypeEnum.getUrl() + .replace("{host}", entity.getDatabaseIp()) + .replace("{port}", entity.getDatabasePort()) + .replace("{database}", entity.getDatabaseName()) : entity.getJdbcUrl()); + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + LambdaQueryWrapper dataAccessEntityWrapper = new LambdaQueryWrapper<>(); + dataAccessEntityWrapper.in(DataAccessEntity::getSourceDatabaseId, idList).or().in(DataAccessEntity::getTargetDatabaseId, idList).last(" limit 1"); + if (dataAccessDao.selectOne(dataAccessEntityWrapper) != null) { + throw new ServerException("要删除的数据库中有数据接入任务与之关联,不允许删除!"); + } + + removeByIds(idList); + } + + @Override + public void testOnline(DataDatabaseVO vo) { + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(vo.getDatabaseType()); + IMetaDataByJdbcService metaDataService = new MetaDataByJdbcServiceImpl(productTypeEnum); + if (StringUtil.isBlank(vo.getJdbcUrl())) { + vo.setJdbcUrl(productTypeEnum.getUrl() + .replace("{host}", vo.getDatabaseIp()) + .replace("{port}", vo.getDatabasePort()) + .replace("{database}", vo.getDatabaseName())); + } + metaDataService.testQuerySQL( + vo.getJdbcUrl(), + vo.getUserName(), + vo.getPassword(), + productTypeEnum.getTestSql() + ); + if (vo.getId() != null) { + //更新状态 + baseMapper.changeStatusById(vo.getId(), YesOrNo.YES.getValue()); + } + } + + @Override + public List getTablesById(Long id) { + DataDatabaseEntity dataDatabaseEntity = baseMapper.selectById(id); + return getTables(dataDatabaseEntity); + } + + private List getTables(DataDatabaseEntity dataDatabaseEntity) { + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(dataDatabaseEntity.getDatabaseType()); + IMetaDataByJdbcService metaDataService = new MetaDataByJdbcServiceImpl(productTypeEnum); + List tableDescriptions = metaDataService.queryTableList(StringUtil.isBlank(dataDatabaseEntity.getJdbcUrl()) ? productTypeEnum.getUrl() + .replace("{host}", dataDatabaseEntity.getDatabaseIp()) + .replace("{port}", dataDatabaseEntity.getDatabasePort()) + .replace("{database}", dataDatabaseEntity.getDatabaseName()) : dataDatabaseEntity.getJdbcUrl(), dataDatabaseEntity.getUserName(), dataDatabaseEntity.getPassword(), + ProductTypeEnum.ORACLE.equals(productTypeEnum) ? dataDatabaseEntity.getUserName() : dataDatabaseEntity.getDatabaseName()); + return BeanUtil.copyListProperties(tableDescriptions, TableVo::new); + } + + @SneakyThrows + @Override + public SchemaTableDataVo getTableDataBySql(Integer id, SqlConsole sqlConsole) { + Statement parse = CCJSqlParserUtil.parse(sqlConsole.getSql()); + if (!(parse instanceof Select)) { + throw new ServerException("只能执行select查询语句!"); + } + DataDatabaseEntity dataDatabaseEntity = baseMapper.selectById(id); + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(dataDatabaseEntity.getDatabaseType()); + IMetaDataByJdbcService metaDataService = new MetaDataByJdbcServiceImpl(productTypeEnum); + SchemaTableData schemaTableData = metaDataService.queryTableDataBySql(StringUtil.isBlank(dataDatabaseEntity.getJdbcUrl()) ? 
productTypeEnum.getUrl() + .replace("{host}", dataDatabaseEntity.getDatabaseIp()) + .replace("{port}", dataDatabaseEntity.getDatabasePort()) + .replace("{database}", dataDatabaseEntity.getDatabaseName()) : dataDatabaseEntity.getJdbcUrl(), dataDatabaseEntity.getUserName(), dataDatabaseEntity.getPassword(), sqlConsole.getSql(), 100); + return SchemaTableDataVo.builder().columns(SqlUtils.convertColumns(schemaTableData.getColumns())).rows(SqlUtils.convertRows(schemaTableData.getColumns(), schemaTableData.getRows())).build(); + } + + @Override + public List listAll() { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + dataScopeWithoutOrgId(wrapper); + return DataDatabaseConvert.INSTANCE.convertList(baseMapper.selectList(wrapper)); + } + + @Override + public List listTree(Long id) { + DataDatabaseEntity entity = baseMapper.selectById(id); + setJdbcUrlByEntity(entity); + List tables = getTables(entity); + List nodeList = new ArrayList<>(1); + TreeNodeVo dbNode = new TreeNodeVo(); + nodeList.add(dbNode); + dbNode.setName(entity.getName()); + dbNode.setDescription(entity.getName()); + dbNode.setLabel(entity.getDatabaseName()); + dbNode.setId(entity.getId()); + dbNode.setIfLeaf(YesOrNo.YES.getValue()); + dbNode.setAttributes(entity); + List tableNodes = new ArrayList<>(10); + dbNode.setChildren(tableNodes); + for (TableVo table : tables) { + TreeNodeVo tableNode = new TreeNodeVo(); + tableNode.setLabel(table.getTableName()); + tableNode.setName(table.getTableName()); + tableNode.setDescription(table.getRemarks()); + tableNode.setIfLeaf(YesOrNo.NO.getValue()); + tableNodes.add(tableNode); + } + return nodeList; + } + + @Override + public List getColumnInfo(Long id, String tableName) { + DataDatabaseEntity entity = baseMapper.selectById(id); + return getColumnDescriptionVos(tableName, entity); + } + + private List getColumnDescriptionVos(String tableName, DataDatabaseEntity entity) { + setJdbcUrlByEntity(entity); + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(entity.getDatabaseType()); + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(productTypeEnum); + List columnDescriptions = service.queryTableColumnMeta(entity.getJdbcUrl(), entity.getUserName(), entity.getPassword(), ProductTypeEnum.ORACLE.equals(productTypeEnum) ? entity.getUserName() : entity.getDatabaseName(), tableName); + List pks = service.queryTablePrimaryKeys(entity.getJdbcUrl(), entity.getUserName(), entity.getPassword(), ProductTypeEnum.ORACLE.equals(productTypeEnum) ? entity.getUserName() : entity.getDatabaseName(), tableName); + return BeanUtil.copyListProperties(columnDescriptions, ColumnDescriptionVo::new, (oldItem, newItem) -> { + newItem.setFieldName(StringUtil.isNotBlank(newItem.getFieldName()) ? 
newItem.getFieldName() : newItem.getLabelName()); + if (pks.contains(newItem.getFieldName())) { + newItem.setPk(true); + } + }); + } + + @Override + public SqlGenerationVo getSqlGeneration(Long id, String tableName, String tableRemarks) { + DataDatabaseEntity entity = baseMapper.selectById(id); + return getSqlGenerationVo(tableName, tableRemarks, entity); + } + + private SqlGenerationVo getSqlGenerationVo(String tableName, String tableRemarks, DataDatabaseEntity entity) { + setJdbcUrlByEntity(entity); + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(entity.getDatabaseType()); + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(productTypeEnum); + SqlGenerationVo sqlGenerationVo = new SqlGenerationVo(); + sqlGenerationVo.setSqlCreate(service.getTableDDL(entity.getJdbcUrl(), entity.getUserName(), entity.getPassword(), ProductTypeEnum.ORACLE.equals(productTypeEnum) ? entity.getUserName() : entity.getDatabaseName(), tableName)); + List columns = getColumnDescriptionVos(tableName, entity); + //TODO 后续做一个模块维护 + String flinkConfig = String.format(" 'connector' = 'jdbc',\n" + + " 'url' = '%s',\n" + + " 'table-name' = '%s',\n" + + " 'username' = '%s',\n" + + " 'password' = '%s'\n" + + "-- jdbc 模式 flink 目前只支持 MySQL,Oracle,PostgreSQL,Derby", entity.getJdbcUrl(), tableName, entity.getUserName(), entity.getPassword()); + List columnDescriptions = BeanUtil.copyListProperties(columns, ColumnDescription::new); + sqlGenerationVo.setFlinkSqlCreate(service.getFlinkTableSql(columnDescriptions, ProductTypeEnum.ORACLE.equals(productTypeEnum) ? entity.getUserName() : entity.getDatabaseName(), tableName, tableRemarks, flinkConfig)); + sqlGenerationVo.setSqlSelect(service.getSqlSelect(columnDescriptions, ProductTypeEnum.ORACLE.equals(productTypeEnum) ? 
entity.getUserName() : entity.getDatabaseName(), tableName, tableRemarks)); + return sqlGenerationVo; + } + + @Override + public List listMiddleDbTree() { + DataDatabaseEntity entity = new DataDatabaseEntity(); + DataProjectCacheBean project = getProject(); + entity.setDatabaseName(project.getDbName()); + entity.setJdbcUrl(project.getDbUrl()); + entity.setUserName(project.getDbUsername()); + entity.setPassword(project.getDbPassword()); + entity.setName(project.getName() + "<中台库>"); + List nodeList = new ArrayList<>(1); + TreeNodeVo dbNode = new TreeNodeVo(); + nodeList.add(dbNode); + dbNode.setIfLeaf(YesOrNo.YES.getValue()); + dbNode.setName(entity.getDatabaseName()); + dbNode.setLabel(entity.getDatabaseName()); + dbNode.setDescription(entity.getName()); + dbNode.setAttributes(entity); + dbNode.setType(MiddleTreeNodeType.DB.getValue()); + List layerList = new ArrayList<>(1); + dbNode.setChildren(layerList); + //获取该项目下的所有表 + IMetaDataByJdbcService metaDataService = new MetaDataByJdbcServiceImpl(ProductTypeEnum.getByIndex(project.getDbType())); + List tableList = metaDataService.queryTableList(entity.getJdbcUrl(), entity.getUserName(), entity.getPassword(), entity.getDatabaseName()); + //分层子菜单 + for (DataHouseLayer layer : DataHouseLayer.values()) { + TreeNodeVo layerNode = new TreeNodeVo(); + layerNode.setIfLeaf(YesOrNo.YES.getValue()); + layerNode.setName(layer.name()); + layerNode.setLabel(layer.name()); + layerNode.setDescription(layer.getName()); + layerNode.setType(MiddleTreeNodeType.LAYER.getValue()); + layerList.add(layerNode); + List tableNodeList = tableList.stream().filter( + table -> table.getTableName().startsWith(layer.getTablePrefix()) && !DataHouseLayer.OTHER.equals(layer) + || DataHouseLayer.OTHER.equals(layer) + && !table.getTableName().startsWith(DataHouseLayer.ODS.getTablePrefix()) + && !table.getTableName().startsWith(DataHouseLayer.DIM.getTablePrefix()) + && !table.getTableName().startsWith(DataHouseLayer.DWD.getTablePrefix()) + && !table.getTableName().startsWith(DataHouseLayer.DWS.getTablePrefix()) + && !table.getTableName().startsWith(DataHouseLayer.ADS.getTablePrefix())).map(table -> { + TreeNodeVo nodeVo = new TreeNodeVo(); + nodeVo.setIfLeaf(YesOrNo.NO.getValue()); + nodeVo.setName(table.getTableName()); + nodeVo.setLabel(table.getTableName()); + nodeVo.setDescription(table.getRemarks()); + nodeVo.setType(MiddleTreeNodeType.TABLE.getValue()); + return nodeVo; + }).collect(Collectors.toList()); + layerNode.setChildren(tableNodeList); + } + return nodeList; + } + + @Override + public List middleDbClumnInfo(String tableName) { + DataDatabaseEntity entity = buildMiddleEntity(); + return getColumnDescriptionVos(tableName, entity); + } + + @Override + public SqlGenerationVo getMiddleDbSqlGeneration(String tableName, String tableRemarks) { + DataDatabaseEntity entity = buildMiddleEntity(); + return getSqlGenerationVo(tableName, tableRemarks, entity); + } + + /** + * 构建中间库的entity + * + * @return + */ + private DataDatabaseEntity buildMiddleEntity() { + DataDatabaseEntity entity = new DataDatabaseEntity(); + DataProjectCacheBean project = getProject(); + entity.setDatabaseType(project.getDbType()); + entity.setDatabaseName(project.getDbName()); + entity.setJdbcUrl(project.getDbUrl()); + entity.setUserName(project.getDbUsername()); + entity.setPassword(project.getDbPassword()); + return entity; + } + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataFileCategoryServiceImpl.java 
b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataFileCategoryServiceImpl.java new file mode 100644 index 0000000..e9b4b0e --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataFileCategoryServiceImpl.java @@ -0,0 +1,100 @@ +package net.srt.service.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import lombok.AllArgsConstructor; +import net.srt.convert.DataFileCategoryConvert; +import net.srt.dao.DataFileCategoryDao; +import net.srt.entity.DataFileCategoryEntity; +import net.srt.entity.DataFileEntity; +import net.srt.framework.common.exception.ServerException; +import net.srt.framework.common.utils.BeanUtil; +import net.srt.framework.common.utils.BuildTreeUtils; +import net.srt.framework.common.utils.TreeNodeVo; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.service.DataFileCategoryService; +import net.srt.service.DataFileService; +import net.srt.vo.DataFileCategoryVO; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import srt.cloud.framework.dbswitch.common.util.StringUtil; + +import java.util.List; + +/** + * 文件分组表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-12 + */ +@Service +@AllArgsConstructor +public class DataFileCategoryServiceImpl extends BaseServiceImpl implements DataFileCategoryService { + + private final DataFileService dataFileService; + + @Override + public void save(DataFileCategoryVO vo) { + DataFileCategoryEntity entity = DataFileCategoryConvert.INSTANCE.convert(vo); + entity.setPath(recursionPath(entity, null)); + entity.setProjectId(getProjectId()); + baseMapper.insert(entity); + } + + @Override + public void update(DataFileCategoryVO vo) { + DataFileCategoryEntity entity = DataFileCategoryConvert.INSTANCE.convert(vo); + entity.setPath(recursionPath(entity, null)); + entity.setProjectId(getProjectId()); + updateById(entity); + } + + private String recursionPath(DataFileCategoryEntity categoryEntity, String path) { + if (StringUtil.isBlank(path)) { + path = categoryEntity.getName(); + } + if (categoryEntity.getParentId() != 0) { + DataFileCategoryEntity parent = getById(categoryEntity.getParentId()); + path = parent.getName() + "/" + path; + return recursionPath(parent, path); + } + return path; + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(Long id) { + //查询有没有子节点 + LambdaQueryWrapper wrapper = new LambdaQueryWrapper<>(); + wrapper.eq(DataFileCategoryEntity::getParentId, id).last(" limit 1"); + DataFileCategoryEntity one = baseMapper.selectOne(wrapper); + if (one != null) { + throw new ServerException("存在子节点,不允许删除!"); + } + //查询有没有文件与之关联 + LambdaQueryWrapper fileEntityLambdaQueryWrapper = new LambdaQueryWrapper<>(); + fileEntityLambdaQueryWrapper.eq(DataFileEntity::getFileCategoryId, id).last(" limit 1"); + DataFileEntity dataFileEntity = dataFileService.getOne(fileEntityLambdaQueryWrapper); + if (dataFileEntity != null) { + throw new ServerException("节点下有文件,不允许删除!"); + } + removeById(id); + } + + @Override + public List listTree() { + LambdaQueryWrapper wrapper = new LambdaQueryWrapper<>(); + dataScopeWithoutOrgId(wrapper); + wrapper.orderByAsc(DataFileCategoryEntity::getOrderNo); + List dataFileCategoryEntities = baseMapper.selectList(wrapper); + List treeNodeVos = BeanUtil.copyListProperties(dataFileCategoryEntities, TreeNodeVo::new, (oldItem, newItem) -> { + newItem.setLabel(oldItem.getName()); + newItem.setValue(oldItem.getId()); 
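+            // type 0 is a plain folder (see DataFileCategoryVO); disable it so only file directories (type 1) can be picked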
+            newItem.setDisabled(oldItem.getType() == 0);
+            if (newItem.getPath().contains("/")) {
+                newItem.setParentPath(newItem.getPath().substring(0, newItem.getPath().lastIndexOf("/")));
+            }
+        });
+        return BuildTreeUtils.buildTree(treeNodeVos);
+    }
+
+}
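`recursionPath` in the file above walks up the `parentId` chain and prepends each ancestor's name, so a category nested two levels deep ends up with a path like `docs/contracts/2022`. The standalone sketch below replays that walk over an in-memory map; the ids and names are made up for illustration and the `Category` holder stands in for `DataFileCategoryEntity`:

```java
import java.util.HashMap;
import java.util.Map;

public class CategoryPathDemo {

    // stand-in for DataFileCategoryEntity: parentId 0 means "top level"
    static class Category {
        final long parentId;
        final String name;
        Category(long parentId, String name) { this.parentId = parentId; this.name = name; }
    }

    // same shape as DataFileCategoryServiceImpl#recursionPath, with the DAO lookup replaced by a map
    static String recursionPath(Map<Long, Category> all, Category c, String path) {
        if (path == null || path.isEmpty()) {
            path = c.name;
        }
        if (c.parentId != 0) {
            Category parent = all.get(c.parentId);
            path = parent.name + "/" + path;
            return recursionPath(all, parent, path);
        }
        return path;
    }

    public static void main(String[] args) {
        Map<Long, Category> all = new HashMap<>();
        all.put(1L, new Category(0, "docs"));
        all.put(2L, new Category(1, "contracts"));
        all.put(3L, new Category(2, "2022"));
        // prints docs/contracts/2022
        System.out.println(recursionPath(all, all.get(3L), null));
    }
}
```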
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataFileServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataFileServiceImpl.java
new file mode 100644
index 0000000..538ea12
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataFileServiceImpl.java
@@ -0,0 +1,99 @@
+package net.srt.service.impl;
+
+import com.baomidou.mybatisplus.core.toolkit.Wrappers;
+import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
+import com.baomidou.mybatisplus.core.metadata.IPage;
+import lombok.AllArgsConstructor;
+import net.srt.convert.DataFileConvert;
+import net.srt.dao.DataFileCategoryDao;
+import net.srt.entity.DataFileCategoryEntity;
+import net.srt.entity.DataFileEntity;
+import net.srt.framework.common.constant.Constant;
+import net.srt.framework.common.page.PageResult;
+import net.srt.framework.mybatis.service.impl.BaseServiceImpl;
+import net.srt.query.DataFileQuery;
+import net.srt.vo.DataFileVO;
+import net.srt.dao.DataFileDao;
+import net.srt.service.DataFileService;
+import org.springframework.stereotype.Service;
+import org.springframework.transaction.annotation.Transactional;
+import srt.cloud.framework.dbswitch.common.util.StringUtil;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * File table (文件表)
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-11-16
+ */
+@Service
+@AllArgsConstructor
+public class DataFileServiceImpl extends BaseServiceImpl<DataFileDao, DataFileEntity> implements DataFileService {
+
+    private final DataFileCategoryDao dataFileCategoryDao;
+
+    @Override
+    public PageResult<DataFileVO> page(DataFileQuery query) {
+        IPage<DataFileEntity> page = baseMapper.selectPage(getPage(query), getWrapper(query));
+        return new PageResult<>(DataFileConvert.INSTANCE.convertList(page.getRecords()), page.getTotal());
+    }
+
+    @Override
+    public PageResult<DataFileVO> pageResource(DataFileQuery query) {
+        // query params
+        Map<String, Object> params = getParams(query);
+        IPage<DataFileEntity> page = getPage(query);
+        params.put(Constant.PAGE, page);
+        // data list
+        List<DataFileEntity> list = baseMapper.getResourceList(params);
+        List<DataFileVO> dataFileVOS = DataFileConvert.INSTANCE.convertList(list);
+        for (DataFileVO dataFileVO : dataFileVOS) {
+            DataFileCategoryEntity categoryEntity = dataFileCategoryDao.selectById(dataFileVO.getFileCategoryId());
+            dataFileVO.setGroup(categoryEntity != null ? categoryEntity.getPath() : null);
+        }
+        return new PageResult<>(dataFileVOS, page.getTotal());
+    }
+
+    private Map<String, Object> getParams(DataFileQuery query) {
+        Map<String, Object> params = new HashMap<>();
+        params.put("resourceId", query.getResourceId());
+        params.put("fileCategoryId", query.getFileCategoryId());
+        params.put("type", query.getType());
+        params.put("name", query.getName());
+        return params;
+    }
+
+    private LambdaQueryWrapper<DataFileEntity> getWrapper(DataFileQuery query) {
+        LambdaQueryWrapper<DataFileEntity> wrapper = Wrappers.lambdaQuery();
+        wrapper.like(StringUtil.isNotBlank(query.getName()), DataFileEntity::getName, query.getName());
+        wrapper.like(StringUtil.isNotBlank(query.getType()), DataFileEntity::getType, query.getType());
+        wrapper.eq(query.getFileCategoryId() != null, DataFileEntity::getFileCategoryId, query.getFileCategoryId());
+        return wrapper;
+    }
+
+    @Override
+    public void save(DataFileVO vo) {
+        DataFileEntity entity = DataFileConvert.INSTANCE.convert(vo);
+        entity.setProjectId(getProjectId());
+        baseMapper.insert(entity);
+    }
+
+    @Override
+    public void update(DataFileVO vo) {
+        DataFileEntity entity = DataFileConvert.INSTANCE.convert(vo);
+        entity.setProjectId(getProjectId());
+        updateById(entity);
+    }
+
+    @Override
+    @Transactional(rollbackFor = Exception.class)
+    public void delete(List<Long> idList) {
+        removeByIds(idList);
+    }
+
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataLayerServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataLayerServiceImpl.java
new file mode 100644
index 0000000..120dffd
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataLayerServiceImpl.java
@@ -0,0 +1,65 @@
+package net.srt.service.impl;
+
+import cn.hutool.core.util.StrUtil;
+import com.baomidou.mybatisplus.core.toolkit.Wrappers;
+import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
+import com.baomidou.mybatisplus.core.metadata.IPage;
+import lombok.AllArgsConstructor;
+import net.srt.convert.DataLayerConvert;
+import net.srt.entity.DataLayerEntity;
+import net.srt.framework.common.page.PageResult;
+import net.srt.framework.mybatis.service.impl.BaseServiceImpl;
+import net.srt.query.DataLayerQuery;
+import net.srt.vo.DataLayerVO;
+import net.srt.dao.DataLayerDao;
+import net.srt.service.DataLayerService;
+import org.springframework.stereotype.Service;
+import org.springframework.transaction.annotation.Transactional;
+
+import java.util.List;
+
+/**
+ * Warehouse layers (数仓分层)
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-10-08
+ */
+@Service
+@AllArgsConstructor
+public class DataLayerServiceImpl extends BaseServiceImpl<DataLayerDao, DataLayerEntity> implements DataLayerService {
+
+    @Override
+    public PageResult<DataLayerVO> page(DataLayerQuery query) {
+        IPage<DataLayerEntity> page = baseMapper.selectPage(getPage(query), getWrapper(query));
+        return new PageResult<>(DataLayerConvert.INSTANCE.convertList(page.getRecords()), page.getTotal());
+    }
+
+    private LambdaQueryWrapper<DataLayerEntity> getWrapper(DataLayerQuery query) {
+        LambdaQueryWrapper<DataLayerEntity> wrapper = Wrappers.lambdaQuery();
+        wrapper.like(StrUtil.isNotBlank(query.getCnName()), DataLayerEntity::getCnName, query.getCnName());
+        wrapper.like(StrUtil.isNotBlank(query.getName()), DataLayerEntity::getName, query.getName());
+        return wrapper;
+    }
+
+    @Override
+    public void save(DataLayerVO vo) {
+        DataLayerEntity entity = DataLayerConvert.INSTANCE.convert(vo);
+        baseMapper.insert(entity);
+    }
+
+    @Override
+    public void update(DataLayerVO vo) {
+        DataLayerEntity entity = DataLayerConvert.INSTANCE.convert(vo);
+        updateById(entity);
+    }
+
+    
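+    // batch delete; @Transactional(rollbackFor = Exception.class) rolls everything back if any removal fails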
@Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + removeByIds(idList); + } + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataOdsServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataOdsServiceImpl.java new file mode 100644 index 0000000..7ede684 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataOdsServiceImpl.java @@ -0,0 +1,104 @@ +package net.srt.service.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import com.baomidou.mybatisplus.core.toolkit.Wrappers; +import lombok.AllArgsConstructor; +import net.srt.convert.DataOdsConvert; +import net.srt.dao.DataOdsDao; +import net.srt.entity.DataOdsEntity; +import net.srt.framework.common.config.Config; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.BeanUtil; +import net.srt.framework.common.utils.SqlUtils; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.query.DataOdsQuery; +import net.srt.service.DataOdsService; +import net.srt.vo.ColumnDescriptionVo; +import net.srt.vo.DataOdsVO; +import net.srt.vo.SchemaTableDataVo; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByJdbcService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByJdbcServiceImpl; + +import java.util.List; + +/** + * 数据集成-贴源数据 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-07 + */ +@Service +@AllArgsConstructor +public class DataOdsServiceImpl extends BaseServiceImpl implements DataOdsService { + + private final Config config; + + @Override + public PageResult page(DataOdsQuery query) { + IPage page = baseMapper.selectPage(getPage(query), getWrapper(query)); + return new PageResult<>(DataOdsConvert.INSTANCE.convertList(page.getRecords()), page.getTotal()); + } + + private LambdaQueryWrapper getWrapper(DataOdsQuery query) { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.like(StringUtil.isNotBlank(query.getTableName()), DataOdsEntity::getTableName, query.getTableName()); + wrapper.like(StringUtil.isNotBlank(query.getRemarks()), DataOdsEntity::getRemarks, query.getRemarks()); + wrapper.eq(query.getProjectId() != null, DataOdsEntity::getProjectId, query.getProjectId()); + dataScopeWithoutOrgId(wrapper); + return wrapper; + } + + @Override + public void save(DataOdsVO vo) { + DataOdsEntity entity = DataOdsConvert.INSTANCE.convert(vo); + + baseMapper.insert(entity); + } + + @Override + public void update(DataOdsVO vo) { + DataOdsEntity entity = DataOdsConvert.INSTANCE.convert(vo); + + updateById(entity); + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + removeByIds(idList); + } + + @Override + public DataOdsEntity getByTableName(Long projectId, String tableName) { + LambdaQueryWrapper wrapper = new LambdaQueryWrapper<>(); + return baseMapper.selectOne(wrapper.eq(DataOdsEntity::getTableName, tableName).eq(DataOdsEntity::getProjectId, projectId)); + } + + @Override + public List getColumnInfo(Long id, String 
tableName) { + //DataOdsEntity dataOdsEntity = baseMapper.selectById(id); + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(ProductTypeEnum.getByIndex(getProject().getDbType())); + List columnDescriptions = service.queryTableColumnMeta(getProject().getDbUrl(), getProject().getDbUsername(), getProject().getDbPassword(), getProject().getDbName(), tableName); + List pks = service.queryTablePrimaryKeys(getProject().getDbUrl(), getProject().getDbUsername(), getProject().getDbPassword(), getProject().getDbName(), tableName); + return BeanUtil.copyListProperties(columnDescriptions, ColumnDescriptionVo::new, (oldItem, newItem) -> { + newItem.setFieldName(StringUtil.isNotBlank(newItem.getFieldName()) ? newItem.getFieldName() : newItem.getLabelName()); + if (pks.contains(newItem.getFieldName())) { + newItem.setPk(true); + } + }); + } + + @Override + public SchemaTableDataVo getTableData(Long id, String tableName) { + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(ProductTypeEnum.getByIndex(getProject().getDbType())); + SchemaTableData schemaTableData = service.queryTableDataBySql(getProject().getDbUrl(), getProject().getDbUsername(), getProject().getDbPassword(), String.format("SELECT * FROM %s LIMIT 50", tableName), 50); + return SchemaTableDataVo.builder().columns(SqlUtils.convertColumns(schemaTableData.getColumns())).rows(SqlUtils.convertRows(schemaTableData.getColumns(), schemaTableData.getRows())).build(); + } +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataProjectServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataProjectServiceImpl.java new file mode 100644 index 0000000..eb53e7e --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataProjectServiceImpl.java @@ -0,0 +1,186 @@ +package net.srt.service.impl; + +import cn.hutool.core.util.StrUtil; +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import com.baomidou.mybatisplus.core.toolkit.Wrappers; +import lombok.AllArgsConstructor; +import net.srt.constants.SuperAdminEnum; +import net.srt.constants.YesOrNo; +import net.srt.convert.DataProjectConvert; +import net.srt.dao.DataProjectDao; +import net.srt.entity.DataAccessEntity; +import net.srt.entity.DataDatabaseEntity; +import net.srt.entity.DataFileCategoryEntity; +import net.srt.entity.DataOdsEntity; +import net.srt.entity.DataProjectEntity; +import net.srt.entity.DataProjectUserRelEntity; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; +import net.srt.framework.common.config.Config; +import net.srt.framework.common.exception.ServerException; +import net.srt.framework.common.page.PageResult; +import net.srt.framework.common.utils.BeanUtil; +import net.srt.framework.mybatis.service.impl.BaseServiceImpl; +import net.srt.framework.security.cache.TokenStoreCache; +import net.srt.framework.security.user.SecurityUser; +import net.srt.framework.security.user.UserDetail; +import net.srt.query.DataProjectQuery; +import net.srt.service.DataAccessService; +import net.srt.service.DataDatabaseService; +import net.srt.service.DataFileCategoryService; +import net.srt.service.DataOdsService; +import net.srt.service.DataProjectService; +import net.srt.service.DataProjectUserRelService; +import net.srt.vo.DataProjectVO; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import 
srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByJdbcService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByJdbcServiceImpl; + +import java.util.List; + +/** + * 数据项目 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-09-27 + */ +@Service +@AllArgsConstructor +public class DataProjectServiceImpl extends BaseServiceImpl implements DataProjectService { + + private final DataProjectUserRelService dataProjectUserRelService; + private final TokenStoreCache tokenStoreCache; + private final Config config; + + @Override + public PageResult page(DataProjectQuery query) { + IPage page = baseMapper.selectPage(getPage(query), getWrapper(query)); + return new PageResult<>(DataProjectConvert.INSTANCE.convertList(page.getRecords()), page.getTotal()); + } + + private LambdaQueryWrapper getWrapper(DataProjectQuery query) { + LambdaQueryWrapper wrapper = Wrappers.lambdaQuery(); + wrapper.like(StrUtil.isNotBlank(query.getName()), DataProjectEntity::getName, query.getName()); + wrapper.like(StrUtil.isNotBlank(query.getEngName()), DataProjectEntity::getEngName, query.getEngName()); + wrapper.eq(query.getStatus() != null, DataProjectEntity::getStatus, query.getStatus()); + wrapper.like(StrUtil.isNotBlank(query.getDutyPerson()), DataProjectEntity::getDutyPerson, query.getDutyPerson()); + wrapper.apply(getDataScope(null, null, null, "id", false, false).getSqlFilter()); + wrapper.orderByDesc(DataProjectEntity::getCreateTime); + wrapper.orderByDesc(DataProjectEntity::getId); + return wrapper; + } + + @Override + @Transactional(rollbackFor = Exception.class) + public void save(DataProjectVO vo) { + DataProjectEntity entity = DataProjectConvert.INSTANCE.convert(vo); + baseMapper.insert(entity); + initDb(entity); + } + + @Override + public void update(DataProjectVO vo) { + passOperator(vo.getId()); + DataProjectEntity entity = DataProjectConvert.INSTANCE.convert(vo); + baseMapper.updateById(entity); + initDb(entity); + } + + @Override + public void initDb(DataProjectEntity entity) { + /*buildProjectDb(entity); + updateById(entity);*/ + //更新缓存 + tokenStoreCache.saveProject(entity.getId(), BeanUtil.copyProperties(entity, DataProjectCacheBean::new)); + } + + /*private void buildProjectDb(DataProjectEntity entity) { + //建库,建用户,授权 + String dbProjectName = config.getDbProjectNameById(entity.getId()); + String dbProjectUsername = config.getDbProjectUsernameById(entity.getId()); + //如果有密码,复用原来的密码 + String dbProjectPassword = StringUtil.isNotBlank(entity.getDbPassword()) ? 
entity.getDbPassword() : StringUtil.getRandom2(16); + IMetaDataByJdbcService service = new MetaDataByJdbcServiceImpl(ProductTypeEnum.MYSQL); + service.executeSql(config.getHouseUrl(), config.getHouseUsername(), config.getHousePassword(), + String.format("CREATE DATABASE IF NOT EXISTS %s DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_bin", dbProjectName)); + service.executeSql(config.getHouseUrl(), config.getHouseUsername(), config.getHousePassword(), + String.format("DROP USER IF EXISTS '%s'", dbProjectUsername)); + service.executeSql(config.getHouseUrl(), config.getHouseUsername(), config.getHousePassword(), + String.format("CREATE USER IF NOT EXISTS '%s'@'%%' IDENTIFIED BY '%s'", dbProjectUsername, dbProjectPassword)); + service.executeSql(config.getHouseUrl(), config.getHouseUsername(), config.getHousePassword(), + String.format("GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%%'", dbProjectName, dbProjectUsername)); + service.executeSql(config.getHouseUrl(), config.getHouseUsername(), config.getHousePassword(), + "FLUSH PRIVILEGES"); + entity.setDbName(dbProjectName); + entity.setDbUrl(config.getDbProjectUrlByName(dbProjectName)); + entity.setDbUsername(dbProjectUsername); + entity.setDbPassword(dbProjectPassword); + }*/ + + @Override + @Transactional(rollbackFor = Exception.class) + public void delete(List idList) { + //项目前端禁用了批量删除,所以只会有一个 + Long projectId = idList.get(0); + passOperator(projectId); + //判断是否有用户与之关联 + LambdaQueryWrapper dataProjectUserRelEntityLambdaQueryWrapper = new LambdaQueryWrapper<>(); + dataProjectUserRelEntityLambdaQueryWrapper.eq(DataProjectUserRelEntity::getDataProjectId, projectId).last(" limit 1"); + if (dataProjectUserRelService.getOne(dataProjectUserRelEntityLambdaQueryWrapper) != null) { + throw new ServerException("该项目下存在用户与之关联,不允许删除!"); + } + removeByIds(idList); + //同步删除 + tokenStoreCache.deleteProject(projectId); + } + + private void passOperator(Long id) { + DataProjectEntity projectEntity = baseMapper.selectById(id); + UserDetail userDetail = SecurityUser.getUser(); + if (!SuperAdminEnum.YES.getValue().equals(userDetail.getSuperAdmin()) && !userDetail.getId().equals(projectEntity.getCreator())) { + throw new ServerException("您无权修改或删除非自己创建的项目租户,请联系创建者或超管解决!"); + } + } + + @Override + public void addUser(Long projectId, List userIds) { + userIds.forEach(userId -> { + //判断是否已经添加 + if (dataProjectUserRelService.getByProjectIdAndUserId(projectId, userId) == null) { + DataProjectUserRelEntity dataProjectUserRelEntity = new DataProjectUserRelEntity(); + dataProjectUserRelEntity.setDataProjectId(projectId); + dataProjectUserRelEntity.setUserId(userId); + dataProjectUserRelService.save(dataProjectUserRelEntity); + } + }); + } + + @Override + public List listProjects() { + UserDetail user = SecurityUser.getUser(); + List dataProjectEntities; + if (user.getSuperAdmin().equals(SuperAdminEnum.YES.getValue())) { + LambdaQueryWrapper queryWrapper = new LambdaQueryWrapper<>(); + dataProjectEntities = baseMapper.selectList(queryWrapper.eq(DataProjectEntity::getStatus, 1)); + } else { + dataProjectEntities = baseMapper.listProjects(user.getId()); + } + return DataProjectConvert.INSTANCE.convertList(dataProjectEntities); + } + + @Override + public void testOnline(DataProjectVO vo) { + ProductTypeEnum productTypeEnum = ProductTypeEnum.getByIndex(vo.getDbType()); + IMetaDataByJdbcService metaDataService = new MetaDataByJdbcServiceImpl(productTypeEnum); + metaDataService.testQuerySQL( + vo.getDbUrl(), + vo.getDbUsername(), + vo.getDbPassword(), + productTypeEnum.getTestSql() + ); + } + 
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataProjectUserRelServiceImpl.java b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataProjectUserRelServiceImpl.java
new file mode 100644
index 0000000..0dca473
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/service/impl/DataProjectUserRelServiceImpl.java
@@ -0,0 +1,56 @@
+package net.srt.service.impl;
+
+import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
+import com.baomidou.mybatisplus.core.toolkit.Wrappers;
+import lombok.AllArgsConstructor;
+import net.srt.convert.DataProjectUserRelConvert;
+import net.srt.dao.DataProjectUserRelDao;
+import net.srt.entity.DataProjectUserRelEntity;
+import net.srt.framework.mybatis.service.impl.BaseServiceImpl;
+import net.srt.service.DataProjectUserRelService;
+import net.srt.vo.DataProjectUserRelVO;
+import org.springframework.stereotype.Service;
+import org.springframework.transaction.annotation.Transactional;
+
+import java.util.List;
+
+/**
+ * Project-user relation table (项目用户关联表)
+ *
+ * @author zrx 985134801@qq.com
+ * @since 1.0.0 2022-10-08
+ */
+@Service
+@AllArgsConstructor
+public class DataProjectUserRelServiceImpl extends BaseServiceImpl<DataProjectUserRelDao, DataProjectUserRelEntity> implements DataProjectUserRelService {
+
+    @Override
+    public void save(DataProjectUserRelVO vo) {
+        DataProjectUserRelEntity entity = DataProjectUserRelConvert.INSTANCE.convert(vo);
+        baseMapper.insert(entity);
+    }
+
+    @Override
+    public void update(DataProjectUserRelVO vo) {
+        DataProjectUserRelEntity entity = DataProjectUserRelConvert.INSTANCE.convert(vo);
+        updateById(entity);
+    }
+
+    @Override
+    @Transactional(rollbackFor = Exception.class)
+    public void delete(List<Long> idList) {
+        removeByIds(idList);
+    }
+
+    @Override
+    public DataProjectUserRelEntity getByProjectIdAndUserId(Long projectId, Long userId) {
+        return baseMapper.getByProjectIdAndUserId(projectId, userId);
+    }
+
+}
diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/ColumnDescriptionVo.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/ColumnDescriptionVo.java
new file mode 100644
index 0000000..18db65a
--- /dev/null
+++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/ColumnDescriptionVo.java
@@ -0,0 +1,53 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package net.srt.vo; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +/** + * 数据库列描述符信息定义(Column Description) + * + * @author jrl + */ +@Data +@AllArgsConstructor +@NoArgsConstructor +public class ColumnDescriptionVo { + + private String fieldName; + private String labelName; + private String fieldTypeName; + private String filedTypeClassName; + private int fieldType; + private int displaySize; + private int scaleSize; + private int precisionSize; + private boolean isAutoIncrement; + private boolean isNullable; + private String remarks; + private boolean signed = false; + private ProductTypeEnum dbtype; + //索引是否可以不唯一 + private boolean nonIndexUnique; + //索引类别 + private String indexQualifier; + //索引名称 + private String indexName; + private short indexType; + private String ascOrDesc; + //默认值 + private String defaultValue; + //是否是主键 + private boolean isPk; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessTaskDetailVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessTaskDetailVO.java new file mode 100644 index 0000000..4f4f02c --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessTaskDetailVO.java @@ -0,0 +1,66 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据接入-同步记录详情 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-28 +*/ +@Data +@Schema(description = "数据接入-同步记录详情") +public class DataAccessTaskDetailVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "数据接入id") + private Long dataAccessId; + + @Schema(description = "数据接入任务id") + private Long taskId; + + @Schema(description = "源端库名") + private String sourceSchemaName; + + @Schema(description = "源端表名") + private String sourceTableName; + + @Schema(description = "目的端库名") + private String targetSchemaName; + + @Schema(description = "目的端表名") + private String targetTableName; + + @Schema(description = "同步记录数") + private Long syncCount; + + @Schema(description = "同步数据量") + private String syncBytes; + + @Schema(description = "是否成功 0-否 1-是") + private Integer ifSuccess; + + @Schema(description = "失败信息") + private String errorMsg; + + @Schema(description = "成功信息") + private String successMsg; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessTaskVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessTaskVO.java new file mode 100644 index 0000000..9811174 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessTaskVO.java @@ -0,0 +1,79 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import 
java.io.Serializable; +import java.util.Date; + +/** +* 数据接入任务记录 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-26 +*/ +@Data +@Schema(description = "数据接入任务记录") +public class DataAccessTaskVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Integer id; + + @Schema(description = "数据接入任务id") + private Long dataAccessId; + + @Schema(description = "运行状态( 1-等待中 2-运行中 3-正常结束 4-异常结束)") + private Integer runStatus; + + @Schema(description = "开始时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date startTime; + + @Schema(description = "结束时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date endTime; + + private String realTimeLog; + @Schema(description = "错误信息") + private String errorInfo; + + @Schema(description = "更新数据量") + private Long dataCount; + + @Schema(description = "成功表数量") + private Long tableSuccessCount; + + @Schema(description = "失败表数量") + private Long tableFailCount; + + @Schema(description = "更新大小") + private String byteCount; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessVO.java new file mode 100644 index 0000000..49f3f30 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataAccessVO.java @@ -0,0 +1,103 @@ +package net.srt.vo; + + +import com.fasterxml.jackson.annotation.JsonFormat; +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; +import srt.cloud.framework.dbswitch.data.config.DbswichProperties; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据集成-数据接入 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-24 +*/ +@Data +@Schema(description = "数据集成-数据接入") +public class DataAccessVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Integer id; + + @Schema(description = "任务名称") + private String taskName; + + @Schema(description = "描述") + private String description; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "源端数据库id") + private Long sourceDatabaseId; + + @Schema(description = "目的端数据库id") + private Long targetDatabaseId; + + @Schema(description = "接入方式 1-ods接入 2-自定义接入") + private Integer accessMode; + + @Schema(description = "任务类型") + private Integer taskType; + + @Schema(description = "cron表达式") + private String cron; + + @Schema(description = "发布状态") + private Integer status; + + @Schema(description = "最新运行状态") + private Integer runStatus; + + @Schema(description = "数据接入基础配置json") + private DbswichProperties dataAccessJson; + + @Schema(description = "最近开始时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date startTime; + + @Schema(description = "最近结束时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date endTime; + + @Schema(description = "发布时间") + @JsonFormat(pattern = 
DateUtils.DATE_TIME_PATTERN) + private Date releaseTime; + + @Schema(description = "备注") + private String note; + + @Schema(description = "发布人id") + private Long releaseUserId; + + @Schema(description = "下次执行时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date nextRunTime; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataDatabaseVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataDatabaseVO.java new file mode 100644 index 0000000..414ea64 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataDatabaseVO.java @@ -0,0 +1,82 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据集成-数据库管理 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-09 +*/ +@Data +@Schema(description = "数据集成-数据库管理") +public class DataDatabaseVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "名称") + private String name; + + @Schema(description = "数据库类型") + private Integer databaseType; + + @Schema(description = "主机ip") + private String databaseIp; + + @Schema(description = "端口") + private String databasePort; + + @Schema(description = "库名(服务名)") + private String databaseName; + + @Schema(description = "状态") + private Integer status; + + @Schema(description = "用户名") + private String userName; + + @Schema(description = "密码") + private String password; + + @Schema(description = "是否支持实时接入") + private Integer isRtApprove; + + @Schema(description = "不支持实时接入原因") + private String noRtReason; + + @Schema(description = "jdbcUrl") + private String jdbcUrl; + + @Schema(description = "所属项目") + private Long projectId; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataFileCategoryVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataFileCategoryVO.java new file mode 100644 index 0000000..e371190 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataFileCategoryVO.java @@ -0,0 +1,65 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; +import java.util.List; + +/** + * 文件分组表 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-11-12 + */ +@Data 
+@Schema(description = "文件分组表") +public class DataFileCategoryVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "父级id(顶级为0)") + private Long parentId; + @Schema(description = "0-文件夹 1-文件目录") + private Integer type; + @Schema(description = "分组名称") + private String name; + + @Schema(description = "分组序号") + private Integer orderNo; + + @Schema(description = "描述") + private String description; + + @Schema(description = "分组路径") + private String path; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataFileVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataFileVO.java new file mode 100644 index 0000000..56f705b --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataFileVO.java @@ -0,0 +1,69 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 文件表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-16 +*/ +@Data +@Schema(description = "文件表") +public class DataFileVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "名称") + private String name; + + @Schema(description = "所属分组id") + private Integer fileCategoryId; + + @Schema(description = "文件类型") + private String type; + + @Schema(description = "文件url地址") + private String fileUrl; + + @Schema(description = "描述") + private String description; + + @Schema(description = "大小") + private Long size; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + private String group; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataLayerVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataLayerVO.java new file mode 100644 index 0000000..701de96 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataLayerVO.java @@ -0,0 +1,58 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数仓分层 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Data +@Schema(description = "数仓分层") +public class DataLayerVO implements 
Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "分层英文名称") + private String name; + + @Schema(description = "分层中文名称") + private String cnName; + + @Schema(description = "分层描述") + private String note; + + @Schema(description = "表名前缀") + private String tablePrefix; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataOdsVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataOdsVO.java new file mode 100644 index 0000000..fbfd53a --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataOdsVO.java @@ -0,0 +1,62 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据集成-贴源数据 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-11-07 +*/ +@Data +@Schema(description = "数据集成-贴源数据") +public class DataOdsVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "数据接入id") + private Long dataAccessId; + + @Schema(description = "表名") + private String tableName; + + @Schema(description = "注释") + private String remarks; + + @Schema(description = "项目id") + private Long projectId; + + @Schema(description = "最近同步时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date recentlySyncTime; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataProjectUserRelVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataProjectUserRelVO.java new file mode 100644 index 0000000..cbd7694 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataProjectUserRelVO.java @@ -0,0 +1,52 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 项目用户关联表 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-10-08 +*/ +@Data +@Schema(description = "项目用户关联表") +public class DataProjectUserRelVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "主键id") + private Long id; + + @Schema(description = "项目id") + private Long dataProjectId; + + @Schema(description = "用户id") + private Long userId; + + @Schema(description = "版本号") + private 
Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataProjectVO.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataProjectVO.java new file mode 100644 index 0000000..f428928 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/DataProjectVO.java @@ -0,0 +1,66 @@ +package net.srt.vo; + +import io.swagger.v3.oas.annotations.media.Schema; +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; +import net.srt.framework.common.utils.DateUtils; + +import java.io.Serializable; +import java.util.Date; + +/** +* 数据项目 +* +* @author zrx 985134801@qq.com +* @since 1.0.0 2022-09-27 +*/ +@Data +@Schema(description = "数据项目") +public class DataProjectVO implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "id") + private Long id; + + @Schema(description = "项目名称") + private String name; + + @Schema(description = "英文名称") + private String engName; + + @Schema(description = "描述") + private String description; + + @Schema(description = "状态") + private Integer status; + + @Schema(description = "负责人") + private String dutyPerson; + + @Schema(description = "版本号") + private Integer version; + + @Schema(description = "删除标识 0:正常 1:已删除") + private Integer deleted; + + @Schema(description = "创建者") + private Long creator; + + @Schema(description = "创建时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + + @Schema(description = "更新者") + private Long updater; + + @Schema(description = "更新时间") + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date updateTime; + private Integer dbType; + private String dbName; + private String dbUrl; + private String dbUsername; + private String dbPassword; + + +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/PreviewNameMapperVo.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/PreviewNameMapperVo.java new file mode 100644 index 0000000..706e1bb --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/PreviewNameMapperVo.java @@ -0,0 +1,17 @@ +package net.srt.vo; + +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.NoArgsConstructor; + +@Data +@AllArgsConstructor +@NoArgsConstructor +@Builder +public class PreviewNameMapperVo { + + private String originalName; + private String targetName; + private String remarks; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/SchemaTableDataVo.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/SchemaTableDataVo.java new file mode 100644 index 0000000..632c5fa --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/SchemaTableDataVo.java @@ -0,0 +1,16 @@ +package net.srt.vo; + + +import lombok.Builder; +import lombok.Data; + +import java.util.List; +import java.util.Map; + +@Data +@Builder +public class SchemaTableDataVo { + + private Map columns; + private List> rows; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/SqlGenerationVo.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/SqlGenerationVo.java new file mode 100644 index 
0000000..e4bebc0 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/SqlGenerationVo.java @@ -0,0 +1,19 @@ +package net.srt.vo; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + +/** + * SqlGeneration + * + * @author zrx + */ +@Data +@AllArgsConstructor +@NoArgsConstructor +public class SqlGenerationVo { + private String flinkSqlCreate; + private String sqlSelect; + private String sqlCreate; +} diff --git a/srt-cloud-data-integrate/src/main/java/net/srt/vo/TableVo.java b/srt-cloud-data-integrate/src/main/java/net/srt/vo/TableVo.java new file mode 100644 index 0000000..4854331 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/java/net/srt/vo/TableVo.java @@ -0,0 +1,16 @@ +package net.srt.vo; + +import lombok.Data; +import srt.cloud.framework.dbswitch.common.type.DBTableType; + +/** + * @ClassName TableVo + * @Author zrx + * @Date 2022/10/24 9:14 + */ +@Data +public class TableVo { + private String tableName; + private String remarks; + private DBTableType tableType; +} diff --git a/srt-cloud-data-integrate/src/main/resources/auth.yml b/srt-cloud-data-integrate/src/main/resources/auth.yml new file mode 100644 index 0000000..82ec0d6 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/auth.yml @@ -0,0 +1,4 @@ +auth: + ignore_urls: + - /test + - /api/** diff --git a/srt-cloud-data-integrate/src/main/resources/bootstrap.yml b/srt-cloud-data-integrate/src/main/resources/bootstrap.yml new file mode 100644 index 0000000..14d407f --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/bootstrap.yml @@ -0,0 +1,42 @@ +#数据集成 +server: + port: 8084 + +spring: + mvc: + servlet: + load-on-startup: 1 + application: + name: srt-cloud-data-integrate + profiles: + active: dev + cloud: + nacos: + discovery: + server-addr: 124.223.48.209:8848 + # 命名空间,默认:public + namespace: c370afdb-9c55-4068-a78b-3b35b1ac1420 + service: ${spring.application.name} + group: srt2.0 + config: + server-addr: ${spring.cloud.nacos.discovery.server-addr} + namespace: ${spring.cloud.nacos.discovery.namespace} + file-extension: yaml + # 指定配置 + extension-configs: + - data-id: datasource.yaml + refresh: true + servlet: + multipart: + max-request-size: 100MB + max-file-size: 1024MB +# feign 配置 +feign: + client: + config: + default: + connectTimeout: 60000 + readTimeout: 60000 + loggerLevel: basic + okhttp: + enabled: true diff --git a/srt-cloud-data-integrate/src/main/resources/log4j2.xml b/srt-cloud-data-integrate/src/main/resources/log4j2.xml new file mode 100644 index 0000000..ba24b09 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/log4j2.xml @@ -0,0 +1,48 @@ + + + + + ./logs/ + srt-cloud-data-integrate + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessDao.xml new file mode 100644 index 0000000..5086f02 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessDao.xml @@ -0,0 +1,41 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + UPDATE data_access SET run_status=2,start_time=now(),end_time=null WHERE id=#{dataAccessId} + + + UPDATE data_access SET run_status=#{runStatus},end_time=now(),next_run_time=#{nextRunTime} WHERE id=#{dataAccessId} + + + UPDATE data_access SET status=#{status},release_time=#{releaseTime},release_user_id=#{releaseUserId} WHERE id=#{id} + + + diff --git 
a/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessTaskDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessTaskDao.xml new file mode 100644 index 0000000..20e90a5 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessTaskDao.xml @@ -0,0 +1,23 @@ + + + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessTaskDetailDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessTaskDetailDao.xml new file mode 100644 index 0000000..c622dca --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataAccessTaskDetailDao.xml @@ -0,0 +1,22 @@ + + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataDatabaseDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataDatabaseDao.xml new file mode 100644 index 0000000..0a0ba10 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataDatabaseDao.xml @@ -0,0 +1,32 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + UPDATE data_database SET status=#{status} WHERE id=#{id} + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataFileCategoryDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataFileCategoryDao.xml new file mode 100644 index 0000000..b1ceaff --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataFileCategoryDao.xml @@ -0,0 +1,21 @@ + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataFileDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataFileDao.xml new file mode 100644 index 0000000..9a9db90 --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataFileDao.xml @@ -0,0 +1,42 @@ + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataLayerDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataLayerDao.xml new file mode 100644 index 0000000..76661dc --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataLayerDao.xml @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataOdsDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataOdsDao.xml new file mode 100644 index 0000000..0ce478a --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataOdsDao.xml @@ -0,0 +1,21 @@ + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataProjectDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataProjectDao.xml new file mode 100644 index 0000000..c6ef1ea --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataProjectDao.xml @@ -0,0 +1,30 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-data-integrate/src/main/resources/mapper/DataProjectUserRelDao.xml b/srt-cloud-data-integrate/src/main/resources/mapper/DataProjectUserRelDao.xml new file mode 100644 index 0000000..c1d99ac --- /dev/null +++ b/srt-cloud-data-integrate/src/main/resources/mapper/DataProjectUserRelDao.xml @@ -0,0 +1,22 @@ + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-framework/pom.xml b/srt-cloud-framework/pom.xml new file mode 100644 index 0000000..bb25dc8 --- /dev/null +++ b/srt-cloud-framework/pom.xml @@ -0,0 +1,21 @@ + + + net.srt + srt-cloud + 2.0.0 + + 4.0.0 + 
srt-cloud-framework + pom + + + srt-cloud-common + srt-cloud-security + srt-cloud-mybatis + srt-cloud-dbswitch + srt-cloud-flink + srt-cloud-data-lineage + + + + diff --git a/srt-cloud-framework/srt-cloud-common/pom.xml b/srt-cloud-framework/srt-cloud-common/pom.xml new file mode 100644 index 0000000..57c7688 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/pom.xml @@ -0,0 +1,45 @@ + + + net.srt + srt-cloud-framework + 2.0.0 + + 4.0.0 + srt-cloud-common + jar + + + + org.springframework.boot + spring-boot-starter-web + + + + org.springframework.boot + spring-boot-starter-logging + + + + + org.springframework.boot + spring-boot-starter-validation + + + org.springframework.boot + spring-boot-starter-data-redis + + + org.springframework.boot + spring-boot-starter-security + provided + + + org.springdoc + springdoc-openapi-ui + + + cn.hutool + hutool-all + + + diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/RedisCache.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/RedisCache.java new file mode 100644 index 0000000..2b387a8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/RedisCache.java @@ -0,0 +1,143 @@ +package net.srt.framework.common.cache; + +import org.springframework.data.redis.core.HashOperations; +import org.springframework.data.redis.core.RedisTemplate; +import org.springframework.stereotype.Component; + +import javax.annotation.Resource; +import java.util.Collection; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +/** + * Redis Cache + * + * @author 阿沐 babamu@126.com + */ +@Component +public class RedisCache { + @Resource + private RedisTemplate redisTemplate; + + /** + * 默认过期时长为24小时,单位:秒 + */ + public final static long DEFAULT_EXPIRE = 60 * 60 * 24L; + /** + * 过期时长为1小时,单位:秒 + */ + public final static long HOUR_ONE_EXPIRE = 60 * 60 * 1L; + /** + * 过期时长为6小时,单位:秒 + */ + public final static long HOUR_SIX_EXPIRE = 60 * 60 * 6L; + /** + * 不设置过期时长 + */ + public final static long NOT_EXPIRE = -1L; + + public void set(String key, Object value, long expire) { + redisTemplate.opsForValue().set(key, value); + if (expire != NOT_EXPIRE) { + expire(key, expire); + } + } + + public void set(String key, Object value) { + set(key, value, DEFAULT_EXPIRE); + } + + public Object get(String key, long expire) { + Object value = redisTemplate.opsForValue().get(key); + if (expire != NOT_EXPIRE) { + expire(key, expire); + } + return value; + } + + public Object get(String key) { + return get(key, NOT_EXPIRE); + } + + public Long increment(String key) { + return redisTemplate.opsForValue().increment(key); + } + + public Boolean hasKey(String key) { + return redisTemplate.hasKey(key); + } + + public void delete(String key) { + redisTemplate.delete(key); + } + + public void delete(Collection keys) { + redisTemplate.delete(keys); + } + + public Object hGet(String key, String field) { + return redisTemplate.opsForHash().get(key, field); + } + + public Map hGetAll(String key) { + HashOperations hashOperations = redisTemplate.opsForHash(); + return hashOperations.entries(key); + } + + public void hMSet(String key, Map map) { + hMSet(key, map, DEFAULT_EXPIRE); + } + + public void hMSet(String key, Map map, long expire) { + redisTemplate.opsForHash().putAll(key, map); + + if (expire != NOT_EXPIRE) { + expire(key, expire); + } + } + + public void hSet(String key, String field, Object value) { + hSet(key, field, value, DEFAULT_EXPIRE); + } + + public 
void hSet(String key, String field, Object value, long expire) { + redisTemplate.opsForHash().put(key, field, value); + + if (expire != NOT_EXPIRE) { + expire(key, expire); + } + } + + public void expire(String key, long expire) { + redisTemplate.expire(key, expire, TimeUnit.SECONDS); + } + + public void hDel(String key, Object... fields) { + redisTemplate.opsForHash().delete(key, fields); + } + + public void leftPush(String key, Object value) { + leftPush(key, value, DEFAULT_EXPIRE); + } + + public void leftPush(String key, Object value, long expire) { + redisTemplate.opsForList().leftPush(key, value); + + if (expire != NOT_EXPIRE) { + expire(key, expire); + } + } + + public Object rightPop(String key) { + return redisTemplate.opsForList().rightPop(key); + } + + /** + * 发布订阅 + * @param topic + * @param message + */ + public void convertAndSend(String topic, String message) { + redisTemplate.convertAndSend(topic, message); + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/RedisKeys.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/RedisKeys.java new file mode 100644 index 0000000..96eb03c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/RedisKeys.java @@ -0,0 +1,60 @@ +package net.srt.framework.common.cache; + +/** + * Redis Key管理 + * + * @author 阿沐 babamu@126.com + */ +public class RedisKeys { + + /** + * 验证码Key + */ + public static String getCaptchaKey(String key) { + return "sys:captcha:" + key; + } + + /** + * accessToken Key + */ + public static String getAccessTokenKey(String accessToken) { + return "sys:access:" + accessToken; + } + + /** + * accessToken Key + */ + public static String getProjectIdKey(String accessToken) { + return "sys:project:id:" + accessToken; + } + + /** + * projectId Key + */ + public static String getProjectKey(Long projectId) { + return "sys:project:" + projectId; + } + + /** + * appToken Key + */ + public static String getAppTokenKey(String appToken) { + return "app:token:" + appToken; + } + + /** + * getAppIdKey + */ + public static String getAppIdKey(Long appId) { + return "app:id:" + appId; + } + + + /** + * getNeo4jKey + */ + public static String getNeo4jKey(Long projectId) { + return "neo4j:info" + projectId; + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/bean/DataProjectCacheBean.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/bean/DataProjectCacheBean.java new file mode 100644 index 0000000..b3d3bdd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/bean/DataProjectCacheBean.java @@ -0,0 +1,46 @@ +package net.srt.framework.common.cache.bean; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + + +/** + * 数据项目 + * + * @author zrx 985134801@qq.com + * @since 1.0.0 2022-09-27 + */ +@Data +@AllArgsConstructor +@NoArgsConstructor +public class DataProjectCacheBean { + + private Long id; + /** + * 项目名称 + */ + private String name; + /** + * 数据库 + */ + private String dbName; + + /** + * 数据库url + */ + private String dbUrl; + + /** + * 数据库用户名 + */ + private String dbUsername; + + /** + * 数据库密码 + */ + private String dbPassword; + + private Integer dbType; + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/bean/Neo4jInfo.java 
b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/bean/Neo4jInfo.java new file mode 100644 index 0000000..0c20df0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/cache/bean/Neo4jInfo.java @@ -0,0 +1,13 @@ +package net.srt.framework.common.cache.bean; + +import lombok.Data; + +/** + * @ClassName Neo4jInfo + * @Author zrx + * @Date 2023/6/13 17:43 + */ +@Data +public class Neo4jInfo { + private String neo4jUrl; +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/Config.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/Config.java new file mode 100644 index 0000000..421f84e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/Config.java @@ -0,0 +1,25 @@ +package net.srt.framework.common.config; + +import lombok.Getter; +import org.springframework.beans.factory.annotation.Value; +import org.springframework.stereotype.Component; + +/** + * 项目配置信息的工具类 + * + * @author zrx + */ +@Getter +@Component("config") +public class Config { + + @Value("${spring.datasource.driver-class-name}") + private String driver; + @Value("${spring.datasource.url}") + private String url; + @Value("${spring.datasource.username}") + private String username; + @Value("${spring.datasource.password}") + private String password; + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/CorsConfig.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/CorsConfig.java new file mode 100644 index 0000000..a1c83f4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/CorsConfig.java @@ -0,0 +1,28 @@ +package net.srt.framework.common.config; + +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.web.cors.CorsConfiguration; +import org.springframework.web.cors.UrlBasedCorsConfigurationSource; +import org.springframework.web.filter.CorsFilter; + +/** + * 跨域配置 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class CorsConfig { + + @Bean + public CorsFilter corsFilter() { + final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); + final CorsConfiguration corsConfiguration = new CorsConfiguration(); + corsConfiguration.setAllowCredentials(true); + corsConfiguration.addAllowedHeader("*"); + corsConfiguration.addAllowedOriginPattern("*"); + corsConfiguration.addAllowedMethod("*"); + source.registerCorsConfiguration("/**", corsConfiguration); + return new CorsFilter(source); + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/RedisConfig.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/RedisConfig.java new file mode 100644 index 0000000..1c5bf11 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/RedisConfig.java @@ -0,0 +1,47 @@ +package net.srt.framework.common.config; + +import com.fasterxml.jackson.annotation.JsonAutoDetect; +import com.fasterxml.jackson.annotation.PropertyAccessor; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.jsontype.impl.LaissezFaireSubTypeValidator; +import org.springframework.context.annotation.Bean; +import 
org.springframework.context.annotation.Configuration; +import org.springframework.data.redis.connection.RedisConnectionFactory; +import org.springframework.data.redis.core.RedisTemplate; +import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer; +import org.springframework.data.redis.serializer.RedisSerializer; + +/** + * Redis配置 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class RedisConfig { + + @Bean + public Jackson2JsonRedisSerializer jackson2JsonRedisSerializer(){ + Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class); + ObjectMapper objectMapper = new ObjectMapper(); + objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY); + objectMapper.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL); + jackson2JsonRedisSerializer.setObjectMapper(objectMapper); + + return jackson2JsonRedisSerializer; + } + + @Bean + public RedisTemplate redisTemplate(RedisConnectionFactory factory) { + RedisTemplate template = new RedisTemplate<>(); + // Key HashKey使用String序列化 + template.setKeySerializer(RedisSerializer.string()); + template.setHashKeySerializer(RedisSerializer.string()); + + // Value HashValue使用Jackson2JsonRedisSerializer序列化 + template.setValueSerializer(jackson2JsonRedisSerializer()); + template.setHashValueSerializer(jackson2JsonRedisSerializer()); + + template.setConnectionFactory(factory); + return template; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/SwaggerConfig.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/SwaggerConfig.java new file mode 100644 index 0000000..4c81108 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/SwaggerConfig.java @@ -0,0 +1,53 @@ +package net.srt.framework.common.config; + +import io.swagger.v3.oas.models.Components; +import io.swagger.v3.oas.models.OpenAPI; +import io.swagger.v3.oas.models.info.Contact; +import io.swagger.v3.oas.models.info.Info; +import io.swagger.v3.oas.models.info.License; +import io.swagger.v3.oas.models.security.SecurityRequirement; +import io.swagger.v3.oas.models.security.SecurityScheme; +import org.springdoc.core.GroupedOpenApi; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; + +/** + * Swagger配置 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class SwaggerConfig{ + + @Bean + public GroupedOpenApi userApi(){ + String[] paths = { "/**" }; + String[] packagedToMatch = { "net.srt" }; + return GroupedOpenApi.builder().group("SrtCloud") + .pathsToMatch(paths) + .packagesToScan(packagedToMatch).build(); + } + + @Bean + public OpenAPI customOpenAPI() { + Contact contact= new Contact(); + + OpenAPI openapi = new OpenAPI().info(new Info() + .title("SrtCloud") + .description( "SrtCloud") + .contact(contact) + .version("1.0") + .license(new License().name("MIT") + .url("https://zrxlh.top"))); + + openapi.addSecurityItem(new SecurityRequirement().addList("api_key")) + .components(new Components().addSecuritySchemes("api_key", + new SecurityScheme() + .name("Authorization") + .type(SecurityScheme.Type.APIKEY) + .in(SecurityScheme.In.HEADER))); + + return openapi; + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/WebConfig.java 
b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/WebConfig.java new file mode 100644 index 0000000..883fd27 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/config/WebConfig.java @@ -0,0 +1,54 @@ +package net.srt.framework.common.config; + +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.http.converter.ByteArrayHttpMessageConverter; +import org.springframework.http.converter.HttpMessageConverter; +import org.springframework.http.converter.ResourceHttpMessageConverter; +import org.springframework.http.converter.StringHttpMessageConverter; +import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter; +import org.springframework.http.converter.support.AllEncompassingFormHttpMessageConverter; +import org.springframework.web.servlet.config.annotation.WebMvcConfigurer; + +import java.util.List; +import java.util.TimeZone; + +/** + * Web MVC配置 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class WebConfig implements WebMvcConfigurer { + + @Override + public void configureMessageConverters(List<HttpMessageConverter<?>> converters) { + converters.add(new ByteArrayHttpMessageConverter()); + converters.add(new StringHttpMessageConverter()); + converters.add(new ResourceHttpMessageConverter()); + converters.add(new AllEncompassingFormHttpMessageConverter()); + converters.add(new StringHttpMessageConverter()); + converters.add(jackson2HttpMessageConverter()); + } + + @Bean + public MappingJackson2HttpMessageConverter jackson2HttpMessageConverter() { + MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter(); + ObjectMapper mapper = new ObjectMapper(); + + mapper.registerModule(new JavaTimeModule()); + // 忽略未知属性 + mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); + + // 统一日期格式转换,不建议开启 + //mapper.setDateFormat(new SimpleDateFormat(DateUtils.DATE_TIME_PATTERN)); + mapper.setTimeZone(TimeZone.getTimeZone("Asia/Shanghai")); + + converter.setObjectMapper(mapper); + return converter; + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/constant/Constant.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/constant/Constant.java new file mode 100644 index 0000000..50fcc05 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/constant/Constant.java @@ -0,0 +1,45 @@ +package net.srt.framework.common.constant; + +/** + * 常量 + * + * @author 阿沐 babamu@126.com + */ +public interface Constant { + /** + * 根节点标识 + */ + Long ROOT = 0L; + /** + * 当前页码 + */ + String PAGE = "page"; + /** + * 数据权限 + */ + String DATA_SCOPE = "dataScope"; + /** + * 超级管理员 + */ + Integer SUPER_ADMIN = 1; + /** + * 禁用 + */ + Integer DISABLE = 0; + /** + * 启用 + */ + Integer ENABLE = 1; + /** + * 失败 + */ + Integer FAIL = 0; + /** + * 成功 + */ + Integer SUCCESS = 1; + /** + * OK + */ + String OK = "OK"; +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ErrorCode.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ErrorCode.java new file mode 100644 index 0000000..1c6ba22 --- /dev/null +++
b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ErrorCode.java @@ -0,0 +1,21 @@ +package net.srt.framework.common.exception; + +import lombok.AllArgsConstructor; +import lombok.Getter; + +/** + * 错误编码 + * + * @author 阿沐 babamu@126.com +*/ +@Getter +@AllArgsConstructor +public enum ErrorCode { + UNAUTHORIZED(401, "还未授权,不能访问"), + FORBIDDEN(403, "没有权限,禁止访问"), + INTERNAL_SERVER_ERROR(500, "服务器异常,请稍后再试"), + ACCOUNT_PASSWORD_ERROR(1001, "账号或密码错误"); + + private final int code; + private final String msg; +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ServerException.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ServerException.java new file mode 100644 index 0000000..48e99a7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ServerException.java @@ -0,0 +1,37 @@ +package net.srt.framework.common.exception; + +import lombok.Data; +import lombok.EqualsAndHashCode; + +/** + * 自定义异常 + * + * @author 阿沐 babamu@126.com + */ +@Data +@EqualsAndHashCode(callSuper = true) +public class ServerException extends RuntimeException { + private static final long serialVersionUID = 1L; + + private int code; + private String msg; + + public ServerException(String msg) { + super(msg); + this.code = ErrorCode.INTERNAL_SERVER_ERROR.getCode(); + this.msg = msg; + } + + public ServerException(ErrorCode errorCode) { + super(errorCode.getMsg()); + this.code = errorCode.getCode(); + this.msg = errorCode.getMsg(); + } + + public ServerException(String msg, Throwable e) { + super(msg, e); + this.code = ErrorCode.INTERNAL_SERVER_ERROR.getCode(); + this.msg = msg; + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ServerExceptionHandler.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ServerExceptionHandler.java new file mode 100644 index 0000000..108aae2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/exception/ServerExceptionHandler.java @@ -0,0 +1,51 @@ +package net.srt.framework.common.exception; + +import lombok.extern.slf4j.Slf4j; +import net.srt.framework.common.utils.Result; +import org.springframework.security.access.AccessDeniedException; +import org.springframework.validation.BindException; +import org.springframework.validation.FieldError; +import org.springframework.web.bind.annotation.ExceptionHandler; +import org.springframework.web.bind.annotation.RestControllerAdvice; + + +/** + * 异常处理器 + * + * @author 阿沐 babamu@126.com + */ +@Slf4j +@RestControllerAdvice +public class ServerExceptionHandler { + /** + * 处理自定义异常 + */ + @ExceptionHandler(ServerException.class) + public Result handleException(ServerException ex) { + + return Result.error(ex.getCode(), ex.getMsg()); + } + + /** + * SpringMVC参数绑定,Validator校验不正确 + */ + @ExceptionHandler(BindException.class) + public Result bindException(BindException ex) { + FieldError fieldError = ex.getFieldError(); + assert fieldError != null; + return Result.error(fieldError.getDefaultMessage()); + } + + @ExceptionHandler(AccessDeniedException.class) + public Result handleAccessDeniedException(Exception ex) { + + return Result.error(ErrorCode.FORBIDDEN); + } + + @ExceptionHandler(Exception.class) + public Result handleException(Exception ex) { + log.error(ex.getMessage(), ex); + return Result.error("出错了!异常信息:" + ex.getMessage()); + } + +} 
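Taken together, ErrorCode, ServerException, and ServerExceptionHandler define the error-handling convention the services rely on: business code throws ServerException (or lets Spring validation fail), and the @RestControllerAdvice above converts it into the uniform Result payload. A minimal usage sketch follows; the DemoController and its endpoint are hypothetical illustrations and not part of this commit — only Result, ErrorCode, and ServerException come from it:

```java
// Hypothetical controller, for illustration only — not part of this commit.
package net.srt.demo;

import net.srt.framework.common.exception.ServerException;
import net.srt.framework.common.utils.Result;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    @GetMapping("/demo/{id}")
    public Result<String> get(@PathVariable Long id) {
        if (id <= 0) {
            // Caught by ServerExceptionHandler#handleException(ServerException):
            // the client receives {"code":500,"msg":"id 不能小于 1","data":null}.
            throw new ServerException("id 不能小于 1");
        }
        // Wrapped as {"code":0,"msg":"success","data":"record-1"}.
        return Result.ok("record-" + id);
    }
}
```

Unhandled exceptions fall through to the generic Exception handler, which logs the stack trace and still returns a Result, so callers always receive the same envelope.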
diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/page/PageResult.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/page/PageResult.java new file mode 100644 index 0000000..cf484d5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/page/PageResult.java @@ -0,0 +1,34 @@ +package net.srt.framework.common.page; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; + +import java.io.Serializable; +import java.util.List; + +/** + * 分页工具类 + * + * @author 阿沐 babamu@126.com + */ +@Data +@Schema(description = "分页数据") +public class PageResult implements Serializable { + private static final long serialVersionUID = 1L; + + @Schema(description = "总记录数") + private int total; + + @Schema(description = "列表数据") + private List list; + + /** + * 分页 + * @param list 列表数据 + * @param total 总记录数 + */ + public PageResult(List list, long total) { + this.list = list; + this.total = (int)total; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/query/Query.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/query/Query.java new file mode 100644 index 0000000..d80557e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/query/Query.java @@ -0,0 +1,32 @@ +package net.srt.framework.common.query; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import org.hibernate.validator.constraints.Range; + +import javax.validation.constraints.Min; +import javax.validation.constraints.NotNull; + +/** + * 查询公共参数 + * + * @author 阿沐 babamu@126.com + */ +@Data +public class Query { + @NotNull(message = "页码不能为空") + @Min(value = 1, message = "页码最小值为 1") + @Schema(description = "当前页码", required = true) + Integer page; + + @NotNull(message = "每页条数不能为空") + @Range(min = 1, max = 1000, message = "每页条数,取值范围 1-1000") + @Schema(description = "每页条数", required = true) + Integer limit; + + @Schema(description = "排序字段") + String order; + + @Schema(description = "是否升序") + boolean asc; +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/AddressUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/AddressUtils.java new file mode 100644 index 0000000..e5eac3e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/AddressUtils.java @@ -0,0 +1,59 @@ +package net.srt.framework.common.utils; + +import cn.hutool.http.HttpUtil; +import cn.hutool.json.JSONUtil; +import lombok.Data; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; + +import java.util.HashMap; +import java.util.Map; + +/** + * 获取真实地址 + * + * @author 阿沐 babamu@126.com + */ +@Slf4j +public class AddressUtils { + // 实时查询 + public static final String ADDRESS_URL = "https://whois.pconline.com.cn/ipJson.jsp"; + public static final String UNKNOWN = "未知"; + + public static String getAddressByIP(String ip) { + // 内网 + if (IpUtils.internalIp(ip)) { + return "内网IP"; + } + + try { + Map paramMap = new HashMap<>(); + paramMap.put("ip", ip); + paramMap.put("json", true); + String response = HttpUtil.get(ADDRESS_URL, paramMap); + if (StringUtils.isBlank(response)) { + log.error("根据IP获取地址异常 {}", ip); + return UNKNOWN; + } + + Address address = JSONUtil.toBean(response, Address.class); + return String.format("%s %s", address.getPro(), address.getCity()); + } catch 
(Exception e) { + log.error("根据IP获取地址异常 {}", ip); + } + + return UNKNOWN; + } + + @Data + static class Address { + /** + * 省 + */ + private String pro; + /** + * 市 + */ + private String city; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/AssertUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/AssertUtils.java new file mode 100644 index 0000000..5e96a28 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/AssertUtils.java @@ -0,0 +1,32 @@ +package net.srt.framework.common.utils; + +import cn.hutool.core.util.ArrayUtil; +import cn.hutool.core.util.StrUtil; +import net.srt.framework.common.exception.ServerException; + +/** + * 校验工具类 + * + * @author 阿沐 babamu@126.com + */ +public class AssertUtils { + + public static void isBlank(String str, String variable) { + if (StrUtil.isBlank(str)) { + throw new ServerException(variable + "不能为空"); + } + } + + public static void isNull(Object object, String variable) { + if (object == null) { + throw new ServerException(variable + "不能为空"); + } + } + + public static void isArrayEmpty(Object[] array, String variable) { + if(ArrayUtil.isEmpty(array)){ + throw new ServerException(variable + "不能为空"); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BeanCopyUtilCallBack.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BeanCopyUtilCallBack.java new file mode 100644 index 0000000..a427e0e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BeanCopyUtilCallBack.java @@ -0,0 +1,18 @@ +package net.srt.framework.common.utils; + + +/*** + * @author jrl + * @param <S> + * @param <T> + */ +@FunctionalInterface +public interface BeanCopyUtilCallBack<S, T> { + + /** + * 定义默认回调方法 + * @param t + * @param s + */ + void callBack(S t, T s); +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BeanUtil.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BeanUtil.java new file mode 100644 index 0000000..cbf8b6d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BeanUtil.java @@ -0,0 +1,74 @@ +package net.srt.framework.common.utils; + +import org.springframework.beans.BeanUtils; + +import java.util.ArrayList; +import java.util.List; +import java.util.function.Supplier; + +/** + * @ClassName BeanUtil + * @author jrl + * @Date 2022/3/21 14:47 + */ +public class BeanUtil { + + /** + * 普通对象复制 + * + * @param source + * @param target + */ + public static void copyProperties(Object source, Object target) { + BeanUtils.copyProperties(source, target); + } + + /** + * 普通对象复制 + * + * @param source + * @param target + */ + public static <T> T copyProperties(Object source, Supplier<T> target) { + if (source == null) { + return null; + } + T t = target.get(); + BeanUtils.copyProperties(source, t); + return t; + } + + /** + * 集合数据的拷贝 + * + * @param sources: 数据源类 + * @param target: 目标类::new(eg: UserVO::new) + * @return + */ + public static <S, T> List<T> copyListProperties(List<S> sources, Supplier<T> target) { + return copyListProperties(sources, target, null); + } + + + /** + * 带回调函数的集合数据的拷贝(可自定义字段拷贝规则) + * + * @param sources: 数据源类 + * @param target: 目标类::new(eg: UserVO::new) + * @param callBack: 回调函数 + * @return + */ + public static <S, T> List<T> copyListProperties(List<S> sources, Supplier<T> target, BeanCopyUtilCallBack<S, T> callBack) { + List<T> list = new ArrayList<>(sources.size()); + for (S source : sources) { + T t = target.get(); + copyProperties(source, t); + list.add(t); + if (callBack != null) { + // 回调 + callBack.callBack(source, t); + } + } + return list; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BuildTreeUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BuildTreeUtils.java new file mode 100644 index 0000000..e92d733 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/BuildTreeUtils.java @@ -0,0 +1,54 @@ +package net.srt.framework.common.utils; + +import java.util.ArrayList; +import java.util.List; + +/** + * @ClassName BuildTreeUtils + * @Author zrx + * @Date 2022/11/14 14:26 + */ +public class BuildTreeUtils { + + + /** + * 构建结构树 + * + * @param nodeVos + * @return + */ + public static List<TreeNodeVo> buildTree(List<TreeNodeVo> nodeVos) { + List<TreeNodeVo> resultVos = new ArrayList<>(10); + for (TreeNodeVo node : nodeVos) { + // 一级菜单parentId为0 + if (node.getParentId() == 0) { + resultVos.add(node); + } + } + // 为一级菜单设置子菜单,getChild是递归调用的 + for (TreeNodeVo node : resultVos) { + node.setChildren(getChild(node.getId(), nodeVos)); + } + return resultVos; + } + + + private static List<TreeNodeVo> getChild(Long id, List<TreeNodeVo> nodeVos) { + // 子菜单 + List<TreeNodeVo> childList = new ArrayList<>(10); + for (TreeNodeVo node : nodeVos) { + // 遍历所有节点,将父菜单id与传过来的id比较 + if (node.getParentId() != 0) { + if (node.getParentId().equals(id)) { + childList.add(node); + } + } + } + // 把子菜单的子菜单再循环一遍 + for (TreeNodeVo node : childList) { + node.setChildren(getChild(node.getId(), nodeVos)); + } + return childList; + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/DateUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/DateUtils.java new file mode 100644 index 0000000..876adbe --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/DateUtils.java @@ -0,0 +1,98 @@ +package net.srt.framework.common.utils; + + +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.time.Instant; +import java.time.LocalDate; +import java.time.LocalDateTime; +import java.time.ZoneId; +import java.time.temporal.ChronoUnit; +import java.util.Date; + +/** + * 日期处理 + * + * @author 阿沐 babamu@126.com + */ +public class DateUtils { + /** 时间格式(yyyy-MM-dd) */ + public final static String DATE_PATTERN = "yyyy-MM-dd"; + /** 时间格式(yyyy-MM-dd HH:mm:ss) */ + public final static String DATE_TIME_PATTERN = "yyyy-MM-dd HH:mm:ss"; + + /** + * 日期格式化 日期格式为:yyyy-MM-dd + * @param date 日期 + * @return 返回yyyy-MM-dd格式日期 + */ + public static String format(Date date) { + return format(date, DATE_PATTERN); + } + + /** + * 日期格式化 日期格式为:yyyy-MM-dd HH:mm:ss + * @param date 日期 + * @return 返回yyyy-MM-dd HH:mm:ss格式日期 + */ + public static String formatDateTime(Date date) { + return format(date, DATE_TIME_PATTERN); + } + + /** + * 日期格式化 + * @param date 日期 + * @param pattern 格式,如:DateUtils.DATE_TIME_PATTERN + * @return 返回指定格式的日期 + */ + public static String format(Date date, String pattern) { + if(date != null){ + SimpleDateFormat df = new SimpleDateFormat(pattern); + return df.format(date); + } + return null; + } + + /** + * 日期解析 + * @param date 日期 + * @param pattern 格式,如:DateUtils.DATE_TIME_PATTERN + * @return 返回Date + */ + public static Date parse(String date, String pattern) { + try { + return new
SimpleDateFormat(pattern).parse(date); + } catch (ParseException e) { + e.printStackTrace(); + } + return null; + } + + public static Date asDate(LocalDate localDate) { + return Date.from(localDate.atStartOfDay().atZone(ZoneId.systemDefault()).toInstant()); + } + + public static Date asDate(LocalDateTime localDateTime) { + return Date.from(localDateTime.atZone(ZoneId.systemDefault()).toInstant()); + } + + public static LocalDate asLocalDate(Date date) { + return Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()).toLocalDate(); + } + + public static LocalDateTime asLocalDateTime(Date date) { + return Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()).toLocalDateTime(); + } + + public static String getDuration(long jobStartTimeMills, long jobEndTimeMills) { + Instant startTime = Instant.ofEpochMilli(jobStartTimeMills); + Instant endTime = Instant.ofEpochMilli(jobEndTimeMills); + + long days = ChronoUnit.DAYS.between(startTime, endTime); + long hours = ChronoUnit.HOURS.between(startTime, endTime); + long minutes = ChronoUnit.MINUTES.between(startTime, endTime); + long seconds = ChronoUnit.SECONDS.between(startTime, endTime); + return days + "天 " + (hours - (days * 24)) + "小时 " + (minutes - (hours * 60)) + "分 " + + (seconds - (minutes * 60)) + "秒"; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/ExceptionUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/ExceptionUtils.java new file mode 100644 index 0000000..f7c96c8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/ExceptionUtils.java @@ -0,0 +1,31 @@ +package net.srt.framework.common.utils; + +import cn.hutool.core.io.IoUtil; + +import java.io.PrintWriter; +import java.io.StringWriter; + +/** + * Exception工具类 + * + * @author 阿沐 babamu@126.com + */ +public class ExceptionUtils { + + /** + * 获取异常信息 + * @param e 异常 + * @return 返回异常信息 + */ + public static String getExceptionMessage(Exception e) { + StringWriter sw = new StringWriter(); + PrintWriter pw = new PrintWriter(sw, true); + e.printStackTrace(pw); + + // 关闭IO流 + IoUtil.close(pw); + IoUtil.close(sw); + + return sw.toString(); + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/HttpContextUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/HttpContextUtils.java new file mode 100644 index 0000000..1be421f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/HttpContextUtils.java @@ -0,0 +1,55 @@ +package net.srt.framework.common.utils; + +import cn.hutool.core.util.StrUtil; +import org.springframework.http.HttpHeaders; +import org.springframework.web.context.request.RequestAttributes; +import org.springframework.web.context.request.RequestContextHolder; +import org.springframework.web.context.request.ServletRequestAttributes; + +import javax.servlet.http.HttpServletRequest; +import java.util.Enumeration; +import java.util.HashMap; +import java.util.Map; + +/** + * Http + * + * @author 阿沐 babamu@126.com + */ +public class HttpContextUtils { + + public static HttpServletRequest getHttpServletRequest() { + RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes(); + if(requestAttributes == null){ + return null; + } + + return ((ServletRequestAttributes) requestAttributes).getRequest(); + } + + public static Map getParameterMap(HttpServletRequest 
request) { + Enumeration parameters = request.getParameterNames(); + + Map params = new HashMap<>(); + while (parameters.hasMoreElements()) { + String parameter = parameters.nextElement(); + String value = request.getParameter(parameter); + if (StrUtil.isNotBlank(value)) { + params.put(parameter, value); + } + } + + return params; + } + + public static String getDomain(){ + HttpServletRequest request = getHttpServletRequest(); + StringBuffer url = request.getRequestURL(); + return url.delete(url.length() - request.getRequestURI().length(), url.length()).toString(); + } + + public static String getOrigin(){ + HttpServletRequest request = getHttpServletRequest(); + return request.getHeader(HttpHeaders.ORIGIN); + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/IpUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/IpUtils.java new file mode 100644 index 0000000..9ebc173 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/IpUtils.java @@ -0,0 +1,205 @@ +package net.srt.framework.common.utils; + +import javax.servlet.http.HttpServletRequest; +import java.net.InetAddress; +import java.net.UnknownHostException; + +/** + * IP地址 工具类 + * + * @author 阿沐 babamu@126.com + */ +public class IpUtils { + + /** + * 获取客户端IP地址 + */ + public static String getIpAddr(HttpServletRequest request) { + if (request == null) { + return "unknown"; + } + String ip = request.getHeader("x-forwarded-for"); + if (ip == null || ip.length() == 0 || "unknown".equalsIgnoreCase(ip)) { + ip = request.getHeader("Proxy-Client-IP"); + } + if (ip == null || ip.length() == 0 || "unknown".equalsIgnoreCase(ip)) { + ip = request.getHeader("X-Forwarded-For"); + } + if (ip == null || ip.length() == 0 || "unknown".equalsIgnoreCase(ip)) { + ip = request.getHeader("WL-Proxy-Client-IP"); + } + if (ip == null || ip.length() == 0 || "unknown".equalsIgnoreCase(ip)) { + ip = request.getHeader("X-Real-IP"); + } + if (ip == null || ip.length() == 0 || "unknown".equalsIgnoreCase(ip)) { + ip = request.getRemoteAddr(); + } + + return "0:0:0:0:0:0:0:1".equals(ip) ? 
"124.223.48.209" : getMultistageReverseProxyIp(ip); + } + + /** + * 检查是否为内部IP地址 + * + * @param ip IP地址 + */ + public static boolean internalIp(String ip) { + byte[] addr = textToNumericFormatV4(ip); + return internalIp(addr) || "124.223.48.209".equals(ip); + } + + /** + * 检查是否为内部IP地址 + * + * @param addr byte地址 + */ + private static boolean internalIp(byte[] addr) { + if (addr == null || addr.length < 2) { + return true; + } + final byte b0 = addr[0]; + final byte b1 = addr[1]; + // 10.x.x.x/8 + final byte SECTION_1 = 0x0A; + // 172.16.x.x/12 + final byte SECTION_2 = (byte) 0xAC; + final byte SECTION_3 = (byte) 0x10; + final byte SECTION_4 = (byte) 0x1F; + // 192.168.x.x/16 + final byte SECTION_5 = (byte) 0xC0; + final byte SECTION_6 = (byte) 0xA8; + switch (b0) { + case SECTION_1: + return true; + case SECTION_2: + if (b1 >= SECTION_3 && b1 <= SECTION_4) { + return true; + } + case SECTION_5: + if (b1 == SECTION_6) { + return true; + } + default: + return false; + } + } + + /** + * 将IPv4地址转换成字节 + * + * @param text IPv4地址 + * @return byte 字节 + */ + public static byte[] textToNumericFormatV4(String text) { + if (text.length() == 0) { + return null; + } + + byte[] bytes = new byte[4]; + String[] elements = text.split("\\.", -1); + try { + long l; + int i; + switch (elements.length) { + case 1: + l = Long.parseLong(elements[0]); + if ((l < 0L) || (l > 4294967295L)) { + return null; + } + bytes[0] = (byte) (int) (l >> 24 & 0xFF); + bytes[1] = (byte) (int) ((l & 0xFFFFFF) >> 16 & 0xFF); + bytes[2] = (byte) (int) ((l & 0xFFFF) >> 8 & 0xFF); + bytes[3] = (byte) (int) (l & 0xFF); + break; + case 2: + l = Integer.parseInt(elements[0]); + if ((l < 0L) || (l > 255L)) { + return null; + } + bytes[0] = (byte) (int) (l & 0xFF); + l = Integer.parseInt(elements[1]); + if ((l < 0L) || (l > 16777215L)) { + return null; + } + bytes[1] = (byte) (int) (l >> 16 & 0xFF); + bytes[2] = (byte) (int) ((l & 0xFFFF) >> 8 & 0xFF); + bytes[3] = (byte) (int) (l & 0xFF); + break; + case 3: + for (i = 0; i < 2; ++i) { + l = Integer.parseInt(elements[i]); + if ((l < 0L) || (l > 255L)) { + return null; + } + bytes[i] = (byte) (int) (l & 0xFF); + } + l = Integer.parseInt(elements[2]); + if ((l < 0L) || (l > 65535L)) { + return null; + } + bytes[2] = (byte) (int) (l >> 8 & 0xFF); + bytes[3] = (byte) (int) (l & 0xFF); + break; + case 4: + for (i = 0; i < 4; ++i) { + l = Integer.parseInt(elements[i]); + if ((l < 0L) || (l > 255L)) { + return null; + } + bytes[i] = (byte) (int) (l & 0xFF); + } + break; + default: + return null; + } + } catch (NumberFormatException e) { + return null; + } + return bytes; + } + + /** + * 获取本地IP地址 + * + * @return 本地IP地址 + */ + public static String getHostIp() { + try { + return InetAddress.getLocalHost().getHostAddress(); + } catch (UnknownHostException ignored) { + + } + return "124.223.48.209"; + } + + /** + * 获取主机名 + * + * @return 本地主机名 + */ + public static String getHostName() { + try { + return InetAddress.getLocalHost().getHostName(); + } catch (UnknownHostException ignored) { + + } + return "未知"; + } + + /** + * 从反向代理中,获得第一个非 unknown IP地址 + */ + public static String getMultistageReverseProxyIp(String ip) { + // 反向代理检测 + if (ip.indexOf(",") > 0) { + final String[] ips = ip.trim().split(","); + for (String sub : ips) { + if (!"unknown".equalsIgnoreCase(sub)) { + ip = sub; + break; + } + } + } + return ip; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/JsonUtils.java 
b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/JsonUtils.java new file mode 100644 index 0000000..2446723 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/JsonUtils.java @@ -0,0 +1,68 @@ +package net.srt.framework.common.utils; + +import cn.hutool.core.util.ArrayUtil; +import cn.hutool.core.util.StrUtil; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.ObjectMapper; + +import java.util.ArrayList; +import java.util.List; + +/** + * JSON 工具类 + * + * @author 阿沐 babamu@126.com + */ +public class JsonUtils { + private static final ObjectMapper objectMapper = new ObjectMapper(); + + public static String toJsonString(Object object) { + try { + return objectMapper.writeValueAsString(object); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + public static <T> T parseObject(String text, Class<T> clazz) { + if (StrUtil.isEmpty(text)) { + return null; + } + try { + return objectMapper.readValue(text, clazz); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + public static <T> T parseObject(byte[] bytes, Class<T> clazz) { + if (ArrayUtil.isEmpty(bytes)) { + return null; + } + try { + return objectMapper.readValue(bytes, clazz); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + public static <T> T parseObject(String text, TypeReference<T> typeReference) { + try { + return objectMapper.readValue(text, typeReference); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + public static <T> List<T> parseArray(String text, Class<T> clazz) { + if (StrUtil.isEmpty(text)) { + return new ArrayList<>(); + } + try { + return objectMapper.readValue(text, objectMapper.getTypeFactory().constructCollectionType(List.class, clazz)); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/Result.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/Result.java new file mode 100644 index 0000000..ebdb9e2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/Result.java @@ -0,0 +1,52 @@ +package net.srt.framework.common.utils; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import net.srt.framework.common.exception.ErrorCode; + +/** + * 响应数据 + * + * @author 阿沐 babamu@126.com + */ +@Data +@Schema(description = "响应") +public class Result<T> { + @Schema(description = "编码 0表示成功,其他值表示失败") + private int code = 0; + + @Schema(description = "消息内容") + private String msg = "success"; + + @Schema(description = "响应数据") + private T data; + + public static <T> Result<T> ok() { + return ok(null); + } + + public static <T> Result<T> ok(T data) { + Result<T> result = new Result<>(); + result.setData(data); + return result; + } + + public static <T> Result<T> error() { + return error(ErrorCode.INTERNAL_SERVER_ERROR); + } + + public static <T> Result<T> error(String msg) { + return error(ErrorCode.INTERNAL_SERVER_ERROR.getCode(), msg); + } + + public static <T> Result<T> error(ErrorCode errorCode) { + return error(errorCode.getCode(), errorCode.getMsg()); + } + + public static <T> Result<T> error(int code, String msg) { + Result<T> result = new Result<>(); + result.setCode(code); + result.setMsg(msg); + return result; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/SqlUtils.java
b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/SqlUtils.java new file mode 100644 index 0000000..6a3987f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/SqlUtils.java @@ -0,0 +1,41 @@ +package net.srt.framework.common.utils; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +/** + * @ClassName SqlUtils + * @Author zrx + * @Date 2022/11/12 15:20 + */ +public class SqlUtils { + public static Map<String, String> convertColumns(List<String> columns) { + if (null == columns || columns.isEmpty()) { + return new HashMap<>(); + } + Map<String, String> result = new LinkedHashMap<>(6); + for (String column : columns) { + result.put(column, column); + } + return result; + } + + public static List<Map<String, Object>> convertRows(List<String> columns, List<List<Object>> rows) { + if (null == rows || rows.isEmpty()) { + return Collections.emptyList(); + } + List<Map<String, Object>> result = new ArrayList<>(rows.size()); + for (List<Object> row : rows) { + Map<String, Object> map = new HashMap<>(); + for (int i = 0; i < row.size(); ++i) { + map.put(columns.get(i), row.get(i)); + } + result.add(map); + } + return result; + } +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeNode.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeNode.java new file mode 100644 index 0000000..6e63ef4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeNode.java @@ -0,0 +1,34 @@ +package net.srt.framework.common.utils; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; + +import javax.validation.constraints.NotNull; +import java.io.Serializable; +import java.util.ArrayList; +import java.util.List; + +/** + * 树节点,所有需要实现树节点的,都需要继承该类 + * + * @author 阿沐 babamu@126.com + */ +@Data +public class TreeNode<T> implements Serializable { + private static final long serialVersionUID = 1L; + /** + * 主键 + */ + @Schema(description = "id") + private Long id; + /** + * 上级ID + */ + @Schema(description = "上级ID") + @NotNull(message = "上级ID不能为空") + private Long pid; + /** + * 子节点列表 + */ + private List<T> children = new ArrayList<>(); +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeNodeVo.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeNodeVo.java new file mode 100644 index 0000000..2b94761 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeNodeVo.java @@ -0,0 +1,48 @@ +package net.srt.framework.common.utils; + +import com.fasterxml.jackson.annotation.JsonFormat; +import lombok.Data; + +import java.util.Date; +import java.util.List; + +/** + * @ClassName TreeListVo + * @Author zrx + * @Date 2022/11/14 14:06 + */ +@Data +public class TreeNodeVo { + private Long id; + private Long parentId; + private Integer ifLeaf; + //作业类型 + private Long taskId; + private Integer taskType; + private String parentPath; + private String path; + private Integer orderNo; + private String label; + private Long metamodelId; + private String name; + private String icon; + private String code; + private Integer builtin; + private String description; + private Long projectId; + private Long creator; + @JsonFormat(pattern = DateUtils.DATE_TIME_PATTERN) + private Date createTime; + private List<TreeNodeVo> children; + private boolean disabled; + private Boolean leaf; + /** + * 自定义属性 +
*/ + private Object attributes; + /** + * 自定义类型 + */ + private Object type; + private Object value; +} diff --git a/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeUtils.java b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeUtils.java new file mode 100644 index 0000000..64525c2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-common/src/main/java/net/srt/framework/common/utils/TreeUtils.java @@ -0,0 +1,70 @@ +package net.srt.framework.common.utils; + + +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +/** + * 树形结构工具类,如:菜单、机构等 + * + * @author 阿沐 babamu@126.com + */ +public class TreeUtils { + + /** + * 根据pid,构建树节点 + */ + public static <T extends TreeNode> List<T> build(List<T> treeNodes, Long pid) { + // pid不能为空 + AssertUtils.isNull(pid, "pid"); + + List<T> treeList = new ArrayList<>(); + for(T treeNode : treeNodes) { + if (pid.equals(treeNode.getPid())) { + treeList.add(findChildren(treeNodes, treeNode)); + } + } + + return treeList; + } + + /** + * 查找子节点 + */ + private static <T extends TreeNode> T findChildren(List<T> treeNodes, T rootNode) { + for(T treeNode : treeNodes) { + if(rootNode.getId().equals(treeNode.getPid())) { + rootNode.getChildren().add(findChildren(treeNodes, treeNode)); + } + } + return rootNode; + } + + /** + * 构建树节点 + */ + public static <T extends TreeNode> List<T> build(List<T> treeNodes) { + List<T> result = new ArrayList<>(); + + // list转map + Map<Long, T> nodeMap = new LinkedHashMap<>(treeNodes.size()); + for(T treeNode : treeNodes){ + nodeMap.put(treeNode.getId(), treeNode); + } + + for(T node : nodeMap.values()) { + T parent = nodeMap.get(node.getPid()); + if(parent != null && !(node.getId().equals(parent.getId()))){ + parent.getChildren().add(node); + continue; + } + + result.add(node); + } + + return result; + } + +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/pom.xml b/srt-cloud-framework/srt-cloud-data-lineage/pom.xml new file mode 100644 index 0000000..1640fa2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/pom.xml @@ -0,0 +1,28 @@ + + + + srt-cloud-framework + net.srt + 2.0.0 + + 4.0.0 + + srt-cloud-data-lineage + + + + + org.springframework.boot + spring-boot-starter-data-neo4j + + + spring-boot-starter-logging + org.springframework.boot + + + + + + diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/constant/NodeType.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/constant/NodeType.java new file mode 100644 index 0000000..60e0336 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/constant/NodeType.java @@ -0,0 +1,39 @@ +package net.srt.lineage.constant; + +/** + * + **/ +public enum NodeType { + + /** + * DATABASE + */ + DATABASE(1, "DATABASE"), + + /** + * TABLE + */ + TABLE(2, "TABLE"), + + /** + * COLUMN + */ + COLUMN(3, "COLUMN"); + + + private final Integer code; + private final String value; + + NodeType(Integer code, String value) { + this.code = code; + this.value = value; + } + + public String getValue() { + return this.value; + } + + public Integer getCode() { + return this.code; + } +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/constant/RelationType.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/constant/RelationType.java new file mode 100644 index 0000000..16dd411 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/constant/RelationType.java @@ -0,0
+package net.srt.lineage.constant;
+
+/**
+ * 数据血缘关系类型
+ **/
+public enum RelationType {
+
+	/**
+	 * DATABASE_TO_DATABASE
+	 */
+	DATABASE_TO_DATABASE(1, "DATABASE_TO_DATABASE"),
+
+	/**
+	 * DATABASE_CONTAIN_TABLE
+	 */
+	DATABASE_CONTAIN_TABLE(2, "DATABASE_CONTAIN_TABLE"),
+
+	/**
+	 * TABLE_TO_TABLE
+	 */
+	TABLE_TO_TABLE(3, "TABLE_TO_TABLE"),
+
+	/**
+	 * TABLE_CONTAIN_COLUMN
+	 */
+	TABLE_CONTAIN_COLUMN(4, "TABLE_CONTAIN_COLUMN"),
+
+	/**
+	 * COLUMN_TO_COLUMN
+	 */
+	COLUMN_TO_COLUMN(5, "COLUMN_TO_COLUMN");
+
+
+	private final Integer code;
+	private final String value;
+
+	RelationType(Integer code, String value) {
+		this.code = code;
+		this.value = value;
+	}
+
+	public String getValue() {
+		return this.value;
+	}
+
+	public Integer getCode() {
+		return this.code;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Column.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Column.java
new file mode 100644
index 0000000..6d90c96
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Column.java
@@ -0,0 +1,47 @@
+package net.srt.lineage.node;
+
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+import net.srt.lineage.relation.ColumnRelation;
+import org.springframework.data.neo4j.core.schema.GeneratedValue;
+import org.springframework.data.neo4j.core.schema.Id;
+import org.springframework.data.neo4j.core.schema.Node;
+import org.springframework.data.neo4j.core.schema.Property;
+import org.springframework.data.neo4j.core.schema.Relationship;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * 字段节点
+ *
+ * @ClassName Column
+ * @Author zrx
+ * @Date 2023/4/10 14:24
+ */
+@Data
+@AllArgsConstructor
+@NoArgsConstructor
+@Node("COLUMN")
+public class Column {
+	@Id
+	@GeneratedValue
+	private Long id;
+	@Property
+	private String name;
+	@Property
+	private String code;
+	@Property
+	private Long parentId;
+	@Property
+	private Long databaseId;
+
+	public Column(String name, String code, Long databaseId, Long parentId) {
+		this.name = name;
+		this.code = code;
+		this.databaseId = databaseId;
+		this.parentId = parentId;
+	}
+
+	@Relationship(type = "FLOW_TO", direction = Relationship.Direction.OUTGOING)
+	private List<ColumnRelation> columnRelations = new ArrayList<>();
+}
diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Database.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Database.java
new file mode 100644
index 0000000..dc84ec8
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Database.java
@@ -0,0 +1,61 @@
+package net.srt.lineage.node;
+
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+import net.srt.lineage.relation.DatabaseRelation;
+import net.srt.lineage.relation.DatabaseTableRelation;
+import org.springframework.data.neo4j.core.schema.GeneratedValue;
+import org.springframework.data.neo4j.core.schema.Id;
+import org.springframework.data.neo4j.core.schema.Node;
+import org.springframework.data.neo4j.core.schema.Property;
+import org.springframework.data.neo4j.core.schema.Relationship;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * 数据库节点
+ *
+ * @ClassName Database
+ * @Author zrx
+ * @Date 2023/4/10 14:13
+ */
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+@Node("DATABASE")
+public class Database {
+	@Id
+	@GeneratedValue
+	private Long id;
+	@Property
+	private String name;
+	@Property
+	private String code;
+	@Property
+	private String jdbcUrl;
+	@Property
+	private String username;
+	@Property
+	private String password;
+	//业务库中对应的数据库id(区别于图库节点id)
+	@Property
+	private Long databaseId;
+
+	public Database(String name, String code, String jdbcUrl, String username, String password, Long databaseId) {
+		this.name = name;
+		this.code = code;
+		this.jdbcUrl = jdbcUrl;
+		this.username = username;
+		this.password = password;
+		this.databaseId = databaseId;
+	}
+
+	@Relationship(type = "FLOW_TO", direction = Relationship.Direction.OUTGOING)
+	private List<DatabaseRelation> databaseRelations = new ArrayList<>();
+
+	@Relationship(type = "BELONG_TO", direction = Relationship.Direction.INCOMING)
+	private List<DatabaseTableRelation> databaseTableRelations = new ArrayList<>();
+
+}
diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Table.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Table.java
new file mode 100644
index 0000000..16b6cfb
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/node/Table.java
@@ -0,0 +1,51 @@
+package net.srt.lineage.node;
+
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+import net.srt.lineage.relation.TableColumnRelation;
+import net.srt.lineage.relation.TableRelation;
+import org.springframework.data.neo4j.core.schema.GeneratedValue;
+import org.springframework.data.neo4j.core.schema.Id;
+import org.springframework.data.neo4j.core.schema.Node;
+import org.springframework.data.neo4j.core.schema.Property;
+import org.springframework.data.neo4j.core.schema.Relationship;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * 数据表节点
+ *
+ * @ClassName Table
+ * @Author zrx
+ * @Date 2023/4/10 14:24
+ */
+@Data
+@AllArgsConstructor
+@NoArgsConstructor
+@Node("TABLE")
+public class Table {
+	@Id
+	@GeneratedValue
+	private Long id;
+	@Property
+	private String name;
+	@Property
+	private String code;
+	@Property
+	private Long parentId;
+	@Property
+	private Long databaseId;
+
+	public Table(String name, String code, Long databaseId, Long parentId) {
+		this.name = name;
+		this.code = code;
+		this.databaseId = databaseId;
+		this.parentId = parentId;
+	}
+
+	@Relationship(type = "FLOW_TO", direction = Relationship.Direction.OUTGOING)
+	private List<TableRelation> tableRelations = new ArrayList<>();
+
+	@Relationship(type = "BELONG_TO", direction = Relationship.Direction.INCOMING)
+	private List<TableColumnRelation> tableColumnRelations = new ArrayList<>();
+}
diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/ColumnRelation.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/ColumnRelation.java
new file mode 100644
index 0000000..1955971
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/ColumnRelation.java
@@ -0,0 +1,47 @@
+package net.srt.lineage.relation;
+
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+import net.srt.lineage.node.Column;
+import org.springframework.data.neo4j.core.schema.Property;
+import org.springframework.data.neo4j.core.schema.RelationshipId;
+import org.springframework.data.neo4j.core.schema.RelationshipProperties;
+import org.springframework.data.neo4j.core.schema.TargetNode;
+
+/**
+ * @ClassName ColumnRelation
+ * @Author zrx
+ * @Date 2023/4/10 14:35
+ */
+@Data
+@AllArgsConstructor
+@NoArgsConstructor
+@RelationshipProperties
+public class ColumnRelation {
+
+ @RelationshipId + private Long id; + private Long relId; + @TargetNode + private Column column; + @Property + private String relationType; + @Property + private Long dataAccessId; + @Property + private String dataAccessName; + @Property + private Long dataProductionTaskId; + @Property + private String dataProductionTaskName; + + public ColumnRelation(Column column, String relationType, Long dataAccessId, String dataAccessName) { + this.column = column; + this.relationType = relationType; + this.dataAccessId = dataAccessId; + this.dataAccessName = dataAccessName; + } + + +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/DatabaseRelation.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/DatabaseRelation.java new file mode 100644 index 0000000..1a2bf83 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/DatabaseRelation.java @@ -0,0 +1,54 @@ +package net.srt.lineage.relation; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; +import net.srt.lineage.node.Database; +import org.springframework.data.neo4j.core.schema.Property; +import org.springframework.data.neo4j.core.schema.RelationshipId; +import org.springframework.data.neo4j.core.schema.RelationshipProperties; +import org.springframework.data.neo4j.core.schema.TargetNode; + +/** + * @ClassName DatabaseRelation + * @Author zrx + * @Date 2023/4/10 14:35 + */ +@Data +@NoArgsConstructor +@AllArgsConstructor +@RelationshipProperties +public class DatabaseRelation { + + @RelationshipId + private Long id; + private Long relId; + @TargetNode + private Database database; + @Property + private String relationType; + @Property + private Long dataAccessId; + @Property + private String dataAccessName; + @Property + private Long dataProductionTaskId; + @Property + private String dataProductionTaskName; + + public DatabaseRelation(Database database, String relationType, Long dataAccessId, String dataAccessName, Long dataProductionTaskId, String dataProductionTaskName) { + this.database = database; + this.relationType = relationType; + this.dataAccessId = dataAccessId; + this.dataAccessName = dataAccessName; + this.dataProductionTaskId = dataProductionTaskId; + this.dataProductionTaskName = dataProductionTaskName; + } + + public DatabaseRelation(Database database, String relationType, Long dataAccessId, String dataAccessName) { + this.database = database; + this.relationType = relationType; + this.dataAccessId = dataAccessId; + this.dataAccessName = dataAccessName; + } +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/DatabaseTableRelation.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/DatabaseTableRelation.java new file mode 100644 index 0000000..91a401b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/DatabaseTableRelation.java @@ -0,0 +1,36 @@ +package net.srt.lineage.relation; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; +import net.srt.lineage.node.Table; +import org.springframework.data.neo4j.core.schema.Property; +import org.springframework.data.neo4j.core.schema.RelationshipId; +import org.springframework.data.neo4j.core.schema.RelationshipProperties; +import org.springframework.data.neo4j.core.schema.TargetNode; + +/** + * @ClassName DatabaseRelation + * @Author zrx + * @Date 
2023/4/10 14:35 + */ +@Data +@RelationshipProperties +@NoArgsConstructor +@AllArgsConstructor +public class DatabaseTableRelation { + + @RelationshipId + private Long id; + private Long relId; + @TargetNode + private Table table; + @Property + private String relationType; + + public DatabaseTableRelation(Table table, String relationType) { + this.table = table; + this.relationType = relationType; + } + +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/TableColumnRelation.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/TableColumnRelation.java new file mode 100644 index 0000000..d9be62b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/TableColumnRelation.java @@ -0,0 +1,36 @@ +package net.srt.lineage.relation; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; +import net.srt.lineage.node.Column; +import org.springframework.data.neo4j.core.schema.Property; +import org.springframework.data.neo4j.core.schema.RelationshipId; +import org.springframework.data.neo4j.core.schema.RelationshipProperties; +import org.springframework.data.neo4j.core.schema.TargetNode; + +/** + * @ClassName DatabaseRelation + * @Author zrx + * @Date 2023/4/10 14:35 + */ +@Data +@AllArgsConstructor +@NoArgsConstructor +@RelationshipProperties +public class TableColumnRelation { + + @RelationshipId + private Long id; + private Long relId; + @TargetNode + private Column column; + @Property + private String relationType; + + public TableColumnRelation(Column column, String relationType) { + this.column = column; + this.relationType = relationType; + } + +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/TableRelation.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/TableRelation.java new file mode 100644 index 0000000..9247ed6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/relation/TableRelation.java @@ -0,0 +1,46 @@ +package net.srt.lineage.relation; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; +import net.srt.lineage.node.Table; +import org.springframework.data.neo4j.core.schema.Property; +import org.springframework.data.neo4j.core.schema.RelationshipId; +import org.springframework.data.neo4j.core.schema.RelationshipProperties; +import org.springframework.data.neo4j.core.schema.TargetNode; + +/** + * @ClassName DatabaseRelation + * @Author zrx + * @Date 2023/4/10 14:35 + */ +@Data +@AllArgsConstructor +@NoArgsConstructor +@RelationshipProperties +public class TableRelation { + + @RelationshipId + private Long id; + private Long relId; + @TargetNode + private Table table; + @Property + private String relationType; + @Property + private Long dataAccessId; + @Property + private String dataAccessName; + @Property + private Long dataProductionTaskId; + @Property + private String dataProductionTaskName; + + public TableRelation(Table table, String relationType, Long dataAccessId, String dataAccessName) { + this.table = table; + this.relationType = relationType; + this.dataAccessId = dataAccessId; + this.dataAccessName = dataAccessName; + } + +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/ColumnRelationRepository.java 
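The node classes (`Database`/`Table`/`Column`) and the relationship classes above make up the lineage graph: `BELONG_TO` edges hang tables under databases and columns under tables, while `FLOW_TO` edges carry the data-access or data-production task that moved the data. A minimal sketch, not part of the commit, of assembling one chain in memory with these classes (all ids, names, and the task reference below are hypothetical):

```java
import net.srt.lineage.constant.RelationType;
import net.srt.lineage.node.Column;
import net.srt.lineage.node.Database;
import net.srt.lineage.node.Table;
import net.srt.lineage.relation.ColumnRelation;
import net.srt.lineage.relation.DatabaseTableRelation;
import net.srt.lineage.relation.TableColumnRelation;

public class LineageModelSketch {

	public static void main(String[] args) {
		// ODS-side database, table and column (databaseId = 1L refers to the
		// platform's own database record, not the Neo4j node id)
		Database ods = new Database("ods库", "ods", "jdbc:mysql://localhost:3306/ods",
				"root", "***", 1L);
		Table odsUser = new Table("用户表", "ods_user", 1L, null);
		Column odsUserName = new Column("用户名", "user_name", 1L, null);

		// Containment edges: TABLE -[BELONG_TO]-> DATABASE, COLUMN -[BELONG_TO]-> TABLE
		ods.getDatabaseTableRelations()
				.add(new DatabaseTableRelation(odsUser, RelationType.DATABASE_CONTAIN_TABLE.getValue()));
		odsUser.getTableColumnRelations()
				.add(new TableColumnRelation(odsUserName, RelationType.TABLE_CONTAIN_COLUMN.getValue()));

		// Flow edge: COLUMN -[FLOW_TO]-> COLUMN, carrying the data-access task
		// (id 100L and its name are made up) that produced the target column
		Column dwdUserName = new Column("用户名", "user_name", 2L, null);
		odsUserName.getColumnRelations()
				.add(new ColumnRelation(dwdUserName, RelationType.COLUMN_TO_COLUMN.getValue(),
						100L, "用户表接入"));
	}
}
```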
b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/ColumnRelationRepository.java new file mode 100644 index 0000000..b7811c6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/ColumnRelationRepository.java @@ -0,0 +1,24 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.relation.ColumnRelation; +import net.srt.lineage.relation.DatabaseRelation; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.data.repository.query.Param; +import org.springframework.stereotype.Repository; + +@Repository +public interface ColumnRelationRepository extends Neo4jRepository { + + @Query("match(a:COLUMN) WHERE ID(a)=$sourceId match(b:COLUMN) WHERE ID(b)=$targetId match p=(a)-[r:FLOW_TO]->(b) return {relId:ID(r)} LIMIT 1") + ColumnRelation getBySourceAndTargetId(Long sourceId, Long targetId); + + @Query("match(a:COLUMN) WHERE ID(a)=$sourceId match(b:COLUMN) WHERE ID(b)=$targetId " + + "create (a)-[r:FLOW_TO" + + "{relationType::#{#columnRelation.relationType},dataAccessId::#{#columnRelation.dataAccessId},dataAccessName::#{#columnRelation.dataAccessName}}" + + "]->(b) ") + void create(@Param("sourceId") Long sourceId, @Param("targetId") Long targetId, @Param("columnRelation") ColumnRelation columnRelation); + + @Query("MATCH (n:COLUMN)-[r:FLOW_TO]->(m:COLUMN) WHERE ID(r)=:#{#columnRelation.id} SET r.relationType=:#{#columnRelation.relationType},r.dataAccessId=:#{#columnRelation.dataAccessId},r.dataAccessName=:#{#columnRelation.dataAccessName}") + void update(@Param("columnRelation") ColumnRelation columnRelation); +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/ColumnRepository.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/ColumnRepository.java new file mode 100644 index 0000000..ff352d3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/ColumnRepository.java @@ -0,0 +1,17 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.node.Column; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.data.repository.query.Param; +import org.springframework.stereotype.Repository; + +@Repository +public interface ColumnRepository extends Neo4jRepository { + + @Query("MATCH (n:COLUMN) WHERE n.databaseId=$databaseId AND n.code=$code AND n.parentId=$parentId return n LIMIT 1") + Column get(Long databaseId, String code, Long parentId); + + @Query("MATCH (n:COLUMN) WHERE ID(n)=:#{#column.id} SET n.name=:#{#column.name},n.code=:#{#column.code},n.parentId=:#{#column.parentId},n.databaseId=:#{#column.databaseId}") + void update(@Param("column") Column column); +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseRelationRepository.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseRelationRepository.java new file mode 100644 index 0000000..239af51 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseRelationRepository.java @@ -0,0 +1,23 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.relation.DatabaseRelation; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import 
org.springframework.data.neo4j.repository.query.Query; +import org.springframework.data.repository.query.Param; +import org.springframework.stereotype.Repository; + +@Repository +public interface DatabaseRelationRepository extends Neo4jRepository { + + @Query("match(a:DATABASE) WHERE ID(a)=$sourceId match(b:DATABASE) WHERE ID(b)=$targetId match p=(a)-[r:FLOW_TO]->(b) return {relId:ID(r)} LIMIT 1") + DatabaseRelation getBySourceAndTargetId(Long sourceId, Long targetId); + + @Query("match(a:DATABASE) WHERE ID(a)=$sourceId match(b:DATABASE) WHERE ID(b)=$targetId " + + "create (a)-[r:FLOW_TO" + + "{relationType::#{#databaseRelation.relationType},dataAccessId::#{#databaseRelation.dataAccessId},dataAccessName::#{#databaseRelation.dataAccessName}}" + + "]->(b) ") + void create(Long sourceId, Long targetId, @Param("databaseRelation") DatabaseRelation databaseRelation); + + @Query("MATCH (n:DATABASE)-[r:FLOW_TO]->(m:DATABASE) WHERE ID(r)=:#{#databaseRelation.id} SET r.relationType=:#{#databaseRelation.relationType},r.dataAccessId=:#{#databaseRelation.dataAccessId},r.dataAccessName=:#{#databaseRelation.dataAccessName}") + void update(@Param("databaseRelation") DatabaseRelation databaseRelation); +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseRepository.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseRepository.java new file mode 100644 index 0000000..29243bb --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseRepository.java @@ -0,0 +1,22 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.node.Database; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.data.repository.query.Param; +import org.springframework.stereotype.Repository; + +import java.util.List; + +@Repository +public interface DatabaseRepository extends Neo4jRepository { + + @Query("MATCH (n:DATABASE) WHERE n.databaseId=$databaseId return n LIMIT 1") + Database getByDatabaseId(Long databaseId); + + @Query("MATCH (n:DATABASE) WHERE ID(n)=:#{#database.id} SET n.name=:#{#database.name},n.code=:#{#database.code},n.jdbcUrl=:#{#database.jdbcUrl},n.username=:#{#database.username},n.password=:#{#database.password}") + void update(@Param("database") Database database); + + /*@Query("MATCH p=()-[r:FLOW_TO]->() RETURN p LIMIT 25") + List selectAll();*/ +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseTableRelationRepository.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseTableRelationRepository.java new file mode 100644 index 0000000..7b155fa --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/DatabaseTableRelationRepository.java @@ -0,0 +1,20 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.relation.DatabaseRelation; +import net.srt.lineage.relation.DatabaseTableRelation; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.stereotype.Repository; + +@Repository +public interface DatabaseTableRelationRepository extends Neo4jRepository { + + @Query("match(a:DATABASE) WHERE ID(a)=$targetId match(b:TABLE) WHERE ID(b)=$sourceId match p=(b)-[r:BELONG_TO]->(a) return {relId:ID(r)} LIMIT 
1") + DatabaseTableRelation getBySourceAndTargetId(Long sourceId, Long targetId); + + @Query("match(a:DATABASE) WHERE ID(a)=$targetId match(b:TABLE) WHERE ID(b)=$sourceId " + + "create (b)-[r:BELONG_TO" + + "{relationType:$relationType}" + + "]->(a) ") + void create(Long sourceId, Long targetId, String relationType); +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableColumnRelationRepository.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableColumnRelationRepository.java new file mode 100644 index 0000000..5046bba --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableColumnRelationRepository.java @@ -0,0 +1,19 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.relation.TableColumnRelation; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.stereotype.Repository; + +@Repository +public interface TableColumnRelationRepository extends Neo4jRepository { + + @Query("match(a:TABLE) WHERE ID(a)=$targetId match(b:COLUMN) WHERE ID(b)=$sourceId match p=(b)-[r:BELONG_TO]->(a) return {relId:ID(r)} LIMIT 1") + TableColumnRelation getBySourceAndTargetId(Long sourceId, Long targetId); + + @Query("match(a:TABLE) WHERE ID(a)=$targetId match(b:COLUMN) WHERE ID(b)=$sourceId " + + "create (b)-[r:BELONG_TO" + + "{relationType:$relationType}" + + "]->(a) ") + void create(Long sourceId, Long targetId, String relationType); +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableRelationRepository.java b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableRelationRepository.java new file mode 100644 index 0000000..3d6e602 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableRelationRepository.java @@ -0,0 +1,24 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.relation.DatabaseRelation; +import net.srt.lineage.relation.TableRelation; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.data.repository.query.Param; +import org.springframework.stereotype.Repository; + +@Repository +public interface TableRelationRepository extends Neo4jRepository { + + @Query("match(a:TABLE) WHERE ID(a)=$sourceId match(b:TABLE) WHERE ID(b)=$targetId match p=(a)-[r:FLOW_TO]->(b) return {relId:ID(r)} LIMIT 1") + TableRelation getBySourceAndTargetId(Long sourceId, Long targetId); + + @Query("match(a:TABLE) WHERE ID(a)=$sourceId match(b:TABLE) WHERE ID(b)=$targetId " + + "create (a)-[r:FLOW_TO" + + "{relationType::#{#tableRelation.relationType},dataAccessId::#{#tableRelation.dataAccessId},dataAccessName::#{#tableRelation.dataAccessName}}" + + "]->(b) ") + void create(Long sourceId, Long targetId, @Param("tableRelation") TableRelation tableRelation); + + @Query("MATCH (n:TABLE)-[r:FLOW_TO]->(m:TABLE) WHERE ID(r)=:#{#tableRelation.id} SET r.relationType=:#{#tableRelation.relationType},r.dataAccessId=:#{#tableRelation.dataAccessId},r.dataAccessName=:#{#tableRelation.dataAccessName}") + void update(@Param("tableRelation") TableRelation tableRelation); +} diff --git a/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableRepository.java 
b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableRepository.java new file mode 100644 index 0000000..154607b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-data-lineage/src/main/java/net/srt/lineage/repository/TableRepository.java @@ -0,0 +1,17 @@ +package net.srt.lineage.repository; + +import net.srt.lineage.node.Table; +import org.springframework.data.neo4j.repository.Neo4jRepository; +import org.springframework.data.neo4j.repository.query.Query; +import org.springframework.data.repository.query.Param; +import org.springframework.stereotype.Repository; + +@Repository +public interface TableRepository extends Neo4jRepository { + + @Query("MATCH (n:TABLE) WHERE n.databaseId=$databaseId AND n.code=$code AND n.parentId=$parentId return n LIMIT 1") + Table get(Long databaseId, String code, Long parentId); + + @Query("MATCH (n:TABLE) WHERE ID(n)=:#{#table.id} SET n.name=:#{#table.name},n.code=:#{#table.code},n.parentId=:#{#table.parentId},n.databaseId=:#{#table.databaseId}") + void update(@Param("table") Table table); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/DmJdbcDriver18.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/DmJdbcDriver18.jar new file mode 100644 index 0000000..932d299 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/DmJdbcDriver18.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/gbase-connector-java-8.3.81.53-build55.5.3-bin.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/gbase-connector-java-8.3.81.53-build55.5.3-bin.jar new file mode 100644 index 0000000..a42ebb5 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/gbase-connector-java-8.3.81.53-build55.5.3-bin.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/greenplum-jdbc-5.1.4.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/greenplum-jdbc-5.1.4.jar new file mode 100644 index 0000000..4e6b40e Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/greenplum-jdbc-5.1.4.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/jconn4.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/jconn4.jar new file mode 100644 index 0000000..e166b44 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/jconn4.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/kingbase8-8.2.0.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/kingbase8-8.2.0.jar new file mode 100644 index 0000000..70a9025 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/kingbase8-8.2.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/kingbase8-8.6.0.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/kingbase8-8.6.0.jar new file mode 100644 index 0000000..875c5af Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/kingbase8-8.6.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/msbase.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/msbase.jar new file mode 100644 index 0000000..63b5577 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/msbase.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/mssqlserver.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/mssqlserver.jar new file mode 100644 index 0000000..2ff895c Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/mssqlserver.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/msutil.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/msutil.jar new file mode 100644 index 
0000000..f09b39f Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/msutil.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/ojdbc8-19.3.0.0.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/ojdbc8-19.3.0.0.jar new file mode 100644 index 0000000..2ebb05e Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/ojdbc8-19.3.0.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/oscarJDBC8.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/oscarJDBC8.jar new file mode 100644 index 0000000..331aaf6 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/oscarJDBC8.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/sqljdbc4-4.0.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/sqljdbc4-4.0.jar new file mode 100644 index 0000000..d6b7f6d Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/sqljdbc4-4.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/lib/sqljdbc6.0-6.0.jar b/srt-cloud-framework/srt-cloud-dbswitch/lib/sqljdbc6.0-6.0.jar new file mode 100644 index 0000000..82dac34 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-dbswitch/lib/sqljdbc6.0-6.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-dbswitch/pom.xml b/srt-cloud-framework/srt-cloud-dbswitch/pom.xml new file mode 100644 index 0000000..7ef73b3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/pom.xml @@ -0,0 +1,262 @@ + + + + srt-cloud-framework + net.srt + 2.0.0 + + 4.0.0 + + srt-cloud-dbswitch + + + + org.springframework.boot + spring-boot-starter-web + + + + org.springframework.boot + spring-boot-starter-jdbc + + + + com.alibaba + druid-spring-boot-starter + provided + + + + mysql + mysql-connector-java + ${mysql-connector-java.version} + + + org.postgresql + postgresql + ${postgresql.version} + + + com.oracle.ojdbc + ojdbc8 + ${ojdbc8.version} + + + com.oracle.ojdbc + orai18n + ${ojdbc8.version} + runtime + + + + com.microsoft.sqlserver + sqljdbc6.0 + ${sqljdbc6.0.version} + + + + com.microsoft.sqlserver + msbase + ${msbase.version} + + + + com.microsoft.sqlserver + msutil + ${msutil.version} + + + + com.microsoft.sqlserver + mssqlserver + ${mssqlserver.version} + + + + com.pivotal + greenplum-jdbc + ${greenplum-jdbc.version} + + + + com.dameng + dm-jdbc + ${dm-jdbc.version} + + + + com.kingbase + kingbase-jdbc + ${kingbase-jdbc.version} + + + org.mariadb.jdbc + mariadb-java-client + runtime + + + com.ibm.db2.jcc + db2jcc + db2jcc4 + runtime + + + org.xerial + sqlite-jdbc + 3.31.1 + + + + org.apache.hive + hive-jdbc + ${hive-jdbc.version} + runtime + + + org.eclipse.jetty.aggregate + jetty-all + + + org.apache.hive + hive-shims + + + slf4j-log4j12 + org.slf4j + + + ch.qos.logback + logback-classic + + + tomcat + * + + + javax.servlet + * + + + org.eclipse.jetty.orbit + * + + + org.eclipse.jetty.aggregate + * + + + org.mortbay.jetty + * + + + org.eclipse.jetty + * + + + org.apache.hbase + * + + + org.apache.logging.log4j + * + + + log4j + log4j + + + guava + com.google.guava + + + + + + + com.sybase + jconn4 + 1.0 + runtime + + + + + com.oscar + oscar-jdbc + 7.0.0 + runtime + + + + + com.gbase.jdbc + gbase-connector-java + 8.3.81.53 + runtime + + + + org.ehcache + sizeof + ${sizeof.version} + + + net.minidev + json-smart + ${json-smart.version} + runtime + + + org.apache.calcite + calcite-core + ${calcite-core.version} + + + com.google.guava + guava + + + + + org.apache.calcite + calcite-server + ${calcite-server.version} + + + com.google.guava + guava + + + + + + 
com.github.jsqlparser + jsqlparser + ${jsqlparser.version} + + + + + net.srt + flink-common + ${project.version} + + + + + net.srt + flink-process + ${project.version} + + + + io.github.freakchick + orange + 1.0 + + + + + diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/constant/Const.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/constant/Const.java new file mode 100644 index 0000000..58330d0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/constant/Const.java @@ -0,0 +1,75 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.common.constant; + +/** + * 常量定义 + * + * @author jrl + */ +public final class Const { + + /** + * What's the file systems file separator on this operating system? + */ + public static final String FILE_SEPARATOR = System.getProperty("file.separator"); + + /** + * What's the path separator on this operating system? + */ + public static final String PATH_SEPARATOR = System.getProperty("path.separator"); + + /** + * CR: operating systems specific Carriage Return + */ + public static final String CR = System.getProperty("line.separator"); + + /** + * DOSCR: MS-DOS specific Carriage Return + */ + public static final String DOSCR = "\n\r"; + + /** + * An empty ("") String. + */ + public static final String EMPTY_STRING = ""; + + /** + * The Java runtime version + */ + public static final String JAVA_VERSION = System.getProperty("java.vm.version"); + + /** + * Create Table Statement Prefix String + */ + public static final String CREATE_TABLE = "CREATE TABLE "; + + /** + * Drop Table Statement Prefix String + */ + public static final String DROP_TABLE = "DROP TABLE "; + + /** + * Constant Keyword String + */ + public static final String IF_NOT_EXISTS = "IF NOT EXISTS "; + + /** + * Constant Keyword String + */ + public static final String IF_EXISTS = "IF EXISTS "; + + /** + * Constructor Function + */ + private Const() { + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/entity/PatternMapper.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/entity/PatternMapper.java new file mode 100644 index 0000000..27885e3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/entity/PatternMapper.java @@ -0,0 +1,71 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.common.entity;
+
+import java.util.Objects;
+
+/**
+ * 基于正则表达式的批量替换实体定义
+ */
+public class PatternMapper {
+
+	private String fromPattern;
+	private String toValue;
+
+	public PatternMapper() {
+	}
+
+	public PatternMapper(String fromPattern, String toValue) {
+		this.fromPattern = fromPattern;
+		this.toValue = toValue;
+	}
+
+	public String getFromPattern() {
+		return fromPattern;
+	}
+
+	public void setFromPattern(String fromPattern) {
+		this.fromPattern = fromPattern;
+	}
+
+	public String getToValue() {
+		return toValue;
+	}
+
+	public void setToValue(String toValue) {
+		this.toValue = toValue;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || getClass() != o.getClass()) {
+			return false;
+		}
+		PatternMapper that = (PatternMapper) o;
+		return fromPattern.equals(that.fromPattern) && toValue.equals(that.toValue);
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(fromPattern, toValue);
+	}
+
+	@Override
+	public String toString() {
+		return "PatternMapper{" +
+				"fromPattern='" + fromPattern + '\'' +
+				", toValue='" + toValue + '\'' +
+				'}';
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/type/DBTableType.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/type/DBTableType.java
new file mode 100644
index 0000000..db9f3b6
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/type/DBTableType.java
@@ -0,0 +1,37 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.common.type;
+
+/**
+ * 数据库表类型:视图表、物理表
+ *
+ * @author jrl
+ */
+public enum DBTableType {
+	/**
+	 * 物理表
+	 */
+	TABLE(0),
+
+	/**
+	 * 视图表
+	 */
+	VIEW(1);
+
+	private int index;
+
+	DBTableType(int idx) {
+		this.index = idx;
+	}
+
+	public int getIndex() {
+		return index;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/type/ProductTypeEnum.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/type/ProductTypeEnum.java
new file mode 100644
index 0000000..8ef7e62
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/type/ProductTypeEnum.java
@@ -0,0 +1,151 @@
+package srt.cloud.framework.dbswitch.common.type;// Copyright tang. All rights reserved.
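`PatternMapper` is a plain from-regex/to-value pair; `PatterNameUtils` later in this commit applies a list of them with `String.replaceAll`. A minimal sketch, not part of the commit, of the semantics (the table name and rules below are made up; an anchor rule like `^` is how a prefix such as `ods_` can be attached during data access):

```java
import srt.cloud.framework.dbswitch.common.entity.PatternMapper;

import java.util.Arrays;
import java.util.List;

public class PatternMapperSketch {

	public static void main(String[] args) {
		List<PatternMapper> mappers = Arrays.asList(
				new PatternMapper("^", "ods_"),   // add a prefix
				new PatternMapper("_tmp$", "")); // strip a suffix

		String name = "user_info_tmp";
		for (PatternMapper mapper : mappers) {
			// fromPattern is a regex, toValue the replacement (null treated as "")
			name = name.replaceAll(mapper.getFromPattern(),
					mapper.getToValue() == null ? "" : mapper.getToValue());
		}
		System.out.println(name); // ods_user_info
	}
}
```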
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// + +import java.util.Arrays; + +/** + * 数据库产品类型的枚举定义 + * + * @author Tang + */ +public enum ProductTypeEnum { + /** + * 未知数据库类型 + */ + UNKNOWN(0, null, null, null), + + /** + * MySQL数据库类型 + */ + MYSQL(1, "com.mysql.jdbc.Driver","/* ping */ SELECT 1", "jdbc:mysql://{host}:{port}/{database}?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true&rewriteBatchedStatements=true"), + + /** + * Oracle数据库类型 + */ + ORACLE(2, "oracle.jdbc.driver.OracleDriver","SELECT 'Hello' from DUAL", "jdbc:oracle:thin:@{host}:{port}:{database}"), + + /** + * SQLServer 2000数据库类型 + */ + SQLSERVER2000(3, "com.microsoft.sqlserver.jdbc.SQLServerDriver","SELECT 1+2 as a", "jdbc:sqlserver://{host}:{port};DatabaseName={database}"), + + /** + * SQLServer数据库类型 + */ + SQLSERVER(4, "com.microsoft.sqlserver.jdbc.SQLServerDriver","SELECT 1+2 as a", "jdbc:sqlserver://{host}:{port};DatabaseName={database}"), + + /** + * PostgreSQL数据库类型 + */ + POSTGRESQL(5, "org.postgresql.Driver","SELECT 1", "jdbc:postgresql://{host}:{port}/{database}"), + + /** + * Greenplum数据库类型 + */ + GREENPLUM(6, "org.postgresql.Driver","SELECT 1", "jdbc:postgresql://{host}:{port}/{database}"), + + /** + * MariaDB数据库类型 + */ + MARIADB(7, "org.mariadb.jdbc.Driver", "SELECT 1", "jdbc:mariadb://{host}:{port}/{database}"), + + /** + * DB2数据库类型 + */ + DB2(8,"com.ibm.db2.jcc.DB2Driver", "SELECT 1 FROM SYSIBM.SYSDUMMY1", "jdbc:db2://{host}:{port}/{database}"), + + /** + * [国产]达梦数据库类型 + */ + DM(9, "dm.jdbc.driver.DmDriver","SELECT 'Hello' from DUAL", "jdbc:dm://{host}:{port}/{database}"), + + /** + * [国产]人大金仓数据库类型 + */ + KINGBASE(10, "com.kingbase8.Driver","SELECT 1", "jdbc:kingbase8://{host}:{port}/{database}"), + + /** + * [国产]神通数据库 + */ + OSCAR(11, "com.oscar.Driver", "SELECT 1", "jdbc:oscar://{host}:{port}/{database}"), + + /** + * [国产]南大通用GBase8a数据库 + */ + GBASE8A(12, "com.gbase.jdbc.Driver", "/* ping */ SELECT 1", "jdbc:gbase://{host}:{port}/{database}"), + + /** + * HIVE数据库 + */ + HIVE(13, "org.apache.hive.jdbc.HiveDriver", "SELECT 1", "jdbc:hive2://{host}:{port}/{database}"), + + /** + * SQLite数据库 + */ + SQLITE3(14, "org.sqlite.JDBC", "SELECT 1", "jdbc:sqlite::resource:{file}"), + + /** + * Sybase数据库类型 + */ + SYBASE(15, "com.sybase.jdbc4.jdbc.SybDriver", "SELECT 1+2 as a", "jdbc:sybase:Tds:{host}:{port}/{database}"), + + /** + * MySQL数据库类型 + */ + DORIS(16, "com.mysql.jdbc.Driver","/* ping */ SELECT 1", "jdbc:mysql://{host}:{port}/{database}?useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8&rewriteBatchedStatements=true"), + ; + + private Integer index; + private String driveClassName; + private String testSql; + private String url; + + public String getTestSql() { + return testSql; + } + + public String getUrl() { + return url; + } + + public String getDriveClassName() { + return driveClassName; + } + + ProductTypeEnum(Integer idx, String driveClassName, String testSql, String url) { + this.index = idx; + this.driveClassName = driveClassName; + this.testSql = testSql; + this.url = url; + } + + public Integer getIndex() { + return index; + } + + + public static ProductTypeEnum getByIndex(Integer index) { + return 
Arrays.stream(ProductTypeEnum.values()).filter(productTypeEnum -> productTypeEnum.getIndex().equals(index)).findFirst().orElse(ProductTypeEnum.UNKNOWN); + } + + + public boolean noCommentStatement() { + return Arrays.asList( + ProductTypeEnum.MYSQL, + ProductTypeEnum.MARIADB, + ProductTypeEnum.GBASE8A, + ProductTypeEnum.HIVE, + ProductTypeEnum.SQLITE3, + ProductTypeEnum.SYBASE, + ProductTypeEnum.DORIS + ).contains(this); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/DatabaseAwareUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/DatabaseAwareUtils.java new file mode 100644 index 0000000..ae45c8e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/DatabaseAwareUtils.java @@ -0,0 +1,125 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.common.util; + + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.HashMap; +import java.util.Map; + +/** + * 数据库类型识别工具类 + * + * @author tang + */ +public final class DatabaseAwareUtils { + + private static final Map productNameMap; + + private static final Map driverNameMap; + + static { + productNameMap = new HashMap<>(); + driverNameMap = new HashMap<>(); + + productNameMap.put("Greenplum", ProductTypeEnum.GREENPLUM); + productNameMap.put("Microsoft SQL Server", ProductTypeEnum.SQLSERVER); + productNameMap.put("DM DBMS", ProductTypeEnum.DM); + productNameMap.put("KingbaseES", ProductTypeEnum.KINGBASE); + productNameMap.put("Apache Hive", ProductTypeEnum.HIVE); + productNameMap.put("MySQL", ProductTypeEnum.MYSQL); + productNameMap.put("MariaDB", ProductTypeEnum.MARIADB); + productNameMap.put("Oracle", ProductTypeEnum.ORACLE); + productNameMap.put("PostgreSQL", ProductTypeEnum.POSTGRESQL); + productNameMap.put("DB2 for Unix/Windows", ProductTypeEnum.DB2); + productNameMap.put("Hive", ProductTypeEnum.HIVE); + productNameMap.put("SQLite", ProductTypeEnum.SQLITE3); + productNameMap.put("OSCAR", ProductTypeEnum.OSCAR); + productNameMap.put("GBase", ProductTypeEnum.GBASE8A); + productNameMap.put("Adaptive Server Enterprise", ProductTypeEnum.SYBASE); + productNameMap.put("Doris", ProductTypeEnum.DORIS); + + driverNameMap.put("MySQL Connector Java", ProductTypeEnum.MYSQL); + driverNameMap.put("MariaDB Connector/J", ProductTypeEnum.MARIADB); + driverNameMap.put("Oracle JDBC driver", ProductTypeEnum.ORACLE); + driverNameMap.put("PostgreSQL JDBC Driver", ProductTypeEnum.POSTGRESQL); + driverNameMap.put("Kingbase8 JDBC Driver", ProductTypeEnum.KINGBASE); + driverNameMap.put("IBM Data Server Driver for JDBC and SQLJ", ProductTypeEnum.DB2); + driverNameMap.put("dm.jdbc.driver.DmDriver", ProductTypeEnum.DM); + driverNameMap.put("Hive JDBC", ProductTypeEnum.HIVE); + driverNameMap.put("SQLite JDBC", ProductTypeEnum.SQLITE3); + driverNameMap.put("OSCAR JDBC DRIVER", ProductTypeEnum.OSCAR); + driverNameMap.put("GBase JDBC Driver", ProductTypeEnum.GBASE8A); + driverNameMap.put("jConnect (TM) for JDBC (TM)", 
ProductTypeEnum.SYBASE); + driverNameMap.put("MySQL Connector Java Doris", ProductTypeEnum.DORIS); + } + + /** + * 获取数据库的产品名 + * + * @param dataSource 数据源 + * @return 数据库产品名称字符串 + */ + public static ProductTypeEnum getDatabaseTypeByDataSource(DataSource dataSource) { + try (Connection connection = dataSource.getConnection()) { + String productName = connection.getMetaData().getDatabaseProductName(); + String driverName = connection.getMetaData().getDriverName(); + if (driverNameMap.containsKey(driverName)) { + return driverNameMap.get(driverName); + } + + ProductTypeEnum type = productNameMap.get(productName); + if (null == type) { + throw new IllegalStateException("Unable to detect database type from data source instance"); + } + return type; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + /** + * 检查MySQL数据库表的存储引擎是否为Innodb + * + * @param schemaName schema名 + * @param tableName table名 + * @param dataSource 数据源 + * @return 为Innodb存储引擎时返回True, 否在为false + */ + public static boolean isMysqlInnodbStorageEngine(String schemaName, String tableName, + DataSource dataSource) { + String sql = "SELECT count(*) as total FROM information_schema.tables " + + "WHERE table_schema=? AND table_name=? AND ENGINE='InnoDB'"; + try (Connection connection = dataSource.getConnection(); + PreparedStatement ps = connection.prepareStatement(sql)) { + ps.setString(1, schemaName); + ps.setString(2, tableName); + try (ResultSet rs = ps.executeQuery()) { + if (rs.next()) { + return rs.getInt(1) > 0; + } + } + + return false; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + private DatabaseAwareUtils() { + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/DbswitchStrUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/DbswitchStrUtils.java new file mode 100644 index 0000000..14a3b9e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/DbswitchStrUtils.java @@ -0,0 +1,66 @@ +// Copyright tang. All rights reserved. 
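`DatabaseAwareUtils.getDatabaseTypeByDataSource` resolves the product type from live connection metadata, preferring the driver name and falling back to the product name. A minimal sketch, not part of the commit, assuming `spring-jdbc`'s `DriverManagerDataSource` (the URL and credentials are placeholders):

```java
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
import srt.cloud.framework.dbswitch.common.util.DatabaseAwareUtils;

public class DatabaseAwareSketch {

	public static void main(String[] args) {
		DriverManagerDataSource dataSource = new DriverManagerDataSource();
		dataSource.setDriverClassName(ProductTypeEnum.MYSQL.getDriveClassName());
		dataSource.setUrl("jdbc:mysql://localhost:3306/srt_cloud?useSSL=false");
		dataSource.setUsername("root");
		dataSource.setPassword("***");

		// Checks driverNameMap first, then productNameMap; throws
		// IllegalStateException for an unrecognized database product
		ProductTypeEnum type = DatabaseAwareUtils.getDatabaseTypeByDataSource(dataSource);
		System.out.println(type); // MYSQL
	}
}
```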
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.common.util; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +/** + * 字符串工具类 + * + * @author jrl + * @date 2021/6/8 20:55 + */ +public final class DbswitchStrUtils { + + /** + * 根据逗号切分字符串为数组 + * + * @param str 待切分的字符串 + * @return List + */ + public static List stringToList(String str) { + if (null != str && str.length() > 0) { + String[] strs = str.split(","); + if (strs.length > 0) { + return new ArrayList<>(Arrays.asList(strs)); + } + } + + return new ArrayList<>(); + } + + + /** + * 将二进制数据转换为16进制的可视化字符串 + * + * @param bytes 二进制数据 + * @return 16进制的可视化字符串 + */ + public static String toHexString(byte[] bytes) { + if (null == bytes || bytes.length <= 0) { + return null; + } + final StringBuilder hexString = new StringBuilder(); + for (byte b : bytes) { + int v = b & 0xFF; + String s = Integer.toHexString(v); + if (s.length() < 2) { + hexString.append(0); + } + hexString.append(s); + } + return hexString.toString(); + } + + private DbswitchStrUtils() { + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/HivePrepareUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/HivePrepareUtils.java new file mode 100644 index 0000000..7d32208 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/HivePrepareUtils.java @@ -0,0 +1,66 @@ +// Copyright tang. All rights reserved. 
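A quick sketch, not part of the commit, of the two `DbswitchStrUtils` helpers above in action:

```java
import srt.cloud.framework.dbswitch.common.util.DbswitchStrUtils;

import java.nio.charset.StandardCharsets;

public class DbswitchStrUtilsSketch {

	public static void main(String[] args) {
		// Comma-separated string -> mutable list: [ods_user, ods_order]
		System.out.println(DbswitchStrUtils.stringToList("ods_user,ods_order"));

		// Bytes -> lowercase hex string: 61622d63
		System.out.println(DbswitchStrUtils.toHexString("ab-c".getBytes(StandardCharsets.UTF_8)));
	}
}
```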
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.common.util; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +public final class HivePrepareUtils { + + private final static String HIVE_SQL_1 = "set hive.resultset.use.unique.column.names=false"; + private final static String HIVE_SQL_2 = "set hive.support.concurrency=true"; + private final static String HIVE_SQL_3 = "set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager"; + + private HivePrepareUtils() { + } + + public static void setResultSetColumnNameNotUnique(Connection connection) + throws SQLException { + executeWithoutResultSet(connection, HIVE_SQL_1); + } + + public static void prepare(Connection connection, String schema, String table) + throws SQLException { + executeWithoutResultSet(connection, HIVE_SQL_1); + if (isTransactionalTable(connection, schema, table)) { + executeWithoutResultSet(connection, HIVE_SQL_2); + executeWithoutResultSet(connection, HIVE_SQL_3); + } + } + + private static boolean isTransactionalTable(Connection connection, String schema, String table) + throws SQLException { + String fullTableName = String.format("`%s`.`%s`", schema, table); + String sql = String.format("DESCRIBE FORMATTED %s", fullTableName); + try (Statement st = connection.createStatement(); + ResultSet rs = st.executeQuery(sql)) { + while (rs.next()) { + String dataType = rs.getString("data_type"); + String comment = rs.getString("comment"); + if (dataType != null + && comment != null + && dataType.startsWith("transactional") + && comment.startsWith("true")) { + return true; + } + } + return false; + } + } + + private static boolean executeWithoutResultSet(Connection connection, String sql) + throws SQLException { + try (Statement st = connection.createStatement()) { + return st.execute(sql); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/JdbcTypesUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/JdbcTypesUtils.java new file mode 100644 index 0000000..b8d1554 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/JdbcTypesUtils.java @@ -0,0 +1,142 @@ +// Copyright tang. All rights reserved. 
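`HivePrepareUtils.prepare` issues the `set` statements above on an open connection, switching on the concurrency and transaction-manager settings only when `DESCRIBE FORMATTED` reports an ACID table. A minimal usage sketch, not part of the commit (the hive2 URL, schema, and table are placeholders and require a reachable HiveServer2):

```java
import srt.cloud.framework.dbswitch.common.util.HivePrepareUtils;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HivePrepareSketch {

	public static void main(String[] args) throws SQLException {
		try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default")) {
			// Disable unique column-name prefixes and, for transactional
			// tables only, enable the txn-manager session settings
			HivePrepareUtils.prepare(conn, "default", "ods_user");
			// ... run queries against default.ods_user here
		}
	}
}
```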
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.common.util;
+
+import java.lang.reflect.Field;
+import java.sql.Types;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * JDBC的数据类型相关工具类
+ *
+ * @author jrl
+ */
+public final class JdbcTypesUtils {
+
+	private static final Map<Integer, String> TYPE_NAMES = new HashMap<>();
+
+	static {
+		try {
+			for (Field field : Types.class.getFields()) {
+				TYPE_NAMES.put((Integer) field.get(null), field.getName());
+			}
+		} catch (Exception ex) {
+			throw new IllegalStateException("Failed to resolve JDBC Types constants", ex);
+		}
+	}
+
+	/**
+	 * 将JDBC的整型类型转换成文本类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return JDBC的文本类型
+	 */
+	public static String resolveTypeName(int sqlType) {
+		return TYPE_NAMES.get(sqlType);
+	}
+
+	/**
+	 * 判断是否为JDBC的浮点数类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return true为是,否则为false
+	 */
+	public static boolean isNumeric(int sqlType) {
+		// 5
+		return (Types.DECIMAL == sqlType || Types.DOUBLE == sqlType || Types.FLOAT == sqlType
+				|| Types.NUMERIC == sqlType || Types.REAL == sqlType);
+	}
+
+	/**
+	 * 判断是否为JDBC的整型类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return true为是,否则为false
+	 */
+	public static boolean isInteger(int sqlType) {
+		// 5
+		return (Types.BIT == sqlType || Types.BIGINT == sqlType || Types.INTEGER == sqlType
+				|| Types.SMALLINT == sqlType
+				|| Types.TINYINT == sqlType);
+	}
+
+	/**
+	 * 判断是否为JDBC的字符文本类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return true为是,否则为false
+	 */
+	public static boolean isString(int sqlType) {
+		// 10
+		return (Types.CHAR == sqlType || Types.NCHAR == sqlType || Types.VARCHAR == sqlType
+				|| Types.LONGVARCHAR == sqlType || Types.NVARCHAR == sqlType
+				|| Types.LONGNVARCHAR == sqlType
+				|| Types.CLOB == sqlType || Types.NCLOB == sqlType || Types.SQLXML == sqlType
+				|| Types.ROWID == sqlType);
+	}
+
+	/**
+	 * 判断是否为JDBC的时间类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return true为是,否则为false
+	 */
+	public static boolean isDateTime(int sqlType) {
+		// 5
+		return (Types.DATE == sqlType || Types.TIME == sqlType || Types.TIMESTAMP == sqlType
+				|| Types.TIME_WITH_TIMEZONE == sqlType || Types.TIMESTAMP_WITH_TIMEZONE == sqlType);
+	}
+
+	/**
+	 * 判断是否为JDBC的布尔类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return true为是,否则为false
+	 */
+	public static boolean isBoolean(int sqlType) {
+		// 1
+		return (Types.BOOLEAN == sqlType);
+	}
+
+	/**
+	 * 判断是否为JDBC的二进制类型
+	 *
+	 * @param sqlType jdbc的整型类型,详见:{@code java.sql.Types}
+	 * @return true为是,否则为false
+	 */
+	public static boolean isBinary(int sqlType) {
+		// 4
+		return (Types.BINARY == sqlType || Types.VARBINARY == sqlType || Types.BLOB == sqlType
+				|| Types.LONGVARBINARY == sqlType);
+	}
+
+	public static boolean isTextable(int sqlType) {
+		return isNumeric(sqlType) || isString(sqlType) || isDateTime(sqlType) || isBoolean(sqlType);
+	}
+
+	// 其他类型如下:9个
+	// JAVA_OBJECT
+	// OTHER
+	// NULL
+	// DISTINCT
+	// STRUCT
+	// ARRAY
+	// REF
+	// DATALINK
+	// REF_CURSOR
+
+	/**
+	 * 构造函数私有化
+	 */
+	private JdbcTypesUtils() {
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/PatterNameUtils.java
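`JdbcTypesUtils` maps the integer codes in `java.sql.Types` to names and coarse categories. A small sketch, not part of the commit, of what the classifiers return:

```java
import srt.cloud.framework.dbswitch.common.util.JdbcTypesUtils;

import java.sql.Types;

public class JdbcTypesSketch {

	public static void main(String[] args) {
		System.out.println(JdbcTypesUtils.resolveTypeName(Types.VARCHAR)); // VARCHAR
		System.out.println(JdbcTypesUtils.isString(Types.VARCHAR));        // true
		System.out.println(JdbcTypesUtils.isDateTime(Types.TIMESTAMP));    // true
		System.out.println(JdbcTypesUtils.isTextable(Types.BLOB));         // false: binary is not textable
	}
}
```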
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/PatterNameUtils.java new file mode 100644 index 0000000..2c2b598 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/PatterNameUtils.java @@ -0,0 +1,75 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.common.util; + + +import srt.cloud.framework.dbswitch.common.entity.PatternMapper; + +import java.util.Arrays; +import java.util.List; + +/** + * 基于正则的名称替换工具类 + */ +public final class PatterNameUtils { + + /** + * 根据正则名称的转换函数 + * + * @param originalName 原始名称 + * @param patternMappers 替换的正则规则列表 + * @return 替换后的名称 + */ + public static String getFinalName(String originalName, List patternMappers) { + if (null == originalName) { + return null; + } + + String targetName = originalName; + if (null != patternMappers && !patternMappers.isEmpty()) { + for (PatternMapper mapper : patternMappers) { + String fromPattern = mapper.getFromPattern(); + String toValue = mapper.getToValue(); + if (null == fromPattern) { + continue; + } + if (null == toValue) { + toValue = ""; + } + targetName = targetName.replaceAll(fromPattern, toValue); + } + } + return targetName; + } + + /** + * 测试函数 + */ + public static void main(String[] args) { + // 添加前缀和后缀 + System.out.println(getFinalName( + "hello", + Arrays.asList(new PatternMapper("^", "T_"), new PatternMapper("$", "_Z"))) + ); + + // 匹配的名字替换 + System.out.println(getFinalName( + "hello", + Arrays.asList(new PatternMapper("hello", "new_hello"))) + ); + + // 不匹配的名字不替换 + System.out.println(getFinalName( + "test", + Arrays.asList(new PatternMapper("hello", "new_hello"))) + ); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/SingletonObject.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/SingletonObject.java new file mode 100644 index 0000000..455333a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/SingletonObject.java @@ -0,0 +1,39 @@ +package srt.cloud.framework.dbswitch.common.util; + +import com.fasterxml.jackson.annotation.JsonInclude.Include; +import com.fasterxml.jackson.core.JsonParser.Feature; +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.SerializationFeature; +import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule; + +import java.math.BigDecimal; +import java.text.SimpleDateFormat; +import java.util.TimeZone; + +/** + * 该类用于提供一些类似的单例的,无状态的对象 + * + * @author jrl + */ +public class SingletonObject { + + public static final ObjectMapper OBJECT_MAPPER = new ObjectMapper(); + + public static final BigDecimal HUNDRED = new BigDecimal(100); + + static { + //允许出现特殊字符和转义符 + OBJECT_MAPPER.configure(Feature.ALLOW_UNQUOTED_CONTROL_CHARS, true); + OBJECT_MAPPER.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false); + OBJECT_MAPPER.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); + OBJECT_MAPPER.configure(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES, true); + 
+    //OBJECT_MAPPER.setSerializationInclusion(Include.NON_ABSENT);
+    OBJECT_MAPPER.registerModule(new JavaTimeModule());
+
+    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+    simpleDateFormat.setTimeZone(TimeZone.getTimeZone("Asia/Shanghai"));
+    OBJECT_MAPPER.setDateFormat(simpleDateFormat);
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/StringUtil.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/StringUtil.java
new file mode 100644
index 0000000..ea6661d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/StringUtil.java
@@ -0,0 +1,352 @@
+package srt.cloud.framework.dbswitch.common.util;
+
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import org.apache.commons.lang3.StringUtils;
+
+import java.math.BigDecimal;
+import java.util.Base64;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.regex.Pattern;
+
+/**
+ * String utility class.
+ *
+ * @author jrl
+ */
+public class StringUtil {
+
+  public static final String NOT_BLANK = "^[\\s\\S]*.*[^\\s][\\s\\S]*$";
+
+  public static final String REGEX_NUMBER = "^\\d+$";
+
+  public static final String REGEX_DECIMAL = "^-?\\d?\\.?\\d+$";
+
+  public static final String REGEX_EMAIL = "^([a-zA-Z0-9_-])+@([a-zA-Z0-9_-])+((\\.[a-zA-Z0-9_-]{2,3}){1,2})$";
+
+  public static final String REGEX_PHONE = "^(\\d{11})|^((\\d{7,8})|(\\d{4}|\\d{3})-(\\d{7,8})|(\\d{4}|\\d{3})-(\\d{7,8})-(\\d{4}|\\d{3}|\\d{2}|\\d{1})|(\\d{7,8})-(\\d{4}|\\d{3}|\\d{2}|\\d{1}))$";
+
+  public static final String REGEX_IDCARD = "(\\d{14}[0-9a-zA-Z])|(\\d{17}[0-9a-zA-Z])";
+
+  public static final String DATE_FORAMT = "^((\\d{2}(([02468][048])|([13579][26]))[\\-\\/\\s]?((((0?[13578])|(1[02]))[\\-\\/\\s]?((0?[1-9])|([1-2][0-9])|(3[01])))|(((0?[469])|(11))[\\-\\/\\s]?((0?[1-9])|([1-2][0-9])|(30)))|(0?2[\\-\\/\\s]?((0?[1-9])|([1-2][0-9])))))|(\\d{2}(([02468][1235679])|([13579][01345789]))[\\-\\/\\s]?((((0?[13578])|(1[02]))[\\-\\/\\s]?((0?[1-9])|([1-2][0-9])|(3[01])))|(((0?[469])|(11))[\\-\\/\\s]?((0?[1-9])|([1-2][0-9])|(30)))|(0?2[\\-\\/\\s]?((0?[1-9])|(1[0-9])|(2[0-8]))))))(\\s(((0?[0-9])|([1][0-9])|([2][0-4]))\\:([0-5]?[0-9])((\\s)|(\\:([0-5]?[0-9])))))?$";
+
+  public static final String COMMA = ",";
+
+  public static final String BLANK = " ";
+
+  public static final String URI_PATH_SEPERATOR = "/";
+
+  public static final String UNDERLINE = "_";
+
+  public static boolean equal(String str1, String str2) {
+    if (str1 == str2) {
+      return true;
+    }
+
+    if (null == str1 || null == str2) {
+      return false;
+    }
+
+    return str1.equals(str2);
+  }
+
+  public static boolean isNumber(String number) {
+    return match(number, REGEX_NUMBER);
+  }
+
+  public static boolean isDecimal(String decimal) {
+    return match(decimal, REGEX_DECIMAL);
+  }
+
+  public static boolean isEmail(String email) {
+    return match(email, REGEX_EMAIL);
+  }
+
+  public static boolean isPhone(String phone) {
+    return match(phone, REGEX_PHONE);
+  }
+
+  public static boolean isIdCard(String idCard) {
+    return match(idCard, REGEX_IDCARD);
+  }
+
+  public static boolean match(String string, String regex) {
+    // note: find() matches anywhere in the input unless the pattern itself
+    // is anchored with ^/$
+    return string != null ? Pattern.compile(regex).matcher(string).find() : false;
+  }
+
+  public static String getRandom(int len) {
+    ThreadLocalRandom random = ThreadLocalRandom.current();
+    StringBuilder builder = new StringBuilder();
+    for (int i = 0; i < len; i++) {
+      builder.append(random.nextInt(10));
+    }
+
+    return builder.toString();
+  }
+
+  public static String getRandom2(int len) {
+    String source = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
+    ThreadLocalRandom random = ThreadLocalRandom.current();
+    StringBuilder builder = new StringBuilder();
+    for (int i = 0; i < len; i++) {
+      builder.append(source.charAt(random.nextInt(62)));
+    }
+
+    return builder.toString();
+  }
+
+  public static String convertToCommaSplitString(Collection<String> strs) {
+    if (strs == null || strs.isEmpty()) {
+      return null;
+    }
+
+    Iterator<String> iterator = strs.iterator();
+    StringBuilder stringBuilder = new StringBuilder();
+    while (iterator.hasNext()) {
+      stringBuilder.append(iterator.next());
+      if (iterator.hasNext()) {
+        stringBuilder.append(",");
+      }
+    }
+    return stringBuilder.toString();
+  }
+
+  public static boolean isJson(String value) {
+    if (isBlank(value)) {
+      throw new NullPointerException("StringUtil.isJson(null)");
+    }
+
+    boolean valid = false;
+    try {
+      JsonParser jsonParser = SingletonObject.OBJECT_MAPPER.getFactory().createParser(value);
+      while (jsonParser.nextToken() != null) {
+      }
+      valid = true;
+    } catch (Exception e) {
+      // not valid JSON; fall through and return false
+    }
+
+    return valid;
+  }
+
+  public static String toJson(Object value) {
+    if (null == value) {
+      throw new NullPointerException("StringUtil.toJson(null)");
+    }
+    try {
+      return SingletonObject.OBJECT_MAPPER.writeValueAsString(value);
+    } catch (JsonProcessingException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public static <T> T fromJson(String value, Class<T> clazz) {
+    if (isBlank(value)) {
+      throw new NullPointerException("StringUtil.fromJson(null, clazz)");
+    }
+
+    try {
+      return SingletonObject.OBJECT_MAPPER.readValue(value, clazz);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public static <T> T fromJson(String value, TypeReference<T> clazz) {
+    if (isBlank(value)) {
+      throw new NullPointerException("StringUtil.fromJson(null, clazz)");
+    }
+
+    try {
+      return SingletonObject.OBJECT_MAPPER.readValue(value, clazz);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public static String getDefaultNullString(String value) {
+    if (null == value || value.length() == 0) {
+      return null;
+    }
+
+    return value;
+  }
+
+  public static String replaceURLParameter(String url, String parameterName, String value) {
+    if (StringUtils.isBlank(url) || StringUtils.isBlank(parameterName)) {
+      return url;
+    }
+
+    String numberSign = "#";
+    String ampersand = "&";
+    String questionMark = "?";
+    String equalsSign = "=";
+    if (-1 != url.indexOf(numberSign)) {
+      url = url.substring(0, url.indexOf(numberSign));
+    }
+    url = removeMatchCharOnTail(url, ampersand);
+    url = removeMatchCharOnTail(url, questionMark);
+
+    if (StringUtils.isBlank(url) || StringUtils.isBlank(parameterName)) {
+      return url;
+    }
+
+    if (-1 ==
url.indexOf(questionMark)) { + return url + questionMark + parameterName + equalsSign + value; + } + return url + ampersand + parameterName + equalsSign + value; + } + + public static String removeMatchCharOnTail(String string, String ch) { + int chLen = ch.length(); + if (string.lastIndexOf(ch) == string.length() - chLen) { + string = string.substring(0, string.length() - chLen); + string = removeMatchCharOnTail(string, ch); + } + + return string; + } + + + public static String ltrim(String raw) { + if (StringUtils.isBlank(raw)) { + return raw; + } + + if (raw.startsWith(BLANK)) { + return ltrim(raw.substring(1)); + } + + return raw; + } + + public static String rtrim(String raw) { + if (StringUtils.isBlank(raw)) { + return raw; + } + + if (raw.endsWith(BLANK)) { + return rtrim(raw.substring(0, raw.length() - 1)); + } + + return raw; + } + + public static String trim(String raw) { + if (StringUtils.isBlank(raw)) { + return raw; + } + + return rtrim(ltrim(raw)); + } + + public static boolean isBlank(String value) { + return null == value || value.trim().length() == 0; + } + + public static boolean isNotBlank(String value) { + return null != value && value.trim().length() > 0; + } + + public static String getYuanFromFen(Integer fen) { + if (null == fen) { + return "0.00"; + } + + if (fen < 0) { + return "-" + getYuanFromFen(0 - fen); + } + + int ten = 10; + int hundred = 100; + if (fen < ten && fen >= 0) { + return "0.0" + fen; + } + if (fen < hundred && fen >= ten) { + return "0." + fen; + } + + String temp = String.valueOf(fen); + int len = temp.length() - 2; + return temp.substring(0, len) + "." + temp.substring(len); + } + + public static String getYuanFromFen(Long fen) { + if (null == fen) { + return "0.00"; + } + + if (fen < 0) { + return "-" + getYuanFromFen(0 - fen); + } + + int ten = 10; + int hundred = 100; + if (fen < ten && fen >= 0) { + return "0.0" + fen; + } + if (fen < hundred && fen >= ten) { + return "0." + fen; + } + + String temp = String.valueOf(fen); + int len = temp.length() - 2; + return temp.substring(0, len) + "." 
+ temp.substring(len);
+  }
+
+  public static long getFenFromYuan(String yuan) {
+    if (StringUtils.isBlank(yuan)) {
+      return 0L;
+    }
+
+    return getFenFromYuan(new BigDecimal(yuan));
+  }
+
+  public static long getFenFromYuan(BigDecimal yuan) {
+    if (null == yuan) {
+      return 0L;
+    }
+    if (yuan.compareTo(BigDecimal.ZERO) == 0) {
+      return 0L;
+    }
+
+    if (yuan.compareTo(BigDecimal.ZERO) < 0) {
+      return -getFenFromYuan(yuan.multiply(new BigDecimal(-1)));
+    }
+
+    return yuan.multiply(new BigDecimal(100)).longValue();
+  }
+
+  public static String join(String[] sources, String sep) {
+    if (null == sources || sources.length == 0) {
+      return "";
+    }
+    StringBuilder builder = new StringBuilder();
+    builder.append(sources[0]);
+    for (int i = 1; i < sources.length; i++) {
+      builder.append(sep).append(sources[i]);
+    }
+    return builder.toString();
+  }
+
+  public static byte[] decoder2Byte(String img) {
+    Base64.Decoder decoder = Base64.getDecoder();
+    return decoder.decode(img);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/TypeConvertUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/TypeConvertUtils.java
new file mode 100644
index 0000000..7c3b883
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/common/util/TypeConvertUtils.java
@@ -0,0 +1,211 @@
+package srt.cloud.framework.dbswitch.common.util;
+
+import lombok.extern.slf4j.Slf4j;
+
+import java.io.ByteArrayOutputStream;
+import java.io.ObjectOutputStream;
+import java.lang.reflect.Method;
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.sql.SQLException;
+
+@Slf4j
+public final class TypeConvertUtils {
+
+  private TypeConvertUtils() {
+    throw new IllegalStateException("Utility class cannot be instantiated!");
+  }
+
+  public static String castToString(final Object in) {
+    if (in instanceof Character) {
+      return in.toString();
+    } else if (in instanceof String) {
+      return in.toString();
+    } else if (in instanceof java.sql.Clob) {
+      return clob2Str((java.sql.Clob) in);
+    } else if (in instanceof Number) {
+      return in.toString();
+    } else if (in instanceof java.sql.RowId) {
+      return in.toString();
+    } else if (in instanceof Boolean) {
+      return in.toString();
+    } else if (in instanceof java.util.Date) {
+      return in.toString();
+    } else if (in instanceof java.time.LocalDate) {
+      return in.toString();
+    } else if (in instanceof java.time.LocalTime) {
+      return in.toString();
+    } else if (in instanceof java.time.LocalDateTime) {
+      return in.toString();
+    } else if (in instanceof java.time.OffsetDateTime) {
+      return in.toString();
+    } else if (in instanceof java.sql.SQLXML) {
+      return in.toString();
+    } else if (in instanceof java.sql.Array) {
+      return in.toString();
+    } else if (in instanceof java.util.UUID) {
+      return in.toString();
+    } else if ("org.postgresql.util.PGobject".equals(in.getClass().getName())) {
+      return in.toString();
+    } else if ("org.postgresql.jdbc.PgSQLXML".equals(in.getClass().getName())) {
+      try {
+        Class<?> clz = in.getClass();
+        Method getString = clz.getMethod("getString");
+        return getString.invoke(in).toString();
+      } catch (Exception e) {
+        return "";
+      }
+    } else if (in.getClass().getName().equals("oracle.sql.INTERVALDS")) {
+      return in.toString();
+    } else if (in.getClass().getName().equals("oracle.sql.INTERVALYM")) {
+      return in.toString();
+    } else if
(in.getClass().getName().equals("oracle.sql.TIMESTAMPLTZ")) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.TIMESTAMPTZ")) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.BFILE")) { + Class clz = in.getClass(); + try { + Method methodFileExists = clz.getMethod("fileExists"); + boolean exists = (boolean) methodFileExists.invoke(in); + if (!exists) { + return ""; + } + + Method methodOpenFile = clz.getMethod("openFile"); + methodOpenFile.invoke(in); + + try { + Method methodCharacterStreamValue = clz.getMethod("getBinaryStream"); + java.io.InputStream is = (java.io.InputStream) methodCharacterStreamValue.invoke(in); + + String line; + StringBuilder sb = new StringBuilder(); + + java.io.BufferedReader br = new java.io.BufferedReader(new java.io.InputStreamReader(is)); + while ((line = br.readLine()) != null) { + sb.append(line); + } + + return sb.toString(); + } finally { + Method methodCloseFile = clz.getMethod("closeFile"); + methodCloseFile.invoke(in); + } + } catch (java.lang.reflect.InvocationTargetException ex) { + log.warn("Error for handle oracle.sql.BFILE: ", ex); + return ""; + } catch (Exception e) { + throw new RuntimeException(e); + } + } else if (in.getClass().getName().equals("microsoft.sql.DateTimeOffset")) { + return in.toString(); + } else if (in instanceof byte[]) { + return new String((byte[]) in); + } + + return null; + } + + public static byte[] castToByteArray(final Object in) { + if (in instanceof byte[]) { + return (byte[]) in; + } else if (in instanceof java.util.Date) { + return in.toString().getBytes(); + } else if (in instanceof java.sql.Blob) { + return blob2Bytes((java.sql.Blob) in); + } else if (in instanceof String || in instanceof Character) { + return in.toString().getBytes(); + } else if (in instanceof java.sql.Clob) { + return clob2Str((java.sql.Clob) in).getBytes(); + } else { + return toByteArray(in); + } + } + + public static Object castByDetermine(final Object in) { + if (null == in) { + return null; + } + + if (in instanceof BigInteger) { + return ((BigInteger) in).longValue(); + } else if (in instanceof BigDecimal) { + BigDecimal decimal = (BigDecimal) in; + if (decimal.doubleValue() > 2.147483647E9D || decimal.doubleValue() < -2.147483648E9D) { + return 0D; + } + return decimal.doubleValue(); + } else if (in instanceof java.sql.Clob) { + return clob2Str((java.sql.Clob) in); + } else if (in instanceof java.sql.Array + || in instanceof java.sql.SQLXML) { + try { + return castToString(in); + } catch (Exception e) { + log.warn("Unsupported type for convert {} to java.lang.String", in.getClass().getName()); + return null; + } + } else if (in instanceof java.sql.Blob) { + try { + return blob2Bytes((java.sql.Blob) in); + } catch (Exception e) { + log.warn("Unsupported type for convert {} to byte[] ", in.getClass().getName()); + return null; + } + } else if (in instanceof java.sql.Struct) { + log.warn("Unsupported type for convert {} to java.lang.String", in.getClass().getName()); + return null; + } + + return in; + } + + public static byte[] blob2Bytes(java.sql.Blob blob) { + try (java.io.InputStream inputStream = blob.getBinaryStream()) { + try (java.io.BufferedInputStream is = new java.io.BufferedInputStream(inputStream)) { + byte[] bytes = new byte[(int) blob.length()]; + int len = bytes.length; + int offset = 0; + int read = 0; + while (offset < len && (read = is.read(bytes, offset, len - offset)) >= 0) { + offset += read; + } + return bytes; + } + } catch (Exception e) { + throw new 
RuntimeException(e); + } + } + + public static String clob2Str(java.sql.Clob clob) { + try (java.io.Reader is = clob.getCharacterStream()) { + java.io.BufferedReader reader = new java.io.BufferedReader(is); + String line = reader.readLine(); + StringBuilder sb = new StringBuilder(); + while (line != null) { + sb.append(line); + line = reader.readLine(); + } + return sb.toString(); + } catch (SQLException | java.io.IOException e) { + log.warn("Field Value convert from java.sql.Clob to java.lang.String failed:", e); + return null; + } + } + + private static byte[] toByteArray(Object obj) { + try (ByteArrayOutputStream bos = new ByteArrayOutputStream(); + ObjectOutputStream oos = new ObjectOutputStream(bos)) { + oos.writeObject(obj); + oos.flush(); + return bos.toByteArray(); + } catch (Exception e) { + log.error("Field value convert from {} to byte[] failed:", obj.getClass().getName(), e); + throw new RuntimeException(e); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/AbstractDatabase.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/AbstractDatabase.java new file mode 100644 index 0000000..d6ab594 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/AbstractDatabase.java @@ -0,0 +1,807 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database; + +import cn.hutool.core.text.CharSequenceUtil; +import com.alibaba.druid.sql.SQLUtils; +import com.alibaba.druid.sql.ast.SQLStatement; +import com.github.freakchick.orange.SqlMeta; +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.common.utils.SqlUtil; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.commons.lang3.StringUtils; +import org.springframework.util.CollectionUtils; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.DbswitchStrUtils; +import srt.cloud.framework.dbswitch.common.util.HivePrepareUtils; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.JdbcSelectResult; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.util.SqlEngineUtil; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * 数据库元信息抽象基类 + * + * @author jrl + */ +@Slf4j +public abstract class AbstractDatabase implements IDatabaseInterface { + + public 
static final int CLOB_LENGTH = 9999999; + + protected String driverClassName; + protected String catalogName = null; + + public AbstractDatabase(String driverClassName) { + try { + this.driverClassName = driverClassName; + Class.forName(driverClassName); + } catch (ClassNotFoundException e) { + throw new RuntimeException(e); + } + } + + @Override + public String getDriverClassName() { + return this.driverClassName; + } + + @Override + public List querySchemaList(Connection connection) { + Set ret = new HashSet<>(); + try (ResultSet schemas = connection.getMetaData().getSchemas()) { + while (schemas.next()) { + ret.add(schemas.getString("TABLE_SCHEM")); + } + return new ArrayList<>(ret); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public List queryTableList(Connection connection, String schemaName) { + List ret = new ArrayList<>(); + Set uniqueSet = new HashSet<>(); + String[] types = new String[]{"TABLE", "VIEW"}; + try (ResultSet tables = connection.getMetaData() + .getTables(this.catalogName, schemaName, "%", types)) { + while (tables.next()) { + String tableName = tables.getString("TABLE_NAME"); + if (uniqueSet.contains(tableName)) { + continue; + } else { + uniqueSet.add(tableName); + } + + TableDescription td = new TableDescription(); + td.setSchemaName(schemaName); + td.setTableName(tableName); + td.setRemarks(tables.getString("REMARKS")); + td.setTableType(tables.getString("TABLE_TYPE").toUpperCase()); + ret.add(td); + } + return ret; + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public TableDescription queryTableMeta(Connection connection, String schemaName, + String tableName) { + return queryTableList(connection, schemaName).stream() + .filter(one -> tableName.equals(one.getTableName())) + .findAny().orElse(null); + } + + @Override + public List queryTableColumnName(Connection connection, String schemaName, + String tableName) { + Set columns = new HashSet<>(); + try (ResultSet rs = connection.getMetaData() + .getColumns(this.catalogName, schemaName, tableName, null)) { + while (rs.next()) { + columns.add(rs.getString("COLUMN_NAME")); + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + return new ArrayList<>(columns); + } + + @Override + public void setColumnDefaultValue(Connection connection, String schemaName, String tableName, List columnDescriptions) { + + String sql = this.getDefaultValueSql(schemaName, tableName); + if (sql == null) { + return; + } + try (Statement st = connection.createStatement()) { + try (ResultSet rs = st.executeQuery(sql)) { + while (rs.next()) { + String columnName = rs.getString("column_name"); + String columnDefault = rs.getString("column_default"); + String columnComment = rs.getString("column_comment"); + if (columnName != null) { + for (ColumnDescription cd : columnDescriptions) { + if (columnName.equals(cd.getFieldName())) { + cd.setDefaultValue(columnDefault); + cd.setRemarks(columnComment); + break; + } + } + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + protected abstract String getDefaultValueSql(String schemaName, String tableName); + + @Override + public void setColumnIndexInfo(Connection connection, String schemaName, String tableName, List columnDescriptions) { + // 补充一下索引信息 + try (ResultSet indexInfo = connection.getMetaData().getIndexInfo(this.catalogName, schemaName, tableName, false, true)) { + setIndex(columnDescriptions, indexInfo); + } catch (SQLException e) { + log.error(schemaName + "." 
+ tableName + " setColumnIndexInfo error:" + e.getMessage()); + throw new RuntimeException(schemaName + "." + tableName + " setColumnIndexInfo error!!", e); + } + } + + /** + * 设置索引信息 + * + * @param columnDescriptions + * @param indexInfo + * @throws SQLException + */ + public void setIndex(List columnDescriptions, ResultSet indexInfo) throws SQLException { + while (indexInfo.next()) { + //索引值是否可以不唯一 + boolean nonUnique = indexInfo.getBoolean("NON_UNIQUE"); + //索引类别 + String indexQualifier = indexInfo.getString("INDEX_QUALIFIER"); + String indexName = indexInfo.getString("INDEX_NAME"); + /** + * 索引类型: + * tableIndexStatistic - 此标识与表的索引描述一起返回的表统计信息 + * tableIndexClustered - 此为集群索引 + * tableIndexHashed - 此为散列索引 + * tableIndexOther - 此为某种其他样式的索引 + */ + short type = indexInfo.getShort("TYPE"); + String columnName = indexInfo.getString("COLUMN_NAME"); + String ascOrDesc = indexInfo.getString("ASC_OR_DESC"); + if (columnName != null) { + for (ColumnDescription cd : columnDescriptions) { + if (columnName.equals(cd.getFieldName())) { + cd.setNonIndexUnique(nonUnique); + cd.setIndexQualifier(indexQualifier); + cd.setIndexName(indexName); + cd.setIndexType(type); + cd.setAscOrDesc(ascOrDesc); + break; + } + } + } + } + } + + @Override + public List queryTableColumnMeta(Connection connection, String schemaName, + String tableName) { + String sql = this.getTableFieldsQuerySQL(schemaName, tableName); + List ret = this.querySelectSqlColumnMeta(connection, sql); + // 补充一下注释信息,索引信息 + try (ResultSet columns = connection.getMetaData() + .getColumns(this.catalogName, schemaName, tableName, null)) { + while (columns.next()) { + String columnName = columns.getString("COLUMN_NAME"); + String remarks = columns.getString("REMARKS"); + for (ColumnDescription cd : ret) { + if (columnName.equals(cd.getFieldName())) { + cd.setRemarks(remarks); + break; + } + } + } + } catch (SQLException e) { + log.error(schemaName + "." + tableName + " queryTableColumnMeta error:" + e.getMessage()); + throw new RuntimeException(schemaName + "." 
+ tableName + " queryTableColumnMeta error!!", e); + } + return ret; + } + + @Override + public List queryTableColumnMetaOnly(Connection connection, String schemaName, + String tableName) { + String sql = this.getTableFieldsQuerySQL(schemaName, tableName); + return this.querySelectSqlColumnMeta(connection, sql); + } + + @Override + public List queryTablePrimaryKeys(Connection connection, String schemaName, + String tableName) { + Set ret = new HashSet<>(); + try (ResultSet primaryKeys = connection.getMetaData() + .getPrimaryKeys(this.catalogName, schemaName, tableName)) { + while (primaryKeys.next()) { + String name = primaryKeys.getString("COLUMN_NAME"); + ret.add(name); + } + return new ArrayList<>(ret); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public SchemaTableData queryTableData(Connection connection, String schemaName, String tableName, + int rowCount) { + String fullTableName = getQuotedSchemaTableCombination(schemaName, tableName); + String querySQL = String.format("SELECT * FROM %s ", fullTableName); + SchemaTableData data = new SchemaTableData(); + data.setSchemaName(schemaName); + data.setTableName(tableName); + data.setColumns(new ArrayList<>()); + data.setRows(new ArrayList<>()); + try (Statement st = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) { + // 限制下最大数量 + st.setMaxRows(rowCount); + //st.setFetchSize(Integer.MIN_VALUE); + if (getDatabaseType() == ProductTypeEnum.HIVE) { + HivePrepareUtils.prepare(connection, schemaName, tableName); + } + return getSchemaTableData(querySQL, rowCount, data, st); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public SchemaTableData queryTableDataBySql(Connection connection, String sql, int rowCount) { + SchemaTableData data = new SchemaTableData(); + data.setColumns(new ArrayList<>()); + data.setRows(new ArrayList<>()); + try (Statement st = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) { + // 限制下最大数量 + st.setMaxRows(rowCount); + //st.setFetchSize(Integer.MIN_VALUE); + return getSchemaTableData(sql, rowCount, data, st); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + private SchemaTableData getSchemaTableData(String sql, int rowCount, SchemaTableData data, Statement st) throws SQLException { + try (ResultSet rs = st.executeQuery(sql)) { + ResultSetMetaData m = rs.getMetaData(); + int count = m.getColumnCount(); + for (int i = 1; i <= count; i++) { + data.getColumns().add(m.getColumnLabel(i)); + } + + int counter = 0; + while (rs.next() && counter++ < rowCount) { + List row = new ArrayList<>(count); + for (int i = 1; i <= count; i++) { + Object value = rs.getObject(i); + if (value != null && value instanceof byte[]) { + row.add(DbswitchStrUtils.toHexString((byte[]) value)); + } else if (value != null && value instanceof java.sql.Clob) { + row.add(TypeConvertUtils.castToString(value)); + } else if (value != null && value instanceof java.sql.Blob) { + byte[] bytes = TypeConvertUtils.castToByteArray(value); + row.add(DbswitchStrUtils.toHexString(bytes)); + } else { + row.add(null == value ? 
null : value.toString()); + } + } + data.getRows().add(row); + } + + return data; + } + } + + @Override + public void testQuerySQL(Connection connection, String sql) { + String wrapperSql = this.getTestQuerySQL(sql); + try (Statement statement = connection.createStatement();) { + statement.execute(wrapperSql); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public String getQuotedSchemaTableCombination(String schemaName, String tableName) { + return String.format(" \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + throw new RuntimeException("AbstractDatabase Unimplemented!"); + } + + @Override + public String getPrimaryKeyAsString(List pks) { + if (!pks.isEmpty()) { + StringBuilder sb = new StringBuilder(); + sb.append("\""); + sb.append(StringUtils.join(pks, "\" , \"")); + sb.append("\""); + return sb.toString(); + } + + return ""; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + throw new RuntimeException("AbstractDatabase Unimplemented!"); + } + + /************************************** + * internal function + **************************************/ + + protected abstract String getTableFieldsQuerySQL(String schemaName, String tableName); + + protected abstract String getTestQuerySQL(String sql); + + protected List getSelectSqlColumnMeta(Connection connection, String querySQL) { + List ret = new ArrayList<>(); + try (Statement st = connection.createStatement()) { + if (getDatabaseType() == ProductTypeEnum.HIVE) { + HivePrepareUtils.setResultSetColumnNameNotUnique(connection); + } + + try (ResultSet rs = st.executeQuery(querySQL)) { + ResultSetMetaData m = rs.getMetaData(); + int columns = m.getColumnCount(); + for (int i = 1; i <= columns; i++) { + String name = m.getColumnLabel(i); + if (null == name) { + name = m.getColumnName(i); + } + ColumnDescription cd = new ColumnDescription(); + cd.setFieldName(name); + cd.setLabelName(name); + cd.setFieldType(m.getColumnType(i)); + if (0 != cd.getFieldType()) { + cd.setFieldTypeName(m.getColumnTypeName(i)); + cd.setFiledTypeClassName(m.getColumnClassName(i)); + cd.setDisplaySize(m.getColumnDisplaySize(i)); + cd.setPrecisionSize(m.getPrecision(i)); + cd.setScaleSize(m.getScale(i)); + cd.setAutoIncrement(m.isAutoIncrement(i)); + cd.setNullable(m.isNullable(i) != ResultSetMetaData.columnNoNulls); + } else { + // 处理视图中NULL as fieldName的情况 + cd.setFieldTypeName("CHAR"); + cd.setFiledTypeClassName(String.class.getName()); + cd.setDisplaySize(1); + cd.setPrecisionSize(1); + cd.setScaleSize(0); + cd.setAutoIncrement(false); + cd.setNullable(true); + } + + boolean signed = false; + try { + signed = m.isSigned(i); + } catch (Exception ignored) { + // This JDBC Driver doesn't support the isSigned method + // nothing more we can do here by catch the exception. 
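+            // (note: 'signed' simply stays false in that case, which is a safe
+            // default for metadata display)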
+          }
+          cd.setSigned(signed);
+          cd.setDbType(getDatabaseType());
+
+          ret.add(cd);
+        }
+
+        return ret;
+      }
+    } catch (SQLException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * Whether an index can be created on a column of this type.
+   *
+   * @param v column metadata
+   * @return true if an index can be created
+   */
+  @Override
+  public boolean canCreateIndex(ColumnMetaData v) {
+    return false;
+  }
+
+  /**
+   * Create index definitions.
+   *
+   * @param fieldNames
+   * @param primaryKeys
+   * @param schemaName
+   * @param tableName
+   * @param results
+   */
+  public void createIndexDefinition(List<ColumnDescription> fieldNames, List<String> primaryKeys, String schemaName, String tableName, List<String> results) {
+    // Drop the primary key columns; keep only the columns that need an index
+    List<ColumnDescription> columns = fieldNames.stream()
+        .filter(columnDescription -> !primaryKeys.contains(columnDescription.getFieldName()) && columnDescription.getIndexName() != null)
+        .collect(Collectors.toList());
+    Map<String, List<ColumnDescription>> columnMap = new HashMap<>(6);
+    for (ColumnDescription columnDescription : columns) {
+      // Get the column's metadata
+      ColumnMetaData v = columnDescription.getMetaData();
+      // zrx: skip types on which an index cannot be created
+      if (!canCreateIndex(v)) {
+        continue;
+      }
+      List<ColumnDescription> columnDescriptionList = new ArrayList<>(2);
+      if (columnMap.containsKey(columnDescription.getIndexName())) {
+        columnDescriptionList = columnMap.get(columnDescription.getIndexName());
+        columnDescriptionList.add(columnDescription);
+      } else {
+        columnDescriptionList.add(columnDescription);
+        columnMap.put(columnDescription.getIndexName(), columnDescriptionList);
+      }
+    }
+    // Iterate over the map and build the CREATE INDEX statements
+    setIndexSql(schemaName, tableName, results, columnMap);
+  }
+
+  @Override
+  public void setIndexSql(String schemaName, String tableName, List<String> results, Map<String, List<ColumnDescription>> columnMap) {
+    for (Map.Entry<String, List<ColumnDescription>> entry : columnMap.entrySet()) {
+      String indexName = entry.getKey();
+      List<ColumnDescription> descriptions = entry.getValue();
+      ColumnDescription columnDescription = descriptions.get(0);
+      String indexSql;
+      String lastIndexName = indexName.length() > 8 ? indexName.substring(0, 8) : indexName;
+      if (descriptions.size() > 1) {
+        indexSql = "CREATE " + (columnDescription.isNonIndexUnique() ? "" : "UNIQUE")
+            + " INDEX " + lastIndexName + "_" + StringUtil.getRandom2(4) + " ON " + schemaName + "." + tableName
+            + " (" + descriptions.stream().map(ColumnDescription::getFieldName).collect(Collectors.joining(",")) + ") ";
+      } else {
+        indexSql = "CREATE " + (columnDescription.isNonIndexUnique() ? "" : "UNIQUE")
+            + " INDEX " + lastIndexName + "_" + StringUtil.getRandom2(4) + " ON " + schemaName + "." + tableName + " (" + columnDescription.getFieldName() + ") ";
+      }
+      results.add(indexSql);
+    }
+  }
+
+  @Override
+  public void addNoExistColumnsByTarget(Connection connection, String targetSchemaName, String targetTableName, List<String> allColumns, List<ColumnDescription> targetColumnDescriptions) {
+    try (Statement statement = connection.createStatement()) {
+      for (ColumnDescription targetColumn : targetColumnDescriptions) {
+        if (!allColumns.contains(targetColumn.getFieldName()) && !StringUtils.isEmpty(targetColumn.getFieldName())) {
+          // Add the missing column
+          statement.execute(getAddColumnSql(targetSchemaName, targetTableName, targetColumn));
+        }
+      }
+    } catch (SQLException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public String getAddColumnSql(String targetSchemaName, String targetTableName, ColumnDescription targetColumn) {
+    // If the table does not contain this column yet, create it
+    String alterSql = "ALTER TABLE " + targetSchemaName + "."
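+    // builds e.g. (hypothetical names) "ALTER TABLE ods.user_info ADD COLUMN age INT";
+    // the column definition itself comes from getFieldDefinition below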
+ targetTableName + + " ADD COLUMN "; + //获取字段元数据信息 + ColumnMetaData v = targetColumn.getMetaData(); + alterSql += getFieldDefinition(v, null, false, false, true); + return alterSql; + } + + public void executeSql(Connection connection, String sql) { + try (Statement statement = connection.createStatement()) { + statement.execute(sql); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + public JdbcSelectResult queryDataBySql(Connection connection, String dbType, String sql, Integer openTrans, int rowCount) { + ProcessEntity process = ProcessContextHolder.getProcess(); + process.info("Start parse sql..."); + List stmtList = SQLUtils.parseStatements(sql, dbType.toLowerCase()); + process.info(CharSequenceUtil.format("A total of {} statement have been Parsed.", stmtList.size())); + process.info("Start execute sql..."); + JdbcSelectResult jobResult = new JdbcSelectResult(); + List results = new ArrayList<>(); + jobResult.setSuccess(true); + jobResult.setResults(results); + + for (SQLStatement item : stmtList) { + process.info("Execute sql:\n" + item.toString()); + // 将查询数据存储到数据中 + List> dataList = new ArrayList<>(); + // 存储列名的数组 + List columnList = new LinkedList<>(); + // 新增、修改、删除受影响行数 + Integer updateCount = null; + JdbcSelectResult result = new JdbcSelectResult(); + result.setSuccess(true); + Statement stmt = null; + ResultSet rs = null; + long sqlStart = System.currentTimeMillis(); + try { + if (openTrans == 1) { + // 为了设置fetchSize,必须设置为false + connection.setAutoCommit(false); + } else { + connection.setAutoCommit(true); + } + //stmt = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); + stmt = connection.createStatement(); + // 限制下最大数量 + stmt.setMaxRows(rowCount); + /*if (openTrans == 1) { + stmt.setFetchSize(Integer.MIN_VALUE); + }*/ + // 是否查询操作 + boolean execute = stmt.execute(item.toString()); + if (execute) { + result.setIfQuery(true); + int count = 0; + rs = stmt.getResultSet(); + // 获取结果集的元数据信息 + ResultSetMetaData rsmd = rs.getMetaData(); + // 获取列字段的个数 + int colunmCount = rsmd.getColumnCount(); + for (int i = 1; i <= colunmCount; i++) { + // 获取所有的字段名称 + columnList.add(rsmd.getColumnName(i)); + } + while (rs.next()) { + Map map = new HashMap<>(); + for (int i = 1; i <= colunmCount; i++) { + // 获取列名 + String columnName = rsmd.getColumnName(i); + Object val = rs.getObject(i); + if (val instanceof byte[]) { + val = DbswitchStrUtils.toHexString((byte[]) val); + } else if (val instanceof java.sql.Clob) { + val = TypeConvertUtils.castToString(val); + } else if (val instanceof java.sql.Blob) { + byte[] bytes = TypeConvertUtils.castToByteArray(val); + val = DbswitchStrUtils.toHexString(bytes); + } else { + val = null == val ? 
null : val.toString(); + } + map.put(columnName, val); + } + dataList.add(map); + count++; + if (count >= rowCount) { + break; + } + } + } else { + result.setIfQuery(false); + // 执行新增、修改、删除受影响行数 + updateCount = stmt.getUpdateCount(); + } + if (openTrans == 1) { + connection.commit(); + } + } catch (Exception e) { + result.setSuccess(false); + result.setErrorMsg(LogUtil.getError(e)); + process.error(result.getErrorMsg()); + jobResult.setSuccess(false); + jobResult.setErrorMsg(result.getErrorMsg()); + if (openTrans == 1) { + try { + connection.rollback(); + } catch (SQLException ignored) { + } + } + } finally { + try { + if (rs != null) { + rs.close(); + } + if (stmt != null) { + stmt.close(); + } + } catch (SQLException ignored) { + } + } + long sqlEnd = System.currentTimeMillis(); + process.info("use time:\n" + item.toString() + ":\n" + (sqlEnd - sqlStart) + "ms."); + result.setSql(item.toString()); + result.setCount(updateCount); + result.setColumns(columnList); + result.setRowData(dataList); + result.setTime(sqlEnd - sqlStart); + results.add(result); + //如果执行失败了,终止执行 + if (!result.getSuccess()) { + break; + } + } + return jobResult; + } + + public JdbcSelectResult queryDataByApiSql(Connection connection, String sql, Integer openTrans, String sqlSeparator, Map sqlParam, int rowCount) { + String[] statements = SqlUtil.getStatements(sql, sqlSeparator); + JdbcSelectResult jobResult = new JdbcSelectResult(); + List results = new ArrayList<>(); + jobResult.setSuccess(true); + jobResult.setResults(results); + sqlParam = sqlParam == null ? new HashMap<>() : sqlParam; + + for (int k = 0; k < statements.length; k++) { + String item = statements[k]; + if (k == statements.length - 1 && StringUtil.isBlank(item)) { + break; + } + // 将查询数据存储到数据中 + List> dataList = new ArrayList<>(); + // 存储列名的数组 + List columnList = new LinkedList<>(); + // 新增、修改、删除受影响行数 + Integer updateCount = null; + JdbcSelectResult result = new JdbcSelectResult(); + result.setSuccess(true); + PreparedStatement stmt = null; + ResultSet rs = null; + long sqlStart = System.currentTimeMillis(); + try { + if (openTrans == 1) { + // 为了设置fetchSize,必须设置为false + connection.setAutoCommit(false); + } else { + connection.setAutoCommit(true); + } + SqlMeta sqlMeta = SqlEngineUtil.getEngine().parse(item, sqlParam); + //stmt = connection.prepareStatement(sqlMeta.getSql(), ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); + stmt = connection.prepareStatement(sqlMeta.getSql()); + item = sqlMeta.getSql(); + //参数注入 + List jdbcParamValues = sqlMeta.getJdbcParamValues(); + if (!CollectionUtils.isEmpty(jdbcParamValues)) { + for (int i = 1; i <= jdbcParamValues.size(); i++) { + stmt.setObject(i, jdbcParamValues.get(i - 1)); + } + } + // 限制下最大数量 + stmt.setMaxRows(rowCount); + /*if (openTrans == 1) { + stmt.setFetchSize(Integer.MIN_VALUE); + }*/ + // 是否查询操作 + boolean execute = stmt.execute(); + if (execute) { + result.setIfQuery(true); + int count = 0; + rs = stmt.getResultSet(); + // 获取结果集的元数据信息 + ResultSetMetaData rsmd = rs.getMetaData(); + // 获取列字段的个数 + int colunmCount = rsmd.getColumnCount(); + for (int i = 1; i <= colunmCount; i++) { + // 获取所有的字段名称 + columnList.add(rsmd.getColumnName(i)); + } + while (rs.next()) { + Map map = new HashMap<>(); + for (int i = 1; i <= colunmCount; i++) { + // 获取列名 + String columnName = rsmd.getColumnName(i); + Object val = rs.getObject(i); + if (val instanceof byte[]) { + val = DbswitchStrUtils.toHexString((byte[]) val); + } else if (val instanceof java.sql.Clob) { + val = TypeConvertUtils.castToString(val); + } 
else if (val instanceof java.sql.Blob) { + byte[] bytes = TypeConvertUtils.castToByteArray(val); + val = DbswitchStrUtils.toHexString(bytes); + } else { + val = null == val ? null : val.toString(); + } + map.put(columnName, val); + } + dataList.add(map); + count++; + if (count >= rowCount) { + break; + } + } + } else { + result.setIfQuery(false); + // 执行新增、修改、删除受影响行数 + updateCount = stmt.getUpdateCount(); + } + if (openTrans == 1) { + connection.commit(); + } + } catch (Exception e) { + result.setSuccess(false); + result.setErrorMsg(LogUtil.getError(e)); + jobResult.setSuccess(false); + jobResult.setErrorMsg(result.getErrorMsg()); + if (openTrans == 1) { + try { + connection.rollback(); + } catch (SQLException ignored) { + } + } + } finally { + try { + if (rs != null) { + rs.close(); + } + if (stmt != null) { + stmt.close(); + } + } catch (SQLException ignored) { + } + } + long sqlEnd = System.currentTimeMillis(); + result.setSql(item); + result.setCount(updateCount); + result.setColumns(columnList); + result.setRowData(dataList); + result.setTime(sqlEnd - sqlStart); + results.add(result); + //如果执行失败了,终止执行 + if (!result.getSuccess()) { + break; + } + } + return jobResult; + } + + public String getCountMoreThanOneSql(String schemaName, String tableName, List columns) { + String columnStr = "\"" + String.join("\",\"", columns) + "\""; + return String.format("SELECT %s FROM \"%s\".\"%s\" GROUP BY %s HAVING count(*)>1", columnStr, schemaName, tableName, columnStr); + } + + public String getCountOneSql(String schemaName, String tableName, List columns) { + String columnStr = "\"" + String.join("\",\"", columns) + "\""; + return String.format("SELECT %s FROM \"%s\".\"%s\" GROUP BY %s HAVING count(*)=1", columnStr, schemaName, tableName, columnStr); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/DatabaseFactory.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/DatabaseFactory.java new file mode 100644 index 0000000..45b673f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/DatabaseFactory.java @@ -0,0 +1,84 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.database;
+
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseDB2Impl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseDmImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseDorisImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseGbase8aImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseGreenplumImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseHiveImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseKingbaseImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseMariaDBImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseMysqlImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseOracleImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseOscarImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabasePostgresImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseSqliteImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseSqlserver2000Impl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseSqlserverImpl;
+import srt.cloud.framework.dbswitch.core.database.impl.DatabaseSybaseImpl;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.Callable;
+
+/**
+ * Factory class for building database access instances.
+ *
+ * @author jrl
+ */
+public final class DatabaseFactory {
+
+  private static final Map<ProductTypeEnum, Callable<AbstractDatabase>> DATABASE_MAPPER
+      = new HashMap<ProductTypeEnum, Callable<AbstractDatabase>>() {
+
+        private static final long serialVersionUID = 9202705534880971997L;
+
+        {
+          put(ProductTypeEnum.MYSQL, DatabaseMysqlImpl::new);
+          put(ProductTypeEnum.MARIADB, DatabaseMariaDBImpl::new);
+          put(ProductTypeEnum.ORACLE, DatabaseOracleImpl::new);
+          put(ProductTypeEnum.SQLSERVER2000, DatabaseSqlserver2000Impl::new);
+          put(ProductTypeEnum.SQLSERVER, DatabaseSqlserverImpl::new);
+          put(ProductTypeEnum.POSTGRESQL, DatabasePostgresImpl::new);
+          put(ProductTypeEnum.GREENPLUM, DatabaseGreenplumImpl::new);
+          put(ProductTypeEnum.DB2, DatabaseDB2Impl::new);
+          put(ProductTypeEnum.DM, DatabaseDmImpl::new);
+          put(ProductTypeEnum.SYBASE, DatabaseSybaseImpl::new);
+          put(ProductTypeEnum.KINGBASE, DatabaseKingbaseImpl::new);
+          put(ProductTypeEnum.OSCAR, DatabaseOscarImpl::new);
+          put(ProductTypeEnum.GBASE8A, DatabaseGbase8aImpl::new);
+          put(ProductTypeEnum.HIVE, DatabaseHiveImpl::new);
+          put(ProductTypeEnum.SQLITE3, DatabaseSqliteImpl::new);
+          put(ProductTypeEnum.DORIS, DatabaseDorisImpl::new);
+        }
+      };
+
+  public static AbstractDatabase getDatabaseInstance(ProductTypeEnum type) {
+    Callable<AbstractDatabase> callable = DATABASE_MAPPER.get(type);
+    if (null != callable) {
+      try {
+        return callable.call();
+      } catch (Exception e) {
+        throw new RuntimeException(e);
+      }
+    }
+
+    throw new UnsupportedOperationException(
+        String.format("Unknown database type (%s)", type.name()));
+  }
+
+  private DatabaseFactory() {
+    throw new IllegalStateException();
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/IDatabaseInterface.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/IDatabaseInterface.java new file
mode 100644 index 0000000..17f7614 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/IDatabaseInterface.java @@ -0,0 +1,233 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; + +import java.sql.Connection; +import java.util.List; +import java.util.Map; + +/** + * 数据库访问通用业务接口 + * + * @author jrl + */ +public interface IDatabaseInterface { + + /** + * 获取数据库类型 + * + * @return 数据库类型 + */ + ProductTypeEnum getDatabaseType(); + + /** + * 获取数据库的JDBC驱动类 + * + * @return + */ + String getDriverClassName(); + + /** + * 获取数据库的模式schema列表 + * + * @param connection JDBC连接 + * @return 模式名列表 + */ + List querySchemaList(Connection connection); + + /** + * 获取指定模式Schema内的所有表列表 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @return 表及视图名列表 + */ + List queryTableList(Connection connection, String schemaName); + + /** + * 精确获取表或视图的元数据 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return + */ + TableDescription queryTableMeta(Connection connection, String schemaName, String tableName); + + /** + * 获取指定物理表的DDL语句 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return 字段元信息列表 + */ + String getTableDDL(Connection connection, String schemaName, String tableName); + + /** + * 获取指定视图表的DDL语句 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return 字段元信息列表 + */ + String getViewDDL(Connection connection, String schemaName, String tableName); + + /** + * 获取指定模式表的字段列表 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return 字段元信息列表 + */ + List queryTableColumnName(Connection connection, String schemaName, + String tableName); + + void setColumnDefaultValue(Connection connection, String schemaName, String tableName, List columnDescriptions); + + void setColumnIndexInfo(Connection connection, String schemaName, String tableName, List columnDescriptions); + + /** + * 获取指定模式表的元信息 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return 字段元信息列表 + */ + List queryTableColumnMeta(Connection connection, String schemaName, + String tableName); + + /** + * 获取指定查询SQL的元信息 + * + * @param connection JDBC连接 + * @param sql SQL查询语句 + * @return 字段元信息列表 + */ + List querySelectSqlColumnMeta(Connection connection, String sql); + + List queryTableColumnMetaOnly(Connection connection, String schemaName, + String tableName); + + /** + * 获取指定模式表的主键字段列表 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return 主键字段名称列表 + */ + List queryTablePrimaryKeys(Connection connection, String schemaName, String tableName); + + /** + * 获取指定模式表内的数据 + * + * @param connection JDBC连接 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param rowCount 记录的行数 + * @return 数据内容 + */ + SchemaTableData 
queryTableData(Connection connection, String schemaName, String tableName, + int rowCount); + + SchemaTableData queryTableDataBySql(Connection connection, String sql, int rowCount); + + /** + * 测试查询SQL语句的有效性 + * + * @param connection JDBC连接 + * @param sql 待验证的SQL语句 + */ + void testQuerySQL(Connection connection, String sql); + + /** + * 获取数据库的表全名 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return 表全名 + */ + String getQuotedSchemaTableCombination(String schemaName, String tableName); + + /** + * 获取字段列的结构定义 + * + * @param v 值元数据定义 + * @param pks 主键字段名称列表 + * @param addCr 是否结尾换行 + * @param useAutoInc 是否自增 + * @param withRemarks 是否带有注释 + * @return 字段定义字符串 + */ + String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, boolean addCr, + boolean withRemarks); + + /** + * 主键列转换为逗号分隔的字符串 + * + * @param pks 主键字段列表 + * @return 主键字段拼接串 + */ + String getPrimaryKeyAsString(List pks); + + /** + * 获取表和字段的注释定义 + * + * @param td 表信息定义 + * @param cds 列信息定义 + * @return 定义字符串列表 + */ + List getTableColumnCommentDefinition(TableDescription td, List cds); + + /** + * 获取字段类型 + * + * @param v + * @return + */ + boolean canCreateIndex(ColumnMetaData v); + + /** + * 获取创建索引的sql + * + * @param schemaName + * @param tableName + * @param results + * @param columnMap + */ + void setIndexSql(String schemaName, String tableName, List results, Map> columnMap); + + /** + * 根据要同步的字段添加目标表中不存在的字段 + * + * @param targetSchemaName + * @param targetTableName + */ + void addNoExistColumnsByTarget(Connection connection, String targetSchemaName, String targetTableName, List allColumns, List targetColumnDescriptions); + + /** + * 获取添加字段的sql + * @param targetSchemaName + * @param targetTableName + * @param targetColumn + * @return + */ + String getAddColumnSql(String targetSchemaName, String targetTableName, ColumnDescription targetColumn); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/constant/PostgresqlConst.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/constant/PostgresqlConst.java new file mode 100644 index 0000000..ea5208a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/constant/PostgresqlConst.java @@ -0,0 +1,112 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.constant; + +public final class PostgresqlConst { + + public static final String TPL_KEY_SCHEMA = ""; + public static final String TPL_KEY_TABLE = ""; + + public static final String CREATE_TABLE_SQL_TPL = + "WITH tabobj as (\n" + + " select pg_class.relfilenode as oid,pg_namespace.nspname as nspname,pg_class.relname as relname\n" + + " from pg_catalog.pg_class \n" + + " join pg_catalog.pg_namespace on pg_class.relnamespace = pg_namespace.oid \n" + + " where pg_namespace.nspname='' and pg_class.relname ='
'\n" + + "),\n" + + "attrdef AS (\n" + + " SELECT \n" + + " n.nspname,\n" + + " c.relname,\n" + + " pg_catalog.array_to_string(c.reloptions || array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ') as relopts,\n" + + " c.relpersistence,\n" + + " a.attnum,\n" + + " a.attname,\n" + + " pg_catalog.format_type(a.atttypid, a.atttypmod) as atttype,\n" + + " (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid, true) for 128) \n" + + " FROM pg_catalog.pg_attrdef d\n" + + " WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef\n" + + " )as attdefault,\n" + + " a.attnotnull,\n" + + " (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t\n" + + " WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation\n" + + " ) as attcollation,\n" + + " a.attidentity,\n" + + " '' as attgenerated\n" + + " FROM pg_catalog.pg_attribute a\n" + + " JOIN pg_catalog.pg_class c ON a.attrelid = c.oid\n" + + " JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid\n" + + " LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid), tabobj\n" + + " WHERE n.nspname =tabobj.nspname \n" + + " AND c.relname =tabobj.relname\n" + + " AND a.attnum > 0\n" + + " AND NOT a.attisdropped\n" + + " ORDER BY a.attnum\n" + + "),\n" + + "coldef AS (\n" + + " SELECT\n" + + " attrdef.nspname,\n" + + " attrdef.relname,\n" + + " attrdef.relopts,\n" + + " attrdef.relpersistence,\n" + + " pg_catalog.format(\n" + + " '%I %s%s%s%s%s',\n" + + " attrdef.attname,\n" + + " attrdef.atttype,\n" + + " case when attrdef.attcollation is null then '' else pg_catalog.format(' COLLATE %I', attrdef.attcollation) end,\n" + + " case when attrdef.attnotnull then ' NOT NULL' else '' end,\n" + + " case when attrdef.attdefault is null then ''\n" + + " else case when attrdef.attgenerated = 's' then pg_catalog.format(' GENERATED ALWAYS AS (%s) STORED', attrdef.attdefault)\n" + + " when attrdef.attgenerated <> '' then ' GENERATED AS NOT_IMPLEMENTED'\n" + + " else pg_catalog.format(' DEFAULT %s', attrdef.attdefault)\n" + + " end\n" + + " end,\n" + + " case when attrdef.attidentity<>'' then pg_catalog.format(' GENERATED %s AS IDENTITY',\n" + + " case attrdef.attidentity when 'd' then 'BY DEFAULT' when 'a' then 'ALWAYS' else 'NOT_IMPLEMENTED' end)\n" + + " else '' end\n" + + " ) as col_create_sql\n" + + " FROM attrdef\n" + + " ORDER BY attrdef.attnum\n" + + "),\n" + + "tabdef AS (\n" + + " SELECT\n" + + " coldef.nspname,\n" + + " coldef.relname,\n" + + " coldef.relopts,\n" + + " coldef.relpersistence,\n" + + " string_agg(coldef.col_create_sql, E',\\n ') as cols_create_sql\n" + + " FROM coldef\n" + + " GROUP BY\n" + + " coldef.nspname, coldef.relname, coldef.relopts, coldef.relpersistence\n" + + ")\n" + + "SELECT\n" + + " format(\n" + + " 'CREATE%s TABLE %I.%I%s%s%s;',\n" + + " case tabdef.relpersistence when 't' then ' TEMP' when 'u' then ' UNLOGGED' else '' end,\n" + + " tabdef.nspname,\n" + + " tabdef.relname,\n" + + " coalesce(\n" + + " (SELECT format(E'\\n PARTITION OF %I.%I %s\\n', pn.nspname, pc.relname, pg_get_expr(c.relpartbound, c.oid))\n" + + " FROM pg_class c JOIN pg_inherits i ON c.oid = i.inhrelid\n" + + " JOIN pg_class pc ON pc.oid = i.inhparent\n" + + " JOIN pg_namespace pn ON pn.oid = pc.relnamespace\n" + + " join tabobj on c.oid=tabobj.oid\n" + + " ),\n" + + " format(E' (\\n %s\\n)', tabdef.cols_create_sql)\n" + + " ),\n" + + " case when tabdef.relopts <> '' then format(' WITH (%s)', tabdef.relopts) else '' end,\n" + + " coalesce(E'\\nPARTITION BY 
'||pg_get_partkeydef(tabobj.oid), '')\n" + + " ) as table_create_sql\n" + + "FROM tabdef,tabobj"; + + private PostgresqlConst() { + + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/constant/SQLServerConst.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/constant/SQLServerConst.java new file mode 100644 index 0000000..f8640a3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/constant/SQLServerConst.java @@ -0,0 +1,93 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.constant; + +public final class SQLServerConst { + + public static final String GET_CURRENT_CATALOG_SQL = + "Select Name From Master..SysDataBases Where DbId=(Select Dbid From Master..SysProcesses Where Spid = @@spid)"; + + // https://blog.csdn.net/AskTommorow/article/details/52370072 + public static final String CREATE_TABLE_SQL_TPL = + "declare @schemaname varchar(1024)\n" + + "declare @tabname varchar(1024)\n" + + "set @schemaname='%s' \n" + + "set @tabname='%s' \n" + + "\n" + + "if ( object_id('tempdb.dbo.#t') is not null)\n" + + "begin\n" + + "DROP TABLE #t\n" + + "end\n" + + "\n" + + "select 'create table ['+db_name()+'].['+@schemaname+'].[' + so.name + '] (' + o.list + ')' \n" + + " + CASE WHEN tc.Constraint_Name IS NULL THEN '' ELSE 'ALTER TABLE ['+db_name()+'].['+@schemaname+'].[' + so.Name + '] ADD CONSTRAINT ' + tc.Constraint_Name + ' PRIMARY KEY ' + ' (' + LEFT(j.List, Len(j.List)-1) + ')' END \n" + + " TABLE_DDL\n" + + "into #t from sysobjects so\n" + + "cross apply\n" + + " (SELECT \n" + + " ' ['+column_name+'] ' + \n" + + " data_type + case data_type\n" + + " when 'sql_variant' then ''\n" + + " when 'text' then ''\n" + + " when 'ntext' then ''\n" + + " when 'xml' then ''\n" + + " when 'decimal' then '(' + cast(numeric_precision as varchar) + ', ' + cast(numeric_scale as varchar) + ')'\n" + + " else coalesce('('+case when character_maximum_length = -1 then 'MAX' else cast(character_maximum_length as varchar) end +')','') end + ' ' +\n" + + " case when exists ( \n" + + " select id from syscolumns\n" + + " where object_name(id)=so.name\n" + + " and name=column_name\n" + + " and columnproperty(id,name,'IsIdentity') = 1 \n" + + " ) then\n" + + " 'IDENTITY(' + \n" + + " cast(ident_seed(so.name) as varchar) + ',' + \n" + + " cast(ident_incr(so.name) as varchar) + ')'\n" + + " else ''\n" + + " end + ' ' +\n" + + " (case when IS_NULLABLE = 'No' then 'NOT ' else '' end ) + 'NULL ' + \n" + + " case when information_schema.columns.COLUMN_DEFAULT IS NOT NULL THEN 'DEFAULT '+ information_schema.columns.COLUMN_DEFAULT ELSE '' END + ', ' \n" + + "\n" + + " from information_schema.columns where table_schema=@schemaname and table_name = so.name\n" + + " order by ordinal_position\n" + + " FOR XML PATH('')) o (list)\n" + + "left join\n" + + " information_schema.table_constraints tc\n" + + "on tc.Table_name = so.Name\n" + + "AND tc.Constraint_Type = 'PRIMARY KEY'\n" + + "cross apply\n" + + " (select '[' + Column_Name + '], '\n" + + " FROM information_schema.key_column_usage kcu\n" + + " WHERE kcu.Constraint_Name = tc.Constraint_Name\n" + + " ORDER BY\n" 
+ + " ORDINAL_POSITION\n" + + " FOR XML PATH('')) j (list)\n" + + "where xtype = 'U'\n" + + "AND name=@tabname\n" + + "\n" + + "select (\n" + + " case when (\n" + + " select count(a.constraint_type)\n" + + " from information_schema.table_constraints a \n" + + " inner join information_schema.constraint_column_usage b\n" + + " on a.constraint_name = b.constraint_name\n" + + " where a.constraint_type = 'PRIMARY KEY' \n" + + " AND a.CONSTRAINT_SCHEMA = @schemaname\n" + + " and a.table_name = @tabname\n" + + " )=1 then\n" + + " replace(table_ddl,', )ALTER TABLE',') ALTER TABLE')\n" + + " else \n" + + " SUBSTRING(table_ddl,1,len(table_ddl)-3)+')' \n" + + " end\n" + + ") from #t"; + + private SQLServerConst() { + + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDB2Impl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDB2Impl.java new file mode 100644 index 0000000..cdbc590 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDB2Impl.java @@ -0,0 +1,305 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import org.apache.commons.lang3.StringUtils; + +import java.sql.CallableStatement; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.List; +import java.util.Optional; + +/** + * 支持DB2数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseDB2Impl extends AbstractDatabase implements IDatabaseInterface { + + private static final String CALL_DB2LK_GEN = + "CALL SYSPROC.DB2LK_GENERATE_DDL(?,?)"; + private static final String CALL_DB2LK_CLEAN = + "CALL SYSPROC.DB2LK_CLEAN_TABLE(?)"; + private static final String SEL_DB2LK = + "SELECT SQL_STMT FROM SYSTOOLS.DB2LOOK_INFO WHERE OP_TOKEN = '%d' ORDER BY OP_SEQUENCE WITH UR"; + private static final String DB2LK_COMMAND = + "-e -x -xd -td %s -t %s"; + private static final String SHOW_CREATE_VIEW_SQL = + "SELECT TEXT FROM SYSCAT.VIEWS WHERE VIEWSCHEMA ='%s' AND VIEWNAME ='%s'"; + + public DatabaseDB2Impl() { + super("com.ibm.db2.jcc.DB2Driver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.DB2; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String fullName = String.format("\"%s\".\"%s\"", schemaName, tableName); + final String command = String.format(DB2LK_COMMAND, "\n", fullName); + List result = new ArrayList<>(); + try (CallableStatement stmt = connection.prepareCall(CALL_DB2LK_GEN)) { + stmt.registerOutParameter(2, java.sql.Types.INTEGER); + stmt.setString(1, 
command); + stmt.execute(); + int token = stmt.getInt(2); + String sql = String.format(SEL_DB2LK, token); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null) { + while (rs.next()) { + String value = rs.getString(1); + Optional.ofNullable(value).ifPresent(result::add); + } + } + } + } + } finally { + try (CallableStatement st = connection.prepareCall(CALL_DB2LK_CLEAN)) { + st.setInt(1, token); + st.execute(); + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return String.join(";", result); + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, schemaName, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format(" %s fetch first 1 rows only ", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("SELECT * FROM ( %s ) t WHERE 1=2 ", sql.replace(";", "")); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " \"" + fieldname + "\" "; + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + retval += "TIMESTAMP"; + break; + case ColumnMetaData.TYPE_TIME: + retval += "TIME"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "BOOLEAN"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 0, INCREMENT BY 1, NOCACHE)"; + } else { + retval += "BIGINT NOT NULL"; + } + } else { + if (length > 0) { + retval += "DECIMAL(" + length; + if (precision > 0) { + retval += ", " + precision; + } + retval += ")"; + } else { + retval += "FLOAT"; + } + } + break; + case ColumnMetaData.TYPE_INTEGER: + if (null != pks && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 0, INCREMENT BY 1, NOCACHE)"; + } else { + retval += "INTEGER NOT NULL"; + } + } else { + retval += "INTEGER"; + } + break; + case ColumnMetaData.TYPE_STRING: + if (length * 3 > 32672) { + retval += "CLOB"; + canHaveDefaultValue = false; + } else { + retval += "VARCHAR"; + if (length > 0) { + retval += "(" + length * 3; + } else { + retval += "("; + } + + retval += ")"; + } + + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval += " NOT NULL"; + } + + break; + case ColumnMetaData.TYPE_BINARY: + 
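+				// NOTE (editor's illustration, not part of the original commit): DB2 keeps
+				// row-resident types under 32672 bytes, so short binary columns below become
+				// fixed-length CHAR(n) FOR BIT DATA while longer or unknown-length values fall
+				// back to BLOB; a 16-byte binary column would render as "COL1" CHAR(16) FOR BIT DATA.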
if (length > 32672) { + retval += "BLOB(" + length + ")"; + canHaveDefaultValue = false; + } else { + if (length > 0) { + retval += "CHAR(" + length + ") FOR BIT DATA"; + } else { + retval += "BLOB"; + canHaveDefaultValue = false; + } + } + break; + default: + retval += "CLOB"; + canHaveDefaultValue = false; + break; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval += " DEFAULT " + v.getDefaultValue(); + } else { + retval += " DEFAULT '" + v.getDefaultValue() + "'"; + } + } /*else { + retval += " DEFAULT DEFAULT SYSDATE"; + }*/ + } + + if (!v.isNullable()) { + retval += " NOT NULL"; + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_BIGNUMBER: + case ColumnMetaData.TYPE_INTEGER: + break; + case ColumnMetaData.TYPE_STRING: + if (length * 3 > 32672) { + canCreateIndex = false; + } + break; + case ColumnMetaData.TYPE_BINARY: + if (length > 32672) { + canCreateIndex = false; + } else { + if (length <= 0) { + canCreateIndex = false; + } + } + break; + default: + canCreateIndex = false; + break; + } + + return canCreateIndex; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDmImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDmImpl.java new file mode 100644 index 0000000..4f31a03 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDmImpl.java @@ -0,0 +1,246 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.List; + +/** + * 支持DM数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseDmImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_TABLE_SQL = + "SELECT DBMS_METADATA.GET_DDL('TABLE','%s','%s') FROM DUAL "; + private static final String SHOW_CREATE_VIEW_SQL = + "SELECT DBMS_METADATA.GET_DDL('VIEW','%s','%s') FROM DUAL "; + + public DatabaseDmImpl() { + super("dm.jdbc.driver.DmDriver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.DM; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_TABLE_SQL, tableName, schemaName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, tableName, schemaName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format( + "SELECT * from (%s) tmp where ROWNUM<=1 ", + sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + + StringBuilder retval = new StringBuilder(128); + retval.append(" \"").append(fieldname).append("\" "); + boolean canHaveDefaultValue = true; + int type = v.getType(); + switch (type) { + 
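+			// NOTE (editor's illustration, not part of the original commit): the cases below
+			// target DM's Oracle-style types; in this codebase `length` carries the numeric
+			// precision (capped here at 38) and `precision` the scale, so a NUMBER(18,2)
+			// source column would render as "AMOUNT" NUMERIC(18, 2) ("AMOUNT" is a made-up name).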
case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + retval.append("TIMESTAMP"); + break; + case ColumnMetaData.TYPE_DATE: + retval.append("DATE"); + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval.append("BIT"); + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval.append("BIGINT"); + } else { + retval.append("NUMERIC"); + if (length > 0) { + if (length > 38) { + length = 38; + } + + retval.append('(').append(length); + if (precision > 0) { + retval.append(", ").append(precision); + } + retval.append(')'); + } + } + break; + case ColumnMetaData.TYPE_INTEGER: + retval.append("BIGINT"); + break; + case ColumnMetaData.TYPE_STRING: + if (null != pks && pks.contains(fieldname)) { + retval.append("VARCHAR(").append(length).append(")"); + } else if (length < 2048) { + retval.append("VARCHAR(").append(length).append(")"); + } else { + retval.append("TEXT"); + } + break; + case ColumnMetaData.TYPE_BINARY: + retval.append("BLOB"); + canHaveDefaultValue = false; + break; + default: + retval.append("CLOB"); + canHaveDefaultValue = false; + break; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval.append(" DEFAULT ").append(v.getDefaultValue()); + } else { + retval.append(" DEFAULT '").append(v.getDefaultValue()).append("'"); + } + } /*else { + retval += " DEFAULT DEFAULT SYSDATE"; + }*/ + } + + if (!v.isNullable()) { + retval.append(" NOT NULL"); + } + + if (addCr) { + retval.append(Const.CR); + } + + return retval.toString(); + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_BIGNUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_STRING: + if (length >= 2048) { + canCreateIndex = false; + } + break; + case ColumnMetaData.TYPE_BINARY: + canCreateIndex = false; + break; + default: + canCreateIndex = false; + break; + } + + return canCreateIndex; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDorisImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDorisImpl.java new file mode 100644 index 0000000..5e7d4a0 --- /dev/null +++ 
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseDorisImpl.java
@@ -0,0 +1,285 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.database.impl;
+
+import org.apache.commons.lang3.StringUtils;
+import srt.cloud.framework.dbswitch.common.constant.Const;
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+import srt.cloud.framework.dbswitch.core.database.AbstractDatabase;
+import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface;
+import srt.cloud.framework.dbswitch.core.model.ColumnDescription;
+import srt.cloud.framework.dbswitch.core.model.ColumnMetaData;
+import srt.cloud.framework.dbswitch.core.model.TableDescription;
+import srt.cloud.framework.dbswitch.core.util.JdbcUrlUtils;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+
+/**
+ * Metadata implementation for the Doris database
+ *
+ * @author jrl
+ */
+public class DatabaseDorisImpl extends DatabaseMysqlImpl implements IDatabaseInterface {
+
+	private static final String SHOW_CREATE_TABLE_SQL = "SHOW CREATE TABLE `%s`.`%s` ";
+	private static final String SHOW_CREATE_VIEW_SQL = "SHOW CREATE VIEW `%s`.`%s` ";
+
+	public DatabaseDorisImpl() {
+		super("com.mysql.jdbc.Driver");
+	}
+
+	public DatabaseDorisImpl(String driverClassName) {
+		super(driverClassName);
+	}
+
+	@Override
+	public ProductTypeEnum getDatabaseType() {
+		return ProductTypeEnum.DORIS;
+	}
+
+	@Override
+	public List<ColumnDescription> queryTableColumnMeta(Connection connection, String schemaName,
+			String tableName) {
+		String sql = this.getTableFieldsQuerySQL(schemaName, tableName);
+		List<ColumnDescription> columnDescriptions = this.querySelectSqlColumnMeta(connection, sql);
+		// Enrich the column info from information_schema; the JDBC metadata alone is not accurate for Doris
+		String extraSql = "SELECT column_name,data_type,column_size,decimal_digits,column_comment FROM information_schema.COLUMNS WHERE table_schema='" + schemaName + "' AND table_name='" + tableName + "'";
+		try (PreparedStatement ps = connection.prepareStatement(extraSql);
+			 ResultSet rs = ps.executeQuery();
+		) {
+			while (rs.next()) {
+				String columnName = rs.getString("column_name");
+				String dataType = rs.getString("data_type");
+				String columnSize = rs.getString("column_size");
+				String decimalDigits = rs.getString("decimal_digits");
+				String columnComment = rs.getString("column_comment");
+				if (columnName != null) {
+					for (ColumnDescription cd : columnDescriptions) {
+						if (columnName.equals(cd.getFieldName())) {
+							cd.setFieldTypeName(dataType);
+							int csize = columnSize != null ? Integer.parseInt(columnSize) : 0;
+							cd.setDisplaySize(csize);
+							cd.setPrecisionSize(csize);
+							cd.setScaleSize(decimalDigits != null ? Integer.parseInt(decimalDigits) : 0);
+							cd.setRemarks(columnComment);
+							break;
+						}
+					}
+				}
+			}
+		} catch (SQLException e) {
+			throw new RuntimeException(schemaName + "."
+ tableName + " queryTableColumnMeta error!!", e); + } + + return columnDescriptions; + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " `" + fieldname + "` "; + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + retval += "DATETIME"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "TINYINT"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval += "BIGINT"; + } else { + // Integer values... + if (precision == 0) { + if (length > 9) { + if (length < 19) { + // can hold signed values between -9223372036854775808 and 9223372036854775807 + // 18 significant digits + retval += "BIGINT"; + } else { + retval += "DECIMAL(" + length + ")"; + } + } else { + retval += "INT"; + } + } else { + retval += "DECIMAL(" + length; + if (precision > 0) { + retval += ", " + precision; + } + retval += ")"; + // Floating point values... + /*if (length > 15) { + retval += "DECIMAL(" + length; + if (precision > 0) { + retval += ", " + precision; + } + retval += ")"; + } else { + // A double-precision floating-point number is accurate to approximately 15 + // decimal places. + // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html + retval += "DOUBLE"; + }*/ + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length * 3 <= 65533) { + retval += "VARCHAR(" + length * 3 + ")"; + } else { + retval += "TEXT"; + canHaveDefaultValue = false; + } + break; + default: + retval += "TEXT"; + canHaveDefaultValue = false; + break; + } + + if (!v.isNullable()) { + retval += " NOT NULL"; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval += " DEFAULT " + v.getDefaultValue(); + } else { + retval += " DEFAULT '" + v.getDefaultValue() + "'"; + } + } else { + retval += " DEFAULT CURRENT_TIMESTAMP"; + } + } + + + if (withRemarks && StringUtils.isNotBlank(v.getRemarks())) { + retval += String.format(" COMMENT '%s' ", v.getRemarks().replace("'", "\\'")); + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + return false; + } + + @Override + public String getPrimaryKeyAsString(List pks) { + if (null != pks && !pks.isEmpty()) { + return "`" + + StringUtils.join(pks, "` , `") + + "`"; + } + + return ""; + } + + @Override + public void setColumnDefaultValue(Connection connection, String schemaName, String tableName, List columnDescriptions) { + String sql = this.getDefaultValueSql(schemaName, tableName); + try (Statement st = connection.createStatement()) { + try (ResultSet rs = st.executeQuery(sql)) { + while (rs.next()) { + String columnName = rs.getString("Field"); + String columnDefault = rs.getString("Default"); + if (columnName != null) { + for (ColumnDescription cd : columnDescriptions) { + if (columnName.equals(cd.getFieldName())) { + 
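+							// NOTE (editor's comment, not part of the original commit): copy the
+							// Default value reported by Doris' DESC output onto the matching column description.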
cd.setDefaultValue(columnDefault);
+								break;
+							}
+						}
+					}
+				}
+			}
+		} catch (SQLException e) {
+			throw new RuntimeException(e);
+		}
+	}
+
+	@Override
+	protected String getDefaultValueSql(String schemaName, String tableName) {
+		return String.format("desc `%s`.`%s`", schemaName, tableName);
+	}
+
+
+	@Override
+	public List<String> queryTablePrimaryKeys(Connection connection, String schemaName,
+			String tableName) {
+		Set<String> ret = new HashSet<>();
+		String sql = String.format("desc `%s`.`%s`", schemaName, tableName);
+		try (PreparedStatement ps = connection.prepareStatement(sql);
+			 ResultSet rs = ps.executeQuery();
+		) {
+			// Check whether any column reports Extra = NONE; if so, the table's key model is
+			// DUPLICATE KEY, whose key columns allow duplicates and must not be treated as a primary key
+			boolean noneExtra = false;
+			while (rs.next()) {
+				String field = rs.getString("Field");
+				String key = rs.getString("Key");
+				String extra = rs.getString("Extra");
+				if ("true".equalsIgnoreCase(key)) {
+					ret.add(field);
+				} else {
+					if ("NONE".equalsIgnoreCase(extra)) {
+						noneExtra = true;
+					}
+				}
+			}
+			if (noneExtra) {
+				return new ArrayList<>();
+			}
+			return new ArrayList<>(ret);
+		} catch (SQLException e) {
+			throw new RuntimeException(e);
+		}
+	}
+
+	@Override
+	public void setColumnIndexInfo(Connection connection, String schemaName, String tableName, List<ColumnDescription> columnDescriptions) {
+	}
+
+	@Override
+	public List<String> getTableColumnCommentDefinition(TableDescription td,
+			List<ColumnDescription> cds) {
+		return Collections.emptyList();
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseGbase8aImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseGbase8aImpl.java
new file mode 100644
index 0000000..6bd575c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseGbase8aImpl.java
@@ -0,0 +1,31 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.database.impl;
+
+
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+
+/**
+ * Metadata implementation for the GBase 8a database
+ *
+ * @author tang
+ */
+public class DatabaseGbase8aImpl extends DatabaseMysqlImpl {
+
+	public DatabaseGbase8aImpl() {
+		super("com.gbase.jdbc.Driver");
+	}
+
+	@Override
+	public ProductTypeEnum getDatabaseType() {
+		return ProductTypeEnum.GBASE8A;
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseGreenplumImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseGreenplumImpl.java
new file mode 100644
index 0000000..37c220a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseGreenplumImpl.java
@@ -0,0 +1,287 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.database.constant.PostgresqlConst; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.util.PostgresUtils; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * 支持Greenplum数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseGreenplumImpl extends AbstractDatabase implements IDatabaseInterface { + + private static Set systemSchemas = new HashSet<>(); + + private static final String SHOW_CREATE_VIEW_SQL = + "select pg_get_viewdef('\"%s\".\"%s\"', true)"; + + static { + systemSchemas.add("gp_toolkit"); + systemSchemas.add("pg_aoseg"); + systemSchemas.add("information_schema"); + systemSchemas.add("pg_catalog"); + systemSchemas.add("pg_bitmapindex"); + } + + public DatabaseGreenplumImpl() { + super("com.pivotal.jdbc.GreenplumDriver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.GREENPLUM; + } + + @Override + public List querySchemaList(Connection connection) { + List schemas = super.querySchemaList(connection); + return schemas.stream() + .filter(s -> !systemSchemas.contains(s)) + .collect(Collectors.toList()); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = PostgresqlConst.CREATE_TABLE_SQL_TPL + .replace(PostgresqlConst.TPL_KEY_SCHEMA, schemaName) + .replace(PostgresqlConst.TPL_KEY_TABLE, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + //throw new RuntimeException(e); + } + + // 低版本的PostgreSQL的表的DDL获取方法 + return PostgresUtils.getTableDDL(connection, schemaName, tableName); + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, schemaName, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format(" %s LIMIT 1 OFFSET 0 ", sql.replace(";", "")); + return 
this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " \"" + fieldname + "\" "; + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + retval += "TIMESTAMP"; + break; + case ColumnMetaData.TYPE_TIME: + retval += "TIME"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "BOOLEAN"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "BIGSERIAL"; + } else { + retval += "BIGINT"; + } + } else { + if (length > 0) { + if (precision > 0 || length > 18) { + if ((length + precision) > 0 && precision > 0) { + // Numeric(Precision, Scale): Precision = total length; Scale = decimal places + retval += "NUMERIC(" + (length + precision) + ", " + precision + ")"; + } else { + retval += "DOUBLE PRECISION"; + } + } else { + if (length > 9) { + retval += "BIGINT"; + } else { + if (length < 5) { + retval += "SMALLINT"; + } else { + retval += "INTEGER"; + } + } + } + + } else { + retval += "DOUBLE PRECISION"; + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) { + retval += "TEXT"; + canHaveDefaultValue = false; + } else if (length <= 2000) { + retval += "VARCHAR(" + length + ")"; + } else { + if (null != pks && pks.contains(fieldname)) { + retval += "VARCHAR(" + length + ")"; + } else { + retval += "TEXT"; + canHaveDefaultValue = false; + } + } + break; + case ColumnMetaData.TYPE_BINARY: + retval += "BYTEA"; + canHaveDefaultValue = false; + break; + default: + retval += " TEXT"; + canHaveDefaultValue = false; + break; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval += " DEFAULT " + v.getDefaultValue(); + } else { + retval += " DEFAULT '" + v.getDefaultValue() + "'"; + } + } /*else { + retval += " DEFAULT DEFAULT SYSDATE"; + }*/ + } + + if (!v.isNullable()) { + retval += " NOT NULL"; + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + 
cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + break; + case ColumnMetaData.TYPE_STRING: + if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) { + canCreateIndex = false; + } else if (length > 2000) { + canCreateIndex = false; + } + break; + default: + canCreateIndex = false; + break; + } + + return canCreateIndex; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseHiveImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseHiveImpl.java new file mode 100644 index 0000000..abb24ea --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseHiveImpl.java @@ -0,0 +1,228 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.HivePrepareUtils; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Optional; + +public class DatabaseHiveImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_TABLE_SQL = "SHOW CREATE TABLE `%s`.`%s` "; + + public DatabaseHiveImpl() { + super("org.apache.hive.jdbc.HiveDriver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.HIVE; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_TABLE_SQL, schemaName, tableName); + List result = new ArrayList<>(); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null) { + while (rs.next()) { + String value = rs.getString(1); + Optional.ofNullable(value).ifPresent(result::add); + } + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return String.join("\n", result); + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + return getTableDDL(connection, schemaName, tableName); + } + 
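+	// NOTE (editor's sketch, not part of the original commit): a minimal usage example,
+	// assuming a reachable HiveServer2 endpoint (url/user/pass are placeholders):
+	//
+	//   IDatabaseInterface hive = new DatabaseHiveImpl();
+	//   try (Connection conn = DriverManager.getConnection(url, user, pass)) {
+	//     // the rows of SHOW CREATE TABLE are joined with '\n' into one DDL string
+	//     String ddl = hive.getTableDDL(conn, "ods", "orders");
+	//   }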
+ @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public List queryTableColumnMeta(Connection connection, String schemaName, + String tableName) { + String querySQL = this.getTableFieldsQuerySQL(schemaName, tableName); + List ret = new ArrayList<>(); + try (Statement st = connection.createStatement()) { + HivePrepareUtils.prepare(connection, schemaName, tableName); + try (ResultSet rs = st.executeQuery(querySQL)) { + ResultSetMetaData m = rs.getMetaData(); + int columns = m.getColumnCount(); + for (int i = 1; i <= columns; i++) { + String name = m.getColumnLabel(i); + if (null == name) { + name = m.getColumnName(i); + } + + ColumnDescription cd = new ColumnDescription(); + cd.setFieldName(name); + cd.setLabelName(name); + cd.setFieldType(m.getColumnType(i)); + if (0 != cd.getFieldType()) { + cd.setFieldTypeName(m.getColumnTypeName(i)); + cd.setFiledTypeClassName(m.getColumnClassName(i)); + cd.setDisplaySize(m.getColumnDisplaySize(i)); + cd.setPrecisionSize(m.getPrecision(i)); + cd.setScaleSize(m.getScale(i)); + cd.setAutoIncrement(m.isAutoIncrement(i)); + cd.setNullable(m.isNullable(i) != ResultSetMetaData.columnNoNulls); + } else { + // 处理视图中NULL as fieldName的情况 + cd.setFieldTypeName("CHAR"); + cd.setFiledTypeClassName(String.class.getName()); + cd.setDisplaySize(1); + cd.setPrecisionSize(1); + cd.setScaleSize(0); + cd.setAutoIncrement(false); + cd.setNullable(true); + } + + boolean signed = false; + try { + signed = m.isSigned(i); + } catch (Exception ignored) { + // This JDBC Driver doesn't support the isSigned method + // nothing more we can do here by catch the exception. + } + cd.setSigned(signed); + cd.setDbType(ProductTypeEnum.HIVE); + + ret.add(cd); + } + + return ret; + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public List queryTablePrimaryKeys(Connection connection, String schemaName, + String tableName) { + return Collections.emptyList(); + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format(" %s LIMIT 1", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM `%s`.`%s` ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + public String getQuotedSchemaTableCombination(String schemaName, String tableName) { + return String.format(" `%s`.`%s` ", schemaName, tableName); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int type = v.getType(); + + String retval = " `" + fieldname + "` "; + + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + retval += "TIMESTAMP"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "TINYINT"; + break; + case ColumnMetaData.TYPE_NUMBER: + retval += "FLOAT"; + break; + case ColumnMetaData.TYPE_INTEGER: + retval += "DECIMAL"; + break; + case ColumnMetaData.TYPE_BIGNUMBER: + retval += "BIGINT"; + break; + case ColumnMetaData.TYPE_STRING: + retval += "STRING"; + break; + case ColumnMetaData.TYPE_BINARY: + retval += "BINARY"; + break; + default: + retval += 
"STRING"; + break; + } + + if (withRemarks && StringUtils.isNotBlank(v.getRemarks())) { + retval += String.format(" COMMENT '%s' ", v.getRemarks().replace("'", "\\'")); + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + return Collections.emptyList(); + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + return false; + } + + @Override + public void setColumnIndexInfo(Connection connection, String schemaName, String tableName, List columnDescriptions) { + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseKingbaseImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseKingbaseImpl.java new file mode 100644 index 0000000..1e3b3e8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseKingbaseImpl.java @@ -0,0 +1,267 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.database.constant.PostgresqlConst; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.List; + +/** + * 支持Kingbase数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseKingbaseImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_VIEW_SQL = + "SELECT pg_get_viewdef((select pg_class.oid from pg_catalog.pg_class \n" + + "join pg_catalog.pg_namespace on pg_class.relnamespace = pg_namespace.oid \n" + + "where pg_namespace.nspname='%s' and pg_class.relname ='%s'),true) "; + + public DatabaseKingbaseImpl() { + super("com.kingbase8.Driver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.KINGBASE; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = PostgresqlConst.CREATE_TABLE_SQL_TPL + .replace(PostgresqlConst.TPL_KEY_SCHEMA, schemaName) + .replace(PostgresqlConst.TPL_KEY_TABLE, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, schemaName, tableName); + try (Statement st = 
connection.createStatement()) {
+			if (st.execute(sql)) {
+				try (ResultSet rs = st.getResultSet()) {
+					if (rs != null && rs.next()) {
+						return rs.getString(1);
+					}
+				}
+			}
+		} catch (SQLException e) {
+			throw new RuntimeException(e);
+		}
+
+		return null;
+	}
+
+	@Override
+	public List<ColumnDescription> querySelectSqlColumnMeta(Connection connection, String sql) {
+		String querySQL = String.format(" %s LIMIT 0 ", sql.replace(";", ""));
+		return this.getSelectSqlColumnMeta(connection, querySQL);
+	}
+
+	@Override
+	protected String getTableFieldsQuerySQL(String schemaName, String tableName) {
+		return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName);
+	}
+
+	@Override
+	protected String getTestQuerySQL(String sql) {
+		return String.format("explain %s", sql.replace(";", ""));
+	}
+
+	@Override
+	protected String getDefaultValueSql(String schemaName, String tableName) {
+		return null;
+	}
+
+	@Override
+	public String getFieldDefinition(ColumnMetaData v, List<String> pks, boolean useAutoInc,
+			boolean addCr, boolean withRemarks) {
+		String fieldname = v.getName();
+		int length = v.getLength();
+		int precision = v.getPrecision();
+		int type = v.getType();
+
+		String retval = " \"" + fieldname + "\" ";
+		boolean canHaveDefaultValue = true;
+		switch (type) {
+			case ColumnMetaData.TYPE_TIMESTAMP:
+				retval += "TIMESTAMP";
+				break;
+			case ColumnMetaData.TYPE_TIME:
+				retval += "TIME";
+				break;
+			case ColumnMetaData.TYPE_DATE:
+				retval += "DATE";
+				break;
+			case ColumnMetaData.TYPE_BOOLEAN:
+				retval += "BOOLEAN";
+				break;
+			case ColumnMetaData.TYPE_NUMBER:
+			case ColumnMetaData.TYPE_INTEGER:
+			case ColumnMetaData.TYPE_BIGNUMBER:
+				if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) {
+					if (useAutoInc) {
+						retval += "BIGSERIAL";
+					} else {
+						retval += "BIGINT";
+					}
+				} else {
+					if (length > 0) {
+						if (precision > 0 || length > 18) {
+							if ((length + precision) > 0 && precision > 0) {
+								// Numeric(Precision, Scale): Precision = total length; Scale = decimal places
+								retval += "NUMERIC(" + (length + precision) + ", " + precision + ")";
+							} else {
+								retval += "DOUBLE PRECISION";
+							}
+						} else {
+							if (length > 9) {
+								retval += "BIGINT";
+							} else {
+								if (length < 5) {
+									retval += "SMALLINT";
+								} else {
+									retval += "INTEGER";
+								}
+							}
+						}
+
+					} else {
+						retval += "DOUBLE PRECISION";
+					}
+				}
+				break;
+			case ColumnMetaData.TYPE_STRING:
+				if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) {
+					retval += "TEXT";
+					canHaveDefaultValue = false;
+				} else if (length <= 2000) {
+					retval += "VARCHAR(" + length + ")";
+				} else {
+					if (null != pks && pks.contains(fieldname)) {
+						retval += "VARCHAR(" + length + ")";
+					} else {
+						retval += "TEXT";
+						canHaveDefaultValue = false;
+					}
+				}
+				break;
+			case ColumnMetaData.TYPE_BINARY:
+				retval += "BYTEA";
+				canHaveDefaultValue = false;
+				break;
+			default:
+				retval += "TEXT";
+				canHaveDefaultValue = false;
+				break;
+		}
+
+		if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) {
+			if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) {
+				if (v.getDefaultValue().startsWith("'")) {
+					retval += " DEFAULT " + v.getDefaultValue();
+				} else {
+					retval += " DEFAULT '" + v.getDefaultValue() + "'";
+				}
+			} else {
+				// Kingbase is PostgreSQL-compatible, so date/time columns default to CURRENT_TIMESTAMP
+				retval += " DEFAULT CURRENT_TIMESTAMP";
+			}
+		}
+
+		if (!v.isNullable()) {
+			retval += " NOT NULL";
+		}
+
+		if (addCr) {
+			retval += Const.CR;
+		}
+
+		return retval;
+	}
+
+	@Override
+	public List
getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + break; + case ColumnMetaData.TYPE_STRING: + if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) { + canCreateIndex = false; + } else if (length > 2000) { + canCreateIndex = false; + } + break; + default: + canCreateIndex = false; + break; + } + + return canCreateIndex; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseMariaDBImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseMariaDBImpl.java new file mode 100644 index 0000000..f1bc87f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseMariaDBImpl.java @@ -0,0 +1,30 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +/** + * 支持MariaDB数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseMariaDBImpl extends DatabaseMysqlImpl { + + public DatabaseMariaDBImpl() { + super("org.mariadb.jdbc.Driver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.MARIADB; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseMysqlImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseMysqlImpl.java new file mode 100644 index 0000000..d8bd429 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseMysqlImpl.java @@ -0,0 +1,387 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.util.JdbcUrlUtils; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; + +/** + * 支持MySQL数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseMysqlImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_TABLE_SQL = "SHOW CREATE TABLE `%s`.`%s` "; + private static final String SHOW_CREATE_VIEW_SQL = "SHOW CREATE VIEW `%s`.`%s` "; + + public DatabaseMysqlImpl() { + super("com.mysql.jdbc.Driver"); + } + + public DatabaseMysqlImpl(String driverClassName) { + super(driverClassName); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.MYSQL; + } + + @Override + public List querySchemaList(Connection connection) { + String mysqlJdbcUrl = null; + try { + mysqlJdbcUrl = connection.getMetaData().getURL(); + } catch (SQLException e) { + throw new RuntimeException(e); + } + + Map data = JdbcUrlUtils.findParamsByMySqlJdbcUrl(mysqlJdbcUrl); + List ret = new ArrayList(); + ret.add(data.get("schema")); + return ret; + } + + @Override + public List queryTableList(Connection connection, String schemaName) { + List ret = new ArrayList<>(); + String sql = String.format("SELECT `TABLE_SCHEMA`,`TABLE_NAME`,`TABLE_TYPE`,`TABLE_COMMENT` " + + "FROM `information_schema`.`TABLES` where `TABLE_SCHEMA`='%s'", schemaName); + try (PreparedStatement ps = connection.prepareStatement(sql); + ResultSet rs = ps.executeQuery();) { + while (rs.next()) { + TableDescription td = new TableDescription(); + td.setSchemaName(rs.getString("TABLE_SCHEMA")); + td.setTableName(rs.getString("TABLE_NAME")); + td.setRemarks(rs.getString("TABLE_COMMENT")); + String tableType = rs.getString("TABLE_TYPE"); + if (tableType.equalsIgnoreCase("VIEW")) { + td.setTableType("VIEW"); + } else { + td.setTableType("TABLE"); + } + + ret.add(td); + } + + return ret; + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return "select column_name,column_default,column_comment AS column_comment from information_schema.columns where table_schema = '" + schemaName + "' and table_name = '" + tableName + "'"; + } + + @Override + public List queryTablePrimaryKeys(Connection connection, String schemaName, + String tableName) { + Set ret = new HashSet<>(); + String sql = "select column_name,ordinal_position AS COLPOSITION, 
column_default AS DATADEFAULT, is_nullable AS NULLABLE, data_type AS DATATYPE, " + + "character_maximum_length AS DATALENGTH, numeric_precision AS DATAPRECISION, numeric_scale AS DATASCALE, column_key , column_comment AS COLCOMMENT " + + "from information_schema.columns where table_schema = '" + schemaName + "' and table_name = '" + tableName + "' and column_key='PRI' order by ordinal_position "; + try (PreparedStatement ps = connection.prepareStatement(sql); + ResultSet rs = ps.executeQuery(); + ) { + while (rs.next()) { + ret.add(rs.getString("column_name")); + } + + return new ArrayList<>(ret); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_TABLE_SQL, schemaName, tableName); + List result = new ArrayList<>(); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null) { + while (rs.next()) { + String value = rs.getString(2); + Optional.ofNullable(value).ifPresent(result::add); + } + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return result.stream().findAny().orElse(null); + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, schemaName, tableName); + List result = new ArrayList<>(); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null) { + while (rs.next()) { + String value = rs.getString(2); + Optional.ofNullable(value).ifPresent(result::add); + } + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return result.stream().findAny().orElse(null); + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format(" %s LIMIT 0,1", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM `%s`.`%s` ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + public String getQuotedSchemaTableCombination(String schemaName, String tableName) { + return String.format(" `%s`.`%s` ", schemaName, tableName); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " `" + fieldname + "` "; + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + retval += "DATETIME"; + break; + case ColumnMetaData.TYPE_TIME: + retval += "TIME"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "TINYINT"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "BIGINT AUTO_INCREMENT"; + } else { + retval += "BIGINT"; + } + } else { + // Integer values... 
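+ // Here `length` is the total digit count and `precision` is the scale.
+ // Scale 0 maps to INT (<=9 digits) or BIGINT (10-18 digits, since a signed
+ // BIGINT holds at most 19 digits); anything wider becomes DECIMAL(length).
+ // A non-zero scale yields DECIMAL(length, precision).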
+ if (precision == 0) { + if (length > 9) { + if (length < 19) { + // can hold signed values between -9223372036854775808 and 9223372036854775807 + // 18 significant digits + retval += "BIGINT"; + } else { + retval += "DECIMAL(" + length + ")"; + } + } else { + retval += "INT"; + } + } else { + retval += "DECIMAL(" + length; + if (precision > 0) { + retval += ", " + precision; + } + retval += ")"; + /*// Floating point values... + if (length > 15) { + + } else { + // A double-precision floating-point number is accurate to approximately 15 + // decimal places. + // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html + retval += "DOUBLE"; + }*/ + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length > 0) { + if (length == 1) { + retval += "CHAR(1)"; + } else if (length < 1025) { + retval += "VARCHAR(" + length + ")"; + } else if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + /* + * MySQL5.6中varchar字段为主键时最大长度为254,例如如下的建表语句在MySQL5.7下能通过,但在MySQL5.6下无法通过: + * create table `t_test`( + * `key` varchar(1024) binary, + * `val` varchar(1024) binary, + * primary key(`key`) + * ); + */ + retval += "VARCHAR(254) BINARY"; + } else if (length < 65536) { + retval += "TEXT"; + canHaveDefaultValue = false; + } else if (length < 16777216) { + retval += "MEDIUMTEXT"; + canHaveDefaultValue = false; + } else { + retval += "LONGTEXT"; + canHaveDefaultValue = false; + } + } else if (length < 0) { + retval += "TEXT"; + } else { + retval += "TINYTEXT"; + } + break; + case ColumnMetaData.TYPE_BINARY: + retval += "LONGBLOB"; + canHaveDefaultValue = false; + break; + default: + retval += " LONGTEXT"; + canHaveDefaultValue = false; + break; + } + + if (!v.isNullable()) { + retval += " NOT NULL"; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval += " DEFAULT " + v.getDefaultValue(); + } else { + retval += " DEFAULT '" + v.getDefaultValue() + "'"; + } + } else { + retval += " DEFAULT CURRENT_TIMESTAMP"; + } + } + + if (withRemarks && StringUtils.isNotBlank(v.getRemarks())) { + retval += String.format(" COMMENT '%s' ", v.getRemarks().replace("'", "\\'")); + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + break; + case ColumnMetaData.TYPE_STRING: + if (length > 0) { + if (length >= 1025) { + canCreateIndex = false; + } + } else { + canCreateIndex = false; + } + break; + default: + canCreateIndex = false; + break; + } + return canCreateIndex; + } + + @Override + public String getPrimaryKeyAsString(List pks) { + if (null != pks && !pks.isEmpty()) { + StringBuilder sb = new StringBuilder(); + sb.append("`"); + sb.append(StringUtils.join(pks, "` , `")); + sb.append("`"); + return sb.toString(); + } + + return ""; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + return Collections.emptyList(); + } + + @Override + public String 
getCountMoreThanOneSql(String schemaName, String tableName, List columns) { + String columnStr = "`" + String.join("`,`", columns) + "`"; + return String.format("SELECT %s FROM `%s`.`%s` GROUP BY %s HAVING count(*)>1", columnStr, schemaName, tableName, columnStr); + } + + @Override + public String getCountOneSql(String schemaName, String tableName, List columns) { + String columnStr = "`" + String.join("`,`", columns) + "`"; + return String.format("SELECT %s FROM `%s`.`%s` GROUP BY %s HAVING count(*)=1", columnStr, schemaName, tableName, columnStr); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseOracleImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseOracleImpl.java new file mode 100644 index 0000000..1c4d094 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseOracleImpl.java @@ -0,0 +1,434 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import org.apache.commons.lang3.StringUtils; +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * 支持Oracle数据库的元信息实现 + *

+ * 备注: + *

+ * (1)Oracle12c安装教程: + *

+ * 官方安装版:https://www.w3cschool.cn/oraclejc/oraclejc-vuqx2qqu.html + *

+ * Docker版本:http://www.pianshen.com/article/4448142743/ + *

+ * https://www.cnblogs.com/Dev0ps/p/10676930.html + *

+ * (2) Oracle的一个表里至多只能有一个字段为LONG类型 + * + * @author jrl + */ +public class DatabaseOracleImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_TABLE_SQL = + "SELECT DBMS_METADATA.GET_DDL('TABLE','%s','%s') FROM DUAL "; + private static final String SHOW_CREATE_VIEW_SQL = + "SELECT DBMS_METADATA.GET_DDL('VIEW','%s','%s') FROM DUAL "; + + public DatabaseOracleImpl() { + super("oracle.jdbc.driver.OracleDriver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.ORACLE; + } + + @Override + public List queryTableList(Connection connection, String schemaName) { + List ret = new ArrayList<>(); + String sql = String.format("SELECT \"OWNER\",\"TABLE_NAME\",\"TABLE_TYPE\",\"COMMENTS\" " + + "FROM all_tab_comments where \"OWNER\"='%s'", schemaName); + try (PreparedStatement ps = connection.prepareStatement(sql); + ResultSet rs = ps.executeQuery();) { + while (rs.next()) { + TableDescription td = new TableDescription(); + td.setSchemaName(rs.getString("OWNER")); + td.setTableName(rs.getString("TABLE_NAME")); + td.setRemarks(rs.getString("COMMENTS")); + String tableType = rs.getString("TABLE_TYPE").trim(); + if ("VIEW".equalsIgnoreCase(tableType)) { + td.setTableType("VIEW"); + } else { + td.setTableType("TABLE"); + } + + ret.add(td); + } + + return ret; + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + /*return "select columns.column_name AS \"column_name\", columns.data_type AS DATATYPE, columns.data_length AS DATALENGTH, columns.data_precision AS DATAPRECISION, " + + "columns.data_scale AS DATASCALE, columns.nullable AS NULLABLE, columns.column_id AS COLPOSITION, columns.data_default AS \"column_default\", comments.comments AS \"column_comment\"," + + "case when t.column_name is null then 0 else 1 end as COLKEY " + + "from sys.user_tab_columns columns LEFT JOIN sys.user_col_comments comments ON columns.table_name = comments.table_name AND columns.column_name = comments.column_name " + + "left join ( " + + "select col.column_name as column_name, con.table_name as table_name from user_constraints con, user_cons_columns col " + + "where con.constraint_name = col.constraint_name and con.constraint_type = 'P' " + + ") t on t.table_name = columns.table_name and columns.column_name = t.column_name " + + "where columns.table_name = '" + tableName + "' order by columns.column_id ";*/ + return "SELECT\n" + + " COLUMNS.column_name AS column_name,\n" + + " COLUMNS.table_name,\n" + + " COLUMNS.data_type AS DATATYPE,\n" + + " COLUMNS.data_length AS DATALENGTH,\n" + + " COLUMNS.data_precision AS DATAPRECISION,\n" + + " COLUMNS.data_scale AS DATASCALE,\n" + + " COLUMNS.nullable AS NULLABLE,\n" + + " COLUMNS.column_id AS COLPOSITION,\n" + + " COLUMNS.data_default AS column_default,\n" + + " comments.comments AS column_comment,\n" + + " CASE\n" + + " WHEN t.column_name IS NULL THEN \n" + + "\t\t0\n" + + " ELSE 1\n" + + " END AS COLKEY\n" + + "FROM\n" + + " --sys.user_tab_columns\n" + + " all_TAB_COLUMNS\n" + + " --COLUMNS LEFT JOIN sys.user_col_comments comments ON COLUMNS.table_name = comments.table_name\n" + + " COLUMNS\n" + + "LEFT JOIN all_col_comments comments ON\n" + + " COLUMNS.table_name = comments.table_name\n" + + " AND COLUMNS.column_name = comments.column_name\n" + + "LEFT JOIN (\n" + + " SELECT\n" + + " col.column_name AS column_name,\n" + + " con.table_name AS table_name\n" + + " FROM\n" + + " 
ALL_constraints con,\n" + + " ALL_cons_columns col\n" + + " WHERE\n" + + " con.constraint_name = col.constraint_name\n" + + " AND con.constraint_type = 'P'\n" + + ") t ON\n" + + " t.table_name = COLUMNS.table_name\n" + + " AND COLUMNS.column_name = t.column_name\n" + + "WHERE\n" + + " -- COLUMNS.table_name = 'REAL_PEOPLE'\n" + + " COLUMNS.OWNER = comments.OWNER\n" + + " AND COLUMNS.table_name = '" + tableName + "'\n" + + " AND COLUMNS.OWNER = '" + schemaName + "'\n" + + "ORDER BY\n" + + "COLUMNS.column_id"; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_TABLE_SQL, tableName, schemaName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, tableName, schemaName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List queryTablePrimaryKeys(Connection connection, String schemaName, + String tableName) { + // Oracle表的主键可以使用如下命令设置主键是否生效 + // 使主键失效:alter table tableName disable primary key; + // 使主键恢复:alter table tableName enable primary key; + Set ret = new HashSet<>(); + String sql = String.format( + "SELECT col.COLUMN_NAME FROM all_cons_columns col INNER JOIN all_constraints con \n" + + "ON col.constraint_name=con.constraint_name AND col.OWNER =con.OWNER AND col.TABLE_NAME =con.TABLE_NAME \n" + + "WHERE con.constraint_type = 'P' and con.STATUS='ENABLED' and con.owner='%s' AND con.table_name='%s'", + schemaName, tableName); + try (PreparedStatement ps = connection.prepareStatement(sql); + ResultSet rs = ps.executeQuery(); + ) { + while (rs.next()) { + ret.add(rs.getString("COLUMN_NAME")); + } + + return new ArrayList<>(ret); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format("SELECT * from (%s) tmp where ROWNUM<=1 ", + sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain plan for %s", sql.replace(";", "")); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + + StringBuilder retval = new StringBuilder(128); + retval.append(" \"").append(fieldname).append("\" "); + + int type = v.getType(); + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + retval.append("TIMESTAMP"); + break; + case ColumnMetaData.TYPE_DATE: + retval.append("DATE"); + break; + case 
ColumnMetaData.TYPE_BOOLEAN: + retval.append("NUMBER(1)"); + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_BIGNUMBER: + retval.append("NUMBER"); + if (length > 0) { + if (length > 38) { + length = 38; + } + + retval.append('(').append(length); + if (precision > 0) { + retval.append(", ").append(precision); + } + retval.append(')'); + } + break; + case ColumnMetaData.TYPE_INTEGER: + retval.append("INTEGER"); + break; + case ColumnMetaData.TYPE_STRING: + if (length >= AbstractDatabase.CLOB_LENGTH) { + retval.append("CLOB"); + canHaveDefaultValue = false; + } else { + if (length == 1) { + retval.append("NVARCHAR2(1)"); + } else if (length > 0 && length < 2000) { + // VARCHAR2(size),size最大值为4000,单位是字节;而NVARCHAR2(size),size最大值为2000,单位是字符 + retval.append("NVARCHAR2(").append(length).append(')'); + } else { + retval.append("CLOB");// We don't know, so we just use the maximum... + canHaveDefaultValue = false; + } + } + break; + case ColumnMetaData.TYPE_BINARY: // the BLOB can contain binary data. + retval.append("BLOB"); + canHaveDefaultValue = false; + break; + default: + retval.append("CLOB"); + canHaveDefaultValue = false; + break; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval.append(" DEFAULT").append(v.getDefaultValue()); + } else { + retval.append(" DEFAULT").append(" '").append(v.getDefaultValue()).append("'"); + } + } else { + retval.append(" DEFAULT SYSDATE"); + } + } + + if (!v.isNullable()) { + retval.append(" NOT NULL"); + } + + if (addCr) { + retval.append(Const.CR); + } + + return retval.toString(); + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_BIGNUMBER: + case ColumnMetaData.TYPE_INTEGER: + break; + case ColumnMetaData.TYPE_STRING: + if (length >= AbstractDatabase.CLOB_LENGTH) { + canCreateIndex = false; + } else { + if (length >= 2000) { + canCreateIndex = false; + } + } + break; + default: + canCreateIndex = false; + break; + } + + return canCreateIndex; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public void setIndexSql(String schemaName, String tableName, List results, Map> columnMap) { + for (Map.Entry> entry : columnMap.entrySet()) { + String indexName = entry.getKey(); + List descriptions = entry.getValue(); + ColumnDescription columnDescription = descriptions.get(0); + String indexSql; + String lastIndexName = indexName.length() > 8 ? 
indexName.substring(0, 8) : indexName; + if (descriptions.size() > 1) { + indexSql = "CREATE " + (columnDescription.isNonIndexUnique() ? "" : "UNIQUE") + + " INDEX \"" + schemaName + "\".\"" + lastIndexName + "_" + StringUtil.getRandom2(4) + "\" ON \"" + schemaName + "\".\"" + tableName + + "\" (\"" + descriptions.stream().map(ColumnDescription::getFieldName).collect(Collectors.joining("\",\"")) + "\") "; + } else { + indexSql = "CREATE " + (columnDescription.isNonIndexUnique() ? "" : "UNIQUE") + + " INDEX \"" + schemaName + "\".\"" + lastIndexName + "_" + StringUtil.getRandom2(4) + "\" ON \"" + schemaName + "\".\"" + tableName + "\" (\"" + columnDescription.getFieldName() + "\") "; + } + results.add(indexSql); + } + } + + @Override + public void setColumnDefaultValue(Connection connection, String schemaName, String tableName, List columnDescriptions) { + + String sql = this.getDefaultValueSql(schemaName, tableName); + + List columns = new ArrayList<>(10); + try (Statement st = connection.createStatement()) { + try (ResultSet rs = st.executeQuery(sql)) { + while (rs.next()) { + String columnName = rs.getString("column_name"); + String columnComment = rs.getString("column_comment"); + ColumnDescription columnDescription = new ColumnDescription(); + columnDescription.setFieldName(columnName); + columnDescription.setRemarks(columnComment); + columns.add(columnDescription); + } + } + //oracle的数据库默认值需要单独查询,否则会报错 + try (ResultSet rs = st.executeQuery(sql)) { + int i = 0; + while (rs.next()) { + String columnDefault = rs.getString("column_default"); + columns.get(i).setDefaultValue(columnDefault); + i++; + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + //整合数据 + for (ColumnDescription columnDescription : columnDescriptions) { + for (ColumnDescription column : columns) { + if (columnDescription.getFieldName().equals(column.getFieldName())) { + columnDescription.setDefaultValue(column.getDefaultValue()); + columnDescription.setRemarks(column.getRemarks()); + break; + } + } + } + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseOscarImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseOscarImpl.java new file mode 100644 index 0000000..e8a63f4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseOscarImpl.java @@ -0,0 +1,220 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.List; + +/** + * 支持神通数据库的元信息实现 + * + * @author tang + */ +public class DatabaseOscarImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_TABLE_SQL = + "SELECT \"SYS_GET_TABLEDEF\" FROM \"V_SYS_TABLE\" WHERE \"SCHEMANAME\"= ? AND \"TABLENAME\"= ? "; + private static final String SHOW_CREATE_VIEW_SQL = + "SELECT \"DEFINITION\" FROM \"V_SYS_VIEWS\" WHERE \"SCHEMANAME\"= ? AND \"VIEWNAME\"= ?"; + + public DatabaseOscarImpl() { + super("com.oscar.Driver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.OSCAR; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_TABLE_SQL, tableName, schemaName); + try (PreparedStatement ps = connection.prepareStatement(sql)) { + ps.setString(1, schemaName); + ps.setString(2, tableName); + try (ResultSet rs = ps.executeQuery()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, tableName, schemaName); + try (PreparedStatement ps = connection.prepareStatement(sql)) { + ps.setString(1, schemaName); + ps.setString(2, tableName); + try (ResultSet rs = ps.executeQuery()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format( + "SELECT * from (%s) tmp LIMIT 0 ", + sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + + StringBuilder retval = new 
StringBuilder(128); + retval.append(" \"").append(fieldname).append("\" "); + + int type = v.getType(); + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + retval.append("TIMESTAMP"); + break; + case ColumnMetaData.TYPE_DATE: + retval.append("DATE"); + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval.append("BIT"); + break; + case ColumnMetaData.TYPE_BIGNUMBER: + case ColumnMetaData.TYPE_INTEGER: + retval.append("BIGINT"); + break; + case ColumnMetaData.TYPE_NUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval.append("BIGINT"); + } else { + if (length > 0) { + if (precision > 0 || length > 18) { + if ((length + precision) > 0 && precision > 0) { + // Numeric(Precision, Scale): Precision = total length; Scale = decimal places + retval.append("NUMERIC(" + (length + precision) + ", " + precision + ")"); + } else { + retval.append("DOUBLE PRECISION"); + } + } else { + if (length > 9) { + retval.append("BIGINT"); + } else { + if (length < 5) { + retval.append("SMALLINT"); + } else { + retval.append("INTEGER"); + } + } + } + + } else { + retval.append("DOUBLE PRECISION"); + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (2 * length >= AbstractDatabase.CLOB_LENGTH) { + retval.append("TEXT"); + } else { + if (length == 1) { + retval.append("VARCHAR(2)"); + } else if (length > 0 && length < 2048) { + retval.append("VARCHAR(").append(2 * length).append(')'); + } else { + retval.append("TEXT"); + } + } + break; + case ColumnMetaData.TYPE_BINARY: + retval.append("BLOB"); + break; + default: + retval.append("TEXT"); + break; + } + + if (addCr) { + retval.append(Const.CR); + } + + return retval.toString(); + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + //TODO + return false; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabasePostgresImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabasePostgresImpl.java new file mode 100644 index 0000000..865ebce --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabasePostgresImpl.java @@ -0,0 +1,313 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.database.constant.PostgresqlConst; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.util.DDLFormatterUtils; +import srt.cloud.framework.dbswitch.core.util.PostgresUtils; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * 支持PostgreSQL数据库的元信息实现 + * + * @author jrl + */ +public class DatabasePostgresImpl extends AbstractDatabase implements IDatabaseInterface { + + private static Set systemSchemas = new HashSet<>(); + + private static final String SHOW_CREATE_VIEW_SQL_1 = + "SELECT pg_get_viewdef((select pg_class.relfilenode from pg_catalog.pg_class \n" + + "join pg_catalog.pg_namespace on pg_class.relnamespace = pg_namespace.oid \n" + + "where pg_namespace.nspname='%s' and pg_class.relname ='%s'),true) "; + private static final String SHOW_CREATE_VIEW_SQL_2 = + "select pg_get_viewdef('\"%s\".\"%s\"', true)"; + + static { + systemSchemas.add("pg_aoseg"); + systemSchemas.add("information_schema"); + systemSchemas.add("pg_catalog"); + systemSchemas.add("pg_bitmapindex"); + } + + public DatabasePostgresImpl() { + super("org.postgresql.Driver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.POSTGRESQL; + } + + @Override + public List querySchemaList(Connection connection) { + List schemas = super.querySchemaList(connection); + return schemas.stream() + .filter(s -> !systemSchemas.contains(s)) + .collect(Collectors.toList()); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return "select col.column_name AS column_name, col.ordinal_position AS COLPOSITION, col.column_default AS column_default, col.is_nullable AS NULLABLE, col.udt_name AS DATATYPE, " + + "col.character_maximum_length AS DATALENGTH, col.numeric_precision AS DATAPRECISION, col.numeric_scale AS DATASCALE, des.description AS column_comment, " + + "case when t.colname is null then 0 else 1 end as COLKEY " + + "from information_schema.columns col left join pg_description des on col.table_name::regclass = des.objoid and col.ordinal_position = des.objsubid " + + "left join ( " + + "select pg_attribute.attname as colname from pg_constraint inner join pg_class on pg_constraint.conrelid = pg_class.oid " + + "inner join pg_attribute on pg_attribute.attrelid = pg_class.oid and pg_attribute.attnum = any(pg_constraint.conkey) " + + "where pg_class.relname = '" + tableName + "' and pg_constraint.contype = 'p' " + + ") t on t.colname = col.column_name " + + "where col.table_schema = '" + schemaName + 
"' and col.table_name = '" + tableName + "' order by col.ordinal_position "; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = PostgresqlConst.CREATE_TABLE_SQL_TPL + .replace(PostgresqlConst.TPL_KEY_SCHEMA, schemaName) + .replace(PostgresqlConst.TPL_KEY_TABLE, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return DDLFormatterUtils.format(rs.getString(1)); + } + } + } + } catch (SQLException e) { + //throw new RuntimeException(e); + } + + // 低版本的PostgreSQL的表的DDL获取方法 + return PostgresUtils.getTableDDL(connection, schemaName, tableName); + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL_1, schemaName, tableName); + try (Statement st = connection.createStatement()) { + try { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException se) { + sql = String.format(SHOW_CREATE_VIEW_SQL_2, schemaName, tableName); + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format(" %s LIMIT 0 ", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " \"" + fieldname + "\" "; + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + retval += "TIMESTAMP"; + break; + case ColumnMetaData.TYPE_TIME: + retval += "TIME"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "BOOLEAN"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "BIGSERIAL"; + } else { + retval += "BIGINT"; + } + } else { + if (length > 0) { + if (precision > 0 || length > 18) { + if ((length + precision) > 0 && precision > 0) { + // Numeric(Precision, Scale): Precision = total length; Scale = decimal places + retval += "NUMERIC(" + (length + precision) + ", " + precision + ")"; + } else { + retval += "DOUBLE PRECISION"; + } + } else { + if (length > 9) { + retval += "BIGINT"; + } else { + if (length < 5) { + retval += "SMALLINT"; + } else { + retval += "INTEGER"; + } + } + } + + } else { + retval += "DOUBLE PRECISION"; + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) { + retval += "TEXT"; + canHaveDefaultValue = false; 
+ } else if (length <= 2000) { + retval += "VARCHAR(" + length + ")"; + } else { + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval += "VARCHAR(" + length + ")"; + } else { + retval += "TEXT"; + canHaveDefaultValue = false; + } + } + break; + case ColumnMetaData.TYPE_BINARY: + retval += "BYTEA"; + canHaveDefaultValue = false; + break; + default: + retval += "TEXT"; + canHaveDefaultValue = false; + break; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval += " DEFAULT " + v.getDefaultValue(); + } else { + retval += " DEFAULT '" + v.getDefaultValue() + "'"; + } + } else { + retval += " DEFAULT CURRENT_TIMESTAMP"; + } + } + + if (!v.isNullable()) { + retval += " NOT NULL"; + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format("COMMENT ON TABLE \"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), + td.getRemarks().replace("\"", "\\\""))); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format("COMMENT ON COLUMN \"%s\".\"%s\".\"%s\" IS '%s' ", + td.getSchemaName(), td.getTableName(), cd.getFieldName(), + cd.getRemarks().replace("\"", "\\\""))); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + break; + case ColumnMetaData.TYPE_STRING: + if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) { + canCreateIndex = false; + } else if (length > 2000) { + canCreateIndex = false; + } + break; + default: + canCreateIndex = false; + break; + } + + + return canCreateIndex; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqliteImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqliteImpl.java new file mode 100644 index 0000000..f390cb4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqliteImpl.java @@ -0,0 +1,175 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.util.DDLFormatterUtils; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.Collections; +import java.util.List; + +/** + * 支持SQLite数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseSqliteImpl extends AbstractDatabase implements IDatabaseInterface { + + public DatabaseSqliteImpl() { + super("org.sqlite.JDBC"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.SQLITE3; + } + + @Override + public List querySchemaList(Connection connection) { + return Collections.singletonList("main"); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = "SELECT sql FROM \"sqlite_master\" where type='table' and tbl_name=? "; + try (PreparedStatement ps = connection.prepareStatement(sql)) { + ps.setString(1, tableName); + try (ResultSet rs = ps.executeQuery()) { + if (rs != null && rs.next()) { + return DDLFormatterUtils.format(rs.getString(1)); + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return ""; + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = "SELECT sql FROM \"sqlite_master\" where type='view' and tbl_name=? 
"; + try (PreparedStatement ps = connection.prepareStatement(sql)) { + ps.setString(1, tableName); + try (ResultSet rs = ps.executeQuery()) { + if (rs != null && rs.next()) { + return DDLFormatterUtils.format(rs.getString(1)); + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return ""; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format(" %s LIMIT 0 ", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("explain %s", sql.replace(";", "")); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " \"" + fieldname + "\" "; + + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + // sqlite中没有时间数据类型 + retval += "DATETIME"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "CHAR(1)"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + // 关键字 AUTOINCREMENT 只能⽤于整型(INTEGER)字段。 + if (useAutoInc) { + retval += "INTEGER PRIMARY KEY AUTOINCREMENT"; + } else { + retval += "BIGINT "; + } + } else { + if (precision != 0 || length < 0 || length > 18) { + retval += "NUMERIC"; + } else { + retval += "INTEGER"; + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length < 1 || length >= AbstractDatabase.CLOB_LENGTH) { + retval += "BLOB"; + } else { + retval += "TEXT"; + } + break; + case ColumnMetaData.TYPE_BINARY: + retval += "BLOB"; + break; + default: + retval += "TEXT"; + break; + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + return Collections.emptyList(); + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + return false; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqlserver2000Impl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqlserver2000Impl.java new file mode 100644 index 0000000..4866433 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqlserver2000Impl.java @@ -0,0 +1,106 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.TableDescription; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * 支持SQLServer2000数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseSqlserver2000Impl extends DatabaseSqlserverImpl implements IDatabaseInterface { + + public DatabaseSqlserver2000Impl() { + super("com.microsoft.jdbc.sqlserver.SQLServerDriver"); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.SQLSERVER2000; + } + + @Override + public List queryTableList(Connection connection, String schemaName) { + List ret = new ArrayList<>(); + Set uniqueSet = new HashSet<>(); + String[] types = new String[]{"TABLE", "VIEW"}; + try (ResultSet tables = connection.getMetaData() + .getTables(this.catalogName, schemaName, "%", types);) { + while (tables.next()) { + String tableName = tables.getString("TABLE_NAME"); + if (uniqueSet.contains(tableName)) { + continue; + } else { + uniqueSet.add(tableName); + } + + TableDescription td = new TableDescription(); + td.setSchemaName(schemaName); + td.setTableName(tableName); + td.setRemarks(tables.getString("REMARKS")); + if (tables.getString("TABLE_TYPE").equalsIgnoreCase("VIEW")) { + td.setTableType("VIEW"); + } else { + td.setTableType("TABLE"); + } + ret.add(td); + } + return ret; + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return "select columns.name AS column_name, columns.column_id AS COLPOSITION, columns.max_length AS DATALENGTH, columns.precision AS DATAPRECISION, columns.scale AS DATASCALE, " + + "columns.is_nullable AS NULLABLE, types.name AS DATATYPE, CAST(ep.value AS NVARCHAR(128)) AS column_comment, e.text AS column_default, " + + "(select top 1 ind.is_primary_key from sys.index_columns ic left join sys.indexes ind on ic.object_id = ind.object_id and ic.index_id = ind.index_id and ind.name like 'PK_%' where ic.object_id=columns.object_id and ic.column_id=columns.column_id) AS COLKEY " + + "from sys.columns columns LEFT JOIN sys.types types ON columns.system_type_id = types.system_type_id " + + "LEFT JOIN syscomments e ON columns.default_object_id= e.id " + + "LEFT JOIN sys.extended_properties ep ON ep.major_id = columns.object_id AND ep.minor_id = columns.column_id AND ep.name = 'MS_Description' " + + "where columns.object_id = object_id('" + tableName + "') order by columns.column_id "; + } + + @Override + public List queryTableColumnMeta(Connection connection, String schemaName, + String tableName) { + String sql = this.getTableFieldsQuerySQL(schemaName, tableName); + List ret = this.querySelectSqlColumnMeta(connection, sql); + try (ResultSet columns = connection.getMetaData() + .getColumns(this.catalogName, schemaName, tableName, null)) { + while (columns.next()) { + String columnName = 
columns.getString("COLUMN_NAME"); + String remarks = columns.getString("REMARKS"); + for (ColumnDescription cd : ret) { + if (columnName.equalsIgnoreCase(cd.getFieldName())) { + cd.setRemarks(remarks); + } + } + } + return ret; + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqlserverImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqlserverImpl.java new file mode 100644 index 0000000..56b1152 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSqlserverImpl.java @@ -0,0 +1,414 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.database.impl; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface; +import srt.cloud.framework.dbswitch.core.database.constant.SQLServerConst; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.util.DDLFormatterUtils; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * 支持SQLServer数据库的元信息实现 + * + * @author jrl + */ +public class DatabaseSqlserverImpl extends AbstractDatabase implements IDatabaseInterface { + + private static final String SHOW_CREATE_VIEW_SQL = + "SELECT VIEW_DEFINITION from INFORMATION_SCHEMA.VIEWS where TABLE_SCHEMA ='%s' and TABLE_NAME ='%s'"; + + private static Set excludesSchemaNames; + + static { + excludesSchemaNames = new HashSet<>(); + excludesSchemaNames.add("db_denydatawriter"); + excludesSchemaNames.add("db_datawriter"); + excludesSchemaNames.add("db_accessadmin"); + excludesSchemaNames.add("db_ddladmin"); + excludesSchemaNames.add("db_securityadmin"); + excludesSchemaNames.add("db_denydatareader"); + excludesSchemaNames.add("db_backupoperator"); + excludesSchemaNames.add("db_datareader"); + excludesSchemaNames.add("db_owner"); + } + + public DatabaseSqlserverImpl() { + super("com.microsoft.sqlserver.jdbc.SQLServerDriver"); + } + + public DatabaseSqlserverImpl(String driverName) { + super(driverName); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return ProductTypeEnum.SQLSERVER; + } + + private int getDatabaseMajorVersion(Connection connection) { + try { + return connection.getMetaData().getDatabaseMajorVersion(); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public List querySchemaList(Connection connection) { + Set ret = new HashSet<>(); + try (ResultSet schemas = connection.getMetaData().getSchemas();) { + while (schemas.next()) { + String name 
= schemas.getString("TABLE_SCHEM"); + if (!excludesSchemaNames.contains(name)) { + ret.add(name); + } + } + return new ArrayList<>(ret); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public List queryTableList(Connection connection, String schemaName) { + int majorVersion = getDatabaseMajorVersion(connection); + if (majorVersion <= 8) { + return super.queryTableList(connection, schemaName); + } + + List ret = new ArrayList<>(); + String sql = String.format( + "SELECT DISTINCT t.TABLE_SCHEMA as TABLE_SCHEMA, t.TABLE_NAME as TABLE_NAME, t.TABLE_TYPE as TABLE_TYPE, CONVERT(nvarchar(50),ISNULL(g.[value], '')) as COMMENTS \r\n" + + "FROM INFORMATION_SCHEMA.TABLES t LEFT JOIN sysobjects d on t.TABLE_NAME = d.name \r\n" + + "LEFT JOIN sys.extended_properties g on g.major_id=d.id and g.minor_id='0' where t.TABLE_SCHEMA='%s'", + schemaName); + try (PreparedStatement ps = connection.prepareStatement(sql); + ResultSet rs = ps.executeQuery();) { + while (rs.next()) { + TableDescription td = new TableDescription(); + td.setSchemaName(rs.getString("TABLE_SCHEMA")); + td.setTableName(rs.getString("TABLE_NAME")); + td.setRemarks(rs.getString("COMMENTS")); + String tableType = rs.getString("TABLE_TYPE").trim(); + if (tableType.equalsIgnoreCase("VIEW")) { + td.setTableType("VIEW"); + } else { + td.setTableType("TABLE"); + } + + ret.add(td); + } + + return ret; + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return "select columns.name AS column_name, columns.column_id AS COLPOSITION, columns.max_length AS DATALENGTH, columns.precision AS DATAPRECISION, columns.scale AS DATASCALE, " + + "columns.is_nullable AS NULLABLE, types.name AS DATATYPE, CAST(ep.value AS NVARCHAR(128)) AS column_comment, e.text AS column_default, " + + "(select top 1 ind.is_primary_key from sys.index_columns ic left join sys.indexes ind on ic.object_id = ind.object_id and ic.index_id = ind.index_id and ind.name like 'PK_%' where ic.object_id=columns.object_id and ic.column_id=columns.column_id) AS COLKEY " + + "from sys.columns columns LEFT JOIN sys.types types ON columns.system_type_id = types.system_type_id " + + "LEFT JOIN syscomments e ON columns.default_object_id= e.id " + + "LEFT JOIN sys.extended_properties ep ON ep.major_id = columns.object_id AND ep.minor_id = columns.column_id AND ep.name = 'MS_Description' " + + "where columns.object_id = object_id('" + tableName + "') order by columns.column_id "; + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SQLServerConst.CREATE_TABLE_SQL_TPL, schemaName, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return DDLFormatterUtils.format(rs.getString(1)); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + String sql = String.format(SHOW_CREATE_VIEW_SQL, schemaName, tableName); + try (Statement st = connection.createStatement()) { + if (st.execute(sql)) { + try (ResultSet rs = st.getResultSet()) { + if (rs != null && rs.next()) { + return rs.getString(1); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + + return null; + } + + @Override + public List 
queryTableColumnMeta(Connection connection, String schemaName, + String tableName) { + int majorVersion = getDatabaseMajorVersion(connection); + if (majorVersion <= 8) { + return super.queryTableColumnMeta(connection, schemaName, tableName); + } + + String sql = this.getTableFieldsQuerySQL(schemaName, tableName); + List ret = this.querySelectSqlColumnMeta(connection, sql); + String querySql = String.format( + "SELECT a.name AS COLUMN_NAME,CONVERT(nvarchar(50),ISNULL(g.[value], '')) AS REMARKS FROM sys.columns a\r\n" + + "LEFT JOIN sys.extended_properties g ON ( a.object_id = g.major_id AND g.minor_id = a.column_id )\r\n" + + "WHERE object_id = (SELECT top 1 object_id FROM sys.tables st INNER JOIN INFORMATION_SCHEMA.TABLES t on st.name=t.TABLE_NAME\r\n" + + "WHERE st.name = '%s' and t.TABLE_SCHEMA='%s')", + tableName, schemaName); + try (PreparedStatement ps = connection.prepareStatement(querySql); + ResultSet rs = ps.executeQuery();) { + while (rs.next()) { + String columnName = rs.getString("COLUMN_NAME"); + String remarks = rs.getString("REMARKS"); + for (ColumnDescription cd : ret) { + if (columnName.equalsIgnoreCase(cd.getFieldName())) { + cd.setRemarks(remarks); + } + } + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + return ret; + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + String querySQL = String.format("SELECT TOP 1 * from (%s) tmp ", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("select top 1 * from [%s].[%s] ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("SELECT top 1 * from ( %s ) tmp", sql.replace(";", "")); + } + + @Override + public String getQuotedSchemaTableCombination(String schemaName, String tableName) { + return String.format(" [%s].[%s] ", schemaName, tableName); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " [" + fieldname + "] "; + boolean canHaveDefaultValue = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + retval += "DATETIME"; + break; + case ColumnMetaData.TYPE_TIME: + retval += "TIME"; + break; + case ColumnMetaData.TYPE_DATE: + retval += "DATE"; + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "BIT"; + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "BIGINT IDENTITY(0,1)"; + } else { + retval += "BIGINT"; + } + } else { + if (precision == 0) { + if (length > 18) { + retval += "DECIMAL(" + length + ",0)"; + } else { + if (length > 9) { + retval += "BIGINT"; + } else { + retval += "INT"; + } + } + } else { + if (precision > 0 && length > 0) { + retval += "DECIMAL(" + length + "," + precision + ")"; + } else { + retval += "FLOAT(53)"; + } + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length < 8000) { + // Maybe use some default DB String length in case length<=0 + if (length > 0) { + // VARCHAR(n)最多能存n个字节,一个中文是两个字节。 + length = 2 * length; + if (length > 8000) { + length = 8000; + } + retval += "VARCHAR(" + length + ")"; + } else { 
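+ // No usable length reported (length <= 0): fall back to a fixed VARCHAR(100).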
+ retval += "VARCHAR(100)"; + } + } else { + retval += "TEXT"; // Up to 2bilion characters. + canHaveDefaultValue = false; + } + break; + case ColumnMetaData.TYPE_BINARY: + retval += "VARBINARY(MAX)"; + canHaveDefaultValue = false; + break; + default: + retval += "TEXT"; + canHaveDefaultValue = false; + break; + } + + if (canHaveDefaultValue && v.getDefaultValue() != null && !"null".equals(v.getDefaultValue()) && !"NULL".equals(v.getDefaultValue())) { + if (type != ColumnMetaData.TYPE_TIMESTAMP && type != ColumnMetaData.TYPE_TIME && type != ColumnMetaData.TYPE_DATE) { + if (v.getDefaultValue().startsWith("'")) { + retval += " DEFAULT " + v.getDefaultValue(); + } else { + retval += " DEFAULT '" + v.getDefaultValue() + "'"; + } + } else { + retval += " DEFAULT DEFAULT (getdate())"; + } + } + + if (!v.isNullable()) { + retval += " NOT NULL"; + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public String getPrimaryKeyAsString(List pks) { + if (null != pks && !pks.isEmpty()) { + StringBuilder sb = new StringBuilder(); + sb.append("["); + sb.append(StringUtils.join(pks, "] , [")); + sb.append("]"); + return sb.toString(); + } + + return ""; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + List results = new ArrayList<>(); + if (StringUtils.isNotBlank(td.getRemarks())) { + results.add(String + .format( + "EXEC [sys].sp_addextendedproperty 'MS_Description', N'%s', 'schema', N'%s', 'table', N'%s' ", + td.getRemarks().replace("\"", "\\\""), td.getSchemaName(), td.getTableName())); + } + + for (ColumnDescription cd : cds) { + if (StringUtils.isNotBlank(cd.getRemarks())) { + results.add(String + .format( + "EXEC [sys].sp_addextendedproperty 'MS_Description', N'%s', 'schema', N'%s', 'table', N'%s', 'column', N'%s' ", + cd.getRemarks().replace("\"", "\\\""), td.getSchemaName(), td.getTableName(), + cd.getFieldName())); + } + } + + return results; + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + int length = v.getLength(); + int type = v.getType(); + boolean canCreateIndex = true; + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + case ColumnMetaData.TYPE_BOOLEAN: + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + break; + case ColumnMetaData.TYPE_STRING: + if (length >= 8000) { + canCreateIndex = false; + } + break; + case ColumnMetaData.TYPE_BINARY: + canCreateIndex = false; + break; + default: + canCreateIndex = false; + break; + } + + return canCreateIndex; + } + + @Override + public String getCountMoreThanOneSql(String schemaName, String tableName, List columns) { + String columnStr = "[" + String.join("],[", columns) + "]"; + return String.format("SELECT %s FROM [%s].[%s] GROUP BY %s HAVING count(*)>1", columnStr, schemaName, tableName, columnStr); + } + + @Override + public String getCountOneSql(String schemaName, String tableName, List columns) { + String columnStr = "[" + String.join("],[", columns) + "]"; + return String.format("SELECT %s FROM [%s].[%s] GROUP BY %s HAVING count(*)=1", columnStr, schemaName, tableName, columnStr); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSybaseImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSybaseImpl.java new file mode 100644 index 0000000..8bcb2c6 --- 
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSybaseImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSybaseImpl.java
new file mode 100644
index 0000000..8bcb2c6
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/database/impl/DatabaseSybaseImpl.java
@@ -0,0 +1,278 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.database.impl;
+
+
+import srt.cloud.framework.dbswitch.common.constant.Const;
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+import srt.cloud.framework.dbswitch.core.database.AbstractDatabase;
+import srt.cloud.framework.dbswitch.core.database.IDatabaseInterface;
+import srt.cloud.framework.dbswitch.core.model.ColumnDescription;
+import srt.cloud.framework.dbswitch.core.model.ColumnMetaData;
+import srt.cloud.framework.dbswitch.core.model.TableDescription;
+import srt.cloud.framework.dbswitch.core.util.GenerateSqlUtils;
+import org.apache.commons.lang3.StringUtils;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * Metadata access implementation for the Sybase database
+ *
+ * @author tang
+ */
+public class DatabaseSybaseImpl extends AbstractDatabase implements IDatabaseInterface {
+
+  private static final String SHOW_CREATE_VIEW_SQL = "SELECT sc.text FROM sysobjects so, syscomments sc WHERE user_name(so.uid)=? AND so.name=? and sc.id = so.id ORDER BY sc.colid";
+
+  private static Set<String> excludesSchemaNames;
+
+  static {
+    excludesSchemaNames = new HashSet<>();
+    excludesSchemaNames.add("keycustodian_role");
+    excludesSchemaNames.add("ha_role");
+    excludesSchemaNames.add("replication_role");
+    excludesSchemaNames.add("sa_role");
+    excludesSchemaNames.add("usedb_user");
+    excludesSchemaNames.add("replication_maint_role_gp");
+    excludesSchemaNames.add("sybase_ts_role");
+    excludesSchemaNames.add("dtm_tm_role");
+    excludesSchemaNames.add("sso_role");
+    excludesSchemaNames.add("navigator_role");
+    excludesSchemaNames.add("sa_serverprivs_role");
+    excludesSchemaNames.add("probe");
+    excludesSchemaNames.add("mon_role");
+    excludesSchemaNames.add("webservices_role");
+    excludesSchemaNames.add("js_admin_role");
+    excludesSchemaNames.add("js_user_role");
+    excludesSchemaNames.add("messaging_role");
+    excludesSchemaNames.add("js_client_role");
+    excludesSchemaNames.add("oper_role");
+    excludesSchemaNames.add("hadr_admin_role_gp");
+  }
+
+  public DatabaseSybaseImpl() {
+    super("com.sybase.jdbc4.jdbc.SybDriver");
+  }
+
+  @Override
+  public ProductTypeEnum getDatabaseType() {
+    return ProductTypeEnum.SYBASE;
+  }
+
+  private void setCatalogName(Connection connection) {
+    try {
+      this.catalogName = connection.getCatalog();
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public List<String> querySchemaList(Connection connection) {
+    setCatalogName(connection);
+    List<String> schemas = super.querySchemaList(connection);
+    return schemas.stream().filter(s -> !excludesSchemaNames.contains(s)).collect(Collectors.toList());
+  }
+
+  @Override
+  public List<TableDescription> queryTableList(Connection connection, String schemaName) {
+    setCatalogName(connection);
+    return super.queryTableList(connection, schemaName);
+  }
+
+  @Override
+  public List<String> queryTableColumnName(Connection connection, String schemaName,
String tableName) { + setCatalogName(connection); + return super.queryTableColumnName(connection, schemaName, tableName); + } + + @Override + protected String getDefaultValueSql(String schemaName, String tableName) { + return null; + } + + @Override + public List queryTablePrimaryKeys(Connection connection, String schemaName, + String tableName) { + setCatalogName(connection); + return super.queryTablePrimaryKeys(connection, schemaName, tableName); + } + + @Override + public List queryTableColumnMeta(Connection connection, String schemaName, + String tableName) { + setCatalogName(connection); + return super.queryTableColumnMeta(connection, schemaName, tableName); + } + + @Override + public String getTableDDL(Connection connection, String schemaName, String tableName) { + List columnDescriptions = queryTableColumnMeta(connection, schemaName, tableName); + List pks = queryTablePrimaryKeys(connection, schemaName, tableName); + return GenerateSqlUtils.getDDLCreateTableSQL(ProductTypeEnum.SYBASE, + columnDescriptions, pks, schemaName, tableName, false); + } + + @Override + public String getViewDDL(Connection connection, String schemaName, String tableName) { + try (PreparedStatement ps = connection.prepareStatement(SHOW_CREATE_VIEW_SQL)) { + ps.setString(1, schemaName); + ps.setString(2, tableName); + try (ResultSet rs = ps.executeQuery()) { + StringBuilder sql = new StringBuilder(); + while (rs.next()) { + sql.append(rs.getString(1)); + } + return sql.toString(); + } + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + @Override + public List querySelectSqlColumnMeta(Connection connection, String sql) { + setCatalogName(connection); + String querySQL = String.format("SELECT TOP 1 * from (%s) tmp ", sql.replace(";", "")); + return this.getSelectSqlColumnMeta(connection, querySQL); + } + + @Override + protected String getTableFieldsQuerySQL(String schemaName, String tableName) { + return String.format("select top 1 * from [%s].[%s] ", schemaName, tableName); + } + + @Override + protected String getTestQuerySQL(String sql) { + return String.format("SELECT top 1 * from ( %s ) tmp", sql.replace(";", "")); + } + + @Override + public String getQuotedSchemaTableCombination(String schemaName, String tableName) { + return String.format(" [%s].[%s] ", schemaName, tableName); + } + + @Override + public String getFieldDefinition(ColumnMetaData v, List pks, boolean useAutoInc, + boolean addCr, boolean withRemarks) { + String fieldname = v.getName(); + int length = v.getLength(); + int precision = v.getPrecision(); + int type = v.getType(); + + String retval = " [" + fieldname + "] "; + + switch (type) { + case ColumnMetaData.TYPE_TIMESTAMP: + case ColumnMetaData.TYPE_TIME: + case ColumnMetaData.TYPE_DATE: + retval += "DATETIME"; + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval += " NOT NULL"; + } + break; + case ColumnMetaData.TYPE_BOOLEAN: + retval += "TINYINT"; + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval += " NOT NULL"; + } + break; + case ColumnMetaData.TYPE_NUMBER: + case ColumnMetaData.TYPE_INTEGER: + case ColumnMetaData.TYPE_BIGNUMBER: + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + if (useAutoInc) { + retval += "INTEGER IDENTITY NOT NULL"; + } else { + retval += "INTEGER NOT NULL"; + } + } else { + if (precision != 0 || (precision == 0 && length > 9)) { + if (precision > 0 && length > 0) { + retval += "DECIMAL(" + length + ", " + precision + ") NULL"; + } else { + retval += "DOUBLE PRECISION NULL"; + } + 
} else { + if (length < 3) { + retval += "TINYINT NULL"; + } else if (length < 5) { + retval += "SMALLINT NULL"; + } else { + retval += "INTEGER NULL"; + } + } + } + break; + case ColumnMetaData.TYPE_STRING: + if (length >= 2048) { + retval += "TEXT NULL"; + } else { + retval += "VARCHAR"; + if (length > 0) { + retval += "(" + length + ")"; + } + if (null != pks && !pks.isEmpty() && pks.contains(fieldname)) { + retval += " NOT NULL"; + } else { + retval += " NULL"; + } + } + break; + case ColumnMetaData.TYPE_BINARY: + retval += "VARBINARY"; + break; + default: + retval += "TEXT NULL"; + break; + } + + if (addCr) { + retval += Const.CR; + } + + return retval; + } + + @Override + public String getPrimaryKeyAsString(List pks) { + if (null != pks && !pks.isEmpty()) { + StringBuilder sb = new StringBuilder(); + sb.append("["); + sb.append(StringUtils.join(pks, "] , [")); + sb.append("]"); + return sb.toString(); + } + + return ""; + } + + @Override + public List getTableColumnCommentDefinition(TableDescription td, + List cds) { + return Collections.emptyList(); + } + + @Override + public boolean canCreateIndex(ColumnMetaData v) { + //TODO + return false; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/ColumnDescription.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/ColumnDescription.java new file mode 100644 index 0000000..dd3a8cc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/ColumnDescription.java @@ -0,0 +1,247 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.model; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import java.util.Objects; + +/** + * 数据库列描述符信息定义(Column Description) + * + * @author jrl + */ +public class ColumnDescription { + + private String fieldName; + private String labelName; + private String fieldTypeName; + private String filedTypeClassName; + private int fieldType; + private int displaySize; + private int scaleSize; + private int precisionSize; + private boolean isAutoIncrement; + private boolean isNullable; + private String remarks; + private boolean signed = false; + private ProductTypeEnum dbtype; + //索引是否可以不唯一 + private boolean nonIndexUnique; + //索引类别 + private String indexQualifier; + //索引名称 + private String indexName; + private short indexType; + private String ascOrDesc; + //默认值 + private String defaultValue; + //是否是主键 + private boolean isPk; + + public String getFieldName() { + if (null != this.fieldName) { + return fieldName; + } + + return this.labelName; + } + + public void setFieldName(String fieldName) { + this.fieldName = fieldName; + } + + public String getLabelName() { + if (null != labelName) { + return labelName; + } + + return this.fieldName; + } + + public void setLabelName(String labalName) { + this.labelName = labalName; + } + + public String getFieldTypeName() { + return fieldTypeName; + } + + public void setFieldTypeName(String fieldTypeName) { + this.fieldTypeName = fieldTypeName; + } + + public String getFiledTypeClassName() { + return filedTypeClassName; + } + + public void setFiledTypeClassName(String filedTypeClassName) { + 
this.filedTypeClassName = filedTypeClassName; + } + + public int getFieldType() { + return fieldType; + } + + public void setFieldType(int fieldType) { + this.fieldType = fieldType; + } + + public int getDisplaySize() { + return displaySize; + } + + public void setDisplaySize(int displaySize) { + this.displaySize = displaySize; + } + + public int getScaleSize() { + return scaleSize; + } + + public void setScaleSize(int scaleSize) { + this.scaleSize = scaleSize; + } + + public int getPrecisionSize() { + return precisionSize; + } + + public void setPrecisionSize(int precisionSize) { + this.precisionSize = precisionSize; + } + + public boolean isAutoIncrement() { + return isAutoIncrement; + } + + public void setAutoIncrement(boolean isAutoIncrement) { + this.isAutoIncrement = isAutoIncrement; + } + + public boolean isNullable() { + return isNullable; + } + + public void setNullable(boolean isNullable) { + this.isNullable = isNullable; + } + + public boolean isSigned() { + return signed; + } + + public void setSigned(boolean signed) { + this.signed = signed; + } + + public ProductTypeEnum getDbType() { + return this.dbtype; + } + + public void setDbType(ProductTypeEnum dbtype) { + this.dbtype = dbtype; + } + + public String getRemarks() { + return this.remarks; + } + + public void setRemarks(String remarks) { + this.remarks = remarks; + } + + public void setNonIndexUnique(boolean nonIndexUnique) { + this.nonIndexUnique = nonIndexUnique; + } + + public boolean isNonIndexUnique() { + return nonIndexUnique; + } + + public String getIndexQualifier() { + return indexQualifier; + } + + public void setIndexQualifier(String indexQualifier) { + this.indexQualifier = indexQualifier; + } + + public String getIndexName() { + return indexName; + } + + public void setIndexName(String indexName) { + this.indexName = indexName; + } + + public short getIndexType() { + return indexType; + } + + public void setIndexType(short indexType) { + this.indexType = indexType; + } + + public String getAscOrDesc() { + return ascOrDesc; + } + + public void setAscOrDesc(String ascOrDesc) { + this.ascOrDesc = ascOrDesc; + } + + public String getDefaultValue() { + return defaultValue; + } + + public void setDefaultValue(String defaultValue) { + this.defaultValue = defaultValue; + } + + public boolean isPk() { + return isPk; + } + + public void setPk(boolean pk) { + isPk = pk; + } + + public ColumnDescription copy() { + ColumnDescription description = new ColumnDescription(); + description.setFieldName(fieldName); + description.setLabelName(labelName); + description.setFieldTypeName(fieldTypeName); + description.setFiledTypeClassName(filedTypeClassName); + description.setFieldType(fieldType); + description.setDisplaySize(displaySize); + description.setScaleSize(scaleSize); + description.setPrecisionSize(precisionSize); + description.setAutoIncrement(isAutoIncrement); + description.setNullable(isNullable); + description.setRemarks(remarks); + description.setSigned(signed); + description.setDbType(dbtype); + description.setNonIndexUnique(nonIndexUnique); + description.setIndexQualifier(indexQualifier); + description.setIndexName(indexName); + description.setIndexType(indexType); + description.setAscOrDesc(ascOrDesc); + description.setDefaultValue(defaultValue); + description.setPk(isPk); + return description; + } + + ///////////////////////////////////////////// + + public ColumnMetaData getMetaData() { + return new ColumnMetaData(this); + } +} diff --git 
a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/ColumnMetaData.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/ColumnMetaData.java new file mode 100644 index 0000000..6b8a79a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/ColumnMetaData.java @@ -0,0 +1,496 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.model; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; + +/** + * 数据库表列的元信息 + * + * @author jrl + */ +public class ColumnMetaData { + + /** + * Value type indicating that the value has no type set + */ + public static final int TYPE_NONE = 0; + + /** + * Value type indicating that the value contains a floating point double precision number. + */ + public static final int TYPE_NUMBER = 1; + + /** + * Value type indicating that the value contains a text String. + */ + public static final int TYPE_STRING = 2; + + /** + * Value type indicating that the value contains a Date. + */ + public static final int TYPE_DATE = 3; + + /** + * Value type indicating that the value contains a boolean. + */ + public static final int TYPE_BOOLEAN = 4; + + /** + * Value type indicating that the value contains a long integer. + */ + public static final int TYPE_INTEGER = 5; + + /** + * Value type indicating that the value contains a floating point precision number with arbitrary + * precision. + */ + public static final int TYPE_BIGNUMBER = 6; + + /** + * Value type indicating that the value contains an Object. + */ + public static final int TYPE_SERIALIZABLE = 7; + + /** + * Value type indicating that the value contains binary data: BLOB, CLOB, ... + */ + public static final int TYPE_BINARY = 8; + + /** + * Value type indicating that the value contains a date-time with nanosecond precision + */ + public static final int TYPE_TIMESTAMP = 9; + + /** + * Value type indicating that the value contains a time + */ + public static final int TYPE_TIME = 10; + + /** + * Value type indicating that the value contains a Internet address + */ + public static final int TYPE_INET = 11; + + /** + * The Constant typeCodes. 
+ */ + public static final String[] TYPE_CODES = new String[]{"-", "Number", "String", "Date", "Boolean", + "Integer", + "BigNumber", "Serializable", "Binary", "Timestamp", "Time", "Internet Address",}; + + ////////////////////////////////////////////////////////////////////// + + /** + * Column name + */ + protected String name; + protected int length; + protected int precision; + protected int type; + private String remarks; + private boolean isNullable; + private String defaultValue; + + /** + * Constructor function + * + * @param desc + */ + public ColumnMetaData(ColumnDescription desc) { + this.create(desc); + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public int getLength() { + return length; + } + + public void setLength(int length) { + this.length = length; + } + + public int getPrecision() { + return precision; + } + + public void setPrecision(int precision) { + this.precision = precision; + } + + public int getType() { + return type; + } + + public void setType(int type) { + this.type = type; + } + + public String getRemarks() { + return remarks; + } + + public void setRemarks(String remarks) { + this.remarks = remarks; + } + + public boolean isNullable() { + return isNullable; + } + + public void setNullable(boolean nullable) { + isNullable = nullable; + } + + public String getDefaultValue() { + return defaultValue; + } + + public void setDefaultValue(String defaultValue) { + this.defaultValue = defaultValue; + } + + /** + * Checks whether or not the value is a String. + * + * @return true if the value is a String. + */ + public boolean isString() { + return type == TYPE_STRING; + } + + /** + * Checks whether or not this value is a Date + * + * @return true if the value is a Date + */ + public boolean isDate() { + return type == TYPE_DATE; + } + + /** + * Checks whether or not this value is a Time + * + * @return true if the value is a Time + */ + public boolean isTime() { + return type == TYPE_TIME; + } + + /** + * Checks whether or not this value is a DateTime + * + * @return true if the value is a DateTime + */ + public boolean isDateTime() { + return type == TYPE_TIMESTAMP; + } + + /** + * Checks whether or not the value is a Big Number + * + * @return true is this value is a big number + */ + public boolean isBigNumber() { + return type == TYPE_BIGNUMBER; + } + + /** + * Checks whether or not the value is a Number + * + * @return true is this value is a number + */ + public boolean isNumber() { + return type == TYPE_NUMBER; + } + + /** + * Checks whether or not this value is a boolean + * + * @return true if this value has type boolean. 
+ */ + public boolean isBoolean() { + return type == TYPE_BOOLEAN; + } + + /** + * Checks whether or not this value is of type Serializable + * + * @return true if this value has type Serializable + */ + public boolean isSerializableType() { + return type == TYPE_SERIALIZABLE; + } + + /** + * Checks whether or not this value is of type Binary + * + * @return true if this value has type Binary + */ + public boolean isBinary() { + return type == TYPE_BINARY; + } + + /** + * Checks whether or not this value is an Integer + * + * @return true if this value is an integer + */ + public boolean isInteger() { + return type == TYPE_INTEGER; + } + + /** + * Checks whether or not this Value is Numeric A Value is numeric if it is either of type Number + * or Integer + * + * @return true if the value is either of type Number or Integer + */ + public boolean isNumeric() { + return isInteger() || isNumber() || isBigNumber(); + } + + /** + * Checks whether or not the specified type is either Integer or Number + * + * @param t the type to check + * @return true if the type is Integer or Number + */ + public static final boolean isNumeric(int t) { + return t == TYPE_INTEGER || t == TYPE_NUMBER || t == TYPE_BIGNUMBER; + } + + /** + * Return the type of a value in a textual form: "String", "Number", "Integer", "Boolean", "Date", + * ... + * + * @return A String describing the type of value. + */ + public String getTypeDesc() { + return TYPE_CODES[type]; + } + + private void create(ColumnDescription desc) { + int length = -1; + int precision = -1; + int valtype = ColumnMetaData.TYPE_NONE; + int type = desc.getFieldType(); + boolean signed = desc.isSigned(); + + switch (type) { + case java.sql.Types.CHAR: + case java.sql.Types.NCHAR: + case java.sql.Types.VARCHAR: + case java.sql.Types.NVARCHAR: + valtype = ColumnMetaData.TYPE_STRING; + length = desc.getDisplaySize(); + break; + + case java.sql.Types.LONGVARCHAR: + case java.sql.Types.LONGNVARCHAR: + case java.sql.Types.CLOB: + case java.sql.Types.NCLOB: + case java.sql.Types.SQLXML: + case java.sql.Types.ROWID: + valtype = ColumnMetaData.TYPE_STRING; + length = AbstractDatabase.CLOB_LENGTH; + break; + + case java.sql.Types.BIGINT: + // verify Unsigned BIGINT overflow! 
+ // + if (signed) { + valtype = ColumnMetaData.TYPE_INTEGER; + precision = 0; // Max 9.223.372.036.854.775.807 + length = 15; + } else { + valtype = ColumnMetaData.TYPE_BIGNUMBER; + precision = 0; // Max 18.446.744.073.709.551.615 + length = 16; + } + break; + + case java.sql.Types.INTEGER: + valtype = ColumnMetaData.TYPE_INTEGER; + precision = 0; // Max 2.147.483.647 + length = 9; + break; + + case java.sql.Types.SMALLINT: + valtype = ColumnMetaData.TYPE_INTEGER; + precision = 0; // Max 32.767 + length = 4; + break; + + case java.sql.Types.TINYINT: + valtype = ColumnMetaData.TYPE_INTEGER; + precision = 0; // Max 127 + length = 2; + break; + + case java.sql.Types.DECIMAL: + case java.sql.Types.DOUBLE: + case java.sql.Types.FLOAT: + case java.sql.Types.REAL: + case java.sql.Types.NUMERIC: + valtype = ColumnMetaData.TYPE_NUMBER; + length = desc.getPrecisionSize(); + precision = desc.getScaleSize(); + if (length >= 126) { + length = -1; + } + if (precision >= 126) { + precision = -1; + } + + if (type == java.sql.Types.DOUBLE || type == java.sql.Types.FLOAT + || type == java.sql.Types.REAL) { + if (precision == 0) { + if (!signed) { + precision = -1; // precision is obviously incorrect if the type if + // Double/Float/Real + } else { + length = 18; + precision = 4; + } + } + + // If we're dealing with PostgreSQL and double precision types + if (desc.getDbType() == ProductTypeEnum.POSTGRESQL && type == java.sql.Types.DOUBLE + && precision >= 16 + && length >= 16) { + precision = -1; + length = -1; + } + + // MySQL: max resolution is double precision floating point (double) + // The (12,31) that is given back is not correct + if (desc.getDbType() == ProductTypeEnum.MYSQL) { + if (precision >= length) { + precision = -1; + length = -1; + } + } + + // If we're dealing with Hive and double/float precision types + if (desc.getDbType() == ProductTypeEnum.HIVE) { + if (type == java.sql.Types.DOUBLE + && precision >= 15 + && length >= 15) { + precision = 6; + length = 25; + } + + if (type == java.sql.Types.FLOAT + && precision >= 7 + && length >= 7) { + precision = 6; + length = 25; + } + } + + // if the length or precision needs a BIGNUMBER + //if (length > 15 || precision > 15) { + // valtype = ColumnMetaData.TYPE_BIGNUMBER; + //} + } else { + if (precision == 0) { + if (length <= 18 && length > 0) { // Among others Oracle is affected + // here. + valtype = ColumnMetaData.TYPE_INTEGER; // Long can hold up to 18 + // significant digits + } else if (length > 18) { + valtype = ColumnMetaData.TYPE_BIGNUMBER; + } + } else { // we have a precision: keep NUMBER or change to BIGNUMBER? 
+ if (length > 15 || precision > 15) { + valtype = ColumnMetaData.TYPE_BIGNUMBER; + } + } + } + + if (desc.getDbType() == ProductTypeEnum.POSTGRESQL + || desc.getDbType() == ProductTypeEnum.GREENPLUM) { + // undefined size => arbitrary precision + if (type == java.sql.Types.NUMERIC && length == 0 && precision == 0) { + valtype = ColumnMetaData.TYPE_BIGNUMBER; + length = -1; + precision = -1; + } + } + + if (desc.getDbType() == ProductTypeEnum.ORACLE) { + if (precision == 0 && length == 38) { + valtype = ColumnMetaData.TYPE_INTEGER; + } + if (precision <= 0 && length <= 0) { + // undefined size: BIGNUMBER, + // precision on Oracle can be 38, too + // big for a Number type + valtype = ColumnMetaData.TYPE_BIGNUMBER; + length = -1; + precision = -1; + } + } + + break; + + case java.sql.Types.TIMESTAMP: + case java.sql.Types.TIMESTAMP_WITH_TIMEZONE: + valtype = ColumnMetaData.TYPE_TIMESTAMP; + length = desc.getScaleSize(); + break; + + case java.sql.Types.DATE: + valtype = ColumnMetaData.TYPE_DATE; + break; + + case java.sql.Types.TIME: + case java.sql.Types.TIME_WITH_TIMEZONE: + valtype = ColumnMetaData.TYPE_TIME; + break; + + case java.sql.Types.BOOLEAN: + case java.sql.Types.BIT: + valtype = ColumnMetaData.TYPE_BOOLEAN; + break; + + case java.sql.Types.BINARY: + case java.sql.Types.BLOB: + case java.sql.Types.VARBINARY: + case java.sql.Types.LONGVARBINARY: + valtype = ColumnMetaData.TYPE_BINARY; + precision = -1; + break; + + default: + valtype = ColumnMetaData.TYPE_STRING; + precision = desc.getScaleSize(); + break; + } + + this.name = desc.getFieldName(); + this.length = length; + this.precision = precision; + this.type = valtype; + this.remarks = desc.getRemarks(); + this.isNullable = desc.isNullable(); + this.defaultValue = desc.getDefaultValue() == null ? null : desc.getDefaultValue().trim().replaceAll("::.*", "").replaceAll("\\(", "").replaceAll("\\)", ""); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/DatabaseDescription.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/DatabaseDescription.java new file mode 100644 index 0000000..ab6a9d8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/DatabaseDescription.java @@ -0,0 +1,84 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.model;
+
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+
+/**
+ * Database connection descriptor (Database Description)
+ *
+ * @author jrl
+ */
+public class DatabaseDescription {
+
+  protected ProductTypeEnum type;
+  protected String host;
+  protected int port;
+  /**
+   * Connection mode for Oracle databases; one of: sid, serviceName, TNSName
+   */
+  protected String mode;
+  protected String dbname;
+  protected String charset;
+  protected String username;
+  protected String password;
+
+  public DatabaseDescription(String dbtype, String host, int port, String mode, String dbname,
+      String charset, String username, String password) {
+    this.type = ProductTypeEnum.valueOf(dbtype.toUpperCase());
+    this.host = host;
+    this.port = port;
+    this.mode = mode;
+    this.dbname = dbname;
+    this.charset = charset;
+    this.username = username;
+    this.password = password;
+  }
+
+  public ProductTypeEnum getType() {
+    return type;
+  }
+
+  public String getHost() {
+    return host;
+  }
+
+  public int getPort() {
+    return port;
+  }
+
+  public String getMode() {
+    return mode;
+  }
+
+  public String getDbname() {
+    return dbname;
+  }
+
+  public String getCharset() {
+    return charset;
+  }
+
+  public String getUsername() {
+    return username;
+  }
+
+  public String getPassword() {
+    return password;
+  }
+
+  @Override
+  public String toString() {
+    return "DatabaseDescription [type=" + type + ", host=" + host + ", port=" + port + ", mode="
+        + mode + ", dbname=" + dbname + ", charset=" + charset + ", username=" + username
+        + ", password=" + password + "]";
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/FlinkColumnType.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/FlinkColumnType.java
new file mode 100644
index 0000000..5f7838c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/FlinkColumnType.java
@@ -0,0 +1,65 @@
+
+package srt.cloud.framework.dbswitch.core.model;
+
+/**
+ * Java type to Flink SQL type mapping (ColumnType)
+ *
+ * @author zrx
+ **/
+public enum FlinkColumnType {
+
+  STRING("java.lang.String", "STRING"),
+  JAVA_LANG_BOOLEAN("java.lang.Boolean", "BOOLEAN"),
+  BOOLEAN("Boolean", "BOOLEAN NOT NULL"),
+  JAVA_LANG_BYTE("java.lang.Byte", "TINYINT"),
+  BYTE("byte", "TINYINT NOT NULL"),
+  JAVA_LANG_SHORT("java.lang.Short", "SMALLINT"),
+  SHORT("short", "SMALLINT NOT NULL"),
+  INTEGER("java.lang.Integer", "INT"),
+  INT("int", "INT NOT NULL"),
+  JAVA_LANG_LONG("java.lang.Long", "BIGINT"),
+  LONG("long", "BIGINT NOT NULL"),
+  JAVA_LANG_FLOAT("java.lang.Float", "FLOAT"),
+  FLOAT("float", "FLOAT NOT NULL"),
+  JAVA_LANG_DOUBLE("java.lang.Double", "DOUBLE"),
+  DOUBLE("double", "DOUBLE NOT NULL"),
+  DATE("java.sql.Date", "DATE"),
+  LOCALDATE("java.time.LocalDate", "DATE"),
+  TIME("java.sql.Time", "TIME"),
+  LOCALTIME("java.time.LocalTime", "TIME"),
+  TIMESTAMP("java.sql.Timestamp", "TIMESTAMP"),
+  LOCALDATETIME("java.time.LocalDateTime", "TIMESTAMP"),
+  OFFSETDATETIME("java.time.OffsetDateTime", "TIMESTAMP WITH TIME ZONE"),
+  INSTANT("java.time.Instant", "TIMESTAMP_LTZ"),
+  DURATION("java.time.Duration", "INTERVAL SECOND"),
+  PERIOD("java.time.Period", "INTERVAL YEAR TO MONTH"),
+  DECIMAL("java.math.BigDecimal", "DECIMAL"),
+  BYTES("byte[]", "BYTES"),
+  T("T[]", "ARRAY"),
+  MAP("java.util.Map", "MAP");
+
+  private String javaType;
+  private String flinkType;
+
+  FlinkColumnType(String javaType, String flinkType) {
+    this.javaType = javaType;
+    this.flinkType = flinkType;
+  }
+
+  public String getJavaType() {
+    return javaType;
+  }
+
+  public String getFlinkType() {
+    return flinkType;
+  }
+
+  public static FlinkColumnType getByJavaType(String javaType) {
+    for (FlinkColumnType columnType : FlinkColumnType.values()) {
+      if (columnType.javaType.equals(javaType)) {
+        return columnType;
+      }
+    }
+    return STRING;
+  }
+}
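The enum's lookup falls back to `STRING` for any unmapped Java type, so DDL generation never fails on an unknown class name. A quick illustration, with values taken directly from the table above:

```java
// e.g. the class name reported by ResultSetMetaData.getColumnClassName()
FlinkColumnType t = FlinkColumnType.getByJavaType("java.time.LocalDateTime");
System.out.println(t.getFlinkType());   // -> TIMESTAMP

// Unknown types degrade gracefully to STRING rather than throwing.
FlinkColumnType u = FlinkColumnType.getByJavaType("com.example.UnknownType");
System.out.println(u.getFlinkType());   // -> STRING
```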
DECIMAL("java.math.BigDecimal", "DECIMAL"), + BYTES("byte[]", "BYTES"), + T("T[]", "ARRAY"), + MAP("java.util.Map", "MAP"); + + private String javaType; + private String flinkType; + + FlinkColumnType(String javaType, String flinkType) { + this.javaType = javaType; + this.flinkType = flinkType; + } + + public String getJavaType() { + return javaType; + } + + public String getFlinkType() { + return flinkType; + } + + public static FlinkColumnType getByJavaType(String javaType) { + for (FlinkColumnType columnType : FlinkColumnType.values()) { + if (columnType.javaType.equals(javaType)) { + return columnType; + } + } + return STRING; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/JdbcSelectResult.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/JdbcSelectResult.java new file mode 100644 index 0000000..dd95b94 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/JdbcSelectResult.java @@ -0,0 +1,34 @@ +package srt.cloud.framework.dbswitch.core.model; + +import lombok.Data; +import net.srt.flink.common.result.IResult; + +import java.time.LocalDateTime; +import java.util.List; +import java.util.Map; + +@Data +public class JdbcSelectResult implements IResult { + + private static final long serialVersionUID = 1L; + + private List results; + private Boolean ifQuery; + private String sql; + private Long time; + private Boolean success; + private String errorMsg; + private Integer count; + private List columns; + private List> rowData; + + @Override + public void setStartTime(LocalDateTime startTime) { + + } + + @Override + public String getJobId() { + return null; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableData.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableData.java new file mode 100644 index 0000000..88f2570 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableData.java @@ -0,0 +1,57 @@ +// Copyright tang. All rights reserved. 
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableData.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableData.java
new file mode 100644
index 0000000..88f2570
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableData.java
@@ -0,0 +1,57 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.model;
+
+import java.util.List;
+
+/**
+ * Data sampled from a database table
+ *
+ * @author jrl
+ */
+public class SchemaTableData {
+
+  private String schemaName;
+  private String tableName;
+  private List<String> columns;
+  private List<List<Object>> rows;
+
+  public String getSchemaName() {
+    return schemaName;
+  }
+
+  public void setSchemaName(String schemaName) {
+    this.schemaName = schemaName;
+  }
+
+  public String getTableName() {
+    return tableName;
+  }
+
+  public void setTableName(String tableName) {
+    this.tableName = tableName;
+  }
+
+  public List<String> getColumns() {
+    return columns;
+  }
+
+  public void setColumns(List<String> columns) {
+    this.columns = columns;
+  }
+
+  public List<List<Object>> getRows() {
+    return rows;
+  }
+
+  public void setRows(List<List<Object>> rows) {
+    this.rows = rows;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableMeta.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableMeta.java
new file mode 100644
index 0000000..2af4b83
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/SchemaTableMeta.java
@@ -0,0 +1,45 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.model;
+
+import java.util.List;
+
+public class SchemaTableMeta extends TableDescription {
+
+  private List<String> primaryKeys;
+
+  private String createSql;
+
+  private List<ColumnDescription> columns;
+
+  public List<String> getPrimaryKeys() {
+    return primaryKeys;
+  }
+
+  public void setPrimaryKeys(List<String> primaryKeys) {
+    this.primaryKeys = primaryKeys;
+  }
+
+  public String getCreateSql() {
+    return createSql;
+  }
+
+  public void setCreateSql(String createSql) {
+    this.createSql = createSql;
+  }
+
+  public List<ColumnDescription> getColumns() {
+    return columns;
+  }
+
+  public void setColumns(List<ColumnDescription> columns) {
+    this.columns = columns;
+  }
+}
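`SchemaTableMeta` is the composite the metadata services hand back: column descriptions, primary keys, and ready-made create SQL in one object. A sketch of pulling it through the datasource-backed service defined later in this commit, assuming a configured `javax.sql.DataSource` named `ds`:

```java
IMetaDataByDatasourceService meta = new MetaDataByDataSourceServiceImpl(ds);
for (String schema : meta.querySchemaList()) {
    for (TableDescription td : meta.queryTableList(schema)) {
        SchemaTableMeta m = meta.queryTableMeta(schema, td.getTableName());
        // createSql holds table DDL for tables and the view definition for views
        System.out.println(m.getCreateSql());
        System.out.println(m.getPrimaryKeys());
    }
}
```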
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/TableDescription.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/TableDescription.java
new file mode 100644
index 0000000..930a2b9
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/model/TableDescription.java
@@ -0,0 +1,61 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.core.model;
+
+import srt.cloud.framework.dbswitch.common.type.DBTableType;
+
+/**
+ * Database table descriptor (Table Description)
+ *
+ * @author jrl
+ */
+public class TableDescription {
+
+  private String tableName;
+  private String schemaName;
+  private String remarks;
+  private DBTableType tableType;
+
+  public String getTableName() {
+    return tableName;
+  }
+
+  public void setTableName(String tableName) {
+    this.tableName = tableName;
+  }
+
+  public String getSchemaName() {
+    return schemaName;
+  }
+
+  public void setSchemaName(String schemaName) {
+    this.schemaName = schemaName;
+  }
+
+  public String getRemarks() {
+    return this.remarks;
+  }
+
+  public void setRemarks(String remarks) {
+    this.remarks = remarks;
+  }
+
+  public String getTableType() {
+    return tableType.name();
+  }
+
+  public void setTableType(String tableType) {
+    this.tableType = DBTableType.valueOf(tableType.toUpperCase());
+  }
+
+  public boolean isViewTable() {
+    return DBTableType.VIEW == tableType;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByDatasourceService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByDatasourceService.java
new file mode 100644
index 0000000..d0a7515
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByDatasourceService.java
@@ -0,0 +1,158 @@
+package srt.cloud.framework.dbswitch.core.service;
+
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+import srt.cloud.framework.dbswitch.core.model.ColumnDescription;
+import srt.cloud.framework.dbswitch.core.model.SchemaTableData;
+import srt.cloud.framework.dbswitch.core.model.SchemaTableMeta;
+import srt.cloud.framework.dbswitch.core.model.TableDescription;
+
+import javax.sql.DataSource;
+import java.util.List;
+
+public interface IMetaDataByDatasourceService {
+
+  /**
+   * Get the underlying DataSource object
+   *
+   * @return the DataSource backing this service
+   */
+  DataSource getDataSource();
+
+  /**
+   * List the schemas of the database
+   *
+   * @return schema names
+   */
+  List<String> querySchemaList();
+
+  /**
+   * List all tables under the given schema
+   *
+   * @param schemaName schema name
+   * @return table descriptors
+   */
+  List<TableDescription> queryTableList(String schemaName);
+
+  /**
+   * Get the CREATE TABLE DDL of a physical table
+   *
+   * @param schemaName schema name
+   * @param tableName  table name
+   * @return DDL text
+   */
+  String getTableDDL(String schemaName, String tableName);
+
+  /**
+   * Get the remarks (comment) of a physical table
+   *
+   * @param schemaName schema name
+   * @param tableName  table name
+   * @return table comment
+   */
+  String getTableRemark(String schemaName, String tableName);
+
+  /**
+   * Get the defining DDL of a view
+   *
+   * @param schemaName schema name
+   * @param tableName  view name
+   * @return DDL text
+   */
+  String getViewDDL(String schemaName, String tableName);
+
+  /**
+   * List the column names of the given schema.table
+   *
+   * @param schemaName schema name
+   * @param tableName  table or view name
+   * @return column names
+   */
+  List<String> queryTableColumnName(String schemaName, String tableName);
+
+  /**
+   * Get the column metadata of the given schema.table
+   *
+   * @param schemaName schema name
+   * @param tableName  table or view name
+   * @return column descriptors
+   */
+  List<ColumnDescription> queryTableColumnMeta(String schemaName, String tableName);
+
+  /**
+   * Same as {@link #queryTableColumnMeta}, but without default-value and index enrichment
+   */
+  List<ColumnDescription> queryTableColumnMetaOnly(String schemaName, String tableName);
+
+  /**
+   * Get the column metadata of an arbitrary SQL query
+   *
+   * @param querySql the SQL statement to inspect
+   * @return column descriptors
+   */
+  List<ColumnDescription> querySqlColumnMeta(String querySql);
+
+  /**
+   * List the primary key columns of a table
+   *
+   * @param schemaName schema name
+   * @param tableName  table name
+   * @return primary key column names
+   */
+  List<String> queryTablePrimaryKeys(String schemaName, String tableName);
+
+  /**
+   * Test a SQL query against the database
+   *
+   * @param sql the SQL statement to test
+   */
+  void testQuerySQL(String sql);
+
+  /**
+   * Get the full metadata of a table
+   *
+   * @param schemaName schema name
+   * @param tableName  table name
+   * @return composite metadata (columns, primary keys, create SQL)
+   */
+  SchemaTableMeta queryTableMeta(String schemaName, String tableName);
+
+  /**
+   * Fetch sample data from a table
+   *
+   * @param schemaName schema name
+   * @param tableName  table name
+   * @param rowCount   number of rows to fetch
+   * @return the sampled rows
+   */
+  SchemaTableData queryTableData(String schemaName, String tableName, int rowCount);
+
+  /**
+   * Assemble the target database's CREATE TABLE DDL from column metadata
+   *
+   * @param type         target database type
+   * @param fieldNames   column metadata
+   * @param primaryKeys  primary key columns
+   * @param schemaName   schema name
+   * @param tableName    table name
+   * @param tableRemarks table comment
+   * @param autoIncr     whether the primary key may auto-increment
+   * @return the DDL statements for the target database
+   */
+  List<String> getDDLCreateTableSQL(ProductTypeEnum type, List<ColumnDescription> fieldNames,
+      List<String> primaryKeys, String schemaName, String tableName, String tableRemarks,
+      boolean autoIncr);
+
+  /**
+   * Add columns missing from the target table, based on the columns to be synchronized
+   *
+   * @param targetSchemaName target schema name
+   * @param targetTableName  target table name
+   */
+  void addNoExistColumnsByTarget(String targetSchemaName, String targetTableName, List<ColumnDescription> targetColumnDescriptions);
+
+  /**
+   * Generate index creation statements
+   *
+   * @param targetColumns
+   * @param targetPrimaryKeys
+   * @param targetSchemaName
+   * @param targetTableName
+   * @param sqlCreateTable
+   */
+  void createIndexDefinition(List<ColumnDescription> targetColumns, List<String> targetPrimaryKeys, String targetSchemaName, String targetTableName, List<String> sqlCreateTable);
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByDescriptionService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByDescriptionService.java
new file mode 100644
index 0000000..059cc2f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByDescriptionService.java
@@ -0,0 +1,133 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Data : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.service; + + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.DatabaseDescription; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableMeta; +import srt.cloud.framework.dbswitch.core.model.TableDescription; + +import java.util.List; + +/** + * 表结构迁移接口定义 + * + * @author jrl + */ +public interface IMetaDataByDescriptionService { + + /** + * 获取数据库的连接信息 + * + */ + DatabaseDescription getDatabaseConnection(); + + /** + * 获取数据库的schema模式列表 + * + * @return + */ + List querySchemaList(); + + /** + * 获取指定Schema下所有的表列表 + * + * @param schemaName 模式名称 + * @return + */ + List queryTableList(String schemaName); + + /** + * 获取物理表的DDL建表语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return + */ + String getTableDDL(String schemaName, String tableName); + + /** + * 获取物理表的DDL建表语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return + */ + String getViewDDL(String schemaName, String tableName); + + /** + * 获取指定schema.table的表结构字段信息 + * + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return + */ + List queryTableColumnMeta(String schemaName, String tableName); + + /** + * 获取指定SQL结构字段信息 + * + * @param querySql 查询的SQL语句 + * @return + */ + List querySqlColumnMeta(String querySql); + + /** + * 获取表的主键信息字段列表 + * + * @param schemaName + * @param tableName + * @return + */ + List queryTablePrimaryKeys(String schemaName, String tableName); + + /** + * 测试数据库SQL查询 + * + * @param sql 待查询的SQL语句 + */ + void testQuerySQL(String sql); + + /** + * 获取表的元数据 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return + */ + SchemaTableMeta queryTableMeta(String schemaName, String tableName); + + /** + * 获取表的数据内容 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param rowCount 记录总数 + * @return + */ + SchemaTableData queryTableData(String schemaName, String tableName, int rowCount); + + /** + * 根据字段结构信息组装对应数据库的建表DDL语句 + * + * @param type 目的数据库类型 + * @param fieldNames 字段结构信息 + * @param primaryKeys 主键字段信息 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param autoIncr 是否允许主键自增 + * @return 对应数据库的DDL建表语句 + */ + String getDDLCreateTableSQL(ProductTypeEnum type, List fieldNames, + List primaryKeys, String schemaName, String tableName, boolean autoIncr); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByJdbcService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByJdbcService.java new file mode 100644 index 0000000..3b655ef --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/IMetaDataByJdbcService.java @@ -0,0 +1,226 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Data : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.service; + +import net.srt.flink.common.result.SqlExplainResult; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.JdbcSelectResult; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableMeta; +import srt.cloud.framework.dbswitch.core.model.TableDescription; + +import java.util.List; +import java.util.Map; + +/** + * 元信息获取接口定义 + * + * @author jrl + */ +public interface IMetaDataByJdbcService { + + /** + * 获取数据库类型 + * + * @return + */ + ProductTypeEnum getDatabaseType(); + + /** + * 获取数据库的schema模式列表 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @return + */ + List querySchemaList(String jdbcUrl, String username, String password); + + /** + * 获取指定Schema下所有的表列表 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName 模式名称 + * @return + */ + List queryTableList(String jdbcUrl, String username, String password, + String schemaName); + + /** + * 获取物理表的DDL建表语句 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return + */ + String getTableDDL(String jdbcUrl, String username, String password, String schemaName, + String tableName); + + /** + * 获取视图表的DDL建表语句 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return + */ + String getViewDDL(String jdbcUrl, String username, String password, String schemaName, + String tableName); + + /** + * 获取指定schema.table的表结构字段信息 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return + */ + List queryTableColumnMeta(String jdbcUrl, String username, String password, + String schemaName, String tableName); + + /** + * 获取指定schema.table的表结构字段信息 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName 模式名称 + * @param tableName 表或视图名称 + * @return + */ + List queryTableColumnMetaOnly(String jdbcUrl, String username, String password, + String schemaName, String tableName); + + /** + * 获取指定SQL结构字段信息 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param querySql 查询的SQL语句 + * @return + */ + List querySqlColumnMeta(String jdbcUrl, String username, String password, + String querySql); + + /** + * 获取表的主键信息字段列表 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName Schema模式名称 + * @param tableName Table表名称 + * @return + */ + List queryTablePrimaryKeys(String jdbcUrl, String username, String password, + String schemaName, + String tableName); + + SchemaTableData queryTableDataBySql(String jdbcUrl, String username, String password, + String sql, int rowCount); + + JdbcSelectResult queryDataBySql(String jdbcUrl, String dbType, String username, String password, + String sql, Integer openTrans, int 
rowCount); + + JdbcSelectResult queryDataByApiSql(String jdbcUrl, String username, String password, + String sql, Integer openTrans, String sqlSeparator, Map sqlParam, int rowCount); + + List explain(String sql, String dbType); + + /** + * 测试数据库SQL查询 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param sql 待查询的SQL语句 + */ + void testQuerySQL(String jdbcUrl, String username, String password, String sql); + + /** + * 测试数据库SQL查询 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param sql 待查询的SQL语句 + */ + void executeSql(String jdbcUrl, String username, String password, String sql); + + /** + * 测试数据库SQL查询 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param tableName 表名 + */ + boolean tableExist(String jdbcUrl, String username, String password, String tableName); + + /** + * 获取(物理/视图)表的元数据 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName Schema模式名称 + * @param tableName Table表名称 + * @return + */ + SchemaTableMeta queryTableMeta(String jdbcUrl, String username, String password, + String schemaName, String tableName); + + /** + * 获取(物理/视图)表的数据内容 + * + * @param jdbcUrl 数据库连接的JDBC-URL + * @param username 数据库连接的帐号 + * @param password 数据库连接的密码 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param rowCount 记录总数 + * @return + */ + SchemaTableData queryTableData(String jdbcUrl, String username, String password, + String schemaName, String tableName, int rowCount); + + /** + * 根据字段结构信息组装对应数据库的建表DDL语句 + * + * @param type 目的数据库类型 + * @param fieldNames 字段结构信息 + * @param primaryKeys 主键字段信息 + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param autoIncr 是否允许主键自增 + * @return 对应数据库的DDL建表语句 + */ + String getDDLCreateTableSQL(ProductTypeEnum type, List fieldNames, + List primaryKeys, String schemaName, String tableName, boolean autoIncr); + + String getFlinkTableSql(List columns, String schemaName, String tableName, String tableRemarks, String flinkConfig); + + String getSqlSelect(List columnDescriptions, String schemaName, String tableName, String tableRemarks); + + String getCountMoreThanOneSql(String schemaName, String tableName, List columns); + + String getCountOneSql(String schemaName, String tableName, List columns); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByDataSourceServiceImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByDataSourceServiceImpl.java new file mode 100644 index 0000000..6256d16 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByDataSourceServiceImpl.java @@ -0,0 +1,228 @@ +// Copyright tang. All rights reserved. 
+// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.service.impl; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.DatabaseAwareUtils; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.DatabaseFactory; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableMeta; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByDatasourceService; +import srt.cloud.framework.dbswitch.core.util.GenerateSqlUtils; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.SQLException; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * 用DataSource对象的元数据获取服务 + * + * @author jrl + */ +public class MetaDataByDataSourceServiceImpl implements IMetaDataByDatasourceService { + + private DataSource dataSource; + + private AbstractDatabase database; + + private ProductTypeEnum type; + + public MetaDataByDataSourceServiceImpl(DataSource dataSource) { + this(dataSource, DatabaseAwareUtils.getDatabaseTypeByDataSource(dataSource)); + } + + public MetaDataByDataSourceServiceImpl(DataSource dataSource, ProductTypeEnum type) { + this.dataSource = dataSource; + this.database = DatabaseFactory.getDatabaseInstance(type); + this.type = type; + } + + @Override + public DataSource getDataSource() { + return this.dataSource; + } + + @Override + public List querySchemaList() { + try (Connection connection = dataSource.getConnection()) { + return database.querySchemaList(connection); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableList(String schemaName) { + try (Connection connection = dataSource.getConnection()) { + return database.queryTableList(connection, schemaName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getTableDDL(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + return database.getTableDDL(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getTableRemark(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + TableDescription td = database.queryTableMeta(connection, schemaName, tableName); + return null == td ? 
null : td.getRemarks(); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getViewDDL(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + return database.getViewDDL(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableColumnName(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + return database.queryTableColumnName(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableColumnMeta(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + List columnDescriptions = database.queryTableColumnMeta(connection, schemaName, tableName); + database.setColumnDefaultValue(connection, schemaName, tableName, columnDescriptions); + database.setColumnIndexInfo(connection, schemaName, tableName, columnDescriptions); + return columnDescriptions; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableColumnMetaOnly(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + return database.queryTableColumnMeta(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List querySqlColumnMeta(String querySql) { + try (Connection connection = dataSource.getConnection()) { + return database.querySelectSqlColumnMeta(connection, querySql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTablePrimaryKeys(String schemaName, String tableName) { + try (Connection connection = dataSource.getConnection()) { + return database.queryTablePrimaryKeys(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public SchemaTableMeta queryTableMeta(String schemaName, String tableName) { + SchemaTableMeta tableMeta = new SchemaTableMeta(); + + try (Connection connection = dataSource.getConnection()) { + TableDescription tableDesc = database.queryTableMeta(connection, schemaName, tableName); + if (null == tableDesc) { + throw new IllegalArgumentException("Table Or View Not Exist"); + } + + List columns = database.queryTableColumnMeta( + connection, schemaName, tableName); + + List pks; + String createSql; + if (tableDesc.isViewTable()) { + pks = Collections.emptyList(); + createSql = database.getViewDDL(connection, schemaName, tableName); + } else { + pks = database.queryTablePrimaryKeys(connection, schemaName, tableName); + createSql = database.getTableDDL(connection, schemaName, tableName); + } + + tableMeta.setSchemaName(schemaName); + tableMeta.setTableName(tableName); + tableMeta.setTableType(tableDesc.getTableType()); + tableMeta.setRemarks(tableDesc.getRemarks()); + tableMeta.setColumns(columns); + tableMeta.setPrimaryKeys(pks); + tableMeta.setCreateSql(createSql); + + return tableMeta; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public SchemaTableData queryTableData(String schemaName, String tableName, int rowCount) { + try (Connection connection = dataSource.getConnection()) { + return database.queryTableData(connection, schemaName, tableName, rowCount); + } catch (SQLException se) { + throw new RuntimeException(se); + 
} + } + + @Override + public void testQuerySQL(String sql) { + try (Connection connection = dataSource.getConnection()) { + database.testQuerySQL(connection, sql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List getDDLCreateTableSQL(ProductTypeEnum type, + List fieldNames, List primaryKeys, String schemaName, + String tableName, String tableRemarks, boolean autoIncr) { + return GenerateSqlUtils.getDDLCreateTableSQL( + type, fieldNames, primaryKeys, schemaName, tableName, tableRemarks, autoIncr); + } + + @Override + public void addNoExistColumnsByTarget(String targetSchemaName, String targetTableName, List targetColumnDescriptions) { + try (Connection connection = dataSource.getConnection()) { + database.addNoExistColumnsByTarget(connection, targetSchemaName, targetTableName, queryTableColumnMetaOnly(targetSchemaName, targetTableName).stream().map(ColumnDescription::getFieldName).collect(Collectors.toList()), targetColumnDescriptions); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public void createIndexDefinition(List targetColumns, List targetPrimaryKeys, String targetSchemaName, String targetTableName, List sqlCreateTable) { + database.createIndexDefinition(targetColumns, targetPrimaryKeys, targetSchemaName, targetTableName, sqlCreateTable); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByDescriptionServiceImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByDescriptionServiceImpl.java new file mode 100644 index 0000000..63dcf08 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByDescriptionServiceImpl.java @@ -0,0 +1,217 @@ +// Copyright tang. All rights reserved. 
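
The DataSource-backed implementation above follows one pattern throughout: borrow a pooled connection in try-with-resources, delegate to the `AbstractDatabase` strategy, and rethrow `SQLException` as an unchecked `RuntimeException`. A usage sketch with HikariCP, which this module already depends on for `MigrationHandler`; the pool settings are placeholders:

```java
import com.zaxxer.hikari.HikariDataSource;
import srt.cloud.framework.dbswitch.core.service.IMetaDataByDatasourceService;
import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByDataSourceServiceImpl;

public class DataSourceMetaExample {
	public static void main(String[] args) {
		// Placeholder pool settings; any javax.sql.DataSource works here.
		HikariDataSource ds = new HikariDataSource();
		ds.setJdbcUrl("jdbc:mysql://localhost:3306/demo?useSSL=false");
		ds.setUsername("root");
		ds.setPassword("root");
		// The database type is auto-detected from the pooled connection when omitted.
		IMetaDataByDatasourceService service = new MetaDataByDataSourceServiceImpl(ds);
		service.querySchemaList().forEach(System.out::println);
		ds.close();
	}
}
```
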
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Data : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.service.impl; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.DatabaseFactory; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.DatabaseDescription; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableMeta; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByDescriptionService; +import srt.cloud.framework.dbswitch.core.util.ConnectionUtils; +import srt.cloud.framework.dbswitch.core.util.GenerateSqlUtils; +import srt.cloud.framework.dbswitch.core.util.JdbcUrlUtils; + +import java.sql.Connection; +import java.sql.SQLException; +import java.util.Collections; +import java.util.List; + +/** + * 用IP:PORT等参数配置的元数据获取服务 + * + * @author jrl + */ +public class MetaDataByDescriptionServiceImpl implements IMetaDataByDescriptionService { + + private static int connectTimeOut = 6; + protected AbstractDatabase database = null; + protected DatabaseDescription databaseDesc = null; + + public MetaDataByDescriptionServiceImpl(DatabaseDescription databaseDesc) { + this.databaseDesc = databaseDesc; + this.database = DatabaseFactory.getDatabaseInstance(databaseDesc.getType()); + } + + @Override + public DatabaseDescription getDatabaseConnection() { + return this.databaseDesc; + } + + @Override + public List querySchemaList() { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.querySchemaList(connection); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableList(String schemaName) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryTableList(connection, schemaName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getTableDDL(String schemaName, String tableName) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.getTableDDL(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getViewDDL(String schemaName, String tableName) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = 
this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.getViewDDL(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableColumnMeta(String schemaName, String tableName) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + List columnDescriptions = database.queryTableColumnMeta(connection, schemaName, tableName); + database.setColumnDefaultValue(connection, schemaName, tableName, columnDescriptions); + database.setColumnIndexInfo(connection, schemaName, tableName, columnDescriptions); + return columnDescriptions; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List querySqlColumnMeta(String querySql) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.querySelectSqlColumnMeta(connection, querySql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTablePrimaryKeys(String schemaName, String tableName) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryTablePrimaryKeys(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public SchemaTableMeta queryTableMeta(String schemaName, String tableName) { + SchemaTableMeta tableMeta = new SchemaTableMeta(); + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + TableDescription tableDesc = this.database.queryTableMeta(connection, schemaName, tableName); + if (null == tableDesc) { + throw new IllegalArgumentException("Table Or View Not Exist"); + } + List columns = this.queryTableColumnMeta(schemaName, tableName); + + List pks; + String createSql; + if (tableDesc.isViewTable()) { + pks = Collections.emptyList(); + createSql = this.database.getViewDDL(connection, schemaName, tableName); + } else { + pks = this.database.queryTablePrimaryKeys(connection, schemaName, tableName); + createSql = this.database.getTableDDL(connection, schemaName, tableName); + } + + tableMeta.setSchemaName(schemaName); + tableMeta.setTableName(tableName); + tableMeta.setTableType(tableDesc.getTableType()); + tableMeta.setRemarks(tableDesc.getRemarks()); + tableMeta.setColumns(columns); + tableMeta.setPrimaryKeys(pks); + tableMeta.setCreateSql(createSql); + + return tableMeta; + } catch (SQLException se) { + throw new 
RuntimeException(se); + } + } + + @Override + public SchemaTableData queryTableData(String schemaName, String tableName, int rowCount) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return this.database.queryTableData(connection, schemaName, tableName, rowCount); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public void testQuerySQL(String sql) { + String jdbcUrl = JdbcUrlUtils.getJdbcUrl( + this.databaseDesc, MetaDataByDescriptionServiceImpl.connectTimeOut); + String username = this.databaseDesc.getUsername(); + String password = this.databaseDesc.getPassword(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + database.testQuerySQL(connection, sql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getDDLCreateTableSQL(ProductTypeEnum type, List fieldNames, + List primaryKeys, String schemaName, String tableName, boolean autoIncr) { + return GenerateSqlUtils.getDDLCreateTableSQL( + type, fieldNames, primaryKeys, schemaName, tableName, autoIncr); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByJdbcServiceImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByJdbcServiceImpl.java new file mode 100644 index 0000000..bc3284d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/service/impl/MetaDataByJdbcServiceImpl.java @@ -0,0 +1,373 @@ +// Copyright tang. All rights reserved. 
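
The description-based implementation above rebuilds the JDBC URL from the `DatabaseDescription` via `JdbcUrlUtils` on every call, with a hard-coded 6-second connect timeout (`connectTimeOut = 6`). A sketch of how it might be fed; the setters on `DatabaseDescription` are an assumption here, since the model class is referenced but not shown in this commit:

```java
import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
import srt.cloud.framework.dbswitch.core.model.DatabaseDescription;
import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByDescriptionServiceImpl;

public class DescriptionMetaExample {
	public static void main(String[] args) {
		// Assumed bean-style setters; values are placeholders.
		DatabaseDescription desc = new DatabaseDescription();
		desc.setType(ProductTypeEnum.POSTGRESQL);
		desc.setHost("localhost");
		desc.setPort(5432);
		desc.setDbname("demo");
		desc.setUsername("postgres");
		desc.setPassword("postgres");
		MetaDataByDescriptionServiceImpl service = new MetaDataByDescriptionServiceImpl(desc);
		// Each call re-derives the JDBC URL with the 6-second connect timeout.
		service.queryTableList("public").forEach(t -> System.out.println(t));
	}
}
```
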
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Data : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.service.impl; + +import com.alibaba.druid.sql.SQLUtils; +import com.alibaba.druid.sql.ast.SQLStatement; +import net.srt.flink.common.result.SqlExplainResult; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.DatabaseFactory; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.FlinkColumnType; +import srt.cloud.framework.dbswitch.core.model.JdbcSelectResult; +import srt.cloud.framework.dbswitch.core.model.SchemaTableData; +import srt.cloud.framework.dbswitch.core.model.SchemaTableMeta; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByJdbcService; +import srt.cloud.framework.dbswitch.core.util.ConnectionUtils; +import srt.cloud.framework.dbswitch.core.util.GenerateSqlUtils; +import srt.cloud.framework.dbswitch.core.util.SqlUtil; + +import java.sql.Connection; +import java.sql.SQLException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * 使用JDBC连接串的元数据获取服务 + * + * @author jrl + */ +public class MetaDataByJdbcServiceImpl implements IMetaDataByJdbcService { + + protected ProductTypeEnum dbType; + protected AbstractDatabase database; + + public MetaDataByJdbcServiceImpl(ProductTypeEnum type) { + this.dbType = type; + this.database = DatabaseFactory.getDatabaseInstance(type); + } + + @Override + public ProductTypeEnum getDatabaseType() { + return this.dbType; + } + + @Override + public List querySchemaList(String jdbcUrl, String username, String password) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.querySchemaList(connection); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableList(String jdbcUrl, String username, String password, + String schemaName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryTableList(connection, schemaName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getTableDDL(String jdbcUrl, String username, String password, String schemaName, + String tableName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.getTableDDL(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public String getViewDDL(String jdbcUrl, String username, String password, String schemaName, + String tableName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.getViewDDL(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableColumnMeta(String jdbcUrl, 
String username, + String password, String schemaName, String tableName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + List columnDescriptions = database.queryTableColumnMeta(connection, schemaName, tableName); + database.setColumnDefaultValue(connection, schemaName, tableName, columnDescriptions); + database.setColumnIndexInfo(connection, schemaName, tableName, columnDescriptions); + return columnDescriptions; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTableColumnMetaOnly(String jdbcUrl, String username, String password, String schemaName, String tableName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryTableColumnMeta(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List querySqlColumnMeta(String jdbcUrl, String username, + String password, String querySql) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.querySelectSqlColumnMeta(connection, querySql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List queryTablePrimaryKeys(String jdbcUrl, String username, String password, + String schemaName, String tableName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryTablePrimaryKeys(connection, schemaName, tableName); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public SchemaTableMeta queryTableMeta(String jdbcUrl, String username, String password, + String schemaName, String tableName) { + SchemaTableMeta tableMeta = new SchemaTableMeta(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + TableDescription tableDesc = database.queryTableMeta(connection, schemaName, tableName); + if (null == tableDesc) { + throw new IllegalArgumentException("Table Or View Not Exist"); + } + + List columns = database + .queryTableColumnMeta(connection, schemaName, tableName); + + List pks; + String createSql; + if (tableDesc.isViewTable()) { + pks = Collections.emptyList(); + createSql = database.getViewDDL(connection, schemaName, tableName); + } else { + pks = database.queryTablePrimaryKeys(connection, schemaName, tableName); + createSql = database.getTableDDL(connection, schemaName, tableName); + } + + tableMeta.setSchemaName(schemaName); + tableMeta.setTableName(tableName); + tableMeta.setTableType(tableDesc.getTableType()); + tableMeta.setRemarks(tableDesc.getRemarks()); + tableMeta.setColumns(columns); + tableMeta.setPrimaryKeys(pks); + tableMeta.setCreateSql(createSql); + + return tableMeta; + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public SchemaTableData queryTableData(String jdbcUrl, String username, String password, + String schemaName, String tableName, int rowCount) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryTableData(connection, schemaName, tableName, rowCount); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public SchemaTableData queryTableDataBySql(String jdbcUrl, String username, String password, + String sql, int rowCount) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return 
database.queryTableDataBySql(connection, sql, rowCount); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public JdbcSelectResult queryDataBySql(String jdbcUrl, String dbType, String username, String password, String sql, Integer openTrans, int rowCount) { + ProcessEntity process = ProcessContextHolder.getProcess(); + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryDataBySql(connection, dbType, sql, openTrans, rowCount); + } catch (SQLException se) { + process.error(LogUtil.getError(se)); + process.infoEnd(); + throw new RuntimeException(se); + } + } + + @Override + public JdbcSelectResult queryDataByApiSql(String jdbcUrl, String username, String password, String sql, Integer openTrans, String sqlSeparator, Map sqlParam, int rowCount) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + return database.queryDataByApiSql(connection, sql, openTrans, sqlSeparator, sqlParam, rowCount); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public List explain(String sql, String dbType) { + ProcessEntity process = ProcessContextHolder.getProcess(); + List sqlExplainResults = new ArrayList<>(); + String current = null; + process.info("Start check sql..."); + try { + List stmtList = SQLUtils.parseStatements(sql, dbType.toLowerCase()); + for (SQLStatement item : stmtList) { + current = item.toString(); + String type = item.getClass().getSimpleName(); + sqlExplainResults.add(SqlExplainResult.success(type, current, null)); + } + process.info("Sql is correct."); + + } catch (Exception e) { + sqlExplainResults.add(SqlExplainResult.fail(current, LogUtil.getError(e))); + process.error(LogUtil.getError(e)); + } + return sqlExplainResults; + } + + @Override + public void testQuerySQL(String jdbcUrl, String username, String password, String sql) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + database.testQuerySQL(connection, sql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public void executeSql(String jdbcUrl, String username, String password, String sql) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + database.executeSql(connection, sql); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + @Override + public boolean tableExist(String jdbcUrl, String username, String password, String tableName) { + try (Connection connection = ConnectionUtils.connect(jdbcUrl, username, password)) { + database.executeSql(connection, String.format("SELECT 1 FROM %s WHERE 1=0", tableName)); + return true; + } catch (Exception e) { + return false; + } + } + + @Override + public String getDDLCreateTableSQL(ProductTypeEnum type, List fieldNames, + List primaryKeys, String schemaName, String tableName, boolean autoIncr) { + return GenerateSqlUtils.getDDLCreateTableSQL( + type, fieldNames, primaryKeys, schemaName, tableName, autoIncr); + } + + @Override + public String getFlinkTableSql(List columns, String schemaName, String tableName, String tableRemarks, String flinkConfig) { + StringBuilder sb = new StringBuilder("DROP TABLE IF EXISTS "); + sb.append(tableName).append(";\n"); + sb.append("CREATE TABLE IF NOT EXISTS ").append(tableName).append(" (\n"); + List pks = new ArrayList<>(); + for (int i = 0; i < columns.size(); i++) { + String type = 
FlinkColumnType.getByJavaType(columns.get(i).getFiledTypeClassName()).getFlinkType(); + sb.append(" "); + if (i > 0) { + sb.append(","); + } + sb.append(columns.get(i).getFieldName()).append(" ").append(type); + if (StringUtil.isNotBlank(columns.get(i).getRemarks())) { + if (columns.get(i).getRemarks().contains("\'") | columns.get(i).getRemarks().contains("\"")) { + sb.append(" COMMENT '").append(columns.get(i).getRemarks().replaceAll("[\"']", "")).append("'"); + } else { + sb.append(" COMMENT '").append(columns.get(i).getRemarks()).append("'"); + } + } + sb.append("\n"); + if (columns.get(i).isPk()) { + pks.add(columns.get(i).getFieldName()); + } + } + StringBuilder pksb = new StringBuilder("PRIMARY KEY ( "); + for (int i = 0; i < pks.size(); i++) { + if (i > 0) { + pksb.append(","); + } + pksb.append(pks.get(i)); + } + pksb.append(" ) NOT ENFORCED\n"); + if (pks.size() > 0) { + sb.append(" ,"); + sb.append(pksb); + } + sb.append(")"); + if (StringUtil.isNotBlank(tableRemarks)) { + if (tableRemarks.contains("\'") | tableRemarks.contains("\"")) { + sb.append(" COMMENT '").append(tableRemarks.replaceAll("[\"']", "")).append("'\n"); + } else { + sb.append(" COMMENT '").append(tableRemarks).append("'\n"); + } + } + sb.append(" WITH (\n"); + sb.append(getFlinkTableWith(flinkConfig, schemaName, tableName)); + sb.append("\n);\n"); + return sb.toString(); + } + + @Override + public String getSqlSelect(List columns, String schemaName, String tableName, String tableRemarks) { + StringBuilder sb = new StringBuilder("SELECT\n"); + for (int i = 0; i < columns.size(); i++) { + sb.append(" "); + if (i > 0) { + sb.append(","); + } + String columnComment = columns.get(i).getRemarks(); + if (StringUtil.isNotBlank(columnComment)) { + if (columnComment.contains("\'") | columnComment.contains("\"")) { + columnComment = columnComment.replaceAll("[\"']", ""); + } + sb.append(columns.get(i).getFieldName()).append(" -- ").append(columnComment).append(" \n"); + } else { + sb.append(columns.get(i).getFieldName()).append(" \n"); + + } + } + if (StringUtil.isNotBlank(tableRemarks)) { + sb.append(" FROM ").append(schemaName).append(".").append(tableName).append(";").append(" -- ").append(tableRemarks).append("\n"); + } else { + sb.append(" FROM ").append(schemaName).append(".").append(tableName).append(";\n"); + } + return sb.toString(); + } + + + @Override + public String getCountMoreThanOneSql(String schemaName, String tableName, List columns) { + return database.getCountMoreThanOneSql(schemaName, tableName, columns); + } + + @Override + public String getCountOneSql(String schemaName, String tableName, List columns) { + return database.getCountOneSql(schemaName, tableName, columns); + } + + private String getFlinkTableWith(String flinkConfig, String schemaName, String tableName) { + String tableWithSql = ""; + if (StringUtil.isNotBlank(flinkConfig)) { + tableWithSql = SqlUtil.replaceAllParam(flinkConfig, "schemaName", schemaName); + tableWithSql = SqlUtil.replaceAllParam(tableWithSql, "tableName", tableName); + } + return tableWithSql; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/ConnectionUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/ConnectionUtils.java new file mode 100644 index 0000000..9297924 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/ConnectionUtils.java @@ -0,0 +1,55 @@ +// Copyright tang. All rights reserved. 
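
`getFlinkTableSql` in the class that just closed turns relational column metadata into a Flink `CREATE TABLE` statement, strips quote characters out of comments, and appends the primary key as `NOT ENFORCED`. A sketch of the round trip; the connector options in the `WITH` clause and the expected output in the comments are illustrative only:

```java
import java.util.List;

import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
import srt.cloud.framework.dbswitch.core.model.ColumnDescription;
import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByJdbcServiceImpl;

public class FlinkDdlExample {
	public static void main(String[] args) {
		MetaDataByJdbcServiceImpl service = new MetaDataByJdbcServiceImpl(ProductTypeEnum.MYSQL);
		String url = "jdbc:mysql://localhost:3306/demo?useSSL=false"; // placeholder
		// Reuse the column metadata the service itself reports for the source table.
		List<ColumnDescription> cols =
				service.queryTableColumnMeta(url, "root", "root", "demo", "t_user");
		// ${schemaName} and ${tableName} are filled in by getFlinkTableWith.
		String withClause = "'connector' = 'jdbc',\n'table-name' = '${schemaName}.${tableName}'";
		String ddl = service.getFlinkTableSql(cols, "demo", "t_user", "用户表", withClause);
		System.out.println(ddl);
		// Expected shape (abridged):
		//   DROP TABLE IF EXISTS t_user;
		//   CREATE TABLE IF NOT EXISTS t_user (
		//     id INT
		//     ,name STRING
		//     ,PRIMARY KEY ( id ) NOT ENFORCED
		//   ) COMMENT '用户表'
		//   WITH (
		//     'connector' = 'jdbc',
		//     'table-name' = 'demo.t_user'
		//   );
	}
}
```
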
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.util; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.SQLException; +import java.util.Properties; + +public final class ConnectionUtils { + + private static final int DEFAULT_LOGIN_TIMEOUT_SECONDS = 30; + + /** + * 建立与数据库的连接 + * + * @param jdbcUrl JDBC连接串 + * @param username 用户名 + * @param password 密码 + * @return java.sql.Connection + */ + public static Connection connect(String jdbcUrl, String username, String password) { + /* + * 超时时间设置问题: https://blog.csdn.net/lsunwing/article/details/79461217 + * https://blog.csdn.net/weixin_34405332/article/details/91664781 + */ + try { + Properties props = new Properties(); + props.put("user", username); + props.put("password", password); + + /** + * Oracle在通过jdbc连接的时候需要添加一个参数来设置是否获取注释 + */ + if (jdbcUrl.trim().startsWith("jdbc:oracle:thin:@")) { + props.put("remarksReporting", "true"); + } + + // 设置最大时间 + DriverManager.setLoginTimeout(DEFAULT_LOGIN_TIMEOUT_SECONDS); + + return DriverManager.getConnection(jdbcUrl, props); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/DDLFormatterUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/DDLFormatterUtils.java new file mode 100644 index 0000000..8155a0e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/DDLFormatterUtils.java @@ -0,0 +1,129 @@ +// Copyright tang. All rights reserved. 
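
`ConnectionUtils.connect` (above) enables `remarksReporting` for Oracle thin URLs and applies a 30-second login timeout. Note that `DriverManager.setLoginTimeout` is JVM-global, so the limit is shared by every concurrent caller rather than scoped to this one connection. Usage is a one-liner; the URL and credentials below are placeholders:

```java
import java.sql.Connection;
import java.sql.SQLException;

import srt.cloud.framework.dbswitch.core.util.ConnectionUtils;

public class ConnectExample {
	public static void main(String[] args) throws SQLException {
		// SQLException here is only for Connection.close(); connect() itself
		// wraps failures in a RuntimeException.
		try (Connection conn = ConnectionUtils.connect(
				"jdbc:postgresql://localhost:5432/demo", "postgres", "postgres")) {
			System.out.println(conn.getMetaData().getDatabaseProductName());
		}
	}
}
```
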
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.util; + +import java.util.Locale; +import java.util.StringTokenizer; + +/** + * DDL的SQL语句格式化(摘自hibernate) + * + * @author jrl + */ +public class DDLFormatterUtils { + + public static String format(String sql) { + if (null == sql || 0 == sql.length()) { + return sql; + } + if (sql.toLowerCase(Locale.ROOT).startsWith("create table")) { + return formatCreateTable(sql); + } else if (sql.toLowerCase(Locale.ROOT).startsWith("alter table")) { + return formatAlterTable(sql); + } else if (sql.toLowerCase(Locale.ROOT).startsWith("comment on")) { + return formatCommentOn(sql); + } else { + return "\n " + sql; + } + } + + private static String formatCommentOn(String sql) { + final StringBuilder result = new StringBuilder(60).append(" "); + final StringTokenizer tokens = new StringTokenizer(sql, " '[]\"", true); + + boolean quoted = false; + while (tokens.hasMoreTokens()) { + final String token = tokens.nextToken(); + result.append(token); + if (isQuote(token)) { + quoted = !quoted; + } else if (!quoted) { + if ("is".equals(token)) { + result.append("\n "); + } + } + } + + return result.toString(); + } + + private static String formatAlterTable(String sql) { + final StringBuilder result = new StringBuilder(60).append(" "); + final StringTokenizer tokens = new StringTokenizer(sql, " (,)'[]\"", true); + + boolean quoted = false; + while (tokens.hasMoreTokens()) { + final String token = tokens.nextToken(); + if (isQuote(token)) { + quoted = !quoted; + } else if (!quoted) { + if (isBreak(token)) { + result.append("\n "); + } + } + result.append(token); + } + + return result.toString(); + } + + private static String formatCreateTable(String sql) { + final StringBuilder result = new StringBuilder(60).append(" "); + final StringTokenizer tokens = new StringTokenizer(sql, "(,)'[]\"", true); + + int depth = 0; + boolean quoted = false; + while (tokens.hasMoreTokens()) { + final String token = tokens.nextToken(); + if (isQuote(token)) { + quoted = !quoted; + result.append(token); + } else if (quoted) { + result.append(token); + } else { + if (")".equals(token)) { + depth--; + if (depth == 0) { + result.append("\n "); + } + } + result.append(token); + if (",".equals(token) && depth == 1) { + result.append("\n "); + } + if ("(".equals(token)) { + depth++; + if (depth == 1) { + result.append("\n "); + } + } + } + } + + return result.toString(); + } + + private static boolean isBreak(String token) { + return "drop".equals(token) || + "add".equals(token) || + "references".equals(token) || + "foreign".equals(token) || + "on".equals(token); + } + + private static boolean isQuote(String tok) { + return "\"".equals(tok) || + "`".equals(tok) || + "]".equals(tok) || + "[".equals(tok) || + "'".equals(tok); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/GenerateSqlUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/GenerateSqlUtils.java new file mode 100644 index 0000000..462031a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/GenerateSqlUtils.java @@ -0,0 +1,138 @@ +// Copyright tang. All rights reserved. 
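
`DDLFormatterUtils.format` (above) dispatches on the statement prefix and inserts line breaks at top-level commas and parentheses while leaving quoted text untouched. A small before/after sketch; the exact whitespace in the comment is approximate:

```java
import srt.cloud.framework.dbswitch.core.util.DDLFormatterUtils;

public class DdlFormatExample {
	public static void main(String[] args) {
		String raw = "create table demo.t_user (id int not null, name varchar(50), primary key (id))";
		// Top-level commas and parentheses get their own lines; nested parens
		// such as varchar(50) and primary key (id) stay inline.
		System.out.println(DDLFormatterUtils.format(raw));
		// Roughly:
		//   create table demo.t_user (
		//      id int not null,
		//      name varchar(50),
		//      primary key (id)
		//   )
	}
}
```
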
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.util; + +import srt.cloud.framework.dbswitch.common.constant.Const; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.DatabaseFactory; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.ColumnMetaData; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import org.apache.commons.lang3.StringUtils; + +import java.util.ArrayList; +import java.util.List; +import java.util.stream.Collectors; + +/** + * 拼接SQL工具类 + * + * @author jrl + */ +public final class GenerateSqlUtils { + + public static String getDDLCreateTableSQL( + ProductTypeEnum type, + List fieldNames, + List primaryKeys, + String schemaName, + String tableName, + boolean autoIncr) { + AbstractDatabase db = DatabaseFactory.getDatabaseInstance(type); + return getDDLCreateTableSQL( + db, + fieldNames, + primaryKeys, + schemaName, + tableName, + false, + null, + autoIncr); + } + + public static String getDDLCreateTableSQL( + AbstractDatabase db, + List fieldNames, + List primaryKeys, + String schemaName, + String tableName, + boolean withRemarks, + String tableRemarks, + boolean autoIncr) { + ProductTypeEnum type = db.getDatabaseType(); + StringBuilder sb = new StringBuilder(); + List pks = fieldNames.stream() + .filter((cd) -> primaryKeys.contains(cd.getFieldName())) + .map(ColumnDescription::getFieldName) + .collect(Collectors.toList()); + + sb.append(Const.CREATE_TABLE); + // if(ifNotExist && type!=DatabaseType.ORACLE) { + // sb.append( Const.IF_NOT_EXISTS ); + // } + sb.append(db.getQuotedSchemaTableCombination(schemaName, tableName)); + sb.append("("); + + for (int i = 0; i < fieldNames.size(); i++) { + if (i > 0) { + sb.append(", "); + } else { + sb.append(" "); + } + + ColumnMetaData v = fieldNames.get(i).getMetaData(); + sb.append(db.getFieldDefinition(v, pks, autoIncr, false, withRemarks)); + } + + if (!pks.isEmpty() && !ProductTypeEnum.DORIS.equals(type)) { + String pk = db.getPrimaryKeyAsString(pks); + sb.append(", PRIMARY KEY (").append(pk).append(")"); + } + + sb.append(")"); + if (ProductTypeEnum.MYSQL.equals(type)) { + sb.append("ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin"); + if (withRemarks && StringUtils.isNotBlank(tableRemarks)) { + sb.append(String.format(" COMMENT='%s' ", tableRemarks.replace("'", "\\'"))); + } + } else if (ProductTypeEnum.DORIS.equals(type)) { + if (!pks.isEmpty()) { + String pk = db.getPrimaryKeyAsString(pks); + sb.append("unique key(").append(pk).append(")").append(Const.CR); + } + if (withRemarks && StringUtils.isNotBlank(tableRemarks)) { + sb.append(String.format(" COMMENT '%s' ", tableRemarks.replace("'", "\\'"))); + sb.append(Const.CR); + } + sb.append(String.format("DISTRIBUTED BY HASH(%s) BUCKETS 10", !pks.isEmpty() ? 
pks.get(0) : fieldNames.get(0).getFieldName())).append(Const.CR).append("PROPERTIES(\"replication_num\" = \"1\");"); + } + + return DDLFormatterUtils.format(sb.toString()); + } + + public static List getDDLCreateTableSQL( + ProductTypeEnum type, + List fieldNames, + List primaryKeys, + String schemaName, + String tableName, + String tableRemarks, + boolean autoIncr) { + AbstractDatabase db = DatabaseFactory.getDatabaseInstance(type); + List results = new ArrayList<>(2); + String createTableSql = getDDLCreateTableSQL(db, fieldNames, primaryKeys, schemaName, + tableName, true, tableRemarks, autoIncr); + results.add(createTableSql); + if (type.noCommentStatement()) { + return results; + } + + TableDescription td = new TableDescription(); + td.setSchemaName(schemaName); + td.setTableName(tableName); + td.setRemarks(tableRemarks); + td.setTableType("TABLE"); + results = db.getTableColumnCommentDefinition(td, fieldNames); + results.add(0, createTableSql); + return results; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/JdbcUrlUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/JdbcUrlUtils.java new file mode 100644 index 0000000..ca7532c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/JdbcUrlUtils.java @@ -0,0 +1,206 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.core.util; + +import srt.cloud.framework.dbswitch.core.model.DatabaseDescription; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +/** + * JDBC的URL相关工具类 + * + * @author jrl + */ +public class JdbcUrlUtils { + + private static Pattern patternWithParam = Pattern.compile( + "(?^.+):(?.+)://(?.+):(?.+)/(?.+)\\?(?.+)"); + + private static Pattern patternSimple = Pattern.compile( + "(?^.+):(?.+)://(?.+):(?.+)/(?.+)"); + + /** + * Oracle数据库Jdbc连接模式 + * + * @author tang + */ + protected enum OracleJdbcConnectionMode { + /** + * SID + */ + SID(1), + + /** + * SerivceName + */ + SERVICENAME(2), + + /** + * TNSName + */ + TNSNAME(3); + + private int index; + + OracleJdbcConnectionMode(int idx) { + this.index = idx; + } + + public int getIndex() { + return index; + } + } + + /** + * 根据数据库种类拼接jdbc的url信息 参考地址:https://www.cnblogs.com/chenglc/p/8421573.html + *

+ * <p>说明:
+ * <p>(1)SQLServer数据库驱动问题
+ * 在 SQL Server 2000 中加载驱动和URL路径的语句是
+ * String driverName = "com.microsoft.jdbc.sqlserver.SQLServerDriver";
+ * String dbURL = "jdbc:microsoft:sqlserver://localhost:1433;DatabaseName=sample";
+ * 而 SQL Server 2005 和 SQL Server 2008 中加载驱动和URL的语句则为
+ * String driverName = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
+ * String dbURL = "jdbc:sqlserver://localhost:1433;DatabaseName=sample";
+ * <p>(2)Oracle数据库驱动连接问题
+ * JDBC的URL三种方式:https://blog.csdn.net/gnail_oug/article/details/80075263 + * + * @param db 数据库连接描述信息 + * @param connectTimeout 连接超时时间(单位:秒) + * @return 对应数据库的JDBC的URL字符串 + */ + public static String getJdbcUrl(DatabaseDescription db, int connectTimeout) { + switch (db.getType()) { + case MYSQL: + String charset = db.getCharset(); + if (Objects.isNull(charset) || charset.isEmpty()) { + charset = "utf-8"; + } + return String.format( + "jdbc:mysql://%s:%d/%s?useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=%s&nullCatalogMeansCurrent=true&connectTimeout=%d", + db.getHost(), db.getPort(), db.getDbname(), charset, connectTimeout * 1000); + case ORACLE: + OracleJdbcConnectionMode type; + String mode = db.getMode(); + if (Objects.isNull(mode)) { + type = OracleJdbcConnectionMode.SID; + } else { + type = OracleJdbcConnectionMode.valueOf(mode.trim().toUpperCase()); + } + + // 处理Oracle的oracle.sql.TIMESTAMP类型到java.sql.Timestamp的兼容问题 + // 参考地址:https://blog.csdn.net/alanwei04/article/details/77507807 + System.setProperty("oracle.jdbc.J2EE13Compliant", "true"); + // Oracle设置连接超时时间 + System.setProperty("oracle.net.CONNECT_TIMEOUT", Integer.toString(1000 * connectTimeout)); + if (OracleJdbcConnectionMode.SID == type) { + return String.format("jdbc:oracle:thin:@%s:%d:%s", + db.getHost(), db.getPort(), db.getDbname()); + } else if (OracleJdbcConnectionMode.SERVICENAME == type) { + return String.format("jdbc:oracle:thin:@//%s:%d/%s", + db.getHost(), db.getPort(), db.getDbname()); + } else if (OracleJdbcConnectionMode.TNSNAME == type) { + // + // return String.format( + // "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%d))) + // " + // + "(CONNECT_DATA=(SERVICE_NAME=%s)))", + /// db.getHost(), db.getPort(), db.getDbname()); + // + // (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.17.20.58)(PORT=1521)))(CONNECT_DATA=(SID=orcl))) + // + // or + // + // (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.17.20.58)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl.test.com.cn))) + return String.format("jdbc:oracle:thin:@%s", db.getDbname()); + } else { + return String.format("jdbc:oracle:thin:@%s:%d:%s", + db.getHost(), db.getPort(), db.getDbname()); + } + case SQLSERVER2000: + return String.format("jdbc:microsoft:sqlserver://%s:%d;DatabaseName=%s", + db.getHost(), db.getPort(), + db.getDbname()); + case SQLSERVER: + return String.format("jdbc:sqlserver://%s:%d;DatabaseName=%s", + db.getHost(), db.getPort(), db.getDbname()); + case POSTGRESQL: + return String.format("jdbc:postgresql://%s:%d/%s?connectTimeout=%d", + db.getHost(), db.getPort(), + db.getDbname(), connectTimeout); + case GREENPLUM: + return String.format("jdbc:pivotal:greenplum://%s:%d;DatabaseName=%s", + db.getHost(), db.getPort(), + db.getDbname()); + case MARIADB: + String charsets = db.getCharset(); + if (Objects.isNull(charsets) || charsets.isEmpty()) { + charsets = "utf-8"; + } + return String.format( + "jdbc:mariadb://%s:%d/%s?useSSL=false&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=%s&nullCatalogMeansCurrent=true&connectTimeout=%d", + db.getHost(), db.getPort(), db.getDbname(), charsets, connectTimeout * 1000); + case DB2: + System.setProperty("db2.jcc.charsetdecoderencoder", "3"); + return String.format("jdbc:db2://%s:%d/%s", db.getHost(), db.getPort(), db.getDbname()); + default: + throw new RuntimeException( + String.format("Unknown database type (%s)", db.getType().name())); + } + } + + /** + * 从MySQL数据库的JDBC的URL中提取数据库连接相关参数 + * + * @param 
jdbcUrl JDBC连接的URL字符串 + * @return Map 参数列表 + */ + public static Map findParamsByMySqlJdbcUrl(String jdbcUrl) { + Pattern pattern = null; + if (jdbcUrl.indexOf('?') > 0) { + pattern = patternWithParam; + } else { + pattern = patternSimple; + } + + Matcher m = pattern.matcher(jdbcUrl); + if (m.find()) { + Map ret = new HashMap(); + ret.put("protocol", m.group("protocol")); + ret.put("dbtype", m.group("dbtype")); + ret.put("addresss", m.group("addresss")); + ret.put("port", m.group("port")); + ret.put("schema", m.group("schema")); + + if (m.groupCount() > 5) { + ret.put("path", m.group("path")); + } + + return ret; + } else { + return null; + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/PostgresUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/PostgresUtils.java new file mode 100644 index 0000000..7179c08 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/PostgresUtils.java @@ -0,0 +1,24 @@ +package srt.cloud.framework.dbswitch.core.util; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.core.database.AbstractDatabase; +import srt.cloud.framework.dbswitch.core.database.DatabaseFactory; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; + +import java.sql.Connection; +import java.util.List; + +public final class PostgresUtils { + + public static String getTableDDL(Connection connection, String schema, String table) { + AbstractDatabase db = DatabaseFactory.getDatabaseInstance(ProductTypeEnum.POSTGRESQL); + List columnDescriptions = db.queryTableColumnMeta(connection, schema, table); + List pks = db.queryTablePrimaryKeys(connection, schema, table); + return GenerateSqlUtils.getDDLCreateTableSQL( + db.getDatabaseType(), columnDescriptions, pks, schema, table, false); + } + + private PostgresUtils() { + throw new IllegalStateException(); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/SqlEngineUtil.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/SqlEngineUtil.java new file mode 100644 index 0000000..3f8be2d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/SqlEngineUtil.java @@ -0,0 +1,12 @@ +package srt.cloud.framework.dbswitch.core.util; + +import com.github.freakchick.orange.engine.DynamicSqlEngine; + +public class SqlEngineUtil { + + public static final DynamicSqlEngine ENGINE = new DynamicSqlEngine(); + + public static DynamicSqlEngine getEngine() { + return ENGINE; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/SqlUtil.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/SqlUtil.java new file mode 100644 index 0000000..47b7cdc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/core/util/SqlUtil.java @@ -0,0 +1,59 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package srt.cloud.framework.dbswitch.core.util;
+
+
+import srt.cloud.framework.dbswitch.common.util.StringUtil;
+
+/**
+ * SQL脚本的拆分、注释清理与参数替换工具类
+ */
+public class SqlUtil {
+
+	private static final String SEMICOLON = ";";
+
+	public static String[] getStatements(String sql, String sqlSeparator) {
+		// 空脚本直接返回空数组
+		if (!StringUtil.isNotBlank(sql)) {
+			return new String[0];
+		}
+
+		String[] splits = sql.replace(";\r\n", ";\n").split(sqlSeparator);
+		String lastStatement = splits[splits.length - 1].trim();
+		if (lastStatement.endsWith(SEMICOLON)) {
+			splits[splits.length - 1] = lastStatement.substring(0, lastStatement.length() - 1);
+		}
+
+		return splits;
+	}
+
+	public static String removeNote(String sql) {
+		if (StringUtil.isNotBlank(sql)) {
+			sql = sql.replaceAll("\u00A0", " ")
+					.replaceAll("[\r\n]+", "\n")
+					.replaceAll("--([^'\n]{0,}('[^'\n]{0,}'){0,1}[^'\n]{0,}){0,}", "").trim();
+		}
+		return sql;
+	}
+
+	public static String replaceAllParam(String sql, String name, String value) {
+		return sql.replaceAll("\\$\\{" + name + "}", value);
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/config/DbswichProperties.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/config/DbswichProperties.java
new file mode 100644
index 0000000..122955c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/config/DbswichProperties.java
@@ -0,0 +1,31 @@
+// Copyright tang. All rights reserved.
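
`SqlUtil` (above) backs both the script splitting used by the SQL executors and the `${...}` placeholder substitution used by `getFlinkTableWith`. A sketch of both helpers; the sample script is illustrative:

```java
import srt.cloud.framework.dbswitch.core.util.SqlUtil;

public class SqlUtilExample {
	public static void main(String[] args) {
		// Strip "--" line comments, then split on the configured separator.
		String script = "-- load users\nSELECT * FROM t_user;\nSELECT * FROM t_order;";
		String[] stmts = SqlUtil.getStatements(SqlUtil.removeNote(script), ";\n");
		System.out.println(stmts.length); // 2; the trailing semicolon is trimmed

		// The same ${...} placeholders that getFlinkTableWith fills in:
		String with = "'table-name' = '${schemaName}.${tableName}'";
		with = SqlUtil.replaceAllParam(with, "schemaName", "demo");
		with = SqlUtil.replaceAllParam(with, "tableName", "t_user");
		System.out.println(with); // 'table-name' = 'demo.t_user'
	}
}
```
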
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.config; + + +import lombok.Data; +import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties; +import srt.cloud.framework.dbswitch.data.entity.TargetDataSourceProperties; + +import java.util.ArrayList; +import java.util.List; + +/** + * 属性映射配置 + * + * @author jrl + */ +@Data +public class DbswichProperties { + + private List source = new ArrayList<>(); + + private TargetDataSourceProperties target = new TargetDataSourceProperties(); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/DbSwitchResult.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/DbSwitchResult.java new file mode 100644 index 0000000..66a7350 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/DbSwitchResult.java @@ -0,0 +1,24 @@ +package srt.cloud.framework.dbswitch.data.domain; + +import lombok.Data; + +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; + +/** + * @ClassName DbSwitchResult + * @Author zrx + * @Date 2022/10/28 9:36 + */ +@Data +public class DbSwitchResult { + private AtomicBoolean ifAllSuccess = new AtomicBoolean(true); + private AtomicLong totalTableCount = new AtomicLong(0); + private AtomicLong totalTableSuccessCount = new AtomicLong(0); + private AtomicLong totalTableFailCount = new AtomicLong(0); + private AtomicLong totalRowCount = new AtomicLong(0); + private AtomicLong totalBytes = new AtomicLong(0); + private List tableResultList = new CopyOnWriteArrayList<>(); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/DbSwitchTableResult.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/DbSwitchTableResult.java new file mode 100644 index 0000000..211b508 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/DbSwitchTableResult.java @@ -0,0 +1,27 @@ +package srt.cloud.framework.dbswitch.data.domain; + +import lombok.Data; + +import java.util.Date; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; + +/** + * @ClassName DbSwitchTable + * @Author zrx + * @Date 2022/10/28 9:37 + */ +@Data +public class DbSwitchTableResult { + private String sourceSchemaName; + private String sourceTableName; + private String targetSchemaName; + private String targetTableName; + private String tableRemarks; + private Date syncTime; + private AtomicLong syncCount = new AtomicLong(0); + private AtomicLong syncBytes = new AtomicLong(0); + private AtomicBoolean ifSuccess = new AtomicBoolean(true); + private String errorMsg; + private String successMsg = "同步成功"; +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/PerfStat.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/PerfStat.java new file mode 100644 index 0000000..cf2012f --- /dev/null +++ 
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/domain/PerfStat.java @@ -0,0 +1,38 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.domain; + +import lombok.AllArgsConstructor; +import lombok.Data; + +/** + * 统计信息 + * + * @author jrl + */ +@Data +@AllArgsConstructor +public class PerfStat { + + private Integer index; + private Integer total; + private Integer failure; + //private Long bytes; + private Long totalRowCount; + + @Override + public String toString() { + return "Data Source Index: \t" + index + "\n" + + "Total Tables Count: \t" + total + "\n" + + "Failure Tables count: \t" + failure + "\n" + + "Total Row count: \t" + totalRowCount + "\n"; + //"Total Transfer Size: \t" + BytesUnitUtils.bytesSizeToHuman(bytes) + "\n"; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/entity/SourceDataSourceProperties.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/entity/SourceDataSourceProperties.java new file mode 100644 index 0000000..408022a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/entity/SourceDataSourceProperties.java @@ -0,0 +1,40 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.entity; + +import lombok.Data; +import srt.cloud.framework.dbswitch.common.entity.PatternMapper; +import srt.cloud.framework.dbswitch.common.type.DBTableType; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import java.util.List; +import java.util.concurrent.TimeUnit; + +@Data +public class SourceDataSourceProperties { + + + private ProductTypeEnum sourceProductType; + private String url; + private String driverClassName; + private String username; + private String password; + private Long connectionTimeout = TimeUnit.SECONDS.toMillis(60); + private Long maxLifeTime = TimeUnit.MINUTES.toMillis(60); + + private Integer fetchSize = 5000; + private DBTableType tableType; + private String sourceSchema = ""; + private Integer includeOrExclude; + private String sourceIncludes = ""; + private String sourceExcludes = ""; + private List regexTableMapper; + private List regexColumnMapper; +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/entity/TargetDataSourceProperties.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/entity/TargetDataSourceProperties.java new file mode 100644 index 0000000..ceecfee --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/entity/TargetDataSourceProperties.java @@ -0,0 +1,48 @@ +// Copyright tang. All rights reserved. 
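
`SourceDataSourceProperties` (above) is a plain Lombok `@Data` holder, so every field gets a generated setter; defaults worth noting are `fetchSize = 5000` and 60-second/60-minute connection timeouts. A configuration sketch; the URL, driver, and table names are placeholders, and the include/exclude semantics are inferred from the field names:

```java
import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties;

public class SourcePropsExample {
	public static void main(String[] args) {
		SourceDataSourceProperties src = new SourceDataSourceProperties();
		src.setSourceProductType(ProductTypeEnum.MYSQL);
		src.setUrl("jdbc:mysql://localhost:3306/demo?useSSL=false"); // placeholder
		src.setDriverClassName("com.mysql.cj.jdbc.Driver");
		src.setUsername("root");
		src.setPassword("root");
		src.setSourceSchema("demo");
		// sourceIncludes/sourceExcludes filter tables; includeOrExclude picks which list applies.
		src.setSourceIncludes("t_user,t_order");
		System.out.println(src.getFetchSize()); // default 5000
	}
}
```
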
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.data.entity;
+
+import lombok.Data;
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+
+import java.util.concurrent.TimeUnit;
+
+@Data
+public class TargetDataSourceProperties {
+
+	private ProductTypeEnum targetProductType;
+	private String url;
+	private String driverClassName;
+	private String username;
+	private String password;
+	private Long connectionTimeout = TimeUnit.SECONDS.toMillis(60);
+	private Long maxLifeTime = TimeUnit.MINUTES.toMillis(60);
+
+	private String targetSchema = "";
+	private Boolean targetDrop = Boolean.TRUE;
+	private Boolean syncExist = Boolean.TRUE;
+	private Boolean onlyCreate = Boolean.FALSE;
+	/**
+	 * 是否同步索引信息,只有targetDrop为TRUE时生效
+	 */
+	private Boolean indexCreate = Boolean.FALSE;
+	/**
+	 * 表名前缀
+	 */
+	private String tablePrefix;
+	/**
+	 * 是否自动转为小写
+	 */
+	private Boolean lowercase = Boolean.FALSE;
+	private Boolean uppercase = Boolean.FALSE;
+	private Boolean createTableAutoIncrement = Boolean.FALSE;
+	private Boolean writerEngineInsert = Boolean.TRUE;
+	private Boolean changeDataSync = Boolean.FALSE;
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/handler/MigrationHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/handler/MigrationHandler.java
new file mode 100644
index 0000000..bcbde47
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/handler/MigrationHandler.java
@@ -0,0 +1,618 @@
+// Copyright tang. All rights reserved.
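
`TargetDataSourceProperties` (above) carries the flags `MigrationHandler` consults when mapping names: `tablePrefix` is prepended unless the mapped name already starts with it, and only then is `lowercase`/`uppercase` applied. A wiring sketch, under the assumption that both Hikari pools and the source entry are fully configured before `get()` runs (they are left blank here):

```java
import com.zaxxer.hikari.HikariDataSource;
import srt.cloud.framework.dbswitch.core.model.TableDescription;
import srt.cloud.framework.dbswitch.data.config.DbswichProperties;
import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult;
import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties;
import srt.cloud.framework.dbswitch.data.handler.MigrationHandler;

public class MigrationExample {
	public static void main(String[] args) {
		DbswichProperties props = new DbswichProperties();
		props.getSource().add(new SourceDataSourceProperties()); // fill in url/credentials
		props.getTarget().setTargetSchema("ods");
		props.getTarget().setTablePrefix("ods_"); // prepended to the mapped table name
		props.getTarget().setLowercase(Boolean.TRUE);

		TableDescription td = new TableDescription();
		td.setSchemaName("demo");
		td.setTableName("T_USER"); // becomes ods.ods_t_user after prefix + lowercase

		HikariDataSource sds = new HikariDataSource(); // source pool (configure before use)
		HikariDataSource tds = new HikariDataSource(); // target pool (configure before use)
		DbSwitchTableResult result =
				MigrationHandler.createInstance(td, props, 0, sds, tds).get();
		System.out.println(result.getIfSuccess());
	}
}
```
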
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.handler; + +import com.zaxxer.hikari.HikariDataSource; +import lombok.extern.slf4j.Slf4j; +import org.ehcache.sizeof.SizeOf; +import org.springframework.jdbc.core.JdbcTemplate; +import org.springframework.util.StringUtils; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.DatabaseAwareUtils; +import srt.cloud.framework.dbswitch.common.util.PatterNameUtils; +import srt.cloud.framework.dbswitch.common.util.StringUtil; +import srt.cloud.framework.dbswitch.core.model.ColumnDescription; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByDatasourceService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByDataSourceServiceImpl; +import srt.cloud.framework.dbswitch.data.config.DbswichProperties; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult; +import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties; +import srt.cloud.framework.dbswitch.data.entity.TargetDataSourceProperties; +import srt.cloud.framework.dbswitch.data.util.BytesUnitUtils; +import srt.cloud.framework.dbswitch.dbchange.ChangeCalculatorService; +import srt.cloud.framework.dbswitch.dbchange.IDatabaseChangeCaculator; +import srt.cloud.framework.dbswitch.dbchange.IDatabaseRowHandler; +import srt.cloud.framework.dbswitch.dbchange.RecordChangeTypeEnum; +import srt.cloud.framework.dbswitch.dbchange.TaskParamEntity; +import srt.cloud.framework.dbswitch.dbcommon.database.DatabaseOperatorFactory; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import srt.cloud.framework.dbswitch.dbsynch.DatabaseSynchronizeFactory; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbwriter.DatabaseWriterFactory; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; + +import java.sql.ResultSet; +import java.util.ArrayList; +import java.util.Date; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Supplier; +import java.util.stream.Collectors; + +/** + * 在一个线程内的单表迁移处理逻辑 + * + * @author jrl + */ +@Slf4j +public class MigrationHandler implements Supplier { + + private final long MAX_CACHE_BYTES_SIZE = 512 * 1024 * 1024; + + private int fetchSize = 100; + private final DbswichProperties properties; + private final SourceDataSourceProperties sourceProperties; + private final TargetDataSourceProperties targetProperties; + + private volatile boolean interrupted = false; + + // 来源端 + private final HikariDataSource sourceDataSource; + private ProductTypeEnum sourceProductType; + private String sourceSchemaName; + private String sourceTableName; + private String sourceTableRemarks; + private List sourceColumnDescriptions; + private List sourcePrimaryKeys; + + private IMetaDataByDatasourceService sourceMetaDataService; + + // 目的端 + private final HikariDataSource targetDataSource; + private ProductTypeEnum targetProductType; + private String 
targetSchemaName; + private String targetTableName; + private List targetColumnDescriptions; + private List targetPrimaryKeys; + + // 日志输出字符串使用 + private String tableNameMapString; + + public static MigrationHandler createInstance(TableDescription td, + DbswichProperties properties, + Integer sourcePropertiesIndex, + HikariDataSource sds, + HikariDataSource tds) { + return new MigrationHandler(td, properties, sourcePropertiesIndex, sds, tds); + } + + private MigrationHandler(TableDescription td, + DbswichProperties properties, + Integer sourcePropertiesIndex, + HikariDataSource sds, + HikariDataSource tds) { + this.sourceSchemaName = td.getSchemaName(); + this.sourceTableName = td.getTableName(); + this.properties = properties; + this.sourceProperties = properties.getSource().get(sourcePropertiesIndex); + this.targetProperties = properties.getTarget(); + this.sourceDataSource = sds; + this.targetDataSource = tds; + + if (sourceProperties.getFetchSize() >= fetchSize) { + fetchSize = sourceProperties.getFetchSize(); + } + + // 获取映射转换后新的表名 + this.targetSchemaName = properties.getTarget().getTargetSchema(); + this.targetTableName = PatterNameUtils.getFinalName(td.getTableName(), + sourceProperties.getRegexTableMapper()); + if (StringUtils.isEmpty(this.targetTableName)) { + throw new RuntimeException("表名的映射规则配置有误,不能将[" + this.sourceTableName + "]映射为空"); + } + //添加表名前缀 + if (StringUtil.isNotBlank(properties.getTarget().getTablePrefix()) && !this.targetTableName.startsWith(properties.getTarget().getTablePrefix())) { + this.targetTableName = properties.getTarget().getTablePrefix() + this.targetTableName; + } + if (properties.getTarget().getLowercase()) { + this.targetTableName = this.targetTableName.toLowerCase(); + } + if (properties.getTarget().getUppercase()) { + this.targetTableName = this.targetTableName.toUpperCase(); + } + + this.tableNameMapString = String.format("%s.%s --> %s.%s", + td.getSchemaName(), td.getTableName(), + targetSchemaName, targetTableName); + } + + public void interrupt() { + this.interrupted = true; + } + + @Override + public DbSwitchTableResult get() { + + log.info("Begin Migrate table for {}", tableNameMapString); + + /*this.sourceProductType = DatabaseAwareUtils.getDatabaseTypeByDataSource(sourceDataSource); + this.targetProductType = DatabaseAwareUtils.getDatabaseTypeByDataSource(targetDataSource);*/ + this.sourceProductType = sourceProperties.getSourceProductType(); + this.targetProductType = targetProperties.getTargetProductType(); + this.sourceMetaDataService = new MetaDataByDataSourceServiceImpl(sourceDataSource, + sourceProductType); + + // 读取源表的表及字段元数据 + this.sourceTableRemarks = sourceMetaDataService + .getTableRemark(sourceSchemaName, sourceTableName); + this.sourceColumnDescriptions = sourceMetaDataService + .queryTableColumnMeta(sourceSchemaName, sourceTableName); + + + this.sourcePrimaryKeys = sourceMetaDataService + .queryTablePrimaryKeys(sourceSchemaName, sourceTableName); + + // 根据表的列名映射转换准备目标端表的字段信息 + this.targetColumnDescriptions = sourceColumnDescriptions.stream() + .map(column -> { + String newName = PatterNameUtils.getFinalName( + column.getFieldName(), + sourceProperties.getRegexColumnMapper()); + ColumnDescription description = column.copy(); + description.setFieldName(properties.getTarget().getLowercase() && newName != null ? newName.toLowerCase() : properties.getTarget().getUppercase() && newName != null ? newName.toUpperCase() : newName); + description.setLabelName(properties.getTarget().getLowercase() && newName != null ? 
newName.toLowerCase() : properties.getTarget().getUppercase() && newName != null ? newName.toUpperCase() : newName); + return description; + }).collect(Collectors.toList()); + this.targetPrimaryKeys = sourcePrimaryKeys.stream() + .map(name -> { + String finalName = PatterNameUtils.getFinalName(name, sourceProperties.getRegexColumnMapper()); + if (properties.getTarget().getLowercase() && finalName != null) { + finalName = finalName.toLowerCase(); + } + if (properties.getTarget().getUppercase() && finalName != null) { + finalName = finalName.toUpperCase(); + } + return finalName; + } + ).collect(Collectors.toList()); + + //构建表的同步结果 + DbSwitchTableResult dbSwitchTableResult = new DbSwitchTableResult(); + dbSwitchTableResult.setSourceSchemaName(sourceSchemaName); + dbSwitchTableResult.setSourceTableName(sourceTableName); + dbSwitchTableResult.setTargetSchemaName(targetSchemaName); + dbSwitchTableResult.setTargetTableName(targetTableName); + dbSwitchTableResult.setTableRemarks(sourceTableRemarks); + dbSwitchTableResult.setSyncTime(new Date()); + + // 打印表名与字段名的映射关系 + List columnMapperPairs = new ArrayList<>(); + Map mapChecker = new HashMap<>(); + for (int i = 0; i < sourceColumnDescriptions.size(); ++i) { + String sourceColumnName = sourceColumnDescriptions.get(i).getFieldName(); + String targetColumnName = targetColumnDescriptions.get(i).getFieldName(); + if (StringUtils.hasLength(targetColumnName)) { + columnMapperPairs.add(String.format("%s --> %s", sourceColumnName, targetColumnName)); + mapChecker.put(sourceColumnName, targetColumnName); + } else { + columnMapperPairs.add(String.format( + "%s --> %s", + sourceColumnName, + String.format("", (i + 1)) + )); + } + } + log.info("Mapping relation : \ntable mapper :\n\t{} \ncolumn mapper :\n\t{} ", + tableNameMapString, String.join("\n\t", columnMapperPairs)); + Set valueSet = new HashSet<>(mapChecker.values()); + if (valueSet.size() <= 0) { + throw new RuntimeException("字段映射配置有误,禁止通过映射将表所有的字段都删除!"); + } + if (!valueSet.containsAll(this.targetPrimaryKeys)) { + throw new RuntimeException("字段映射配置有误,禁止通过映射将表的主键字段删除!"); + } + if (mapChecker.keySet().size() != valueSet.size()) { + throw new RuntimeException("字段映射配置有误,禁止将多个字段映射到一个同名字段!"); + } + + if (interrupted) { + throw new RuntimeException("task is interrupted"); + } + + IDatabaseWriter writer = DatabaseWriterFactory.createDatabaseWriter( + targetDataSource, targetProductType, properties.getTarget().getWriterEngineInsert()); + + //zrx 如果不同步已存在的 + if (!properties.getTarget().getSyncExist()) { + IMetaDataByDatasourceService metaDataByDatasourceServicee = new MetaDataByDataSourceServiceImpl(targetDataSource, targetProductType); + List tableDescriptions = metaDataByDatasourceServicee.queryTableList(targetSchemaName); + if (tableDescriptions.stream().anyMatch(tableDescription -> tableDescription.getTableName().equals(targetTableName))) { + log.info("syncExist is false,table {}.{} has existed,do not sync,just return!", targetSchemaName, targetTableName); + dbSwitchTableResult.setSuccessMsg("由于设置了不同步已存在的表,所以未同步表和数据"); + return dbSwitchTableResult; + } + } + + if (properties.getTarget().getTargetDrop()) { + /* + 如果配置了dbswitch.target.datasource-target-drop=true时, +

+ 先执行drop table语句,然后执行create table语句 + */ + + try { + DatabaseOperatorFactory.createDatabaseOperator(targetDataSource, targetProductType) + .dropTable(targetSchemaName, targetTableName); + log.info("Target Table {}.{} is exits, drop it now !", targetSchemaName, targetTableName); + } catch (Exception e) { + log.info("Target Table {}.{} is not exits, create it!", targetSchemaName, targetTableName); + } + + IMetaDataByDatasourceService targetDatasourceservice = + new MetaDataByDataSourceServiceImpl(targetDataSource, targetProductType); + // 生成建表语句并创建 + List targetColumns = targetColumnDescriptions.stream() + .filter(column -> StringUtils.hasLength(column.getFieldName())) + .collect(Collectors.toList()); + List sqlCreateTable = sourceMetaDataService.getDDLCreateTableSQL( + targetProductType, + targetColumns, + targetPrimaryKeys, + targetSchemaName, + targetTableName, + sourceTableRemarks, + properties.getTarget().getCreateTableAutoIncrement() + ); + //zrx 索引创建语句 + if (properties.getTarget().getIndexCreate()) { + targetDatasourceservice.createIndexDefinition(targetColumns, targetPrimaryKeys, targetSchemaName, targetTableName, sqlCreateTable); + } + JdbcTemplate targetJdbcTemplate = new JdbcTemplate(targetDataSource); + for (String sql : sqlCreateTable) { + targetJdbcTemplate.execute(sql); + log.info("Execute SQL: \n{}", sql); + } + + // 如果只想创建表,这里直接返回 + if (null != properties.getTarget().getOnlyCreate() + && properties.getTarget().getOnlyCreate()) { + dbSwitchTableResult.setSuccessMsg("由于设置了只创建表,所以未同步数据,已建表"); + return dbSwitchTableResult; + } + + if (interrupted) { + throw new RuntimeException("task is interrupted"); + } + + return doFullCoverSynchronize(writer, dbSwitchTableResult); + } else { + // 对于只想创建表的情况,不提供后续的变化量数据同步功能 + if (null != properties.getTarget().getOnlyCreate() + && properties.getTarget().getOnlyCreate()) { + dbSwitchTableResult.setSuccessMsg("由于设置了只创建表,所以未同步数据,已建表"); + return dbSwitchTableResult; + } + + if (interrupted) { + throw new RuntimeException("task is interrupted"); + } + + IMetaDataByDatasourceService metaDataByDatasourceService = + new MetaDataByDataSourceServiceImpl(targetDataSource, targetProductType); + List targetTableNames = metaDataByDatasourceService + .queryTableList(targetSchemaName) + .stream().map(TableDescription::getTableName) + .collect(Collectors.toList()); + + if (!targetTableNames.contains(targetTableName)) { + // 当目标端不存在该表时,则生成建表语句并创建 + List targetColumns = targetColumnDescriptions.stream() + .filter(column -> StringUtils.hasLength(column.getFieldName())) + .collect(Collectors.toList()); + List sqlCreateTable = sourceMetaDataService.getDDLCreateTableSQL( + targetProductType, + targetColumns, + targetPrimaryKeys, + targetSchemaName, + targetTableName, + sourceTableRemarks, + properties.getTarget().getCreateTableAutoIncrement() + ); + + //zrx 索引创建语句 + if (properties.getTarget().getIndexCreate()) { + metaDataByDatasourceService.createIndexDefinition(targetColumns, targetPrimaryKeys, targetSchemaName, targetTableName, sqlCreateTable); + } + + JdbcTemplate targetJdbcTemplate = new JdbcTemplate(targetDataSource); + for (String sql : sqlCreateTable) { + targetJdbcTemplate.execute(sql); + log.info("Execute SQL: \n{}", sql); + } + + if (interrupted) { + throw new RuntimeException("task is interrupted"); + } + + return doFullCoverSynchronize(writer, dbSwitchTableResult); + } + + // 判断是否具备变化量同步的条件:(1)两端表结构一致,且都有一样的主键字段;(2)MySQL使用Innodb引擎; + if (properties.getTarget().getChangeDataSync()) { + // 根据主键情况判断同步的方式:增量同步或覆盖同步 + List dbTargetPks = 
metaDataByDatasourceService.queryTablePrimaryKeys( + targetSchemaName, targetTableName); + + //添加目标表中不存在的字段 + metaDataByDatasourceService.addNoExistColumnsByTarget(targetSchemaName, targetTableName, targetColumnDescriptions); + + if (!targetPrimaryKeys.isEmpty() && !dbTargetPks.isEmpty() + && targetPrimaryKeys.containsAll(dbTargetPks) + && dbTargetPks.containsAll(targetPrimaryKeys)) { + if (targetProductType == ProductTypeEnum.MYSQL + && !DatabaseAwareUtils.isMysqlInnodbStorageEngine( + targetSchemaName, targetTableName, targetDataSource)) { + return doFullCoverSynchronize(writer, dbSwitchTableResult); + } else { + return doIncreaseSynchronize(writer, dbSwitchTableResult); + } + } else { + return doFullCoverSynchronize(writer, dbSwitchTableResult); + } + } else { + return doFullCoverSynchronize(writer, dbSwitchTableResult); + } + } + } + + /** + * 执行覆盖同步 + * + * @param writer 目的端的写入器 + */ + private DbSwitchTableResult doFullCoverSynchronize(IDatabaseWriter writer, DbSwitchTableResult dbSwitchTableResult) { + + AtomicLong syncBytes = dbSwitchTableResult.getSyncBytes(); + AtomicLong syncCount = dbSwitchTableResult.getSyncCount(); + + final int BATCH_SIZE = fetchSize; + + List sourceFields = new ArrayList<>(); + List targetFields = new ArrayList<>(); + for (int i = 0; i < targetColumnDescriptions.size(); ++i) { + ColumnDescription scd = sourceColumnDescriptions.get(i); + ColumnDescription tcd = targetColumnDescriptions.get(i); + if (!StringUtils.isEmpty(tcd.getFieldName())) { + sourceFields.add(scd.getFieldName()); + targetFields.add(tcd.getFieldName()); + } + } + // 准备目的端的数据写入操作 + writer.prepareWrite(targetSchemaName, targetTableName, targetFields); + + // 清空目的端表的数据 + IDatabaseOperator targetOperator = DatabaseOperatorFactory + .createDatabaseOperator(writer.getDataSource(), targetProductType); + targetOperator.truncateTableData(targetSchemaName, targetTableName); + + // 查询源端数据并写入目的端 + IDatabaseOperator sourceOperator = DatabaseOperatorFactory + .createDatabaseOperator(sourceDataSource, sourceProductType); + sourceOperator.setFetchSize(BATCH_SIZE); + + try (StatementResultSet srs = sourceOperator.queryTableData( + sourceSchemaName, sourceTableName, sourceFields + ); ResultSet rs = srs.getResultset()) { + List cache = new LinkedList<>(); + long cacheBytes = 0; + /*long totalCount = 0; + long totalBytes = 0;*/ + while (rs.next()) { + Object[] record = new Object[sourceFields.size()]; + for (int i = 1; i <= sourceFields.size(); ++i) { + try { + record[i - 1] = rs.getObject(i); + } catch (Exception e) { + log.warn("!!! 
Read data from table [ {} ] use function ResultSet.getObject() error", + tableNameMapString, e); + record[i - 1] = null; + } + } + + cache.add(record); + long bytes = SizeOf.newInstance().sizeOf(record); + cacheBytes += bytes; + syncCount.getAndAdd(1); + + if (cache.size() >= BATCH_SIZE || cacheBytes >= MAX_CACHE_BYTES_SIZE) { + long ret = writer.write(targetFields, cache); + log.info("[FullCoverSync] handle table [{}] data count: {}, the batch bytes size: {}", + tableNameMapString, ret, BytesUnitUtils.bytesSizeToHuman(cacheBytes)); + cache.clear(); + /*totalBytes += cacheBytes;*/ + syncBytes.getAndAdd(cacheBytes); + cacheBytes = 0; + } + } + + if (cache.size() > 0) { + long ret = writer.write(targetFields, cache); + log.info("[FullCoverSync] handle table [{}] data count: {}, last batch bytes size: {}", + tableNameMapString, ret, BytesUnitUtils.bytesSizeToHuman(cacheBytes)); + cache.clear(); + /*totalBytes += cacheBytes;*/ + syncBytes.getAndAdd(cacheBytes); + } + + /*log.info("[FullCoverSync] handle table [{}] total data count:{}, total bytes={}", + tableNameMapString, totalCount, BytesUnitUtils.bytesSizeToHuman(totalBytes));*/ + } catch (Exception e) { + throw new RuntimeException(e); + } + //返回表执行结果 + return dbSwitchTableResult; + } + + /** + * 变化量同步 + * + * @param writer 目的端的写入器 + */ + private DbSwitchTableResult doIncreaseSynchronize(IDatabaseWriter writer, DbSwitchTableResult dbSwitchTableResult) { + final int BATCH_SIZE = fetchSize; + + AtomicLong syncCount = dbSwitchTableResult.getSyncCount(); + AtomicLong syncBytes = dbSwitchTableResult.getSyncBytes(); + + List sourceFields = new ArrayList<>(); + List targetFields = new ArrayList<>(); + Map columnNameMaps = new HashMap<>(); + for (int i = 0; i < targetColumnDescriptions.size(); ++i) { + ColumnDescription scd = sourceColumnDescriptions.get(i); + ColumnDescription tcd = targetColumnDescriptions.get(i); + if (!StringUtils.isEmpty(tcd.getFieldName())) { + sourceFields.add(scd.getFieldName()); + targetFields.add(tcd.getFieldName()); + columnNameMaps.put(scd.getFieldName(), tcd.getFieldName()); + } + } + + TaskParamEntity.TaskParamEntityBuilder taskBuilder = TaskParamEntity.builder(); + //target相当于老数据,source相当于要同步的新数据 + taskBuilder.oldProductType(targetProductType); + taskBuilder.oldDataSource(writer.getDataSource()); + taskBuilder.oldSchemaName(targetSchemaName); + taskBuilder.oldTableName(targetTableName); + taskBuilder.newProductType(sourceProductType); + taskBuilder.newDataSource(sourceDataSource); + taskBuilder.newSchemaName(sourceSchemaName); + taskBuilder.newTableName(sourceTableName); + taskBuilder.fieldColumns(sourceFields); + taskBuilder.columnsMap(columnNameMaps); + + TaskParamEntity param = taskBuilder.build(); + + IDatabaseSynchronize synchronizer = DatabaseSynchronizeFactory + .createDatabaseWriter(writer.getDataSource(), targetProductType); + synchronizer.prepare(targetSchemaName, targetTableName, targetFields, targetPrimaryKeys); + + IDatabaseChangeCaculator calculator = new ChangeCalculatorService(); + calculator.setFetchSize(fetchSize); + calculator.setRecordIdentical(false); + calculator.setCheckJdbcType(false); + + // 执行实际的变化同步过程 + calculator.executeCalculate(param, new IDatabaseRowHandler() { + + private long countInsert = 0; + private long countUpdate = 0; + private long countDelete = 0; + private long countTotal = 0; + private long cacheBytes = 0; + private final List cacheInsert = new LinkedList<>(); + private final List cacheUpdate = new LinkedList<>(); + private final List cacheDelete = new LinkedList<>(); + + 
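The anonymous `IDatabaseRowHandler` that continues below buffers inserts, updates, and deletes in separate caches and flushes them in `checkFull`/`destroy` — deletes, then inserts, then updates, in the same order each time (plausibly so a delete-then-reinsert of the same key lands within one round). Reduced to a self-contained sketch, with the batch size mirroring the default `fetchSize` and the byte budget matching `MAX_CACHE_BYTES_SIZE` above; the `execute*` calls are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushSketch {

  static final int BATCH_SIZE = 5000;                  // default fetchSize in this diff
  static final long MAX_CACHE_BYTES = 512L * 1024 * 1024;

  final List<Object[]> inserts = new ArrayList<>();
  final List<Object[]> updates = new ArrayList<>();
  final List<Object[]> deletes = new ArrayList<>();
  long cacheBytes;

  // Mirrors checkFull(): flush when any bucket reaches the batch size
  // or when the accumulated record bytes exceed the budget.
  void maybeFlush() {
    if (inserts.size() >= BATCH_SIZE || updates.size() >= BATCH_SIZE
        || deletes.size() >= BATCH_SIZE || cacheBytes >= MAX_CACHE_BYTES) {
      flush();
    }
  }

  // Mirrors destroy(): drain whatever is left, deletes first.
  void flush() {
    if (!deletes.isEmpty()) { /* synchronizer.executeDelete(deletes) */ deletes.clear(); }
    if (!inserts.isEmpty()) { /* synchronizer.executeInsert(inserts) */ inserts.clear(); }
    if (!updates.isEmpty()) { /* synchronizer.executeUpdate(updates) */ updates.clear(); }
    cacheBytes = 0;
  }
}
```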
@Override + public void handle(List fields, Object[] record, RecordChangeTypeEnum flag) { + if (flag == RecordChangeTypeEnum.VALUE_INSERT) { + cacheInsert.add(record); + countInsert++; + } else if (flag == RecordChangeTypeEnum.VALUE_CHANGED) { + cacheUpdate.add(record); + countUpdate++; + } else { + cacheDelete.add(record); + countDelete++; + } + + long bytes = SizeOf.newInstance().sizeOf(record); + cacheBytes += bytes; + syncBytes.getAndAdd(bytes); + countTotal++; + syncCount.getAndAdd(1); + checkFull(fields); + } + + /** + * 检测缓存是否已满,如果已满执行同步操作 + * + * @param fields 同步的字段列表 + */ + private void checkFull(List fields) { + if (cacheInsert.size() >= BATCH_SIZE || cacheUpdate.size() >= BATCH_SIZE + || cacheDelete.size() >= BATCH_SIZE || cacheBytes >= MAX_CACHE_BYTES_SIZE) { + if (cacheDelete.size() > 0) { + doDelete(fields); + } + + if (cacheInsert.size() > 0) { + doInsert(fields); + } + + if (cacheUpdate.size() > 0) { + doUpdate(fields); + } + + log.info("[IncreaseSync] Handle table [{}] data one batch size: {}", + tableNameMapString, BytesUnitUtils.bytesSizeToHuman(cacheBytes)); + cacheBytes = 0; + } + } + + @Override + public void destroy(List fields) { + if (cacheDelete.size() > 0) { + doDelete(fields); + } + + if (cacheInsert.size() > 0) { + doInsert(fields); + } + + if (cacheUpdate.size() > 0) { + doUpdate(fields); + } + + log.info("[IncreaseSync] Handle table [{}] total count: {}, Insert:{},Update:{},Delete:{} ", + tableNameMapString, countTotal, countInsert, countUpdate, countDelete); + } + + private void doInsert(List fields) { + long ret = synchronizer.executeInsert(cacheInsert); + log.info("[IncreaseSync] Handle table [{}] data Insert count: {}", tableNameMapString, ret); + cacheInsert.clear(); + } + + private void doUpdate(List fields) { + long ret = synchronizer.executeUpdate(cacheUpdate); + log.info("[IncreaseSync] Handle table [{}] data Update count: {}", tableNameMapString, ret); + cacheUpdate.clear(); + } + + private void doDelete(List fields) { + long ret = synchronizer.executeDelete(cacheDelete); + log.info("[IncreaseSync] Handle table [{}] data Delete count: {}", tableNameMapString, ret); + cacheDelete.clear(); + } + + }); + + return dbSwitchTableResult; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/handler/TableResultHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/handler/TableResultHandler.java new file mode 100644 index 0000000..b484779 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/handler/TableResultHandler.java @@ -0,0 +1,12 @@ +package srt.cloud.framework.dbswitch.data.handler; + +import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult; + +/** + * @ClassName TableResultHandler + * @Author zrx + * @Date 2022/11/24 20:47 + */ +public interface TableResultHandler { + void handler(DbSwitchTableResult tableResult); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/service/MigrationService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/service/MigrationService.java new file mode 100644 index 0000000..544ce93 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/service/MigrationService.java @@ -0,0 +1,326 @@ +// Copyright tang. All rights reserved. 
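`MigrationService` below fans the handlers out: for every matched table it builds a `CompletableFuture` whose exception stage converts a failure into a failed-table result instead of aborting the whole batch, then blocks on `allOf` before logging the `PerfStat` summary. The skeleton of that pattern, stripped of the dbswitch types (table names and messages here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class FanOutSketch {

  public static void main(String[] args) {
    AtomicInteger failures = new AtomicInteger();
    List<CompletableFuture<Void>> futures = new ArrayList<>();

    for (String table : List.of("t_user", "t_order", "t_bad")) {
      futures.add(CompletableFuture
          .supplyAsync(() -> migrateOneTable(table))  // one Supplier per table
          .exceptionally(e -> {                       // failure becomes a result,
            failures.incrementAndGet();               // not an aborted batch
            return "FAILED:" + table;
          })
          .thenAccept(System.out::println));          // aggregate per-table results
    }

    // Same barrier as MigrationService.run(): wait for every table to finish.
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    System.out.println("failed tables: " + failures.get());
  }

  static String migrateOneTable(String table) {
    if (table.contains("bad")) throw new RuntimeException("boom");
    return "OK:" + table;
  }
}
```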
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.service; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.zaxxer.hikari.HikariDataSource; +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.common.utils.LogUtil; +import srt.cloud.framework.dbswitch.common.type.DBTableType; +import srt.cloud.framework.dbswitch.common.util.DbswitchStrUtils; +import srt.cloud.framework.dbswitch.common.util.PatterNameUtils; +import srt.cloud.framework.dbswitch.core.model.TableDescription; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByDatasourceService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByDataSourceServiceImpl; +import srt.cloud.framework.dbswitch.data.config.DbswichProperties; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchResult; +import srt.cloud.framework.dbswitch.data.domain.DbSwitchTableResult; +import srt.cloud.framework.dbswitch.data.domain.PerfStat; +import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties; +import srt.cloud.framework.dbswitch.data.handler.MigrationHandler; +import srt.cloud.framework.dbswitch.data.handler.TableResultHandler; +import srt.cloud.framework.dbswitch.data.util.DataSourceUtils; +import org.springframework.util.StopWatch; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import java.util.function.Function; +import java.util.function.Supplier; +import java.util.regex.Pattern; + +/** + * 数据迁移主逻辑类 + * + * @author jrl + */ +@Slf4j +public class MigrationService { + + /** + * JSON序列化工具 + */ + private final ObjectMapper jackson = new ObjectMapper(); + + /** + * 性能统计记录表 + */ + private final List perfStats = new ArrayList<>(); + + /** + * 任务列表 + */ + private List migrationHandlers = new ArrayList<>(); + + /** + * 线程是否被中断的标识 + */ + private volatile boolean interrupted = false; + + /** + * 配置参数 + */ + private final DbswichProperties properties; + + /** + * 配置参数 + */ + private final TableResultHandler tableResultHandler; + + + /** + * 构造函数 + * + * @param properties 配置信息 + */ + public MigrationService(DbswichProperties properties, TableResultHandler tableResultHandler) { + this.properties = Objects.requireNonNull(properties, "properties is null"); + this.tableResultHandler = tableResultHandler; + } + + /** + * 中断执行中的任务 + */ + synchronized public void interrupt() { + this.interrupted = true; + migrationHandlers.forEach(MigrationHandler::interrupt); + } + + /** + * 执行主逻辑 + */ + public DbSwitchResult run() throws Exception { + StopWatch watch = new StopWatch(); + watch.start(); + + DbSwitchResult dbSwitchResult = new DbSwitchResult(); + + log.info("dbswitch data service is started...."); + + try (HikariDataSource targetDataSource = DataSourceUtils + .createTargetDataSource(properties.getTarget())) { + int sourcePropertiesIndex = 0; + int totalTableCount = 0; + List sourcesProperties = properties.getSource(); + for (SourceDataSourceProperties sourceProperties : sourcesProperties) { + if (interrupted) { + throw new RuntimeException("task is interrupted"); + } + try (HikariDataSource sourceDataSource = 
DataSourceUtils + .createSourceDataSource(sourceProperties)) { + IMetaDataByDatasourceService + sourceMetaDataService = new MetaDataByDataSourceServiceImpl(sourceDataSource, sourceProperties.getSourceProductType()); + + // 判断处理的策略:是排除还是包含 + List includes = DbswitchStrUtils + .stringToList(sourceProperties.getSourceIncludes()); + log.info("Includes tables is :{}", jackson.writeValueAsString(includes)); + List filters = DbswitchStrUtils + .stringToList(sourceProperties.getSourceExcludes()); + log.info("Filter tables is :{}", jackson.writeValueAsString(filters)); + + boolean useExcludeTables = includes.isEmpty(); + if (useExcludeTables) { + log.info("!!!! Use dbswitch.source[{}].source-excludes parameter to filter tables", + sourcePropertiesIndex); + } else { + log.info("!!!! Use dbswitch.source[{}].source-includes parameter to filter tables", + sourcePropertiesIndex); + } + + List> futures = new ArrayList<>(); + + List schemas = DbswitchStrUtils.stringToList(sourceProperties.getSourceSchema()); + log.info("Source schema names is :{}", jackson.writeValueAsString(schemas)); + + AtomicInteger numberOfFailures = new AtomicInteger(0); + //改为返回个数 + AtomicLong currentSourceRowCount = new AtomicLong(0L); + final int indexInternal = sourcePropertiesIndex; + for (String schema : schemas) { + if (interrupted) { + break; + } + List tableList = sourceMetaDataService.queryTableList(schema); + if (tableList.isEmpty()) { + log.warn("### Find source database table list empty for schema name is : {}", schema); + } else { + DBTableType tableType = sourceProperties.getTableType(); + for (TableDescription td : tableList) { + // 当没有配置迁移的表名时,默认为根据类型同步所有 + if (includes.isEmpty()) { + if (null != tableType && !tableType.name().equals(td.getTableType())) { + continue; + } + } + + String tableName = td.getTableName(); + + if (useExcludeTables) { + if (!filters.contains(tableName)) { + futures.add(makeFutureTask(td, indexInternal, sourceDataSource, targetDataSource, numberOfFailures, currentSourceRowCount, dbSwitchResult)); + } + } else { + if (includes.size() == 1 && (includes.get(0).contains("*") || includes.get(0) + .contains("?"))) { + if (Pattern.matches(includes.get(0), tableName)) { + futures.add(makeFutureTask(td, indexInternal, sourceDataSource, targetDataSource, numberOfFailures, currentSourceRowCount, dbSwitchResult)); + } + } else if (includes.contains(tableName)) { + futures.add(makeFutureTask(td, indexInternal, sourceDataSource, targetDataSource, numberOfFailures, currentSourceRowCount, dbSwitchResult)); + } + } + } + } + } + if (!interrupted) { + CompletableFuture.allOf(futures.toArray(new CompletableFuture[]{})).get(); + log.info( + "#### Complete data migration for the [ {} ] data source:\ntotal table count={}\nfailure count={}\ntotal count size={}", + sourcePropertiesIndex, futures.size(), numberOfFailures.get(), + currentSourceRowCount.get()); + perfStats.add(new PerfStat(sourcePropertiesIndex, futures.size(), + numberOfFailures.get(), currentSourceRowCount.get())); + ++sourcePropertiesIndex; + totalTableCount += futures.size(); + } + } + } + log.info("service run all success, total migrate table count={},total migrate data count={} ", totalTableCount, dbSwitchResult.getTotalRowCount()); + return dbSwitchResult; + } finally { + watch.stop(); + log.info("total ellipse = {} s", watch.getTotalTimeSeconds()); + + StringBuilder sb = new StringBuilder(); + sb.append("===================================\n"); + sb.append(String.format("total ellipse time:\t %f s\n", watch.getTotalTimeSeconds())); + 
sb.append("-------------------------------------\n"); + perfStats.forEach(st -> { + sb.append(st); + if (perfStats.size() > 1) { + sb.append("-------------------------------------\n"); + } + }); + sb.append("===================================\n"); + log.info("\n\n" + sb.toString()); + } + } + + /** + * 构造一个异步执行任务 + * + * @param td 表描述上下文 + * @param indexInternal 源端索引号 + * @param sds 源端的DataSource数据源 + * @param tds 目的端的DataSource数据源 + * @return CompletableFuture + */ + private CompletableFuture makeFutureTask( + TableDescription td, + Integer indexInternal, + HikariDataSource sds, + HikariDataSource tds, + AtomicInteger numberOfFailures, + AtomicLong currentSourceRowCount, + DbSwitchResult dbSwitchResult) { + return CompletableFuture.supplyAsync(getMigrateHandler(td, indexInternal, sds, tds)) + .exceptionally(getExceptHandler(td, indexInternal, numberOfFailures)) + .thenAccept((result) -> getAcceptHandler(result, currentSourceRowCount, dbSwitchResult)); + } + + /** + * 处理汇总结果 + * + * @param tableResult + * @param switchResult + */ + private void getAcceptHandler(DbSwitchTableResult tableResult, AtomicLong currentSourceRowCount, DbSwitchResult switchResult) { + + currentSourceRowCount.getAndAdd(tableResult.getSyncCount().get()); + + List tableResultList = switchResult.getTableResultList(); + AtomicBoolean ifAllSuccess = switchResult.getIfAllSuccess(); + AtomicLong totalRowCount = switchResult.getTotalRowCount(); + AtomicLong totalBytes = switchResult.getTotalBytes(); + AtomicLong totalTableCount = switchResult.getTotalTableCount(); + AtomicLong totalTableSuccessCount = switchResult.getTotalTableSuccessCount(); + AtomicLong totalTableFailCount = switchResult.getTotalTableFailCount(); + + totalRowCount.getAndAdd(tableResult.getSyncCount().get()); + totalBytes.getAndAdd(tableResult.getSyncBytes().get()); + totalTableCount.getAndIncrement(); + if (!tableResult.getIfSuccess().get()) { + ifAllSuccess.set(false); + totalTableFailCount.getAndIncrement(); + } else { + totalTableSuccessCount.getAndIncrement(); + } + tableResultList.add(tableResult); + //调用表结果处理器 + tableResultHandler.handler(tableResult); + } + + + /** + * 单表迁移处理方法 + * + * @param td 表描述上下文 + * @param indexInternal 源端索引号 + * @param sds 源端的DataSource数据源 + * @param tds 目的端的DataSource数据源 + * @return Supplier + */ + private Supplier getMigrateHandler( + TableDescription td, + Integer indexInternal, + HikariDataSource sds, + HikariDataSource tds) { + MigrationHandler instance = MigrationHandler.createInstance(td, properties, indexInternal, sds, tds); + migrationHandlers.add(instance); + return instance; + } + + /** + * 异常处理函数方法 + * + * @param td 表描述上下文 + * @return Function + */ + private Function getExceptHandler( + TableDescription td, + Integer indexInternal, AtomicInteger numberOfFailures) { + return (e) -> { + log.error("Error migration for table: {}.{}, error message:", td.getSchemaName(), + td.getTableName(), e); + + numberOfFailures.getAndIncrement(); + + SourceDataSourceProperties sourceProperties = properties.getSource().get(indexInternal); + String targetSchemaName = properties.getTarget().getTargetSchema(); + String targetTableName = PatterNameUtils.getFinalName(td.getTableName(), sourceProperties.getRegexTableMapper()); + DbSwitchTableResult dbSwitchTableResult = new DbSwitchTableResult(); + dbSwitchTableResult.setSourceSchemaName(td.getSchemaName()); + dbSwitchTableResult.setSourceTableName(td.getTableName()); + dbSwitchTableResult.setTargetSchemaName(targetSchemaName); + dbSwitchTableResult.setTargetTableName(targetTableName); 
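+				// A failed table still yields a DbSwitchTableResult: ifSuccess is set to
+				// false and errorMsg carries the stack trace, so the tableResultHandler
+				// callback records failures the same way it records successes.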
+ dbSwitchTableResult.getIfSuccess().set(false); + dbSwitchTableResult.setSuccessMsg(null); + dbSwitchTableResult.setErrorMsg(LogUtil.getError(e)); + return dbSwitchTableResult; + }; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/util/BytesUnitUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/util/BytesUnitUtils.java new file mode 100644 index 0000000..a563c74 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/util/BytesUnitUtils.java @@ -0,0 +1,51 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.util; + +import java.text.DecimalFormat; + +/** + * 字节单位转换 + * + * @author jrl + */ +public final class BytesUnitUtils { + + public static String bytesSizeToHuman(long size) { + /** 定义GB的计算常量 */ + long GB = 1024 * 1024 * 1024; + /** 定义MB的计算常量 */ + long MB = 1024 * 1024; + /** 定义KB的计算常量 */ + long KB = 1024; + + /** 格式化小数 */ + DecimalFormat df = new DecimalFormat("0.00"); + String resultSize = "0.00"; + + if (size / GB >= 1) { + //如果当前Byte的值大于等于1GB + resultSize = df.format(size / (float) GB) + "GB "; + } else if (size / MB >= 1) { + //如果当前Byte的值大于等于1MB + resultSize = df.format(size / (float) MB) + "MB "; + } else if (size / KB >= 1) { + //如果当前Byte的值大于等于1KB + resultSize = df.format(size / (float) KB) + "KB "; + } else { + resultSize = size + "B "; + } + + return resultSize; + } + + private BytesUnitUtils() { + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/util/DataSourceUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/util/DataSourceUtils.java new file mode 100644 index 0000000..746e6e0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/data/util/DataSourceUtils.java @@ -0,0 +1,114 @@ +// Copyright tang. All rights reserved. 
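`DataSourceUtils` below is the only place connection pools are created: one named Hikari pool per side, a driver-specific connection test query (Oracle and DB2 have no bare `SELECT 1`), source-side `remarksReporting`/`useInformationSchema` properties so table and column comments can be read, and a Greenplum probe that disables the session optimizer. A usage sketch combining it with the two property classes from earlier in this commit — URLs and credentials are placeholders, and the JDBC drivers must be on the classpath:

```java
import com.zaxxer.hikari.HikariDataSource;
import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties;
import srt.cloud.framework.dbswitch.data.entity.TargetDataSourceProperties;
import srt.cloud.framework.dbswitch.data.util.DataSourceUtils;

public class PoolUsageSketch {

  public static void main(String[] args) {
    SourceDataSourceProperties src = new SourceDataSourceProperties();
    src.setUrl("jdbc:mysql://127.0.0.1:3306/demo?useSSL=false");
    src.setDriverClassName("com.mysql.cj.jdbc.Driver");
    src.setUsername("root");
    src.setPassword("root");

    TargetDataSourceProperties dst = new TargetDataSourceProperties();
    dst.setUrl("jdbc:postgresql://127.0.0.1:5432/ods");
    dst.setDriverClassName("org.postgresql.Driver");
    dst.setUsername("postgres");
    dst.setPassword("postgres");

    // try-with-resources, mirroring MigrationService.run(): the pools are
    // closed when migration for this source finishes.
    try (HikariDataSource s = DataSourceUtils.createSourceDataSource(src);
         HikariDataSource t = DataSourceUtils.createTargetDataSource(dst)) {
      System.out.println(s.getPoolName() + " / " + t.getPoolName());
    }
  }
}
```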
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.data.util; + +import com.zaxxer.hikari.HikariDataSource; +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.data.entity.SourceDataSourceProperties; +import srt.cloud.framework.dbswitch.data.entity.TargetDataSourceProperties; +import org.springframework.jdbc.core.JdbcTemplate; + +import java.util.Objects; + +/** + * DataSource工具类 + * + * @author jrl + */ +@Slf4j +public final class DataSourceUtils { + + /** + * 创建于指定数据库连接描述符的连接池 + * + * @param properties 数据库连接描述符 + * @return HikariDataSource连接池 + */ + public static HikariDataSource createSourceDataSource(SourceDataSourceProperties properties) { + HikariDataSource ds = new HikariDataSource(); + //设置可以获取注释信息 + ds.addDataSourceProperty("remarksReporting",true); + ds.addDataSourceProperty("useInformationSchema",true); + ds.setPoolName("The_Source_DB_Connection"); + ds.setJdbcUrl(properties.getUrl()); + ds.setDriverClassName(properties.getDriverClassName()); + ds.setUsername(properties.getUsername()); + ds.setPassword(properties.getPassword()); + if (properties.getDriverClassName().contains("oracle")) { + ds.setConnectionTestQuery("SELECT 'Hello' from DUAL"); + // https://blog.csdn.net/qq_20960159/article/details/78593936 + System.getProperties().setProperty("oracle.jdbc.J2EE13Compliant", "true"); + } else if (properties.getDriverClassName().contains("db2")) { + ds.setConnectionTestQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1"); + } else { + ds.setConnectionTestQuery("SELECT 1"); + } + ds.setMaximumPoolSize(8); + ds.setMinimumIdle(5); + ds.setMaxLifetime(properties.getMaxLifeTime()); + ds.setConnectionTimeout(properties.getConnectionTimeout()); + ds.setIdleTimeout(60000); + + return ds; + } + + /** + * 创建于指定数据库连接描述符的连接池 + * + * @param properties 数据库连接描述符 + * @return HikariDataSource连接池 + */ + public static HikariDataSource createTargetDataSource(TargetDataSourceProperties properties) { + if (properties.getUrl().trim().startsWith("jdbc:hive2://")) { + throw new UnsupportedOperationException("Unsupported hive as target datasource!!!"); + } + + HikariDataSource ds = new HikariDataSource(); + ds.setPoolName("The_Target_DB_Connection"); + ds.setJdbcUrl(properties.getUrl()); + ds.setDriverClassName(properties.getDriverClassName()); + ds.setUsername(properties.getUsername()); + ds.setPassword(properties.getPassword()); + if (properties.getDriverClassName().contains("oracle")) { + ds.setConnectionTestQuery("SELECT 'Hello' from DUAL"); + } else if (properties.getDriverClassName().contains("db2")) { + ds.setConnectionTestQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1"); + } else { + ds.setConnectionTestQuery("SELECT 1"); + } + ds.setMaximumPoolSize(8); + ds.setMinimumIdle(5); + ds.setMaxLifetime(properties.getMaxLifeTime()); + ds.setConnectionTimeout(properties.getConnectionTimeout()); + ds.setIdleTimeout(60000); + + // 如果是Greenplum数据库,这里需要关闭会话的查询优化器 + if (properties.getDriverClassName().contains("postgresql")) { + org.springframework.jdbc.datasource.DriverManagerDataSource dataSource = new org.springframework.jdbc.datasource.DriverManagerDataSource(); + dataSource.setDriverClassName(properties.getDriverClassName()); + dataSource.setUrl(properties.getUrl()); + dataSource.setUsername(properties.getUsername()); + dataSource.setPassword(properties.getPassword()); 
+ JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource); + String versionString = jdbcTemplate.queryForObject("SELECT version()", String.class); + if (Objects.nonNull(versionString) && versionString.contains("Greenplum")) { + log.info( + "#### Target database is Greenplum Cluster, Close Optimizer now: set optimizer to 'off' "); + ds.setConnectionInitSql("set optimizer to 'off'"); + } + } + + return ds; + } + + private DataSourceUtils() { + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/ChangeCalculatorService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/ChangeCalculatorService.java new file mode 100644 index 0000000..7f224e4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/ChangeCalculatorService.java @@ -0,0 +1,586 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbchange; + +import lombok.NonNull; +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.common.util.JdbcTypesUtils; +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.core.service.IMetaDataByDatasourceService; +import srt.cloud.framework.dbswitch.core.service.impl.MetaDataByDataSourceServiceImpl; +import srt.cloud.framework.dbswitch.dbcommon.constant.Constants; +import srt.cloud.framework.dbswitch.dbcommon.database.DatabaseOperatorFactory; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.springframework.util.CollectionUtils; + +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.text.DateFormat; +import java.text.SimpleDateFormat; +import java.time.LocalDate; +import java.time.LocalDateTime; +import java.time.format.DateTimeFormatter; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.TimeZone; +import java.util.stream.Collectors; + +/** + * 数据变化量计算核心类 + * + * @author jrl + */ +@Slf4j +public final class ChangeCalculatorService implements IDatabaseChangeCaculator { + + /** + * SimpleDateFormat不是线程安全的,所以需要放在ThreadLocal中来保证安全性 + */ + private static final ThreadLocal DATE_FORMAT = ThreadLocal.withInitial(() -> { + SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd"); + simpleDateFormat.setTimeZone(TimeZone.getTimeZone("Asia/Shanghai")); + return simpleDateFormat; + }); + + private static final ThreadLocal TIMESTAMP_FORMAT = ThreadLocal.withInitial(() -> { + SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); + simpleDateFormat.setTimeZone(TimeZone.getTimeZone("Asia/Shanghai")); + return simpleDateFormat; + }); + + private static final ThreadLocal TIME_FORMAT = ThreadLocal.withInitial(() -> { + SimpleDateFormat simpleDateFormat = new SimpleDateFormat("HH:mm:ss"); + simpleDateFormat.setTimeZone(TimeZone.getTimeZone("Asia/Shanghai")); + return simpleDateFormat; + }); + + private static final DateTimeFormatter DATE_FORMAT_NEW = DateTimeFormatter.ofPattern("yyyy-MM-dd"); + private static final DateTimeFormatter 
TIMESTAMP_FORMAT_NEW = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"); + private static final DateTimeFormatter TIME_FORMAT_NEW = DateTimeFormatter.ofPattern("HH:mm:ss"); + /** + * 是否记录不变化的记录 + */ + private boolean recordIdentical; + + /** + * 是否进行jdbc数据type检查 + */ + private boolean checkJdbcType; + + /** + * 批量读取数据的行数大小 + */ + private int queryFetchSize; + + public ChangeCalculatorService() { + this(false, true); + } + + public ChangeCalculatorService(boolean recordIdentical, boolean checkJdbcType) { + this.recordIdentical = recordIdentical; + this.checkJdbcType = checkJdbcType; + this.queryFetchSize = Constants.DEFAULT_FETCH_SIZE; + } + + @Override + public boolean isRecordIdentical() { + return this.recordIdentical; + } + + @Override + public void setRecordIdentical(boolean recordOrNot) { + this.recordIdentical = recordOrNot; + } + + @Override + public boolean isCheckJdbcType() { + return this.checkJdbcType; + } + + @Override + public void setCheckJdbcType(boolean checkOrNot) { + this.checkJdbcType = checkOrNot; + } + + @Override + public int getFetchSize() { + return this.queryFetchSize; + } + + @Override + public void setFetchSize(int size) { + if (size < Constants.MINIMUM_FETCH_SIZE) { + throw new IllegalArgumentException( + "设置的批量处理行数的大小fetchSize不得小于" + Constants.MINIMUM_FETCH_SIZE); + } + + this.queryFetchSize = size; + } + + /** + * 变化量计算函数 + *

+   * 说明 : old 后缀的为目标端; new 后缀的为来源端;
+   *

+ * 数据由 new 向 old 方向 同步。 + * + * @param task 任务描述实体对象 + * @param handler 计算结果回调处理器 + */ + @Override + public void executeCalculate(@NonNull TaskParamEntity task, + @NonNull IDatabaseRowHandler handler) { + if (log.isDebugEnabled()) { + log.debug("###### Begin execute calculate table CDC data now"); + } + + Map columnsMap = task.getColumnsMap(); + boolean useOwnFieldsColumns = !CollectionUtils.isEmpty(task.getFieldColumns()); + + // 检查新旧两张表的主键字段与比较字段 + IMetaDataByDatasourceService + oldMd = new MetaDataByDataSourceServiceImpl(task.getOldDataSource(), task.getOldProductType()); + IMetaDataByDatasourceService + newMd = new MetaDataByDataSourceServiceImpl(task.getNewDataSource(), task.getNewProductType()); + List fieldsPrimaryKeyOld = oldMd + .queryTablePrimaryKeys(task.getOldSchemaName(), task.getOldTableName()); + List fieldsAllColumnOld = oldMd + .queryTableColumnName(task.getOldSchemaName(), task.getOldTableName()); + List fieldsPrimaryKeyNew = newMd + .queryTablePrimaryKeys(task.getNewSchemaName(), task.getNewTableName()); + List fieldsAllColumnNew = newMd + .queryTableColumnName(task.getNewSchemaName(), task.getNewTableName()); + List fieldsMappedPrimaryKeyNew = fieldsPrimaryKeyNew.stream() + .map(s -> columnsMap.getOrDefault(s, s)) + .collect(Collectors.toList()); + List fieldsMappedAllColumnNew = fieldsAllColumnNew.stream() + .map(s -> columnsMap.getOrDefault(s, s)) + .collect(Collectors.toList()); + + if (fieldsPrimaryKeyOld.isEmpty() || fieldsPrimaryKeyNew.isEmpty()) { + throw new RuntimeException("计算变化量的表中存在无主键的表"); + } + + if (!isListEqual(fieldsPrimaryKeyOld, fieldsMappedPrimaryKeyNew)) { + throw new RuntimeException("两个表的主键映射关系不匹配"); + } + + if (useOwnFieldsColumns) { + // 如果自己配置了字段列表,判断子集关系 + List mappedFieldColumns = task.getFieldColumns().stream() + .map(s -> columnsMap.getOrDefault(s, s)) + .collect(Collectors.toList()); + if (!fieldsAllColumnNew.containsAll(task.getFieldColumns()) + || !fieldsAllColumnOld.containsAll(mappedFieldColumns)) { + throw new RuntimeException("指定的字段列不完全在两个表中存在"); + } + boolean same = (mappedFieldColumns.containsAll(fieldsPrimaryKeyOld) + && task.getFieldColumns().containsAll(fieldsPrimaryKeyNew)); + if (!same) { + throw new RuntimeException("提供的比较字段中未包含主键"); + } + same = (fieldsAllColumnOld.containsAll(mappedFieldColumns) + && fieldsAllColumnNew.containsAll(task.getFieldColumns())); + if (!same) { + throw new RuntimeException("提供的比较字段中存在表中不存在(映射关系对不上)的字段"); + } + } else { + boolean same = (fieldsMappedAllColumnNew.containsAll(fieldsPrimaryKeyOld) + && fieldsAllColumnOld.containsAll(fieldsMappedAllColumnNew)); + if (!same) { + throw new RuntimeException("两个表的字段映射关系不匹配"); + } + } + + // 计算除主键外的比较字段 + List fieldsOfCompareValue = new ArrayList<>(); + if (useOwnFieldsColumns) { + fieldsOfCompareValue.addAll(task.getFieldColumns()); + } else { + fieldsOfCompareValue.addAll(fieldsAllColumnNew); + } + fieldsOfCompareValue.removeAll(fieldsPrimaryKeyNew); + + // 构造查询列字段 + List queryFieldColumn; + List mappedQueryFieldColumn; + if (useOwnFieldsColumns) { + queryFieldColumn = task.getFieldColumns(); + } else { + queryFieldColumn = fieldsAllColumnOld; + } + mappedQueryFieldColumn = queryFieldColumn.stream() + .map(s -> columnsMap.getOrDefault(s, s)) + .collect(Collectors.toList()); + + StatementResultSet rsold = null; + StatementResultSet rsnew = null; + + try { + // 提取新旧两表数据的结果集(按主键排序后的) + IDatabaseOperator oldQuery = DatabaseOperatorFactory + .createDatabaseOperator(task.getOldDataSource(), task.getOldProductType()); + 
oldQuery.setFetchSize(this.queryFetchSize); + IDatabaseOperator newQuery = DatabaseOperatorFactory + .createDatabaseOperator(task.getNewDataSource(), task.getNewProductType()); + newQuery.setFetchSize(this.queryFetchSize); + + if (log.isDebugEnabled()) { + log.debug("###### Query data from two table now"); + } + + rsold = oldQuery + .queryTableData(task.getOldSchemaName(), task.getOldTableName(), + mappedQueryFieldColumn, fieldsMappedPrimaryKeyNew); + rsnew = newQuery + .queryTableData(task.getNewSchemaName(), task.getNewTableName(), + queryFieldColumn, fieldsPrimaryKeyNew); + ResultSetMetaData metaData = rsnew.getResultset().getMetaData(); + + if (log.isDebugEnabled()) { + log.debug("###### Check data validate now"); + } + + // 检查结果集源信息是否一直 + int oldcnt = rsold.getResultset().getMetaData().getColumnCount(); + int newcnt = rsnew.getResultset().getMetaData().getColumnCount(); + if (oldcnt != newcnt) { + throw new RuntimeException(String.format("两个表的字段总个数不相等,即:%d!=%d", oldcnt, newcnt)); + } else { + for (int k = 1; k < metaData.getColumnCount(); ++k) { + String key1 = rsnew.getResultset().getMetaData().getColumnLabel(k); + if (null == key1) { + key1 = rsnew.getResultset().getMetaData().getColumnName(k); + } + + String key2 = rsold.getResultset().getMetaData().getColumnLabel(k); + if (null == key2) { + key2 = rsold.getResultset().getMetaData().getColumnName(k); + } + + if (checkJdbcType) { + int type1 = rsold.getResultset().getMetaData().getColumnType(k); + int type2 = rsnew.getResultset().getMetaData().getColumnType(k); + if (type1 != type2) { + throw new RuntimeException(String.format("字段 [name=%s -> %s] 的数据类型不同,因 %s!=%s !", + key1, key2, + JdbcTypesUtils.resolveTypeName(type1), JdbcTypesUtils.resolveTypeName(type2))); + } + } + + } + } + + // 计算主键字段序列在结果集中的索引号 + int[] keyNumbers = new int[fieldsPrimaryKeyNew.size()]; + for (int i = 0; i < keyNumbers.length; ++i) { + String fn = fieldsPrimaryKeyNew.get(i); + keyNumbers[i] = getIndexOfField(fn, metaData); + } + + // 计算比较(非主键)字段序列在结果集中的索引号 + int[] valNumbers = new int[fieldsOfCompareValue.size()]; + for (int i = 0; i < valNumbers.length; ++i) { + String fn = fieldsOfCompareValue.get(i); + valNumbers[i] = getIndexOfField(fn, metaData); + } + + // 初始化计算结果数据字段列信息 + List targetColumns = new ArrayList<>(); + for (int k = 1; k <= metaData.getColumnCount(); ++k) { + String key = metaData.getColumnLabel(k); + if (null == key) { + key = metaData.getColumnName(k); + } + targetColumns.add(columnsMap.getOrDefault(key, key)); + } + + if (log.isDebugEnabled()) { + log.debug("###### Enter CDC calculate now"); + } + + // 进入核心比较计算算法区域 + RecordChangeTypeEnum flagField = null; + Object[] outputRow; + Object[] one = getRowData(rsold.getResultset()); + Object[] two = getRowData(rsnew.getResultset()); + while (true) { + if (one == null && two == null) { + break; + } else if (one == null && two != null) { + flagField = RecordChangeTypeEnum.VALUE_INSERT; + outputRow = two; + two = getRowData(rsnew.getResultset()); + } else if (one != null && two == null) { + flagField = RecordChangeTypeEnum.VALUE_DELETED; + outputRow = one; + one = getRowData(rsold.getResultset()); + } else { + int compare = this.compare(one, two, keyNumbers, metaData); + if (0 == compare) { + int compareValues = this.compare(one, two, valNumbers, metaData); + if (compareValues == 0) { + flagField = RecordChangeTypeEnum.VALUE_IDENTICAL; + outputRow = one; + } else { + flagField = RecordChangeTypeEnum.VALUE_CHANGED; + outputRow = two; + } + + one = getRowData(rsold.getResultset()); + two = 
getRowData(rsnew.getResultset()); + } else { + if (compare < 0) { + flagField = RecordChangeTypeEnum.VALUE_DELETED; + outputRow = one; + one = getRowData(rsold.getResultset()); + } else { + flagField = RecordChangeTypeEnum.VALUE_INSERT; + outputRow = two; + two = getRowData(rsnew.getResultset()); + } + } + } + + if (!this.recordIdentical && RecordChangeTypeEnum.VALUE_IDENTICAL == flagField) { + continue; + } + + // 这里对计算的单条记录结果进行处理 + handler.handle(Collections.unmodifiableList(targetColumns), outputRow, flagField); + } + + if (log.isDebugEnabled()) { + log.debug("###### Calculate CDC Over now"); + } + + // 结束返回前的回调 + handler.destroy(Collections.unmodifiableList(targetColumns)); + + } catch (SQLException e) { + throw new RuntimeException(e); + } finally { + if (null != rsold) { + rsold.close(); + } + if (null != rsnew) { + rsnew.close(); + } + } + + } + + private boolean isListEqual(List left, List right) { + return left.containsAll(right) && right.containsAll(left); + } + + /** + * 获取字段的索引号 + * + * @param key 字段名 + * @param metaData 结果集的元信息 + * @return 字段的索引号 + * @throws SQLException + */ + private int getIndexOfField(String key, ResultSetMetaData metaData) throws SQLException { + for (int k = 1; k <= metaData.getColumnCount(); ++k) { + String fieldName = metaData.getColumnLabel(k); + if (null == fieldName) { + fieldName = metaData.getColumnName(k); + } + + if (fieldName.equals(key)) { + return k - 1; + } + } + + return -1; + } + + /** + * 记录比较 + * + * @param obj1 记录1 + * @param obj2 记录2 + * @param fieldnrs 待比较的字段索引号 + * @param metaData 记录集的元信息 + * @return 比较的结果:0,-1,1 + * @throws SQLException + */ + private int compare(Object[] obj1, Object[] obj2, int[] fieldnrs, ResultSetMetaData metaData) + throws SQLException { + if (obj1.length != obj2.length) { + throw new RuntimeException("Invalid compare object list !"); + } + + for (int fieldnr : fieldnrs) { + int jdbcType = metaData.getColumnType(fieldnr + 1); + Object o1 = obj1[fieldnr]; + Object o2 = obj2[fieldnr]; + + int cmp = typeCompare(jdbcType, o1, o2); + if (cmp != 0) { + return cmp; + } + } + + return 0; + } + + /** + * 字段值对象比较,将对象转换为字节数组来比较实现 + * + * @param type 字段的JDBC数据类型 + * @param o1 对象1 + * @param o2 对象2 + * @return 0为相等,-1为小于,1为大于 + */ + private int typeCompare(int type, Object o1, Object o2) { + boolean n1 = (o1 == null); + boolean n2 = (o2 == null); + if (n1 && !n2) { + return -1; + } + if (!n1 && n2) { + return 1; + } + if (n1 && n2) { + return 0; + } + + /** + *

+ * 这里要比较的两个对象o1与o2可能类型不同,但值相同,例如: Integer o1=12,Long o2=12; + *

+ *

+ * 但是这种不属于同一类的比较情况不应出现: String o1="12", Integer o2=12; + *

+ */ + if (JdbcTypesUtils.isString(type)) { + String s1 = TypeConvertUtils.castToString(o1); + String s2 = TypeConvertUtils.castToString(o2); + return s1.compareTo(s2); + } else if (JdbcTypesUtils.isNumeric(type) && o1 instanceof Number + && o2 instanceof Number) { + Number s1 = (Number) o1; + Number s2 = (Number) o2; + return Double.compare(s1.doubleValue(), s2.doubleValue()); + } else if (JdbcTypesUtils.isInteger(type) && o1 instanceof Number + && o2 instanceof Number) { + Number s1 = (Number) o1; + Number s2 = (Number) o2; + return Long.compare(s1.longValue(), s2.longValue()); + } else if (JdbcTypesUtils.isDateTime(type)) { + if (o1 instanceof java.sql.Time && o2 instanceof java.sql.Time) { + java.sql.Time t1 = (java.sql.Time) o1; + java.sql.Time t2 = (java.sql.Time) o2; + return t1.compareTo(t2); + } else if (o1 instanceof java.sql.Timestamp && o2 instanceof java.sql.Timestamp) { + java.sql.Timestamp t1 = (java.sql.Timestamp) o1; + java.sql.Timestamp t2 = (java.sql.Timestamp) o2; + return t1.compareTo(t2); + } else if (o1 instanceof java.sql.Date && o2 instanceof java.sql.Date) { + java.sql.Date t1 = (java.sql.Date) o1; + java.sql.Date t2 = (java.sql.Date) o2; + return t1.compareTo(t2); + } else if (o1 instanceof LocalDateTime && o2 instanceof LocalDateTime) { + LocalDateTime t1 = (LocalDateTime) o1; + LocalDateTime t2 = (LocalDateTime) o2; + return t1.compareTo(t2); + } else { + String s1 = getDateStr(o1); + String s2 = getDateStr(o2); + return s1.compareTo(s2); + } + } else { + try { + return compareTo( + TypeConvertUtils.castToByteArray(o1), + TypeConvertUtils.castToByteArray(o2) + ); + } catch (Exception e) { + log.warn("CDC compare field value failed, return 0 instead,{}", e.getMessage()); + return 0; + } + } + } + + private String getDateStr(Object o) { + String s; + if (o instanceof java.sql.Time) { + s = TIME_FORMAT.get().format(o); + } else if (o instanceof java.sql.Timestamp) { + s = TIMESTAMP_FORMAT.get().format(o); + }else if (o instanceof java.sql.Date) { + s = TIMESTAMP_FORMAT.get().format(o); + } else if (o instanceof LocalDateTime) { + LocalDateTime t = (LocalDateTime) o; + s = TIMESTAMP_FORMAT_NEW.format(t); + } else if (o instanceof LocalDate) { + LocalDate t = (LocalDate) o; + s = TIMESTAMP_FORMAT_NEW.format(t); + } else { + s = String.valueOf(o); + } + return s; + } + + /** + * 字节数组的比较 + * + * @param s1 字节数组1 + * @param s2 字节数组2 + * @return 0为相等,-1为小于,1为大于 + */ + public int compareTo(byte[] s1, byte[] s2) { + int len1 = s1.length; + int len2 = s2.length; + int lim = Math.min(len1, len2); + byte[] v1 = s1; + byte[] v2 = s2; + + int k = 0; + while (k < lim) { + byte c1 = v1[k]; + byte c2 = v2[k]; + if (c1 != c2) { + return c1 - c2; + } + k++; + } + return len1 - len2; + } + + /** + * 从结果集中取出一条记录 + * + * @param rs 记录集 + * @return 一条记录,到记录结尾时返回null + * @throws SQLException + */ + private Object[] getRowData(ResultSet rs) throws SQLException { + ResultSetMetaData metaData = rs.getMetaData(); + Object[] rowData = null; + + if (rs.next()) { + rowData = new Object[metaData.getColumnCount()]; + for (int j = 1; j <= metaData.getColumnCount(); ++j) { + rowData[j - 1] = rs.getObject(j); + } + } + + return rowData; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/IDatabaseChangeCaculator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/IDatabaseChangeCaculator.java new file mode 100644 index 0000000..f772ea5 --- /dev/null +++ 
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/IDatabaseChangeCaculator.java @@ -0,0 +1,68 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbchange; + +/** + * 变化量计算器接口定义 + * + * @author jrl + */ +public interface IDatabaseChangeCaculator { + + /** + * 是否记录无变化的数据 + * + * @return 是否记录无变化的数据 + */ + boolean isRecordIdentical(); + + /** + * 设置是否记录无变化的数据 + * + * @param recordOrNot 是否记录无变化的数据 + */ + void setRecordIdentical(boolean recordOrNot); + + /** + * 是否进行Jdbc的数据类型检查 + * + * @return 是否进行检查 + */ + boolean isCheckJdbcType(); + + /** + * 设置是否进行Jdbc的数据类型检查 + * + * @param checkOrNot 是否进行检查 + */ + void setCheckJdbcType(boolean checkOrNot); + + /** + * 获取JDBC驱动批量读取数据的行数大小 + * + * @return 批量行数大小 + */ + int getFetchSize(); + + /** + * 设置JDBC驱动批量读取数据的行数大小 + * + * @param size 批量行数大小 + */ + void setFetchSize(int size); + + /** + * 执行变化量计算任务 + * + * @param task 任务描述实体对象 + * @param handler 计算结果回调处理器 + */ + void executeCalculate(TaskParamEntity task, IDatabaseRowHandler handler); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/IDatabaseRowHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/IDatabaseRowHandler.java new file mode 100644 index 0000000..7229a21 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/IDatabaseRowHandler.java @@ -0,0 +1,36 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbchange; + +import java.util.List; + +/** + * 计算结果行记录处理器 + * + * @author jrl + */ +public interface IDatabaseRowHandler { + + /** + * 行数据处理 + * + * @param fields 字段名称列表,该列表只读 + * @param record 一条数据记实录 + * @param flag 数据变化状态 + */ + void handle(List fields, Object[] record, RecordChangeTypeEnum flag); + + /** + * 计算结束通知 + * + * @param fields 字段名称列表,该列表只读 + */ + void destroy(List fields); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/RecordChangeTypeEnum.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/RecordChangeTypeEnum.java new file mode 100644 index 0000000..35ebe37 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/RecordChangeTypeEnum.java @@ -0,0 +1,61 @@ +// Copyright tang. All rights reserved. 
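A side note on the byte[] fallback comparison earlier in this file: it subtracts *signed* Java bytes, so values 0x80 through 0xFF sort below the ASCII range, which differs from an unsigned lexicographic order. A minimal demonstration (JDK 9+ for `Arrays.compareUnsigned`; everything here is illustrative):

```java
import java.util.Arrays;

public class SignedByteCompareDemo {
    public static void main(String[] args) {
        byte[] a = {(byte) 0x80}; // -128 when read as a signed byte
        byte[] b = {(byte) 0x01};

        // the compareTo(byte[], byte[]) above effectively computes a[0] - b[0]:
        System.out.println(a[0] - b[0]);                  // -129 -> a sorts before b
        // unsigned lexicographic order treats 0x80 as 128:
        System.out.println(Arrays.compareUnsigned(a, b)); // positive -> a sorts after b
    }
}
```

Either ordering is internally consistent; it only matters if the result is matched against an ordering produced elsewhere, such as a database's binary collation, which typically compares bytes as unsigned.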
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbchange; + +/** + * 记录变化状态枚举类 + * + * @author jrl + */ +public enum RecordChangeTypeEnum { + /** + * 未变标识 + */ + VALUE_IDENTICAL(0, "identical"), + + /** + * 更新标识 + */ + VALUE_CHANGED(1, "update"), + + /** + * 插入标识 + */ + VALUE_INSERT(2, "insert"), + + /** + * 删除标识 + */ + VALUE_DELETED(3, "delete"); + + /** + * index + */ + private Integer index; + + /** + * 状态标记 + */ + private String status; + + RecordChangeTypeEnum(int idx, String flag) { + this.index = idx; + this.status = flag; + } + + public int getIndex() { + return index; + } + + public String getStatus() { + return this.status; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/TaskParamEntity.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/TaskParamEntity.java new file mode 100644 index 0000000..e2652bf --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbchange/TaskParamEntity.java @@ -0,0 +1,83 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbchange; + +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.NonNull; +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * 任务参数实体类定义 + * + * @author jrl + */ +@Data +@Builder +@AllArgsConstructor +public class TaskParamEntity { + + + private ProductTypeEnum oldProductType; + /** + * 老表的数据源 + */ + @NonNull + private DataSource oldDataSource; + + /** + * 老表的schema名 + */ + @NonNull + private String oldSchemaName; + + /** + * 老表的table名 + */ + @NonNull + private String oldTableName; + + private ProductTypeEnum newProductType; + /** + * 新表的数据源 + */ + @NonNull + private DataSource newDataSource; + + /** + * 新表的schema名 + */ + @NonNull + private String newSchemaName; + + /** + * 新表的table名 + */ + @NonNull + private String newTableName; + + /** + * 字段列 + */ + private List fieldColumns; + + /** + * 字段名映射关系 + */ + @NonNull + @Builder.Default + private Map columnsMap = Collections.EMPTY_MAP; +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/constant/Constants.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/constant/Constants.java new file mode 100644 index 0000000..e26d451 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/constant/Constants.java @@ -0,0 +1,33 @@ +// Copyright tang. All rights reserved. 
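With `TaskParamEntity` and `RecordChangeTypeEnum` defined, here is a minimal sketch of how the change-calculation pieces are meant to compose. `ChangeCaculatorService` stands in for whatever concrete `IDatabaseChangeCaculator` the module ships, the handler's `List` parameters are assumed to be `List<String>` field names, and all schema/table names are invented:

```java
import java.util.List;
import javax.sql.DataSource;

public class ChangeCalcSketch {
    static void diffTables(DataSource oldDs, DataSource newDs) {
        TaskParamEntity task = TaskParamEntity.builder()
                .oldDataSource(oldDs).oldSchemaName("ods").oldTableName("t_user")
                .newDataSource(newDs).newSchemaName("dwd").newTableName("t_user")
                .build();

        IDatabaseChangeCaculator calculator = new ChangeCaculatorService(); // hypothetical impl
        calculator.setFetchSize(1000);
        calculator.setRecordIdentical(false); // do not report unchanged rows

        calculator.executeCalculate(task, new IDatabaseRowHandler() {
            @Override
            public void handle(List<String> fields, Object[] record, RecordChangeTypeEnum flag) {
                switch (flag) {
                    case VALUE_INSERT:  /* buffer row for INSERT */ break;
                    case VALUE_CHANGED: /* buffer row for UPDATE */ break;
                    case VALUE_DELETED: /* buffer row for DELETE */ break;
                    default:            /* identical, ignored */    break;
                }
            }

            @Override
            public void destroy(List<String> fields) {
                // flush any rows still buffered
            }
        });
    }
}
```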
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.constant; + +/** + * 常量值定义 + * + * @author jrl + */ +public final class Constants { + + /** + * 默认的JDBC数据查询超时时间(单位:秒) + */ + public static Integer DEFAULT_QUERY_TIMEOUT_SECONDS = 1 * 60 * 60; + + /** + * 默认的fetch-size的值 + */ + public static int DEFAULT_FETCH_SIZE = 1000; + + /** + * fetch-size的最小有效值 + */ + public static int MINIMUM_FETCH_SIZE = 100; +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/AbstractDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/AbstractDatabaseOperator.java new file mode 100644 index 0000000..2a1a28d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/AbstractDatabaseOperator.java @@ -0,0 +1,112 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database; + +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import srt.cloud.framework.dbswitch.dbcommon.constant.Constants; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.List; +import java.util.Objects; + +/** + * 数据读取抽象基类 + * + * @author jrl + */ +@Slf4j +public abstract class AbstractDatabaseOperator implements IDatabaseOperator { + + protected DataSource dataSource; + + protected int fetchSize; + + public AbstractDatabaseOperator(DataSource dataSource) { + this.dataSource = Objects.requireNonNull(dataSource, "数据源非法,为null"); + this.fetchSize = Constants.DEFAULT_FETCH_SIZE; + } + + @Override + public DataSource getDataSource() { + return this.dataSource; + } + + @Override + public int getFetchSize() { + return this.fetchSize; + } + + @Override + public StatementResultSet queryTableData(String sql) { + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public void setFetchSize(int size) { + if (size < Constants.MINIMUM_FETCH_SIZE) { + throw new IllegalArgumentException( + "设置的批量处理行数的大小fetchSize不得小于" + Constants.MINIMUM_FETCH_SIZE); + } + + this.fetchSize = size; + } + + /** + * 已经指定的查询SQL语句查询数据结果集 + * + * @param sql 查询的SQL语句 + * @param fetchSize 批处理大小 + * @return 结果集包装对象 + */ + protected final StatementResultSet selectTableData(String sql, int fetchSize) { + if (log.isDebugEnabled()) { + log.debug("Query table data sql :{}", sql); + } + + try { + StatementResultSet srs = new StatementResultSet(); + srs.setConnection(dataSource.getConnection()); + srs.setAutoCommit(srs.getConnection().getAutoCommit()); + srs.getConnection().setAutoCommit(false); + srs.setStatement(srs.getConnection() + .createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)); + srs.getStatement().setQueryTimeout(Constants.DEFAULT_QUERY_TIMEOUT_SECONDS); + srs.getStatement().setFetchSize(fetchSize); + 
srs.setResultset(srs.getStatement().executeQuery(sql)); + return srs; + } catch (Throwable t) { + throw new RuntimeException(t); + } + } + + /** + * 执行写SQL操作 + * + * @param sql 写SQL语句 + */ + protected final int executeSql(String sql) { + if (log.isDebugEnabled()) { + log.debug("Execute sql :{}", sql); + } + try (Connection connection = dataSource.getConnection(); + Statement st = connection.createStatement()) { + return st.executeUpdate(sql); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/DatabaseOperatorFactory.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/DatabaseOperatorFactory.java new file mode 100644 index 0000000..5461189 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/DatabaseOperatorFactory.java @@ -0,0 +1,96 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.DatabaseAwareUtils; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.DB2DatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.DmDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.DorisDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.GreenplumDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.HiveDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.KingbaseDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.MysqlDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.OracleDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.OscarDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.PostgreSqlDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.SqlServerDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.SqliteDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.impl.SybaseDatabaseOperator; + +import javax.sql.DataSource; +import java.util.HashMap; +import java.util.Map; +import java.util.function.Function; + +/** + * 数据库操作器构造工厂类 + * + * @author jrl + */ +public final class DatabaseOperatorFactory { + + private static final Map> DATABASE_OPERATOR_MAPPER + = new HashMap>() { + + private static final long serialVersionUID = -5278835613240515265L; + + { + put(ProductTypeEnum.MYSQL, MysqlDatabaseOperator::new); + put(ProductTypeEnum.MARIADB, MysqlDatabaseOperator::new); + put(ProductTypeEnum.ORACLE, OracleDatabaseOperator::new); + put(ProductTypeEnum.SQLSERVER, SqlServerDatabaseOperator::new); + put(ProductTypeEnum.SQLSERVER2000, SqlServerDatabaseOperator::new); + put(ProductTypeEnum.POSTGRESQL, PostgreSqlDatabaseOperator::new); + put(ProductTypeEnum.GREENPLUM, GreenplumDatabaseOperator::new); + put(ProductTypeEnum.DB2, DB2DatabaseOperator::new); + put(ProductTypeEnum.DM, DmDatabaseOperator::new); + put(ProductTypeEnum.SYBASE, 
SybaseDatabaseOperator::new); + put(ProductTypeEnum.KINGBASE, KingbaseDatabaseOperator::new); + put(ProductTypeEnum.OSCAR, OscarDatabaseOperator::new); + put(ProductTypeEnum.GBASE8A, MysqlDatabaseOperator::new); + put(ProductTypeEnum.HIVE, HiveDatabaseOperator::new); + put(ProductTypeEnum.SQLITE3, SqliteDatabaseOperator::new); + put(ProductTypeEnum.DORIS, DorisDatabaseOperator::new); + } + }; + + /** + * 根据数据源获取数据的读取操作器 + * + * @param dataSource 数据库源 + * @return 指定类型的数据库读取器 + */ + public static IDatabaseOperator createDatabaseOperator(DataSource dataSource) { + ProductTypeEnum type = DatabaseAwareUtils.getDatabaseTypeByDataSource(dataSource); + if (!DATABASE_OPERATOR_MAPPER.containsKey(type)) { + throw new RuntimeException( + String.format("[dbcommon] Unsupported database type (%s)", type)); + } + + return DATABASE_OPERATOR_MAPPER.get(type).apply(dataSource); + } + + /** + * 根据数据源获取数据的读取操作器 + * + * @param dataSource 数据库源 + * @return 指定类型的数据库读取器 + */ + public static IDatabaseOperator createDatabaseOperator(DataSource dataSource, ProductTypeEnum productType) { + if (!DATABASE_OPERATOR_MAPPER.containsKey(productType)) { + throw new RuntimeException( + String.format("[dbcommon] Unsupported database type (%s)", productType)); + } + + return DATABASE_OPERATOR_MAPPER.get(productType).apply(dataSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/IDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/IDatabaseOperator.java new file mode 100644 index 0000000..8dcb18b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/IDatabaseOperator.java @@ -0,0 +1,94 @@ +// Copyright tang. All rights reserved. 
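A hedged usage sketch for the factory and operator API above; the table and column names are invented:

```java
import java.sql.ResultSet;
import java.util.Arrays;
import javax.sql.DataSource;

public class OperatorSketch {
    static void scanTable(DataSource ds) throws Exception {
        IDatabaseOperator operator = DatabaseOperatorFactory.createDatabaseOperator(ds);
        operator.setFetchSize(5000); // must be >= Constants.MINIMUM_FETCH_SIZE (100)

        // ordered scan: useful when two sides must be merged primary-key by primary-key
        try (StatementResultSet srs = operator.queryTableData(
                "public", "t_order", Arrays.asList("id", "amount"), Arrays.asList("id"))) {
            ResultSet rs = srs.getResultset();
            while (rs.next()) {
                // consume rows; StatementResultSet.close() releases everything
            }
        }
    }
}
```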
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database; + +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; + +import javax.sql.DataSource; +import java.util.List; + +/** + * 数据库操作器接口定义 + * + * @author jrl + */ +public interface IDatabaseOperator { + + /** + * 获取数据源 + * + * @return 数据源 + */ + DataSource getDataSource(); + + /** + * 获取读取(fetch)数据的批次大小 + * + * @return 批次大小 + */ + int getFetchSize(); + + StatementResultSet queryTableData(String sql); + + /** + * 设置读取(fetch)数据的批次大小 + * + * @param size 批次大小 + */ + void setFetchSize(int size); + + /** + * 生成查询指定字段的select查询SQL语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param fields 字段列表 + * @return 查询指定字段的select查询SQL语句 + */ + String getSelectTableSql(String schemaName, String tableName, List fields); + + /** + * 获取指定schema下表的按主键有序的结果集 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param fields 字段列表 + * @param orders 排序字段列表 + * @return 结果集包装对象 + */ + StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders); + + /** + * 获取指定schema下表的结果集 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param fields 字段列表 + * @return 结果集包装对象 + */ + StatementResultSet queryTableData(String schemaName, String tableName, List fields); + + /** + * 清除指定表的所有数据 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + */ + void truncateTableData(String schemaName, String tableName); + + /** + * 删除指定物理表 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + */ + void dropTable(String schemaName, String tableName); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DB2DatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DB2DatabaseOperator.java new file mode 100644 index 0000000..486a902 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DB2DatabaseOperator.java @@ -0,0 +1,66 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import srt.cloud.framework.dbswitch.dbcommon.database.AbstractDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * DB2数据库实现类 + * + * @author jrl + */ +public class DB2DatabaseOperator extends AbstractDatabaseOperator implements IDatabaseOperator { + + public DB2DatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public String getSelectTableSql(String schemaName, String tableName, List fields) { + return String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" order by \"%s\" asc ", + StringUtils.join(fields, "\",\""), schemaName, tableName, + StringUtils.join(orders, "\",\"")); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, tableName); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public void truncateTableData(String schemaName, String tableName) { + String sql = String.format("TRUNCATE TABLE \"%s\".\"%s\" immediate ", schemaName, tableName); + this.executeSql(sql); + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE \"%s\".\"%s\" ", schemaName, tableName); + this.executeSql(sql); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DmDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DmDatabaseOperator.java new file mode 100644 index 0000000..0d1963e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DmDatabaseOperator.java @@ -0,0 +1,25 @@ +// Copyright tang. All rights reserved. 
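For a concrete picture of what the DB2 operator above emits (identifiers and the `db2DataSource` variable are illustrative):

```java
IDatabaseOperator op = new DB2DatabaseOperator(db2DataSource); // db2DataSource assumed
String sql = op.getSelectTableSql("APP", "ORDERS", Arrays.asList("ID", "AMOUNT"));
// sql -> select "ID","AMOUNT" from "APP"."ORDERS"
// double-quoted identifiers are case-sensitive in DB2, so casing must match the catalog
```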
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import javax.sql.DataSource; + +/** + * DM数据库实现类 + * + * @author jrl + */ +public class DmDatabaseOperator extends OracleDatabaseOperator { + + public DmDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DorisDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DorisDatabaseOperator.java new file mode 100644 index 0000000..55d08bd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/DorisDatabaseOperator.java @@ -0,0 +1,27 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; + +import javax.sql.DataSource; + +/** + * MySQL数据库实现类 + * + * @author jrl + */ +public class DorisDatabaseOperator extends MysqlDatabaseOperator implements IDatabaseOperator { + + public DorisDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/GreenplumDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/GreenplumDatabaseOperator.java new file mode 100644 index 0000000..a07f058 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/GreenplumDatabaseOperator.java @@ -0,0 +1,24 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import javax.sql.DataSource; + +/** + * Greenplum数据库实现类 + * + * @author jrl + */ +public class GreenplumDatabaseOperator extends PostgreSqlDatabaseOperator { + + public GreenplumDatabaseOperator(DataSource dataSource) { + super(dataSource); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/HiveDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/HiveDatabaseOperator.java new file mode 100644 index 0000000..31e736c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/HiveDatabaseOperator.java @@ -0,0 +1,62 @@ +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.common.util.HivePrepareUtils; +import srt.cloud.framework.dbswitch.dbcommon.constant.Constants; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.ResultSet; +import java.util.List; + +@Slf4j +public class HiveDatabaseOperator extends MysqlDatabaseOperator { + + public HiveDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select `%s` from `%s`.`%s` order by `%s` asc ", + StringUtils.join(fields, "`,`"), + schemaName, tableName, StringUtils.join(orders, "`,`")); + return selectHiveTableData(sql, schemaName, tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select `%s` from `%s`.`%s` ", + StringUtils.join(fields, "`,`"), schemaName, tableName); + return selectHiveTableData(sql, schemaName, tableName); + } + + private StatementResultSet selectHiveTableData(String sql, String schemaName, String tableName) { + if (log.isDebugEnabled()) { + log.debug("Query table data sql :{}", sql); + } + + try { + Connection connection = dataSource.getConnection(); + HivePrepareUtils.prepare(connection, schemaName, tableName); + + StatementResultSet srs = new StatementResultSet(); + srs.setConnection(connection); + srs.setAutoCommit(srs.getConnection().getAutoCommit()); + srs.getConnection().setAutoCommit(false); + srs.setStatement(srs.getConnection() + .createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)); + srs.getStatement().setQueryTimeout(Constants.DEFAULT_QUERY_TIMEOUT_SECONDS); + srs.getStatement().setFetchSize(this.fetchSize); + srs.setResultset(srs.getStatement().executeQuery(sql)); + return srs; + } catch (Throwable t) { + throw new RuntimeException(t); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/KingbaseDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/KingbaseDatabaseOperator.java new file mode 100644 index 0000000..60c5c76 
--- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/KingbaseDatabaseOperator.java @@ -0,0 +1,26 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +/** + * Kingbase8数据库实现类 + * + * @author jrl + */ + +import javax.sql.DataSource; + +public class KingbaseDatabaseOperator extends PostgreSqlDatabaseOperator { + + public KingbaseDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/MysqlDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/MysqlDatabaseOperator.java new file mode 100644 index 0000000..59c095c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/MysqlDatabaseOperator.java @@ -0,0 +1,65 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import srt.cloud.framework.dbswitch.dbcommon.database.AbstractDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * MySQL数据库实现类 + * + * @author jrl + */ +public class MysqlDatabaseOperator extends AbstractDatabaseOperator implements IDatabaseOperator { + + public MysqlDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public String getSelectTableSql(String schemaName, String tableName, List fields) { + return String.format("select `%s` from `%s`.`%s` ", + StringUtils.join(fields, "`,`"), schemaName, tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select `%s` from `%s`.`%s` order by `%s` asc ", + StringUtils.join(fields, "`,`"), + schemaName, tableName, StringUtils.join(orders, "`,`")); + return this.selectTableData(sql, Integer.MIN_VALUE); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select `%s` from `%s`.`%s` ", + StringUtils.join(fields, "`,`"), schemaName, tableName); + return this.selectTableData(sql, Integer.MIN_VALUE); + } + + @Override + public void truncateTableData(String schemaName, String tableName) { + String sql = String.format("TRUNCATE TABLE `%s`.`%s` ", schemaName, tableName); + this.executeSql(sql); + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE `%s`.`%s` ", schemaName, tableName); + this.executeSql(sql); + } +} diff --git 
a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/OracleDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/OracleDatabaseOperator.java new file mode 100644 index 0000000..ce14c67 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/OracleDatabaseOperator.java @@ -0,0 +1,65 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import srt.cloud.framework.dbswitch.dbcommon.database.AbstractDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * Oracle数据库实现类 + * + * @author jrl + */ +public class OracleDatabaseOperator extends AbstractDatabaseOperator implements IDatabaseOperator { + + public OracleDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public String getSelectTableSql(String schemaName, String tableName, List fields) { + return String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" order by \"%s\" asc ", + StringUtils.join(fields, "\",\""), schemaName, tableName, + StringUtils.join(orders, "\",\"")); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, tableName); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public void truncateTableData(String schemaName, String tableName) { + String sql = String.format("TRUNCATE TABLE \"%s\".\"%s\" ", schemaName, tableName); + this.executeSql(sql); + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE \"%s\".\"%s\" ", schemaName, tableName); + this.executeSql(sql); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/OscarDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/OscarDatabaseOperator.java new file mode 100644 index 0000000..7f95fc6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/OscarDatabaseOperator.java @@ -0,0 +1,25 @@ +// Copyright tang. All rights reserved. 
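Why `MysqlDatabaseOperator` above passes `Integer.MIN_VALUE` instead of the configured `fetchSize`: MySQL Connector/J only streams rows one at a time when the statement is forward-only, read-only, and the fetch size is exactly `Integer.MIN_VALUE`; any other value makes the driver buffer the entire result set in memory. A bare-bones illustration with invented names:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

class MysqlStreamingDemo {
    static void stream(DataSource mysqlDataSource) throws Exception {
        try (Connection conn = mysqlDataSource.getConnection();
             Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                 ResultSet.CONCUR_READ_ONLY)) {
            st.setFetchSize(Integer.MIN_VALUE); // Connector/J streaming mode
            try (ResultSet rs = st.executeQuery("select `id` from `ods`.`t_user`")) {
                while (rs.next()) {
                    // rows arrive one at a time; the table is never fully buffered
                }
            }
        }
    }
}
```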
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import javax.sql.DataSource; + +public class OscarDatabaseOperator extends OracleDatabaseOperator { + + public OscarDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE \"%s\".\"%s\" CASCADE ", schemaName, tableName); + this.executeSql(sql); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/PostgreSqlDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/PostgreSqlDatabaseOperator.java new file mode 100644 index 0000000..61125a9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/PostgreSqlDatabaseOperator.java @@ -0,0 +1,70 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import srt.cloud.framework.dbswitch.dbcommon.database.AbstractDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * PostgreSQL数据库实现类 + * + * @author jrl + */ +public class PostgreSqlDatabaseOperator extends AbstractDatabaseOperator implements + IDatabaseOperator { + + public PostgreSqlDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public String getSelectTableSql(String schemaName, String tableName, List fields) { + return String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, + tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" order by \"%s\" asc ", + StringUtils.join(fields, "\",\""), schemaName, tableName, + StringUtils.join(orders, "\",\"")); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, + tableName); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public void truncateTableData(String schemaName, String tableName) { + String sql = String.format("TRUNCATE TABLE \"%s\".\"%s\" RESTART IDENTITY ", + schemaName, tableName); + this.executeSql(sql); + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE \"%s\".\"%s\" cascade ", + schemaName, tableName); + this.executeSql(sql); + } +} diff --git 
a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SqlServerDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SqlServerDatabaseOperator.java new file mode 100644 index 0000000..e930b96 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SqlServerDatabaseOperator.java @@ -0,0 +1,66 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import srt.cloud.framework.dbswitch.dbcommon.database.AbstractDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * SQLServer数据库实现类 + * + * @author jrl + */ +public class SqlServerDatabaseOperator extends AbstractDatabaseOperator implements + IDatabaseOperator { + + public SqlServerDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public String getSelectTableSql(String schemaName, String tableName, List fields) { + return String.format("select [%s] from [%s].[%s] ", + StringUtils.join(fields, "],["), schemaName, tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select [%s] from [%s].[%s] order by [%s] asc ", + StringUtils.join(fields, "],["), + schemaName, tableName, StringUtils.join(orders, "],[")); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select [%s] from [%s].[%s] ", + StringUtils.join(fields, "],["), schemaName, tableName); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public void truncateTableData(String schemaName, String tableName) { + String sql = String.format("TRUNCATE TABLE [%s].[%s] ", schemaName, tableName); + this.executeSql(sql); + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE [%s].[%s] ", schemaName, tableName); + this.executeSql(sql); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SqliteDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SqliteDatabaseOperator.java new file mode 100644 index 0000000..d015d2f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SqliteDatabaseOperator.java @@ -0,0 +1,74 @@ +// Copyright tang. All rights reserved. 
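One behavioral detail in the PostgreSQL operator above: `truncateTableData` appends `RESTART IDENTITY`, so sequences owned by the table's serial/identity columns are reset to their start values instead of continuing from the old counter. Sketch (names assumed):

```java
IDatabaseOperator pg = new PostgreSqlDatabaseOperator(pgDataSource); // pgDataSource assumed
pg.truncateTableData("stage", "t_order");
// executes: TRUNCATE TABLE "stage"."t_order" RESTART IDENTITY
```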
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + + +import srt.cloud.framework.dbswitch.dbcommon.database.AbstractDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.database.IDatabaseOperator; +import srt.cloud.framework.dbswitch.dbcommon.domain.StatementResultSet; +import org.apache.commons.lang.StringUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * SQLite数据库实现类 + * + * @author jrl + */ +public class SqliteDatabaseOperator extends AbstractDatabaseOperator implements IDatabaseOperator { + + public SqliteDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + + @Override + public String getSelectTableSql(String schemaName, String tableName, List fields) { + return String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, tableName); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, List fields, + List orders) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" order by \"%s\" asc ", + StringUtils.join(fields, "\",\""), schemaName, tableName, + StringUtils.join(orders, "\",\"")); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public StatementResultSet queryTableData(String schemaName, String tableName, + List fields) { + String sql = String.format("select \"%s\" from \"%s\".\"%s\" ", + StringUtils.join(fields, "\",\""), schemaName, tableName); + return this.selectTableData(sql, this.fetchSize); + } + + @Override + public void truncateTableData(String schemaName, String tableName) { + String sql = String.format("DELETE FROM \"%s\".\"%s\" ", schemaName, tableName); + this.executeSql(sql); + + try { + sql = String.format("DELETE FROM sqlite_sequence WHERE name = '%s' ", tableName); + this.executeSql(sql); + } catch (Exception e) { + // ignore + } + + } + + @Override + public void dropTable(String schemaName, String tableName) { + String sql = String.format("DROP TABLE \"%s\".\"%s\" ", schemaName, tableName); + this.executeSql(sql); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SybaseDatabaseOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SybaseDatabaseOperator.java new file mode 100644 index 0000000..942869b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/database/impl/SybaseDatabaseOperator.java @@ -0,0 +1,25 @@ +// Copyright tang. All rights reserved. 
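SQLite has no `TRUNCATE TABLE`, which is why `SqliteDatabaseOperator` above falls back to `DELETE` plus a cleanup of `sqlite_sequence`; the second statement resets the `AUTOINCREMENT` counter and is allowed to fail because `sqlite_sequence` only exists once some table in the database uses `AUTOINCREMENT`. The equivalent raw-JDBC steps, as a sketch:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class SqliteTruncateDemo {
    static void truncate(Connection sqliteConn, String table) throws SQLException {
        try (Statement st = sqliteConn.createStatement()) {
            st.executeUpdate("DELETE FROM \"main\".\"" + table + "\"");
            try {
                st.executeUpdate("DELETE FROM sqlite_sequence WHERE name = '" + table + "'");
            } catch (SQLException ignore) {
                // sqlite_sequence does not exist until some table uses AUTOINCREMENT
            }
        }
    }
}
```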
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.database.impl; + +import javax.sql.DataSource; + +/** + * Sybase数据库实现类 + * + * @author tang + */ +public class SybaseDatabaseOperator extends SqlServerDatabaseOperator { + + public SybaseDatabaseOperator(DataSource dataSource) { + super(dataSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/domain/StatementResultSet.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/domain/StatementResultSet.java new file mode 100644 index 0000000..64aa830 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbcommon/domain/StatementResultSet.java @@ -0,0 +1,47 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbcommon.domain; + +import lombok.Data; +import lombok.extern.slf4j.Slf4j; +import org.springframework.jdbc.support.JdbcUtils; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +/** + * JDBC连接及结果集实体参数定义类 + * + * @author jrl + */ +@Slf4j +@Data +public class StatementResultSet implements AutoCloseable { + + private boolean isAutoCommit; + private Connection connection; + private Statement statement; + private ResultSet resultset; + + @Override + public void close() { + try { + connection.setAutoCommit(isAutoCommit); + } catch (SQLException e) { + log.warn("Jdbc Connect setAutoCommit() failed, error: {}", e.getMessage()); + } + + JdbcUtils.closeResultSet(resultset); + JdbcUtils.closeStatement(statement); + JdbcUtils.closeConnection(connection); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/AbstractDatabaseSynchronize.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/AbstractDatabaseSynchronize.java new file mode 100644 index 0000000..fc216f9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/AbstractDatabaseSynchronize.java @@ -0,0 +1,312 @@ +// Copyright tang. All rights reserved. 
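`StatementResultSet` above saves the connection's auto-commit flag and restores it on close. The reason `selectTableData` in `AbstractDatabaseOperator` disables auto-commit in the first place is driver-specific streaming: the PostgreSQL JDBC driver, for one, only honors `setFetchSize` with a server-side cursor when auto-commit is off; with it on, the whole result set is materialized client-side. A usage sketch (the `operator` and field names are assumed):

```java
import java.sql.ResultSet;
import java.util.Arrays;
import java.util.List;

class StreamingReadDemo {
    static void read(IDatabaseOperator operator) throws Exception {
        List<String> fields = Arrays.asList("id", "payload");
        try (StatementResultSet srs = operator.queryTableData("public", "t_event", fields)) {
            ResultSet rs = srs.getResultset();
            while (rs.next()) {
                // stream rows without holding the full table in memory
            }
        } // close(): restore auto-commit, then close ResultSet -> Statement -> Connection
    }
}
```

Restoring auto-commit on close matters with a pooled DataSource: the physical connection goes back to the pool, and a leaked `autoCommit=false` would surprise the next borrower unless the pool resets state itself.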
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch; + +import lombok.extern.slf4j.Slf4j; +import org.springframework.jdbc.core.JdbcTemplate; +import org.springframework.jdbc.datasource.DataSourceTransactionManager; +import org.springframework.jdbc.support.JdbcUtils; +import org.springframework.transaction.PlatformTransactionManager; +import org.springframework.transaction.TransactionDefinition; +import org.springframework.transaction.TransactionException; +import org.springframework.transaction.TransactionStatus; +import org.springframework.transaction.support.DefaultTransactionDefinition; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; + +/** + * 数据同步抽象基类 + * + * @author jrl + */ +@Slf4j +public abstract class AbstractDatabaseSynchronize implements IDatabaseSynchronize { + + protected DataSource dataSource; + private JdbcTemplate jdbcTemplate; + /*private PlatformTransactionManager transactionManager;*/ + private Map columnType; + protected List fieldOrders; + protected List pksOrders; + protected String insertStatementSql; + protected String updateStatementSql; + protected String deleteStatementSql; + protected int[] insertArgsType; + protected int[] updateArgsType; + protected int[] deleteArgsType; + + public AbstractDatabaseSynchronize(DataSource ds) { + this.dataSource = ds; + this.jdbcTemplate = new JdbcTemplate(ds); + /*this.transactionManager = new DataSourceTransactionManager(ds);*/ + this.columnType = new HashMap<>(); + } + + @Override + public DataSource getDataSource() { + return this.jdbcTemplate.getDataSource(); + } + + /*protected TransactionDefinition getTransactionDefinition() { + DefaultTransactionDefinition definition = new DefaultTransactionDefinition(); + definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED); + definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); + return definition; + } +*/ + + /** + * 获取查询列元信息的SQL语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @return SQL语句 + */ + public abstract String getColumnMetaDataSql(String schemaName, String tableName); + + /** + * 生成Insert操作的SQL语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param fieldNames 字段列表 + * @return Insert操作的SQL语句 + */ + public abstract String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames); + + /** + * 生成Update操作的SQL语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param fieldNames 字段列表 + * @param pks 主键列表 + * @return Update操作的SQL语句 + */ + public abstract String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks); + + /** + * 生成Delete操作的SQL语句 + * + * @param schemaName 模式名称 + * @param tableName 表名称 + * @param pks 主键列表 + * @return Delete操作的SQL语句 + */ + public abstract String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks); + + @Override + public void prepare(String schemaName, String tableName, List fieldNames, + List pks) { + if (fieldNames.isEmpty() 
|| pks.isEmpty() || fieldNames.size() < pks.size()) { + throw new IllegalArgumentException("字段列表和主键列表不能为空,或者字段总个数应不小于主键总个数"); + } + + if (!fieldNames.containsAll(pks)) { + throw new IllegalArgumentException("字段列表必须包含主键列表"); + } + + String sql = this.getColumnMetaDataSql(schemaName, tableName); + columnType.clear(); + + this.jdbcTemplate.execute((Connection conn) -> { + Statement stmt = null; + ResultSet rs = null; + try { + stmt = conn.createStatement(); + rs = stmt.executeQuery(sql); + ResultSetMetaData rsMetaData = rs.getMetaData(); + for (int i = 0, len = rsMetaData.getColumnCount(); i < len; i++) { + columnType.put(rsMetaData.getColumnName(i + 1), rsMetaData.getColumnType(i + 1)); + } + + return true; + } catch (Exception e) { + throw new RuntimeException( + String.format("获取表:%s.%s 的字段的元信息时失败. 请联系 DBA 核查该库、表信息.", schemaName, tableName), e); + } finally { + JdbcUtils.closeResultSet(rs); + JdbcUtils.closeStatement(stmt); + } + }); + + this.fieldOrders = new ArrayList<>(fieldNames); + this.pksOrders = new ArrayList<>(pks); + + this.insertStatementSql = this.getInsertPrepareStatementSql(schemaName, tableName, fieldNames); + this.updateStatementSql = this + .getUpdatePrepareStatementSql(schemaName, tableName, fieldNames, pks); + this.deleteStatementSql = this.getDeletePrepareStatementSql(schemaName, tableName, pks); + + insertArgsType = new int[fieldNames.size()]; + for (int k = 0; k < fieldNames.size(); ++k) { + String field = fieldNames.get(k); + insertArgsType[k] = this.columnType.get(field); + } + + updateArgsType = new int[fieldNames.size()]; + int idx = 0; + for (String field : fieldNames) { + if (!pks.contains(field)) { + updateArgsType[idx++] = this.columnType.get(field); + } + } + for (String pk : pks) { + updateArgsType[idx++] = this.columnType.get(pk); + } + + deleteArgsType = new int[pks.size()]; + for (int j = 0; j < pks.size(); ++j) { + String pk = pks.get(j); + deleteArgsType[j] = this.columnType.get(pk); + } + } + + @Override + public long executeInsert(List records) { + batchInsert(records, this.insertStatementSql, this.insertArgsType); + return records.size(); + /*TransactionStatus status = transactionManager.getTransaction(getTransactionDefinition()); + if (log.isDebugEnabled()) { + log.debug("Execute Insert SQL : {}", this.insertStatementSql); + } + + try { + int[] affects = jdbcTemplate + .batchUpdate(this.insertStatementSql, records, this.insertArgsType); + transactionManager.commit(status); + return affects.length; + } catch (Exception e) { + transactionManager.rollback(status); + throw e; + }*/ + } + + @Override + public long executeUpdate(List records) { + List datas = new LinkedList<>(); + for (Object[] r : records) { + + Object[] nr = new Object[this.fieldOrders.size()]; + int idx = 0; + + for (int i = 0; i < this.fieldOrders.size(); ++i) { + String field = this.fieldOrders.get(i); + if (!this.pksOrders.contains(field)) { + int index = this.fieldOrders.indexOf(field); + nr[idx++] = r[index]; + } + } + + for (String pk : this.pksOrders) { + int index = this.fieldOrders.indexOf(pk); + nr[idx++] = r[index]; + } + + datas.add(nr); + } + + batchUpdate(datas, this.updateStatementSql, this.updateArgsType); + return records.size(); + /*TransactionStatus status = transactionManager.getTransaction(getTransactionDefinition()); + if (log.isDebugEnabled()) { + log.debug("Execute Update SQL : {}", this.updateStatementSql); + } + + try { + int[] affects = jdbcTemplate.batchUpdate(this.updateStatementSql, datas, this.updateArgsType); + + transactionManager.commit(status); + 
return affects.length; + } catch (Exception e) { + transactionManager.rollback(status); + throw e; + }*/ + } + + @Override + public long executeDelete(List records) { + List datas = new LinkedList<>(); + for (Object[] r : records) { + Object[] nr = new Object[this.pksOrders.size()]; + for (int i = 0; i < this.pksOrders.size(); ++i) { + String pk = this.pksOrders.get(i); + int index = this.fieldOrders.indexOf(pk); + nr[i] = r[index]; + } + + datas.add(nr); + } + batchDelete(datas, this.deleteStatementSql, this.deleteArgsType); + return records.size(); + + /*TransactionStatus status = transactionManager.getTransaction(getTransactionDefinition()); + if (log.isDebugEnabled()) { + log.debug("Execute Delete SQL : {}", this.deleteStatementSql); + } + + try { + int[] affects = jdbcTemplate.batchUpdate(this.deleteStatementSql, datas, this.deleteArgsType); + + transactionManager.commit(status); + return affects.length; + } catch (Exception e) { + transactionManager.rollback(status); + throw e; + } finally { + datas.clear(); + }*/ + } + + protected void batchOperator(List recordValues, String batchSql, int[] argTypes) { + try (Connection connection = dataSource.getConnection(); + PreparedStatement ps = connection.prepareStatement(batchSql);) { + connection.setAutoCommit(false); + for (Object[] recordValue : recordValues) { + for (int j = 0; j < argTypes.length; j++) { + ps.setObject(j + 1, recordValue[j], argTypes[j]); + } + ps.addBatch(); + } + ps.executeBatch(); + ps.clearBatch(); + connection.commit(); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + + protected void batchInsert(List recordValues, String batchSql, int[] argTypes) { + batchOperator(recordValues, batchSql, argTypes); + } + + protected void batchDelete(List recordValues, String batchSql, int[] argTypes) { + batchOperator(recordValues, batchSql, argTypes); + } + + protected void batchUpdate(List recordValues, String batchSql, int[] argTypes) { + batchOperator(recordValues, batchSql, argTypes); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/DatabaseSynchronizeFactory.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/DatabaseSynchronizeFactory.java new file mode 100644 index 0000000..33c8d09 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/DatabaseSynchronizeFactory.java @@ -0,0 +1,92 @@ +// Copyright tang. All rights reserved. 
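`batchOperator` above turns auto-commit off, batches every row, and commits once; on failure it wraps the `SQLException`, leaves rollback to whatever the driver or pool does on close, and never switches auto-commit back before the connection is returned. A defensive variant, purely as a sketch of how it could sit inside `AbstractDatabaseSynchronize`:

```java
// Sketch only, not the repo's code: explicit rollback on failure and
// auto-commit restored before the pooled connection goes back to the pool.
protected void batchOperator(List<Object[]> rows, String batchSql, int[] argTypes) {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement ps = conn.prepareStatement(batchSql)) {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            for (Object[] row : rows) {
                for (int j = 0; j < argTypes.length; j++) {
                    ps.setObject(j + 1, row[j], argTypes[j]);
                }
                ps.addBatch();
            }
            ps.executeBatch();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // undo the partial batch explicitly
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    } catch (SQLException se) {
        throw new RuntimeException(se);
    }
}
```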
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.common.util.DatabaseAwareUtils; +import srt.cloud.framework.dbswitch.dbsynch.db2.DB2DatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.dm.DmDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.doris.DorisDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.kingbase.KingbaseDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.mssql.SqlServerDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.mysql.MySqlDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.oracle.OracleDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.oscar.OscarDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.pgsql.GreenplumDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.pgsql.PostgresqlDatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.sqlite.Sqlite3DatabaseSyncImpl; +import srt.cloud.framework.dbswitch.dbsynch.sybase.SybaseDatabaseSyncImpl; + +import javax.sql.DataSource; +import java.util.HashMap; +import java.util.Map; +import java.util.function.Function; + +/** + * 数据库同步器构造工厂类 + * + * @author jrl + */ +public final class DatabaseSynchronizeFactory { + + private static final Map> DATABASE_SYNC_MAPPER + = new HashMap>() { + + private static final long serialVersionUID = -2359773637275934408L; + + { + put(ProductTypeEnum.MYSQL, MySqlDatabaseSyncImpl::new); + put(ProductTypeEnum.MARIADB, MySqlDatabaseSyncImpl::new); + put(ProductTypeEnum.ORACLE, OracleDatabaseSyncImpl::new); + put(ProductTypeEnum.SQLSERVER, SqlServerDatabaseSyncImpl::new); + put(ProductTypeEnum.SQLSERVER2000, SqlServerDatabaseSyncImpl::new); + put(ProductTypeEnum.POSTGRESQL, PostgresqlDatabaseSyncImpl::new); + put(ProductTypeEnum.GREENPLUM, GreenplumDatabaseSyncImpl::new); + put(ProductTypeEnum.DB2, DB2DatabaseSyncImpl::new); + put(ProductTypeEnum.DM, DmDatabaseSyncImpl::new); + put(ProductTypeEnum.SYBASE, SybaseDatabaseSyncImpl::new); + put(ProductTypeEnum.KINGBASE, KingbaseDatabaseSyncImpl::new); + put(ProductTypeEnum.OSCAR, OscarDatabaseSyncImpl::new); + put(ProductTypeEnum.GBASE8A, MySqlDatabaseSyncImpl::new); + put(ProductTypeEnum.SQLITE3, Sqlite3DatabaseSyncImpl::new); + put(ProductTypeEnum.DORIS, DorisDatabaseSyncImpl::new); + } + }; + + /** + * 获取指定数据源的同步器 + * + * @param dataSource 数据源 + * @return 同步器对象 + */ + public static IDatabaseSynchronize createDatabaseWriter(DataSource dataSource) { + ProductTypeEnum type = DatabaseAwareUtils.getDatabaseTypeByDataSource(dataSource); + if (!DATABASE_SYNC_MAPPER.containsKey(type)) { + throw new RuntimeException( + String.format("[dbsynch] Unsupported database type (%s)", type)); + } + + return DATABASE_SYNC_MAPPER.get(type).apply(dataSource); + } + + /** + * 获取指定数据源的同步器 + * + * @param dataSource 数据源 + * @return 同步器对象 + */ + public static IDatabaseSynchronize createDatabaseWriter(DataSource dataSource, ProductTypeEnum productType) { + if (!DATABASE_SYNC_MAPPER.containsKey(productType)) { + throw new RuntimeException( + String.format("[dbsynch] Unsupported database type (%s)", productType)); + } + return DATABASE_SYNC_MAPPER.get(productType).apply(dataSource); + } +} diff --git 
a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/IDatabaseSynchronize.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/IDatabaseSynchronize.java new file mode 100644 index 0000000..da5b1c8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/IDatabaseSynchronize.java @@ -0,0 +1,62 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch; + +import javax.sql.DataSource; +import java.util.List; + +/** + * 数据同步接口定义 + * + * @author jrl + */ +public interface IDatabaseSynchronize { + + /** + * 获取数据源对象 + * + * @return DataSource数据源对象 + */ + DataSource getDataSource(); + + /** + * 批量Insert/Update/Delete预处理 + * + * @param schemaName schema名称 + * @param tableName table名称 + * @param fieldNames 字段列表 + * @param pks 主键字段列表 + */ + void prepare(String schemaName, String tableName, List fieldNames, List pks); + + /** + * 批量数据Insert + * + * @param records 数据记录 + * @return 返回实际影响的记录条数 + */ + long executeInsert(List records); + + /** + * 批量数据Update + * + * @param records 数据记录 + * @return 返回实际影响的记录条数 + */ + long executeUpdate(List records); + + /** + * 批量数据Delete + * + * @param records 数据记录 + * @return 返回实际影响的记录条数 + */ + long executeDelete(List records); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/db2/DB2DatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/db2/DB2DatabaseSyncImpl.java new file mode 100644 index 0000000..aab147f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/db2/DB2DatabaseSyncImpl.java @@ -0,0 +1,106 @@ +// Copyright tang. All rights reserved. 
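Putting the interface together with the factory, the caller's sequence is prepare-then-execute: `prepare` builds the dialect-specific INSERT/UPDATE/DELETE statements and their argument types, after which each `execute*` call takes full records as `Object[]` rows aligned with `fieldNames`. A hypothetical end-to-end sketch (the `ods.t_user` table, field names, and the supplied `DataSource` are assumptions for illustration):

```java
import srt.cloud.framework.dbswitch.dbsynch.DatabaseSynchronizeFactory;
import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize;

import javax.sql.DataSource;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SynchUsageSketch {

  static void syncBatch(DataSource targetDs) {
    // Dialect is detected from the DataSource (or pass a ProductTypeEnum explicitly).
    IDatabaseSynchronize synch = DatabaseSynchronizeFactory.createDatabaseWriter(targetDs);

    List<String> fields = Arrays.asList("id", "name", "age");
    List<String> pks = Collections.singletonList("id");

    // Must run first: builds the dialect-specific statements and argument types.
    synch.prepare("ods", "t_user", fields, pks);

    // Each record is an Object[] aligned with fields.
    List<Object[]> rows = Arrays.asList(
        new Object[]{1L, "alice", 30},
        new Object[]{2L, "bob", 41});

    synch.executeInsert(rows);  // batched INSERT
    synch.executeUpdate(rows);  // batched UPDATE, matched on pk "id"
    synch.executeDelete(rows);  // pk values are extracted, non-pk columns ignored
  }
}
```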
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.db2; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * DB2数据库DML同步实现类 + * + * @author jrl + */ +public class DB2DatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public DB2DatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("\"%s\"=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE \"%s\".\"%s\" SET %s WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM \"%s\".\"%s\" WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/dm/DmDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/dm/DmDatabaseSyncImpl.java new file mode 100644 index 0000000..b222205 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/dm/DmDatabaseSyncImpl.java @@ -0,0 +1,106 @@ +// Copyright tang. All rights reserved. 
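For a hypothetical table `ods.t_user(id, name, age)` with primary key `id`, the three builders above produce statements shaped as in the sketch below (same formatting calls as the DB2 dialect, output hand-derived in comments rather than captured from a run):

```java
import org.apache.commons.lang3.StringUtils;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class Db2SqlShapeSketch {

  public static void main(String[] args) {
    List<String> fields = Arrays.asList("id", "name", "age");
    List<String> pks = Collections.singletonList("id");

    // Same formatting as getInsertPrepareStatementSql (double-quoted identifiers).
    String insert = String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )",
        "ods", "t_user",
        StringUtils.join(fields, "\",\""),
        StringUtils.join(Collections.nCopies(fields.size(), "?"), ","));
    System.out.println(insert);
    // INSERT INTO "ods"."t_user" ( "id","name","age" ) VALUES ( ?,?,? )

    // Pk columns are excluded from SET and form the WHERE clause.
    String set = fields.stream()
        .filter(f -> !pks.contains(f))
        .map(f -> String.format("\"%s\"=?", f))
        .collect(Collectors.joining(" , "));
    String where = pks.stream()
        .map(pk -> String.format("\"%s\"=?", pk))
        .collect(Collectors.joining(" AND "));
    System.out.println(String.format("UPDATE \"ods\".\"t_user\" SET %s WHERE %s", set, where));
    // UPDATE "ods"."t_user" SET "name"=? , "age"=? WHERE "id"=?

    System.out.println(String.format("DELETE FROM \"ods\".\"t_user\" WHERE %s", where));
    // DELETE FROM "ods"."t_user" WHERE "id"=?
  }
}
```

Note also the convention that repeats in every dialect's `executeInsert`/`executeUpdate`: each value is first normalized with `TypeConvertUtils.castByDetermine`, and anything that fails to convert is nulled rather than failing the whole batch.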
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.dm; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * DM数据库DML同步实现类 + * + * @author jrl + */ +public class DmDatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public DmDatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("\"%s\"=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE \"%s\".\"%s\" SET %s WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM \"%s\".\"%s\" WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/doris/DorisDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/doris/DorisDatabaseSyncImpl.java new file mode 100644 index 0000000..7e585b5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/doris/DorisDatabaseSyncImpl.java @@ -0,0 +1,62 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.dbsynch.doris;
+
+import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize;
+import srt.cloud.framework.dbswitch.dbsynch.mysql.MySqlDatabaseSyncImpl;
+
+import javax.sql.DataSource;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.List;
+
+/**
+ * Doris数据库DML同步实现类
+ *
+ * @author jrl
+ */
+public class DorisDatabaseSyncImpl extends MySqlDatabaseSyncImpl implements
+    IDatabaseSynchronize {
+
+  public DorisDatabaseSyncImpl(DataSource ds) {
+    super(ds);
+  }
+
+  @Override
+  protected void batchDelete(List<Object[]> recordValues, String batchSql, int[] argTypes) {
+    udOperator(recordValues, batchSql, argTypes);
+  }
+
+  @Override
+  protected void batchUpdate(List<Object[]> recordValues, String batchSql, int[] argTypes) {
+    udOperator(recordValues, batchSql, argTypes);
+  }
+
+  /**
+   * Doris 不支持批量更新和删除(批量执行会报错),故逐条执行
+   *
+   * @param recordValues 数据记录
+   * @param batchSql 预编译SQL
+   * @param argTypes 参数的JDBC类型
+   */
+  private void udOperator(List<Object[]> recordValues, String batchSql, int[] argTypes) {
+    try (Connection connection = dataSource.getConnection();
+        PreparedStatement ps = connection.prepareStatement(batchSql)) {
+      for (Object[] recordValue : recordValues) {
+        for (int j = 0; j < argTypes.length; j++) {
+          ps.setObject(j + 1, recordValue[j], argTypes[j]);
+        }
+        ps.executeUpdate();
+      }
+    } catch (SQLException se) {
+      throw new RuntimeException(se);
+    }
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/kingbase/KingbaseDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/kingbase/KingbaseDatabaseSyncImpl.java
new file mode 100644
index 0000000..3662055
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/kingbase/KingbaseDatabaseSyncImpl.java
@@ -0,0 +1,27 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.dbsynch.kingbase;
+
+import srt.cloud.framework.dbswitch.dbsynch.pgsql.PostgresqlDatabaseSyncImpl;
+
+import javax.sql.DataSource;
+
+/**
+ * kingbase8数据库DML同步实现类
+ *
+ * @author jrl
+ */
+public class KingbaseDatabaseSyncImpl extends PostgresqlDatabaseSyncImpl {
+
+  public KingbaseDatabaseSyncImpl(DataSource ds) {
+    super(ds);
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/mssql/SqlServerDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/mssql/SqlServerDatabaseSyncImpl.java
new file mode 100644
index 0000000..2492bbd
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/mssql/SqlServerDatabaseSyncImpl.java
@@ -0,0 +1,106 @@
+// Copyright tang. All rights reserved.
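Two consequences of the Doris override are worth spelling out: inserts are untouched, so they still flow through the inherited `batchInsert` to `batchOperator` path and get real JDBC batching with a single commit, while every update and delete becomes its own auto-commit `executeUpdate` round trip. That trades update/delete throughput for compatibility, since, as the comment notes, Doris rejects batched UPDATE/DELETE statements.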
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.mssql; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * SQLServer数据库DML同步实现类 + * + * @author jrl + */ +public class SqlServerDatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public SqlServerDatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM [%s].[%s] WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO [%s].[%s] ( [%s] ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "],["), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("[%s]=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("[%s]=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE [%s].[%s] SET %s WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("[%s]=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM [%s].[%s] WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/mysql/MySqlDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/mysql/MySqlDatabaseSyncImpl.java new file mode 100644 index 0000000..f9f8482 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/mysql/MySqlDatabaseSyncImpl.java @@ -0,0 +1,106 @@ +// Copyright tang. All rights reserved. 
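Reading the SQL Server implementation against the DB2/DM ones above and the MySQL-family one that follows, the statement skeletons are identical; the only real dialect variation in this package is identifier quoting: `[%s]` brackets for SQL Server (and the Sybase subclass later in this commit), backticks for MySQL/MariaDB/GBase8a/Doris, and ANSI double quotes for the rest.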
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.mysql; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * MySQL数据库DML同步实现类 + * + * @author jrl + */ +public class MySqlDatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public MySqlDatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM `%s`.`%s` WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO `%s`.`%s` ( `%s` ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "`,`"), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("`%s`=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("`%s`=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE `%s`.`%s` SET %s WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("`%s`=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM `%s`.`%s` WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/oracle/OracleDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/oracle/OracleDatabaseSyncImpl.java new file mode 100644 index 0000000..4d1bc44 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/oracle/OracleDatabaseSyncImpl.java @@ -0,0 +1,205 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.oracle; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; +import org.springframework.jdbc.core.SqlTypeValue; + +import javax.sql.DataSource; +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Types; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Objects; +import java.util.stream.Collectors; + +/** + * Oracle数据库DML同步实现类 + * + * @author jrl + */ +public class OracleDatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public OracleDatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("\"%s\"=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE \"%s\".\"%s\" SET %s WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM \"%s\".\"%s\" WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + List iss = new ArrayList<>(); + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + switch (this.insertArgsType[i]) { + case Types.CLOB: + case Types.NCLOB: + row[i] = Objects.isNull(row[i]) + ? null + : TypeConvertUtils.castToString(row[i]); + break; + case Types.BLOB: + final byte[] bytes = Objects.isNull(row[i]) + ? 
null + : TypeConvertUtils.castToByteArray(row[i]); + row[i] = new SqlTypeValue() { + @Override + public void setTypeValue(PreparedStatement ps, int paramIndex, int sqlType, + String typeName) throws SQLException { + if (null != bytes) { + InputStream is = new ByteArrayInputStream(bytes); + ps.setBlob(paramIndex, is); + iss.add(is); + } else { + ps.setNull(paramIndex, sqlType); + } + } + }; + break; + case Types.ROWID: + case Types.ARRAY: + case Types.REF: + case Types.SQLXML: + row[i] = null; + break; + default: + break; + } + } catch (Exception e) { + row[i] = null; + } + } + }); + + try { + return super.executeInsert(records); + } finally { + iss.forEach(is -> { + try { + is.close(); + } catch (Exception ignore) { + } + }); + } + } + + @Override + public long executeUpdate(List records) { + List iss = new ArrayList<>(); + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + switch (this.updateArgsType[i]) { + case Types.CLOB: + case Types.NCLOB: + row[i] = Objects.isNull(row[i]) + ? null + : TypeConvertUtils.castToString(row[i]); + break; + case Types.BLOB: + final byte[] bytes = Objects.isNull(row[i]) + ? null + : TypeConvertUtils.castToByteArray(row[i]); + row[i] = new SqlTypeValue() { + @Override + public void setTypeValue(PreparedStatement ps, int paramIndex, int sqlType, + String typeName) throws SQLException { + if (null != bytes) { + InputStream is = new ByteArrayInputStream(bytes); + ps.setBlob(paramIndex, is); + iss.add(is); + } else { + ps.setNull(paramIndex, sqlType); + } + } + }; + break; + case Types.ROWID: + case Types.ARRAY: + case Types.REF: + case Types.SQLXML: + row[i] = null; + break; + default: + break; + } + } catch (Exception e) { + row[i] = null; + } + } + }); + + try { + return super.executeUpdate(records); + } finally { + iss.forEach(is -> { + try { + is.close(); + } catch (Exception ignore) { + } + }); + } + } + + @Override + public long executeDelete(List records) { + return super.executeDelete(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/oscar/OscarDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/oscar/OscarDatabaseSyncImpl.java new file mode 100644 index 0000000..3cbb4f1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/oscar/OscarDatabaseSyncImpl.java @@ -0,0 +1,107 @@ +// Copyright tang. All rights reserved. 
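The Oracle implementation is the one dialect that cannot rely on `castByDetermine` alone: per the `insertArgsType`/`updateArgsType` switch above, CLOB/NCLOB values are flattened to `String`, unsupported driver types (ROWID, ARRAY, REF, SQLXML) are nulled, and BLOBs are wrapped in an anonymous spring-jdbc `SqlTypeValue` so the driver receives a fresh `ByteArrayInputStream` at bind time; the opened streams are collected in `iss` and closed only after the batch runs. A standalone sketch of that BLOB wrapper (`blobValue` is an illustrative helper name, not a project method):

```java
import org.springframework.jdbc.core.SqlTypeValue;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

final class BlobBindSketch {

  // Wraps raw bytes so the driver gets a stream at bind time; the caller
  // collects the opened streams and closes them after the batch executes,
  // exactly as executeInsert/executeUpdate above do with their `iss` lists.
  static SqlTypeValue blobValue(final byte[] bytes, final List<InputStream> openStreams) {
    return new SqlTypeValue() {
      @Override
      public void setTypeValue(PreparedStatement ps, int paramIndex, int sqlType,
          String typeName) throws SQLException {
        if (bytes != null) {
          InputStream is = new ByteArrayInputStream(bytes);
          ps.setBlob(paramIndex, is);
          openStreams.add(is);
        } else {
          ps.setNull(paramIndex, sqlType);
        }
      }
    };
  }
}
```

One caveat worth flagging: `SqlTypeValue` is a contract that spring-jdbc's `JdbcTemplate.batchUpdate` (the commented-out code path) unwraps via `StatementCreatorUtils`; the raw `ps.setObject(...)` call in `batchOperator` does not, so the plain-JDBC batching path would likely need an explicit `instanceof SqlTypeValue` check before binding for these Oracle LOB writes to work.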
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.oscar; + + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * 神通数据库DML同步实现类 + * + * @author tang + */ +public class OscarDatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public OscarDatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("\"%s\"=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE \"%s\".\"%s\" SET %s WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM \"%s\".\"%s\" WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/pgsql/GreenplumDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/pgsql/GreenplumDatabaseSyncImpl.java new file mode 100644 index 0000000..20084de --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/pgsql/GreenplumDatabaseSyncImpl.java @@ -0,0 +1,28 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.pgsql; + +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; + +import javax.sql.DataSource; + +/** + * Greenplum数据库DML同步实现类 + * + * @author jrl + */ +public class GreenplumDatabaseSyncImpl extends PostgresqlDatabaseSyncImpl implements + IDatabaseSynchronize { + + public GreenplumDatabaseSyncImpl(DataSource ds) { + super(ds); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/pgsql/PostgresqlDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/pgsql/PostgresqlDatabaseSyncImpl.java new file mode 100644 index 0000000..cb888c8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/pgsql/PostgresqlDatabaseSyncImpl.java @@ -0,0 +1,105 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.pgsql; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * PostgreSQL数据库DML同步实现类 + * + * @author jrl + */ +public class PostgresqlDatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public PostgresqlDatabaseSyncImpl(DataSource ds) { + super(ds); + } + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String + .format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("\"%s\"=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE \"%s\".\"%s\" SET %s WHERE %s", schemaName, tableName, + StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM \"%s\".\"%s\" WHERE %s ", schemaName, tableName, + StringUtils.join(uw, " AND ")); + } + + 
@Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/sqlite/Sqlite3DatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/sqlite/Sqlite3DatabaseSyncImpl.java new file mode 100644 index 0000000..4e7a3ce --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/sqlite/Sqlite3DatabaseSyncImpl.java @@ -0,0 +1,116 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.sqlite; + +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbsynch.AbstractDatabaseSynchronize; +import srt.cloud.framework.dbswitch.dbsynch.IDatabaseSynchronize; +import org.apache.commons.lang3.StringUtils; +import org.springframework.transaction.TransactionDefinition; +import org.springframework.transaction.support.DefaultTransactionDefinition; + +import javax.sql.DataSource; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * SQLite数据库DML同步实现类 + * + * @author jrl + */ +public class Sqlite3DatabaseSyncImpl extends AbstractDatabaseSynchronize implements + IDatabaseSynchronize { + + public Sqlite3DatabaseSyncImpl(DataSource ds) { + super(ds); + } + + /*@Override + protected TransactionDefinition getTransactionDefinition() { + DefaultTransactionDefinition definition = new DefaultTransactionDefinition(); + definition.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE); + definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); + return definition; + }*/ + + @Override + public String getColumnMetaDataSql(String schemaName, String tableName) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } + + @Override + public String getInsertPrepareStatementSql(String schemaName, String tableName, + List fieldNames) { + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + return String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), + StringUtils.join(placeHolders, ",")); + } + + @Override + public String getUpdatePrepareStatementSql(String schemaName, String tableName, + List fieldNames, List pks) { + List uf = fieldNames.stream() + .filter(field -> !pks.contains(field)) + .map(field -> String.format("\"%s\"=?", field)) + .collect(Collectors.toList()); + + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("UPDATE \"%s\".\"%s\" SET %s 
WHERE %s", + schemaName, tableName, StringUtils.join(uf, " , "), + StringUtils.join(uw, " AND ")); + } + + @Override + public String getDeletePrepareStatementSql(String schemaName, String tableName, + List pks) { + List uw = pks.stream() + .map(pk -> String.format("\"%s\"=?", pk)) + .collect(Collectors.toList()); + + return String.format("DELETE FROM \"%s\".\"%s\" WHERE %s ", + schemaName, tableName, StringUtils.join(uw, " AND ")); + } + + @Override + public long executeInsert(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeInsert(records); + } + + @Override + public long executeUpdate(List records) { + records.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = TypeConvertUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.executeUpdate(records); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/sybase/SybaseDatabaseSyncImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/sybase/SybaseDatabaseSyncImpl.java new file mode 100644 index 0000000..873430e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbsynch/sybase/SybaseDatabaseSyncImpl.java @@ -0,0 +1,28 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbsynch.sybase; + + +import srt.cloud.framework.dbswitch.dbsynch.mssql.SqlServerDatabaseSyncImpl; + +import javax.sql.DataSource; + +/** + * Sybase数据库DML同步实现类 + * + * @author tang + */ +public class SybaseDatabaseSyncImpl extends SqlServerDatabaseSyncImpl { + + public SybaseDatabaseSyncImpl(DataSource ds) { + super(ds); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/AbstractDatabaseWriter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/AbstractDatabaseWriter.java new file mode 100644 index 0000000..2e3e1e7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/AbstractDatabaseWriter.java @@ -0,0 +1,165 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter; + +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import org.springframework.jdbc.core.JdbcTemplate; +import org.springframework.util.CollectionUtils; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +/** + * 数据库写入抽象基类 + * + * @author jrl + */ +@Slf4j +public abstract class AbstractDatabaseWriter implements IDatabaseWriter { + + protected DataSource dataSource; + protected JdbcTemplate jdbcTemplate; + protected String schemaName; + protected String tableName; + protected Map columnType; + + public AbstractDatabaseWriter(DataSource dataSource) { + this.dataSource = dataSource; + this.jdbcTemplate = new JdbcTemplate(this.dataSource); + this.schemaName = null; + this.tableName = null; + this.columnType = null; + } + + @Override + public DataSource getDataSource() { + return this.dataSource; + } + + @Override + public void prepareWrite(String schemaName, String tableName, List fieldNames) { + String sql = this.selectTableMetaDataSqlString(schemaName, tableName, fieldNames); + Map columnMetaData = new HashMap<>(); + jdbcTemplate.execute((Connection conn) -> { + try (Statement stmt = conn.createStatement(); + ResultSet rs = stmt.executeQuery(sql);) { + ResultSetMetaData rsMetaData = rs.getMetaData(); + for (int i = 0, len = rsMetaData.getColumnCount(); i < len; i++) { + columnMetaData.put(rsMetaData.getColumnName(i + 1), rsMetaData.getColumnType(i + 1)); + } + + return true; + } catch (Exception e) { + throw new RuntimeException( + String.format("获取表:%s.%s 的字段的元信息时失败. 请联系 DBA 核查该库、表信息.", schemaName, tableName), e); + } + }); + + this.schemaName = schemaName; + this.tableName = tableName; + this.columnType = Objects.requireNonNull(columnMetaData); + if (this.columnType.isEmpty()) { + throw new RuntimeException( + String.format("获取表:%s.%s 的字段的元信息时失败. 
请联系 DBA 核查该库、表信息.", schemaName, tableName)); + } + + } + + protected String selectTableMetaDataSqlString(String schemaName, String tableName, + List fieldNames) { + if (CollectionUtils.isEmpty(fieldNames)) { + return String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schemaName, tableName); + } else { + return String.format("SELECT \"%s\" FROM \"%s\".\"%s\" WHERE 1=2", + StringUtils.join(fieldNames, "\",\""), schemaName, tableName); + } + } + + protected abstract String getDatabaseProductName(); + + /*protected TransactionDefinition getTransactionDefinition() { + DefaultTransactionDefinition definition = new DefaultTransactionDefinition(); + definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED); + definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); + return definition; + }*/ + + @Override + public long write(List fieldNames, List recordValues) { + if (recordValues.isEmpty()) { + return 0; + } + + String sqlInsert = String.format("INSERT INTO \"%s\".\"%s\" ( \"%s\" ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "\",\""), + StringUtils.join(Collections.nCopies(fieldNames.size(), "?"), ",")); + + int[] argTypes = new int[fieldNames.size()]; + for (int i = 0; i < fieldNames.size(); ++i) { + String col = fieldNames.get(i); + argTypes[i] = this.columnType.get(col); + } + + batchWrite(fieldNames, recordValues, sqlInsert, argTypes); + + return recordValues.size(); + + /*PlatformTransactionManager transactionManager = new DataSourceTransactionManager( + this.dataSource); + TransactionStatus status = transactionManager.getTransaction(getTransactionDefinition()); + + try { + //int[] affects = jdbcTemplate.batchUpdate(sqlInsert, recordValues, argTypes); + //int affectCount = Arrays.stream(affects).sum(); + jdbcTemplate.batchUpdate(sqlInsert, recordValues, argTypes); + transactionManager.commit(status); + int affectCount = recordValues.size(); + recordValues.clear(); + if (log.isDebugEnabled()) { + log.debug("{} insert data affect count: {}", getDatabaseProductName(), affectCount); + } + return affectCount; + } catch (Exception e) { + transactionManager.rollback(status); + throw e; + }*/ + } + + protected void batchWrite(List fieldNames, List recordValues, String sqlInsert, int[] argTypes) { + try (Connection connection = dataSource.getConnection(); + PreparedStatement ps = connection.prepareStatement(sqlInsert);) { + connection.setAutoCommit(false); + for (Object[] recordValue : recordValues) { + for (int j = 0; j < fieldNames.size(); j++) { + ps.setObject(j + 1, recordValue[j], argTypes[j]); + } + ps.addBatch(); + } + ps.executeBatch(); + ps.clearBatch(); + connection.commit(); + } catch (SQLException se) { + throw new RuntimeException(se); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/DatabaseWriterFactory.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/DatabaseWriterFactory.java new file mode 100644 index 0000000..0016b48 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/DatabaseWriterFactory.java @@ -0,0 +1,119 @@ +// Copyright tang. All rights reserved. 
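`prepareWrite` above relies on a cheap metadata probe: a `SELECT ... WHERE 1=2` returns zero rows, but its `ResultSetMetaData` still reports every column's JDBC type, which is what later fills `argTypes` for parameter binding. A minimal self-contained sketch of the probe (`ColumnTypeProbe` is an illustrative name):

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

final class ColumnTypeProbe {

  // Returns columnName -> java.sql.Types constant without reading any data.
  static Map<String, Integer> probe(DataSource ds, String schema, String table)
      throws SQLException {
    String sql = String.format("SELECT * FROM \"%s\".\"%s\" WHERE 1=2", schema, table);
    Map<String, Integer> types = new HashMap<>();
    try (Connection conn = ds.getConnection();
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(sql)) {
      ResultSetMetaData md = rs.getMetaData();
      for (int i = 1; i <= md.getColumnCount(); i++) {
        types.put(md.getColumnName(i), md.getColumnType(i));
      }
    }
    return types;
  }
}
```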
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.dbwriter;
+
+import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
+import srt.cloud.framework.dbswitch.common.util.DatabaseAwareUtils;
+import srt.cloud.framework.dbswitch.dbwriter.db2.DB2WriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.dm.DmWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.doris.DorisWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.gpdb.GreenplumCopyWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.gpdb.GreenplumInsertWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.kingbase.KingbaseInsertWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.mssql.SqlServerWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.mysql.MySqlWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.oracle.OracleWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.oscar.OscarWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.sqlite.Sqlite3WriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.sybase.SybaseWriterImpl;
+
+import javax.sql.DataSource;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Function;
+
+/**
+ * 数据库写入器构造工厂类
+ *
+ * @author jrl
+ */
+public class DatabaseWriterFactory {
+
+  private static final Map<ProductTypeEnum, Function<DataSource, IDatabaseWriter>> DATABASE_WRITER_MAPPER
+      = new HashMap<ProductTypeEnum, Function<DataSource, IDatabaseWriter>>() {
+
+        private static final long serialVersionUID = 3365136872693503697L;
+
+        {
+          put(ProductTypeEnum.MYSQL, MySqlWriterImpl::new);
+          put(ProductTypeEnum.MARIADB, MySqlWriterImpl::new);
+          put(ProductTypeEnum.ORACLE, OracleWriterImpl::new);
+          put(ProductTypeEnum.SQLSERVER, SqlServerWriterImpl::new);
+          put(ProductTypeEnum.SQLSERVER2000, SqlServerWriterImpl::new);
+          put(ProductTypeEnum.POSTGRESQL, GreenplumInsertWriterImpl::new);
+          put(ProductTypeEnum.GREENPLUM, GreenplumCopyWriterImpl::new);
+          put(ProductTypeEnum.DB2, DB2WriterImpl::new);
+          put(ProductTypeEnum.DM, DmWriterImpl::new);
+          put(ProductTypeEnum.SYBASE, SybaseWriterImpl::new);
+          //对于kingbase当前只能使用insert模式
+          put(ProductTypeEnum.KINGBASE, KingbaseInsertWriterImpl::new);
+          put(ProductTypeEnum.OSCAR, OscarWriterImpl::new);
+          put(ProductTypeEnum.GBASE8A, MySqlWriterImpl::new);
+          put(ProductTypeEnum.SQLITE3, Sqlite3WriterImpl::new);
+          put(ProductTypeEnum.DORIS, DorisWriterImpl::new);
+        }
+      };
+
+  /**
+   * 获取指定数据库类型的写入器
+   *
+   * @param dataSource 连接池数据源
+   * @return 写入器对象
+   */
+  public static IDatabaseWriter createDatabaseWriter(DataSource dataSource) {
+    return DatabaseWriterFactory.createDatabaseWriter(dataSource, true);
+  }
+
+  /**
+   * 获取指定数据库类型的写入器
+   *
+   * @param dataSource 连接池数据源
+   * @param insert 对于PG/GP数据库来说是否使用insert引擎写入
+   * @return 写入器对象
+   */
+  public static IDatabaseWriter createDatabaseWriter(DataSource dataSource, boolean insert) {
+    ProductTypeEnum type = DatabaseAwareUtils.getDatabaseTypeByDataSource(dataSource);
+    if (insert) {
+      if (ProductTypeEnum.POSTGRESQL.equals(type)) {
+        return new GreenplumInsertWriterImpl(dataSource);
+      }
+    }
+
+    if (!DATABASE_WRITER_MAPPER.containsKey(type)) {
+      throw new RuntimeException(
+          String.format("[dbwrite] Unsupported database type (%s)", type));
+    }
+
+    return DATABASE_WRITER_MAPPER.get(type).apply(dataSource);
+  }
+
+  /**
+   * 获取指定数据库类型的写入器
+   *
+   * @param dataSource 连接池数据源
+   * @param productType 数据库产品类型
+   * @param insert 对于PG/GP数据库来说是否使用insert引擎写入
+   * @return 写入器对象
+   */ +
public static IDatabaseWriter createDatabaseWriter(DataSource dataSource, ProductTypeEnum productType, boolean insert) { + if (insert) { + if (ProductTypeEnum.POSTGRESQL.equals(productType)) { + return new GreenplumInsertWriterImpl(dataSource); + } + } + + if (!DATABASE_WRITER_MAPPER.containsKey(productType)) { + throw new RuntimeException( + String.format("[dbwrite] Unsupported database type (%s)", productType)); + } + + return DATABASE_WRITER_MAPPER.get(productType).apply(dataSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/IDatabaseWriter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/IDatabaseWriter.java new file mode 100644 index 0000000..3c205a9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/IDatabaseWriter.java @@ -0,0 +1,45 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter; + +import javax.sql.DataSource; +import java.util.List; + +/** + * 数据库批量写入定义接口 + * + * @author jrl + */ +public interface IDatabaseWriter { + + /** + * 获取数据源对象 + * + * @return DataSource数据源对象 + */ + DataSource getDataSource(); + + /** + * 批量写入预处理 + * + * @param schemaName schema名称 + * @param tableName table名称 + */ + void prepareWrite(String schemaName, String tableName, List fieldNames); + + /** + * 批量数据写入 + * + * @param fieldNames 字段名称列表 + * @param recordValues 数据记录 + * @return 返回实际写入的数据记录条数 + */ + long write(List fieldNames, List recordValues); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/db2/DB2WriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/db2/DB2WriterImpl.java new file mode 100644 index 0000000..feba14e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/db2/DB2WriterImpl.java @@ -0,0 +1,51 @@ +// Copyright tang. All rights reserved. 
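A hypothetical end-to-end use of the writer side (the `DataSource`, schema, and table names are assumptions for illustration): the boolean flag selects the INSERT-based writer for a PostgreSQL target, while a GREENPLUM target falls through to the COPY-based writer registered in the map, and `prepareWrite` must run before `write` so the column types are probed:

```java
import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
import srt.cloud.framework.dbswitch.dbwriter.DatabaseWriterFactory;
import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter;

import javax.sql.DataSource;
import java.util.Arrays;
import java.util.List;

public class WriterUsageSketch {

  static void load(DataSource targetDs) {
    // true -> INSERT-based writing for a POSTGRESQL target; a GREENPLUM target
    // would instead resolve to the COPY-based writer registered above.
    IDatabaseWriter writer =
        DatabaseWriterFactory.createDatabaseWriter(targetDs, ProductTypeEnum.POSTGRESQL, true);

    List<String> fields = Arrays.asList("id", "name");
    writer.prepareWrite("ods", "t_user", fields);  // probes column JDBC types first

    List<Object[]> rows = Arrays.asList(
        new Object[]{1L, "alice"},
        new Object[]{2L, "bob"});
    long written = writer.write(fields, rows);     // batched INSERT, single commit
    System.out.println(written + " rows written");
  }
}
```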
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.db2; + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * DB2数据库写入实现类 + * + * @author jrl + */ +@Slf4j +public class DB2WriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public DB2WriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "DB2"; + } + + @Override + public long write(List fieldNames, List recordValues) { + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.write(fieldNames, recordValues); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/dm/DmWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/dm/DmWriterImpl.java new file mode 100644 index 0000000..02cc2b2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/dm/DmWriterImpl.java @@ -0,0 +1,49 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.dm; + +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * DM数据库写入实现类 + * + * @author jrl + */ +public class DmWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public DmWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "DM"; + } + + @Override + public long write(List fieldNames, List recordValues) { + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.write(fieldNames, recordValues); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/doris/DorisWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/doris/DorisWriterImpl.java new file mode 100644 index 0000000..d263a10 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/doris/DorisWriterImpl.java @@ -0,0 +1,122 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.dbwriter.doris;
+
+import lombok.extern.slf4j.Slf4j;
+import org.apache.commons.lang3.StringUtils;
+import org.springframework.util.CollectionUtils;
+import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter;
+import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter;
+import srt.cloud.framework.dbswitch.dbwriter.mysql.MySqlWriterImpl;
+import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils;
+
+import javax.sql.DataSource;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Doris数据库写入实现类
+ *
+ * @author jrl
+ */
+@Slf4j
+public class DorisWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter {
+
+  /*private DefaultTransactionDefinition definition;*/
+
+  public DorisWriterImpl(DataSource dataSource) {
+    super(dataSource);
+
+    /*this.definition = new DefaultTransactionDefinition();
+    this.definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
+    this.definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
+    this.definition.setTimeout(3600);*/
+  }
+
+  @Override
+  protected String getDatabaseProductName() {
+    return "Doris";
+  }
+
+  @Override
+  protected String selectTableMetaDataSqlString(String schemaName, String tableName,
+      List<String> fieldNames) {
+    if (CollectionUtils.isEmpty(fieldNames)) {
+      return String.format("SELECT * FROM `%s`.`%s` WHERE 1=2", schemaName, tableName);
+    } else {
+      return String.format("SELECT `%s` FROM `%s`.`%s` WHERE 1=2",
+          StringUtils.join(fieldNames, "`,`"), schemaName, tableName);
+    }
+  }
+
+  @Override
+  public long write(List<String> fieldNames, List<Object[]> recordValues) {
+    if (recordValues.isEmpty()) {
+      return 0;
+    }
+
+    recordValues.parallelStream().forEach((Object[] row) -> {
+      for (int i = 0; i < row.length; ++i) {
+        try {
+          row[i] = ObjectCastUtils.castByDetermine(row[i]);
+        } catch (Exception e) {
+          row[i] = null;
+        }
+      }
+    });
+
+    List<String> placeHolders = Collections.nCopies(fieldNames.size(), "?");
+    String sqlInsert = String.format("INSERT INTO `%s`.`%s` ( `%s` ) VALUES ( %s )",
+        schemaName, tableName,
+        StringUtils.join(fieldNames, "`,`"),
+        StringUtils.join(placeHolders, ","));
+
+    int[] argTypes = new int[fieldNames.size()];
+    for (int i = 0; i < fieldNames.size(); ++i) {
+      String col = fieldNames.get(i);
+      argTypes[i] = this.columnType.get(col);
+    }
+
+    try {
+      jdbcTemplate.execute("set enable_insert_strict = true");
+    } catch (Exception e) {
+      log.error("doris [set enable_insert_strict] error", e);
+    }
+
+    batchWrite(fieldNames, recordValues, sqlInsert, argTypes);
+    /*PlatformTransactionManager transactionManager = new DataSourceTransactionManager(
+        this.dataSource);
+    TransactionTemplate transactionTemplate = new TransactionTemplate(transactionManager,
+        definition);
+    Integer ret = transactionTemplate.execute((TransactionStatus transactionStatus) -> {
+      try {
+        int[] affects = jdbcTemplate.batchUpdate(sqlInsert, recordValues, argTypes);
+        Integer affectCount = affects.length;
+        if (log.isDebugEnabled()) {
+          log.debug("{} insert data affect count: {}", getDatabaseProductName(), affectCount);
+        }
+        return affectCount;
+      } catch (Throwable t) {
+        transactionStatus.setRollbackOnly();
+        throw t;
+      }
+    });*/
+
+    int size = recordValues.size();
+    if (log.isDebugEnabled()) { +
/*log.debug("MySQL insert write data affect count:{}", ret.longValue());*/ + log.debug("Doris insert write data affect count:{}", size); + } + + recordValues.clear(); + return size; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/gpdb/GreenplumCopyWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/gpdb/GreenplumCopyWriterImpl.java new file mode 100644 index 0000000..38091db --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/gpdb/GreenplumCopyWriterImpl.java @@ -0,0 +1,415 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.gpdb; + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; +import srt.cloud.framework.dbswitch.pgwriter.row.SimpleRow; +import srt.cloud.framework.dbswitch.pgwriter.row.SimpleRowWriter; +import srt.cloud.framework.dbswitch.pgwriter.util.PostgreSqlUtils; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.SQLException; +import java.sql.Types; +import java.time.LocalDate; +import java.time.LocalDateTime; +import java.time.LocalTime; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.function.Consumer; + +/** + * Greenplum数据库Copy写入实现类 + * + * @author jrl + */ +@Slf4j +public class GreenplumCopyWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + private static Set unsupportedClassTypeName; + + static { + unsupportedClassTypeName = new HashSet<>(); + unsupportedClassTypeName.add("oracle.sql.TIMESTAMPLTZ"); + unsupportedClassTypeName.add("oracle.sql.TIMESTAMPTZ"); + } + + public GreenplumCopyWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Greenplum"; + } + + @Override + public long write(List fieldNames, List recordValues) { + if (recordValues.isEmpty()) { + return 0; + } + if (fieldNames.isEmpty()) { + throw new IllegalArgumentException("第一个参数[fieldNames]为空,无效!"); + } + if (null == this.columnType || this.columnType.isEmpty()) { + throw new RuntimeException("请先调用prepareWrite()函数,或者出现内部代码集成调用错误!"); + } + + String[] columnNames = new String[fieldNames.size()]; + for (int i = 0; i < fieldNames.size(); ++i) { + String s = fieldNames.get(i); + if (!this.columnType.containsKey(s)) { + throw new RuntimeException( + String.format("表%s.%s 中不存在字段名为%s的字段,请检查参数传入!", schemaName, tableName, s)); + } + + columnNames[i] = s; + } + + SimpleRowWriter.Table table = new SimpleRowWriter.Table(schemaName, tableName, columnNames); + try (Connection connection = dataSource.getConnection(); + SimpleRowWriter pgwriter = + new SimpleRowWriter(table, PostgreSqlUtils.getPGConnection(connection), true)) { + pgwriter.enableNullCharacterHandler(); + for (Object[] objects : recordValues) { + if (fieldNames.size() != objects.length) { + throw new RuntimeException( + String.format("传入的参数有误,字段列数%d与记录中的值个数%d不相符合", fieldNames.size(), objects.length)); + } + + 
pgwriter.startRow(this.getConsumer(fieldNames, objects)); + } + + return recordValues.size(); + } catch (SQLException e) { + throw new RuntimeException(e); + } + } + + /** + * 数据类型转换参考 + *
+ * 1. spring-jdbc: {@code org.springframework.jdbc.core.StatementCreatorUtils} + *
+ * 2. postgresql-driver: {@code org.postgresql.jdbc.PgPreparedStatement} + */ + private Consumer getConsumer(List fieldNames, Object[] objects) { + return (row) -> { + for (int i = 0; i < objects.length; ++i) { + String fieldName = fieldNames.get(i); + Object fieldValue = objects[i]; + Integer fieldType = columnType.get(fieldName); + switch (fieldType) { + case Types.CHAR: + case Types.NCHAR: + case Types.VARCHAR: + case Types.LONGVARCHAR: + case Types.NVARCHAR: + case Types.LONGNVARCHAR: + if (null == fieldValue) { + row.setVarChar(i, null); + } else if (unsupportedClassTypeName.contains(fieldValue.getClass().getName())) { + row.setVarChar(i, null); + } else { + String val = ObjectCastUtils.castToString(fieldValue); + if (null == val) { + throw new RuntimeException(String.format( + "表[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.String/java.sql.Clob,而实际的数据类型为%s", + schemaName, tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setVarChar(i, val); + } + break; + case Types.CLOB: + case Types.NCLOB: + if (null == fieldValue) { + row.setText(i, null); + } else if (unsupportedClassTypeName.contains(fieldValue.getClass().getName())) { + row.setText(i, null); + } else { + String val = ObjectCastUtils.castToString(fieldValue); + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.String/java.sql.Clob,而实际的数据类型为%s", + schemaName, tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setText(i, val); + } + break; + case Types.TINYINT: + if (null == fieldValue) { + row.setByte(i, null); + } else { + Byte val = null; + try { + val = ObjectCastUtils.castToByte(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转错误,应该为java.lang.Byte,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setByte(i, val); + } + break; + case Types.SMALLINT: + if (null == fieldValue) { + row.setShort(i, null); + } else { + Short val = null; + try { + val = ObjectCastUtils.castToShort(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.Short,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setShort(i, val); + } + break; + case Types.INTEGER: + if (null == fieldValue) { + row.setInteger(i, null); + } else { + Integer val = null; + try { + val = ObjectCastUtils.castToInteger(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.Integer,而实际的数据类型为%s", + schemaName, tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setInteger(i, val); + } + break; + case Types.BIGINT: + if (null == fieldValue) { + row.setLong(i, null); + } else { + Long val = null; + try { + val = ObjectCastUtils.castToLong(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if 
(null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.Long,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setLong(i, val); + } + break; + case Types.NUMERIC: + case Types.DECIMAL: + if (null == fieldValue) { + row.setNumeric(i, null); + } else { + Number val = null; + try { + val = ObjectCastUtils.castToNumeric(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.Number,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setNumeric(i, val); + } + break; + case Types.FLOAT: + case Types.REAL: + if (null == fieldValue) { + row.setFloat(i, null); + } else { + Float val = null; + try { + val = ObjectCastUtils.castToFloat(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.Float,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setFloat(i, val); + } + break; + case Types.DOUBLE: + if (null == fieldValue) { + row.setDouble(i, null); + } else { + Double val = null; + try { + val = ObjectCastUtils.castToDouble(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.lang.Double,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + row.setDouble(i, val); + } + break; + case Types.BOOLEAN: + case Types.BIT: + if (null == fieldValue) { + row.setBoolean(i, null); + } else { + Boolean val = null; + try { + val = ObjectCastUtils.castToBoolean(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型错误,应该为java.lang.Boolean,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + row.setBoolean(i, val); + } + break; + case Types.TIME: + if (null == fieldValue) { + row.setTime(i, null); + } else if (unsupportedClassTypeName.contains(fieldValue.getClass().getName())) { + row.setTime(i, null); + } else { + LocalTime val = null; + try { + val = ObjectCastUtils.castToLocalTime(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.sql.Time,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + row.setTime(i, val); + } + break; + case Types.DATE: + if (null == fieldValue) { + row.setDate(i, null); + } else if (unsupportedClassTypeName.contains(fieldValue.getClass().getName())) { + row.setDate(i, null); + } else { + LocalDate val = null; + try { + val = ObjectCastUtils.castToLocalDate(fieldValue); + } catch 
(RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型转换错误,应该为java.sql.Date,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + row.setDate(i, val); + } + break; + case Types.TIMESTAMP: + if (null == fieldValue) { + row.setTimeStamp(i, null); + } else if (unsupportedClassTypeName.contains(fieldValue.getClass().getName())) { + row.setTimeStamp(i, null); + } else { + LocalDateTime val = null; + try { + val = ObjectCastUtils.castToLocalDateTime(fieldValue); + } catch (RuntimeException e) { + throw new RuntimeException(String.format("表名[%s.%s]的字段名[%s]数据类型转错误,%s", + schemaName, tableName, fieldName, e.getMessage())); + } + + if (null == val) { + throw new RuntimeException(String.format( + "表名[%s.%s]的字段名[%s]数据类型错误,应该为java.sql.Timestamp,而实际的数据类型为%s", schemaName, + tableName, fieldName, fieldValue.getClass().getName())); + } + + row.setTimeStamp(i, val); + } + break; + case Types.BINARY: + case Types.VARBINARY: + case Types.BLOB: + case Types.LONGVARBINARY: + if (null == fieldValue) { + row.setByteArray(i, null); + } else { + row.setByteArray(i, ObjectCastUtils.castToByteArray(fieldValue)); + } + break; + case Types.NULL: + case Types.OTHER: + if (null == fieldValue) { + row.setText(i, null); + } else { + row.setText(i, fieldValue.toString()); + } + break; + default: + throw new RuntimeException( + String.format("不支持的数据库字段类型,表名[%s.%s] 字段名[%s].", schemaName, + tableName, fieldName)); + } + } + }; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/gpdb/GreenplumInsertWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/gpdb/GreenplumInsertWriterImpl.java new file mode 100644 index 0000000..70d4d66 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/gpdb/GreenplumInsertWriterImpl.java @@ -0,0 +1,51 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.gpdb; + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * Greenplum数据库Insert写入实现类 + * + * @author jrl + */ +@Slf4j +public class GreenplumInsertWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public GreenplumInsertWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Greenplum"; + } + + @Override + public long write(List fieldNames, List recordValues) { + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.write(fieldNames, recordValues); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/kingbase/KingbaseCopyWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/kingbase/KingbaseCopyWriterImpl.java new file mode 100644 index 0000000..d494479 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/kingbase/KingbaseCopyWriterImpl.java @@ -0,0 +1,33 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.kingbase; + +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.gpdb.GreenplumCopyWriterImpl; + +import javax.sql.DataSource; + +/** + * Kingbase8数据库Copy写入实现类 + * + * @author jrl + */ +public class KingbaseCopyWriterImpl extends GreenplumCopyWriterImpl implements IDatabaseWriter { + + public KingbaseCopyWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Kingbase"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/kingbase/KingbaseInsertWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/kingbase/KingbaseInsertWriterImpl.java new file mode 100644 index 0000000..1a48a94 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/kingbase/KingbaseInsertWriterImpl.java @@ -0,0 +1,33 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.kingbase; + +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.gpdb.GreenplumInsertWriterImpl; + +import javax.sql.DataSource; + +/** + * Kingbase8数据库Insert写入实现类 + * + * @author jrl + */ +public class KingbaseInsertWriterImpl extends GreenplumInsertWriterImpl implements IDatabaseWriter { + + public KingbaseInsertWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Kingbase"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/mssql/SqlServerWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/mssql/SqlServerWriterImpl.java new file mode 100644 index 0000000..8723b35 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/mssql/SqlServerWriterImpl.java @@ -0,0 +1,111 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.mssql; + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; +import org.apache.commons.lang3.StringUtils; +import org.springframework.jdbc.datasource.DataSourceTransactionManager; +import org.springframework.transaction.PlatformTransactionManager; +import org.springframework.transaction.TransactionDefinition; +import org.springframework.transaction.TransactionException; +import org.springframework.transaction.TransactionStatus; +import org.springframework.transaction.support.DefaultTransactionDefinition; +import org.springframework.util.CollectionUtils; + +import javax.sql.DataSource; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +/** + * SQLServer批量写入实现类 + * + * @author jrl + */ +@Slf4j +public class SqlServerWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public SqlServerWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "SQL Server"; + } + + @Override + protected String selectTableMetaDataSqlString(String schemaName, String tableName, + List fieldNames) { + if (CollectionUtils.isEmpty(fieldNames)) { + return String.format("SELECT * FROM [%s].[%s] WHERE 1=2", schemaName, tableName); + } else { + return String.format("SELECT [%s] FROM [%s].[%s] WHERE 1=2", + StringUtils.join(fieldNames, "],["), schemaName, tableName); + } + } + + @Override + public long write(List fieldNames, List recordValues) { + if (recordValues.isEmpty()) { + return 0; + } + + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + 
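+    // Build a single parameterized row insert and let batchWrite(...) replay it for
+    // every record; e.g. for a hypothetical table [dbo].[users] with columns (id, name)
+    // the generated statement is: INSERT INTO [dbo].[users] ( [id],[name] ) VALUES ( ?,? )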
+ List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + String sqlInsert = String.format("INSERT INTO [%s].[%s] ( [%s] ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "],["), + StringUtils.join(placeHolders, ",")); + + int[] argTypes = new int[fieldNames.size()]; + for (int i = 0; i < fieldNames.size(); ++i) { + String col = fieldNames.get(i); + argTypes[i] = this.columnType.get(col); + } + + /*DefaultTransactionDefinition definition = new DefaultTransactionDefinition(); + definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED); + definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); + PlatformTransactionManager transactionManager = new DataSourceTransactionManager( + this.dataSource); + TransactionStatus status = transactionManager.getTransaction(definition);*/ + + batchWrite(fieldNames, recordValues, sqlInsert, argTypes); + return recordValues.size(); + /*try { + int[] affects = jdbcTemplate.batchUpdate(sqlInsert, recordValues, argTypes); + int affectCount = affects.length; + recordValues.clear(); + transactionManager.commit(status); + if (log.isDebugEnabled()) { + log.debug("{} insert data affect count: {}", getDatabaseProductName(), affectCount); + } + return affectCount; + } catch (Exception e) { + transactionManager.rollback(status); + throw e; + }*/ + + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/mysql/MySqlWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/mysql/MySqlWriterImpl.java new file mode 100644 index 0000000..404c37d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/mysql/MySqlWriterImpl.java @@ -0,0 +1,149 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.mysql; + +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import org.springframework.util.CollectionUtils; +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.util.Collections; +import java.util.List; + +/** + * MySQL数据库写入实现类 + * + * @author jrl + */ +@Slf4j +public class MySqlWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + /*private DefaultTransactionDefinition definition;*/ + + public MySqlWriterImpl(DataSource dataSource) { + super(dataSource); + + /*this.definition = new DefaultTransactionDefinition(); + this.definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED); + this.definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); + this.definition.setTimeout(3600);*/ + } + + @Override + protected String getDatabaseProductName() { + return "MySQL"; + } + + @Override + protected String selectTableMetaDataSqlString(String schemaName, String tableName, + List fieldNames) { + if (CollectionUtils.isEmpty(fieldNames)) { + return String.format("SELECT * FROM `%s`.`%s` WHERE 1=2", schemaName, tableName); + } else { + return String.format("SELECT `%s` FROM `%s`.`%s` WHERE 1=2", + StringUtils.join(fieldNames, "`,`"), schemaName, tableName); + } + } + + @Override + public long write(List fieldNames, List recordValues) { + if (recordValues.isEmpty()) { + return 0; + } + + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + /*List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + StringBuilder sqlInsert = new StringBuilder(String.format("INSERT INTO `%s`.`%s` ( `%s` ) VALUES ", + schemaName, tableName, + StringUtils.join(fieldNames, "`,`"))); + //mysql 一条语句插入多条数据可以提升近一倍速率 sql语句长度有限制,合并sql语句时要注意。长度限制可以通过max_allowed_packet配置项修改,默认为1M。 + for (int i = 0; i < recordValues.size(); i++) { + sqlInsert.append(String.format("( %s )", StringUtils.join(placeHolders, ","))); + if (i < recordValues.size() - 1) { + sqlInsert.append(","); + } + }*/ + List placeHolders = Collections.nCopies(fieldNames.size(), "?"); + String sqlInsert = String.format("INSERT INTO `%s`.`%s` ( `%s` ) VALUES ( %s )", + schemaName, tableName, + StringUtils.join(fieldNames, "`,`"), + StringUtils.join(placeHolders, ",")); + + int[] argTypes = new int[fieldNames.size()]; + for (int i = 0; i < fieldNames.size(); ++i) { + String col = fieldNames.get(i); + argTypes[i] = this.columnType.get(col); + } + + batchWrite(fieldNames, recordValues, sqlInsert, argTypes); + + /*PlatformTransactionManager transactionManager = new DataSourceTransactionManager( + this.dataSource); + TransactionTemplate transactionTemplate = new TransactionTemplate(transactionManager, + definition); + Integer ret = transactionTemplate.execute((TransactionStatus transactionStatus) -> { + try { + long start = 
System.currentTimeMillis(); + int[] affects = jdbcTemplate.batchUpdate(sqlInsert, recordValues, argTypes); + log.info("sync use time:" + (System.currentTimeMillis() - start)); + Integer affectCount = affects.length; + if (log.isDebugEnabled()) { + log.debug("{} insert data affect count: {}", getDatabaseProductName(), affectCount); + } + return affectCount; + } catch (Throwable t) { + transactionStatus.setRollbackOnly(); + throw t; + } + });*/ + + int size = recordValues.size(); + if (log.isDebugEnabled()) { + /*log.debug("MySQL insert write data affect count:{}", ret.longValue());*/ + log.debug("MySQL insert write data affect count:{}", size); + } + + recordValues.clear(); + return size; + } + + /*@Override + public void batchWrite(List fieldNames, List recordValues, String sqlInsert, int[] argTypes) { + try (Connection connection = dataSource.getConnection(); + PreparedStatement ps = connection.prepareStatement(sqlInsert);) { + int i = 1; + for (Object[] recordValue : recordValues) { + for (int j = 0; j < fieldNames.size(); j++) { + ps.setObject(i, recordValue[j], argTypes[j]); + i++; + } + } + ps.executeUpdate(); + } catch (SQLException se) { + throw new RuntimeException(se); + } + }*/ + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/oracle/OracleWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/oracle/OracleWriterImpl.java new file mode 100644 index 0000000..56bdcf4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/oracle/OracleWriterImpl.java @@ -0,0 +1,112 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.oracle; + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.common.util.TypeConvertUtils; +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import org.springframework.jdbc.core.SqlTypeValue; + +import javax.sql.DataSource; +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Types; +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +/** + * Oracle数据库写入实现类 + * + * @author jrl + */ +@Slf4j +public class OracleWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public OracleWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Oracle"; + } + + @Override + public long write(List fieldNames, List recordValues) { + /** + * 将java.sql.Array 类型转换为java.lang.String + *
+     * Oracle 没有数组类型,这里以文本类型进行存储
+     *
+ * Oracle的CLOB和BLOB类型写入请见: + *
+ * oracle.jdbc.driver.OraclePreparedStatement.setObjectCritical + */ + List iss = new ArrayList<>(); + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + int dataType = this.columnType.get(fieldNames.get(i)); + switch (dataType) { + case Types.CLOB: + case Types.NCLOB: + row[i] = Objects.isNull(row[i]) + ? null + : TypeConvertUtils.castToString(row[i]); + break; + case Types.BLOB: + final byte[] bytes = Objects.isNull(row[i]) + ? null + : TypeConvertUtils.castToByteArray(row[i]); + row[i] = new SqlTypeValue() { + @Override + public void setTypeValue(PreparedStatement ps, int paramIndex, int sqlType, + String typeName) throws SQLException { + if (null != bytes) { + InputStream is = new ByteArrayInputStream(bytes); + ps.setBlob(paramIndex, is); + iss.add(is); + } else { + ps.setNull(paramIndex, sqlType); + } + } + }; + break; + case Types.ROWID: + case Types.ARRAY: + case Types.REF: + case Types.SQLXML: + row[i] = null; + break; + default: + break; + } + } catch (Exception e) { + row[i] = null; + } + } + }); + + try { + return super.write(fieldNames, recordValues); + } finally { + iss.forEach(is -> { + try { + is.close(); + } catch (Exception ignore) { + } + }); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/oscar/OscarWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/oscar/OscarWriterImpl.java new file mode 100644 index 0000000..b087180 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/oscar/OscarWriterImpl.java @@ -0,0 +1,50 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.oscar; + + +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * 神通数据库写入实现类 + * + * @author tang + */ +public class OscarWriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public OscarWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Oscar"; + } + + @Override + public long write(List fieldNames, List recordValues) { + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.write(fieldNames, recordValues); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/sqlite/Sqlite3WriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/sqlite/Sqlite3WriterImpl.java new file mode 100644 index 0000000..60cb49f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/sqlite/Sqlite3WriterImpl.java @@ -0,0 +1,58 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.sqlite; + + +import srt.cloud.framework.dbswitch.dbwriter.AbstractDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.IDatabaseWriter; +import srt.cloud.framework.dbswitch.dbwriter.util.ObjectCastUtils; + +import javax.sql.DataSource; +import java.util.List; + +/** + * SQLite数据库写入实现类 + * + * @author jrl + */ +public class Sqlite3WriterImpl extends AbstractDatabaseWriter implements IDatabaseWriter { + + public Sqlite3WriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "SQLite"; + } + + /*@Override + protected TransactionDefinition getTransactionDefinition() { + DefaultTransactionDefinition definition = new DefaultTransactionDefinition(); + definition.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE); + definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); + return definition; + }*/ + + @Override + public long write(List fieldNames, List recordValues) { + recordValues.parallelStream().forEach((Object[] row) -> { + for (int i = 0; i < row.length; ++i) { + try { + row[i] = ObjectCastUtils.castByDetermine(row[i]); + } catch (Exception e) { + row[i] = null; + } + } + }); + + return super.write(fieldNames, recordValues); + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/sybase/SybaseWriterImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/sybase/SybaseWriterImpl.java new file mode 100644 index 0000000..814e1c7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/sybase/SybaseWriterImpl.java @@ -0,0 +1,35 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.dbwriter.sybase; + + +import lombok.extern.slf4j.Slf4j; +import srt.cloud.framework.dbswitch.dbwriter.mssql.SqlServerWriterImpl; + +import javax.sql.DataSource; + +/** + * Sybase批量写入实现类 + * + * @author tang + */ +@Slf4j +public class SybaseWriterImpl extends SqlServerWriterImpl { + + public SybaseWriterImpl(DataSource dataSource) { + super(dataSource); + } + + @Override + protected String getDatabaseProductName() { + return "Sybase"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/util/ObjectCastUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/util/ObjectCastUtils.java new file mode 100644 index 0000000..68057ac --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/dbwriter/util/ObjectCastUtils.java @@ -0,0 +1,908 @@ +package srt.cloud.framework.dbswitch.dbwriter.util; + +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; + +import java.io.ByteArrayOutputStream; +import java.io.ObjectOutputStream; +import java.lang.reflect.Method; +import java.sql.SQLException; +import java.sql.Struct; +import java.time.Instant; +import java.time.LocalDate; +import java.time.LocalDateTime; +import java.time.LocalTime; +import java.time.ZoneId; + +@Slf4j +public final class ObjectCastUtils { + + private ObjectCastUtils() { + } + + /** + * 将java.sql.Clob类型转换为java.lang.String类型 + * + * @param clob java.sql.Clob类型对象 + * @return java.lang.String类型数据 + */ + public static String clob2Str(java.sql.Clob clob) { + if (null == clob) { + return null; + } + + try (java.io.Reader is = clob.getCharacterStream()) { + java.io.BufferedReader reader = new java.io.BufferedReader(is); + String line = reader.readLine(); + StringBuilder sb = new StringBuilder(); + while (line != null) { + sb.append(line); + line = reader.readLine(); + } + return sb.toString(); + } catch (SQLException | java.io.IOException e) { + log.warn("Field Value convert from java.sql.Clob to java.lang.String failed:", e); + return null; + } + } + + /** + * 将java.sql.Blob类型转换为byte数组 + * + * @param blob java.sql.Blob类型对象 + * @return byte数组 + */ + public static byte[] blob2Bytes(java.sql.Blob blob) { + if (null == blob) { + return null; + } + + try (java.io.InputStream inputStream = blob.getBinaryStream();) { + try (java.io.BufferedInputStream is = new java.io.BufferedInputStream(inputStream)) { + byte[] bytes = new byte[(int) blob.length()]; + int len = bytes.length; + int offset = 0; + int read = 0; + while (offset < len && (read = is.read(bytes, offset, len - offset)) >= 0) { + offset += read; + } + return bytes; + } + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + /** + * 将Object对象转换为字节数组 + * + * @param obj 对象 + * @return 字节数组 + */ + public static byte[] toByteArray(Object obj) { + if (null == obj) { + return null; + } + + try (ByteArrayOutputStream bos = new ByteArrayOutputStream(); + ObjectOutputStream oos = new ObjectOutputStream(bos)) { + oos.writeObject(obj); + oos.flush(); + return bos.toByteArray(); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + /** + * 将任意类型转换为java.lang.String类型 + * + * @param in 任意类型的对象实例 + * @return 
java.lang.String类型 + */ + public static String castToString(final Object in) { + if (in instanceof Character) { + return in.toString(); + } else if (in instanceof String) { + return in.toString(); + } else if (in instanceof Character) { + return in.toString(); + } else if (in instanceof java.sql.Clob) { + return clob2Str((java.sql.Clob) in); + } else if (in instanceof Number) { + return in.toString(); + } else if (in instanceof java.sql.RowId) { + return in.toString(); + } else if (in instanceof Boolean) { + return in.toString(); + } else if (in instanceof java.util.Date) { + return in.toString(); + } else if (in instanceof LocalDate) { + return in.toString(); + } else if (in instanceof LocalTime) { + return in.toString(); + } else if (in instanceof LocalDateTime) { + return in.toString(); + } else if (in instanceof java.time.OffsetDateTime) { + return in.toString(); + } else if (in instanceof java.util.UUID) { + return in.toString(); + } else if (in instanceof org.postgresql.util.PGobject) { + return in.toString(); + } else if (in instanceof org.postgresql.jdbc.PgSQLXML) { + try { + return ((org.postgresql.jdbc.PgSQLXML) in).getString(); + } catch (Exception e) { + return ""; + } + } else if (in instanceof java.sql.SQLXML) { + return in.toString(); + } else if (in instanceof java.sql.Array) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.INTERVALDS")) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.INTERVALYM")) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.TIMESTAMPLTZ")) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.TIMESTAMPTZ")) { + return in.toString(); + } else if (in.getClass().getName().equals("oracle.sql.BFILE")) { + Class clz = in.getClass(); + try { + Method methodFileExists = clz.getMethod("fileExists"); + boolean exists = (boolean) methodFileExists.invoke(in); + if (!exists) { + return ""; + } + + Method methodOpenFile = clz.getMethod("openFile"); + methodOpenFile.invoke(in); + + try { + Method methodCharacterStreamValue = clz.getMethod("getBinaryStream"); + java.io.InputStream is = (java.io.InputStream) methodCharacterStreamValue.invoke(in); + + String line; + StringBuilder sb = new StringBuilder(); + + java.io.BufferedReader br = new java.io.BufferedReader(new java.io.InputStreamReader(is)); + while ((line = br.readLine()) != null) { + sb.append(line); + } + + return sb.toString(); + } finally { + Method methodCloseFile = clz.getMethod("closeFile"); + methodCloseFile.invoke(in); + } + } catch (java.lang.reflect.InvocationTargetException ex) { + log.warn("Error for handle oracle.sql.BFILE: ", ex); + return ""; + } catch (Exception e) { + throw new RuntimeException(e); + } + } else if (in.getClass().getName().equals("microsoft.sql.DateTimeOffset")) { + return in.toString(); + } else if (in instanceof byte[]) { + return new String((byte[]) in); + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Byte类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Byte类型 + */ + public static Byte castToByte(final Object in) { + if (in instanceof Number) { + return ((Number) in).byteValue(); + } else if (in instanceof java.util.Date) { + return Long.valueOf(((java.util.Date) in).getTime()).byteValue(); + } else if (in instanceof String) { + try { + return Byte.parseByte(in.toString()); + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Byte类型:%s", e.getMessage())); + } + } else if 
(in instanceof Character) { + try { + return Byte.parseByte(in.toString()); + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.Character类型转换为java.lang.Byte类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + return Byte.parseByte(clob2Str((java.sql.Clob) in)); + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Byte类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? (byte) 1 : (byte) 0; + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Short类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Short类型 + */ + public static Short castToShort(final Object in) { + if (in instanceof Number) { + return ((Number) in).shortValue(); + } else if (in instanceof Byte) { + return (short) (((byte) in) & 0xff); + } else if (in instanceof java.util.Date) { + return (short) ((java.util.Date) in).getTime(); + } else if (in instanceof java.util.Calendar) { + return (short) ((java.util.Calendar) in).getTime().getTime(); + } else if (in instanceof LocalDateTime) { + return (short) java.sql.Timestamp.valueOf((LocalDateTime) in).getTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return (short) java.sql.Timestamp.valueOf(((java.time.OffsetDateTime) in).toLocalDateTime()) + .getTime(); + } else if (in instanceof String || in instanceof Character) { + try { + String s = in.toString().trim(); + if (s.equalsIgnoreCase("true")) { + return Short.valueOf((short) 1); + } else if (s.equalsIgnoreCase("false")) { + return Short.valueOf((short) 0); + } else { + return Short.parseShort(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Short类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + String s = clob2Str((java.sql.Clob) in).trim(); + if (s.equalsIgnoreCase("true")) { + return Short.valueOf((short) 1); + } else if (s.equalsIgnoreCase("false")) { + return Short.valueOf((short) 0); + } else { + return Short.parseShort(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Short类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? 
(short) 1 : (short) 0; + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Integer类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Integer类型 + */ + public static Integer castToInteger(final Object in) { + if (in instanceof Number) { + return ((Number) in).intValue(); + } else if (in instanceof Byte) { + return (((byte) in) & 0xff); + } else if (in instanceof java.util.Date) { + return (int) ((java.util.Date) in).getTime(); + } else if (in instanceof java.util.Calendar) { + return (int) ((java.util.Calendar) in).getTime().getTime(); + } else if (in instanceof LocalDateTime) { + return (int) java.sql.Timestamp.valueOf((LocalDateTime) in).getTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return (int) java.sql.Timestamp.valueOf(((java.time.OffsetDateTime) in).toLocalDateTime()) + .getTime(); + } else if (in instanceof String || in instanceof Character) { + try { + String s = in.toString().trim(); + if (s.equalsIgnoreCase("true")) { + return Integer.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Integer.valueOf(0); + } else { + return Integer.parseInt(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Integer类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + String s = clob2Str((java.sql.Clob) in).trim(); + if (s.equalsIgnoreCase("true")) { + return Integer.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Integer.valueOf(0); + } else { + return Integer.parseInt(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Integer类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? (int) 1 : (int) 0; + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Long类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Long类型 + */ + public static Long castToLong(final Object in) { + if (in instanceof Number) { + return ((Number) in).longValue(); + } else if (in instanceof Byte) { + return (long) (((byte) in) & 0xff); + } else if (in instanceof java.util.Date) { + return ((java.util.Date) in).getTime(); + } else if (in instanceof java.util.Calendar) { + return ((java.util.Calendar) in).getTime().getTime(); + } else if (in instanceof LocalDateTime) { + return java.sql.Timestamp.valueOf((LocalDateTime) in).getTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return java.sql.Timestamp.valueOf(((java.time.OffsetDateTime) in).toLocalDateTime()) + .getTime(); + } else if (in instanceof String || in instanceof Character) { + try { + String s = in.toString().trim(); + if (s.equalsIgnoreCase("true")) { + return Long.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Long.valueOf(0); + } else { + return Long.parseLong(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Long类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + String s = clob2Str((java.sql.Clob) in).trim(); + if (s.equalsIgnoreCase("true")) { + return Long.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Long.valueOf(0); + } else { + return Long.parseLong(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Long类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? 
(long) 1 : (long) 0; + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Number类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Number类型 + */ + public static Number castToNumeric(final Object in) { + if (in instanceof Number) { + return (Number) in; + } else if (in instanceof java.util.Date) { + return ((java.util.Date) in).getTime(); + } else if (in instanceof java.util.Calendar) { + return ((java.util.Calendar) in).getTime().getTime(); + } else if (in instanceof LocalDateTime) { + return java.sql.Timestamp.valueOf((LocalDateTime) in).getTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return java.sql.Timestamp.valueOf(((java.time.OffsetDateTime) in).toLocalDateTime()) + .getTime(); + } else if (in instanceof String || in instanceof Character) { + try { + String s = in.toString().trim(); + if (s.equalsIgnoreCase("true")) { + return Integer.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Integer.valueOf(0); + } else { + return new java.math.BigDecimal(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Number类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + String s = clob2Str((java.sql.Clob) in).trim(); + if (s.equalsIgnoreCase("true")) { + return Integer.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Integer.valueOf(0); + } else { + return new java.math.BigDecimal(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Number类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? (long) 1 : (long) 0; + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Float类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Float类型 + */ + public static Float castToFloat(final Object in) { + if (in instanceof Number) { + return ((Number) in).floatValue(); + } else if (in instanceof java.util.Date) { + return (float) ((java.util.Date) in).getTime(); + } else if (in instanceof java.util.Calendar) { + return (float) ((java.util.Calendar) in).getTime().getTime(); + } else if (in instanceof LocalDateTime) { + return (float) java.sql.Timestamp.valueOf((LocalDateTime) in).getTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return (float) java.sql.Timestamp.valueOf(((java.time.OffsetDateTime) in).toLocalDateTime()) + .getTime(); + } else if (in instanceof String || in instanceof Character) { + try { + String s = in.toString().trim(); + if (s.equalsIgnoreCase("true")) { + return Float.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Float.valueOf(0); + } else { + return Float.parseFloat(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Float类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + String s = clob2Str((java.sql.Clob) in).trim(); + if (s.equalsIgnoreCase("true")) { + return Float.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Float.valueOf(0); + } else { + return Float.parseFloat(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Float类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? 
1f : 0f; + } + + return null; + } + + /** + * 将任意类型转换为java.lang.Double类型 + * + * @param in 任意类型的对象实例 + * @return java.lang.Double类型 + */ + public static Double castToDouble(final Object in) { + if (in instanceof Number) { + return ((Number) in).doubleValue(); + } else if (in instanceof java.util.Date) { + return (double) ((java.util.Date) in).getTime(); + } else if (in instanceof java.util.Calendar) { + return (double) ((java.util.Calendar) in).getTime().getTime(); + } else if (in instanceof LocalDateTime) { + return (double) java.sql.Timestamp.valueOf((LocalDateTime) in).getTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return (double) java.sql.Timestamp.valueOf(((java.time.OffsetDateTime) in).toLocalDateTime()) + .getTime(); + } else if (in instanceof String || in instanceof Character) { + try { + String s = in.toString().trim(); + if (s.equalsIgnoreCase("true")) { + return Double.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Double.valueOf(0); + } else { + return Double.parseDouble(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将将java.lang.String类型转换为java.lang.Double类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + String s = clob2Str((java.sql.Clob) in).trim(); + if (s.equalsIgnoreCase("true")) { + return Double.valueOf(1); + } else if (s.equalsIgnoreCase("false")) { + return Double.valueOf(0); + } else { + return Double.parseDouble(s); + } + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Double类型:%s", e.getMessage())); + } + } else if (in instanceof Boolean) { + return (Boolean) in ? 1d : 0d; + } + + return null; + } + + /** + * 将任意类型转换为java.time.LocalDate类型 + * + * @param in 任意类型的对象实例 + * @return java.time.LocalDate类型 + */ + public static LocalDate castToLocalDate(final Object in) { + if (in instanceof java.sql.Time) { + java.sql.Time date = (java.sql.Time) in; + LocalDate localDate = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()) + .toLocalDate(); + return localDate; + } else if (in instanceof java.sql.Timestamp) { + java.sql.Timestamp t = (java.sql.Timestamp) in; + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime.toLocalDate(); + } else if (in instanceof java.util.Date) { + java.util.Date date = (java.util.Date) in; + LocalDate localDate = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()) + .toLocalDate(); + return localDate; + } else if (in instanceof java.util.Calendar) { + java.sql.Date date = new java.sql.Date(((java.util.Calendar) in).getTime().getTime()); + LocalDate localDate = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()) + .toLocalDate(); + return localDate; + } else if (in instanceof LocalDate) { + return (LocalDate) in; + } else if (in instanceof LocalTime) { + return LocalDate.MIN; + } else if (in instanceof LocalDateTime) { + return ((LocalDateTime) in).toLocalDate(); + } else if (in instanceof java.time.OffsetDateTime) { + return ((java.time.OffsetDateTime) in).toLocalDate(); + } else if (in.getClass().getName().equals("oracle.sql.TIMESTAMP")) { + Class clz = in.getClass(); + try { + Method m = clz.getMethod("timestampValue"); + java.sql.Timestamp date = (java.sql.Timestamp) m.invoke(in); + LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate(); + return localDate; + } catch (Exception e) { + throw new RuntimeException(e); + } + } 
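+    // microsoft.sql.DateTimeOffset is likewise unwrapped reflectively (via its
+    // getTimestamp() accessor), avoiding a compile-time SQL Server driver dependency.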
else if (in.getClass().getName().equals("microsoft.sql.DateTimeOffset")) { + Class clz = in.getClass(); + try { + Method m = clz.getMethod("getTimestamp"); + java.sql.Timestamp t = (java.sql.Timestamp) m.invoke(in); + LocalDateTime localDateTime = LocalDateTime + .ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime.toLocalDate(); + } catch (Exception e) { + throw new RuntimeException(e); + } + } else if (in instanceof String || in instanceof Character) { + try { + java.sql.Time date = java.sql.Time.valueOf(in.toString()); + LocalDate localDate = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()) + .toLocalDate(); + return localDate; + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.sql.Time类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + java.sql.Time date = java.sql.Time.valueOf(clob2Str((java.sql.Clob) in)); + LocalDate localDate = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()) + .toLocalDate(); + return localDate; + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.sql.Time类型:%s", e.getMessage())); + } + } else if (in instanceof Number) { + java.sql.Timestamp t = new java.sql.Timestamp(((Number) in).longValue()); + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime.toLocalDate(); + } + + return null; + } + + /** + * 将任意类型转换为java.time.LocalTime类型 + * + * @param in 任意类型的对象实例 + * @return java.time.LocalDate类型 + */ + public static LocalTime castToLocalTime(final Object in) { + if (in instanceof java.sql.Time) { + java.sql.Time date = (java.sql.Time) in; + LocalTime localTime = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()) + .toLocalTime(); + return localTime; + } else if (in instanceof java.sql.Timestamp) { + java.sql.Timestamp t = (java.sql.Timestamp) in; + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime.toLocalTime(); + } else if (in instanceof java.util.Date) { + return LocalTime.of(0, 0, 0); + } else if (in instanceof java.util.Calendar) { + java.sql.Date date = new java.sql.Date(((java.util.Calendar) in).getTime().getTime()); + LocalDateTime localDateTime = Instant.ofEpochMilli(date.getTime()) + .atZone(ZoneId.systemDefault()) + .toLocalDateTime(); + return localDateTime.toLocalTime(); + } else if (in instanceof LocalDate) { + return LocalTime.of(0, 0, 0); + } else if (in instanceof LocalTime) { + return (LocalTime) in; + } else if (in instanceof LocalDateTime) { + return ((LocalDateTime) in).toLocalTime(); + } else if (in instanceof java.time.OffsetDateTime) { + return ((java.time.OffsetDateTime) in).toLocalTime(); + } else if (in.getClass().getName().equals("oracle.sql.TIMESTAMP")) { + Class clz = in.getClass(); + try { + Method m = clz.getMethod("timestampValue"); + java.sql.Timestamp date = (java.sql.Timestamp) m.invoke(in); + LocalDateTime localDateTime = date.toInstant().atZone(ZoneId.systemDefault()) + .toLocalDateTime(); + return localDateTime.toLocalTime(); + } catch (Exception e) { + throw new RuntimeException(e); + } + } else if (in.getClass().getName().equals("microsoft.sql.DateTimeOffset")) { + Class clz = in.getClass(); + try { + Method m = clz.getMethod("getTimestamp"); + java.sql.Timestamp t = (java.sql.Timestamp) m.invoke(in); + LocalDateTime localDateTime = LocalDateTime + .ofInstant(t.toInstant(), 
ZoneId.systemDefault()); + return localDateTime.toLocalTime(); + } catch (Exception e) { + throw new RuntimeException(e); + } + } else if (in instanceof String || in instanceof Character) { + try { + java.sql.Time date = java.sql.Time.valueOf(in.toString()); + return LocalTime.ofSecondOfDay(date.getTime()); + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.sql.Time类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + java.sql.Time date = java.sql.Time.valueOf(clob2Str((java.sql.Clob) in)); + return LocalTime.ofSecondOfDay(date.getTime()); + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.sql.Time类型:%s", e.getMessage())); + } + } else if (in instanceof Number) { + java.sql.Timestamp t = new java.sql.Timestamp(((Number) in).longValue()); + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime.toLocalTime(); + } + + return null; + } + + /** + * 将任意类型转换为java.time.LocalDateTime类型 + * + * @param in 任意类型的对象实例 + * @return java.time.LocalDateTime类型 + */ + public static LocalDateTime castToLocalDateTime(final Object in) { + if (in instanceof java.sql.Timestamp) { + java.sql.Timestamp t = (java.sql.Timestamp) in; + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } else if (in instanceof java.sql.Date) { + java.sql.Date date = (java.sql.Date) in; + LocalDate localDate = date.toLocalDate(); + LocalTime localTime = LocalTime.of(0, 0, 0); + LocalDateTime localDateTime = LocalDateTime.of(localDate, localTime); + return localDateTime; + } else if (in instanceof java.sql.Time) { + java.sql.Time date = (java.sql.Time) in; + java.sql.Timestamp t = new java.sql.Timestamp(date.getTime()); + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } else if (in instanceof java.util.Date) { + java.sql.Timestamp t = new java.sql.Timestamp(((java.util.Date) in).getTime()); + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } else if (in instanceof java.util.Calendar) { + java.sql.Timestamp t = new java.sql.Timestamp(((java.util.Calendar) in).getTime().getTime()); + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } else if (in instanceof LocalDate) { + LocalDate localDate = (LocalDate) in; + LocalTime localTime = LocalTime.of(0, 0, 0); + LocalDateTime localDateTime = LocalDateTime.of(localDate, localTime); + return localDateTime; + } else if (in instanceof LocalTime) { + LocalDate localDate = LocalDate.MIN; + LocalTime localTime = (LocalTime) in; + LocalDateTime localDateTime = LocalDateTime.of(localDate, localTime); + return localDateTime; + } else if (in instanceof LocalDateTime) { + return (LocalDateTime) in; + } else if (in instanceof java.time.OffsetDateTime) { + return ((java.time.OffsetDateTime) in).toLocalDateTime(); + } else if (in.getClass().getName().equals("oracle.sql.TIMESTAMP")) { + Class clz = in.getClass(); + try { + Method m = clz.getMethod("timestampValue"); + java.sql.Timestamp t = (java.sql.Timestamp) m.invoke(in); + LocalDateTime localDateTime = LocalDateTime + .ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } catch (Exception e) { + throw new RuntimeException(e); + } + } else 
if (in.getClass().getName().equals("microsoft.sql.DateTimeOffset")) { + Class clz = in.getClass(); + try { + Method m = clz.getMethod("getTimestamp"); + java.sql.Timestamp t = (java.sql.Timestamp) m.invoke(in); + LocalDateTime localDateTime = LocalDateTime + .ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } catch (Exception e) { + throw new RuntimeException(e); + } + } else if (in instanceof String || in instanceof Character) { + try { + java.sql.Timestamp t = java.sql.Timestamp.valueOf(in.toString()); + LocalDateTime localDateTime = LocalDateTime + .ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.sql.TimeStamp类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + java.sql.Timestamp t = java.sql.Timestamp.valueOf(clob2Str((java.sql.Clob) in)); + LocalDateTime localDateTime = LocalDateTime + .ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.sql.TimeStamp类型:%s", e.getMessage())); + } + } else if (in instanceof Number) { + java.sql.Timestamp t = new java.sql.Timestamp(((Number) in).longValue()); + LocalDateTime localDateTime = LocalDateTime.ofInstant(t.toInstant(), ZoneId.systemDefault()); + return localDateTime; + } + + return null; + } + + /** + * 将任意类型转换为byte[]类型 + * + * @param in 任意类型的对象实例 + * @return byte[]类型 + */ + public static byte[] castToByteArray(final Object in) { + if (in instanceof byte[]) { + return (byte[]) in; + } else if (in instanceof java.util.Date) { + return in.toString().getBytes(); + } else if (in instanceof java.sql.Blob) { + return blob2Bytes((java.sql.Blob) in); + } else if (in instanceof String || in instanceof Character) { + return in.toString().getBytes(); + } else if (in instanceof java.sql.Clob) { + return clob2Str((java.sql.Clob) in).toString().getBytes(); + } else { + return toByteArray(in); + } + } + + /** + * 将任意类型转换为Boolean类型 + * + * @param in 任意类型的对象实例 + * @return Boolean类型 + */ + public static Boolean castToBoolean(final Object in) { + if (in instanceof Boolean) { + return (Boolean) in; + } else if (in instanceof Number) { + return ((Number) in).intValue() != 0; + } else if (in instanceof String || in instanceof Character) { + try { + return Boolean.parseBoolean(in.toString()); + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("无法将java.lang.String类型转换为java.lang.Boolean类型:%s", e.getMessage())); + } + } else if (in instanceof java.sql.Clob) { + try { + return Boolean.parseBoolean(clob2Str((java.sql.Clob) in)); + } catch (NumberFormatException e) { + throw new RuntimeException( + String.format("无法将java.sql.Clob类型转换为java.lang.Boolean类型:%s", e.getMessage())); + } + } + + return null; + } + + public static Object castByDetermine(final Object in) { + if (null == in) { + return null; + } + + if (in instanceof java.sql.Clob) { + return clob2Str((java.sql.Clob) in); + } else if (in instanceof java.sql.Array + || in instanceof java.sql.SQLXML) { + try { + return objectToString(in); + } catch (Exception e) { + log.warn("Unsupported type for convert {} to java.lang.String", in.getClass().getName()); + return null; + } + } else if (in instanceof java.sql.Blob) { + try { + return blob2Bytes((java.sql.Blob) in); + } catch (Exception e) { + log.warn("Unsupported type for convert {} to byte[] ", 
+        return null;
+      }
+    } else if (in instanceof Struct) {
+      log.warn("Unsupported conversion from {} to java.lang.String", in.getClass().getName());
+      return null;
+    }
+
+    return in;
+  }
+
+  public static String objectToString(final Object in) {
+    String v = in.toString();
+    // Heuristic: the default Object.toString() form is "ClassName@hexHash"; if toString()
+    // was not overridden, there is no meaningful String representation to return.
+    String a = in.getClass().getName() + "@" + Integer.toHexString(in.hashCode());
+    if (a.length() == v.length() && StringUtils.equals(a, v)) {
+      throw new UnsupportedOperationException("Unsupported convert "
+          + in.getClass().getName() + " to java.lang.String");
+    }
+
+    return v;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/IPgBulkInsert.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/IPgBulkInsert.java
new file mode 100644
index 0000000..1a68b9e
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/IPgBulkInsert.java
@@ -0,0 +1,11 @@
+package srt.cloud.framework.dbswitch.pgwriter;
+
+import org.postgresql.PGConnection;
+
+import java.sql.SQLException;
+import java.util.stream.Stream;
+
+public interface IPgBulkInsert<TEntity> {
+
+  void saveAll(PGConnection connection, Stream<TEntity> entities) throws SQLException;
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/PgBulkInsert.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/PgBulkInsert.java
new file mode 100644
index 0000000..9d40bc8
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/PgBulkInsert.java
@@ -0,0 +1,68 @@
+package srt.cloud.framework.dbswitch.pgwriter;
+
+import srt.cloud.framework.dbswitch.pgwriter.configuration.Configuration;
+import srt.cloud.framework.dbswitch.pgwriter.configuration.IConfiguration;
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.SaveEntityFailedException;
+import srt.cloud.framework.dbswitch.pgwriter.mapping.AbstractMapping;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.PgBinaryWriter;
+import org.postgresql.PGConnection;
+import org.postgresql.copy.PGCopyOutputStream;
+
+import java.sql.SQLException;
+import java.util.Collection;
+import java.util.Objects;
+import java.util.stream.Stream;
+
+public class PgBulkInsert<TEntity> implements IPgBulkInsert<TEntity> {
+
+  private final IConfiguration configuration;
+  private final AbstractMapping<TEntity> mapping;
+
+  public PgBulkInsert(AbstractMapping<TEntity> mapping) {
+    this(new Configuration(), mapping);
+  }
+
+  public PgBulkInsert(IConfiguration configuration, AbstractMapping<TEntity> mapping) {
+    Objects.requireNonNull(configuration, "'configuration' has to be set");
+    Objects.requireNonNull(mapping, "'mapping' has to be set");
+
+    this.configuration = configuration;
+    this.mapping = mapping;
+  }
+
+  @Override
+  public void saveAll(PGConnection connection, Stream<TEntity> entities) throws SQLException {
+    // Wrap the CopyOutputStream in our own Writer:
+    try (PgBinaryWriter bw = new PgBinaryWriter(
+        new PGCopyOutputStream(connection, mapping.getCopyCommand(), 1),
+        configuration.getBufferSize())) {
+      // Insert each entity, one binary COPY row at a time:
+      entities.forEach(entity -> saveEntitySynchronized(bw, entity));
+    }
+  }
+
+  public void saveAll(PGConnection connection, Collection<TEntity> entities) throws SQLException {
+    saveAll(connection, entities.stream());
+  }
+
+  private void saveEntity(PgBinaryWriter bw, TEntity entity) throws SaveEntityFailedException {
+    // Start a new Row in PostgreSQL:
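+    // Binary COPY frames each row as a 16-bit column count followed by one
+    // length-prefixed value per column (see PgBinaryWriter below).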
+    bw.startRow(mapping.getColumns().size());
+
+    try {
+      // Iterate over each column mapping:
+      mapping.getColumns().forEach(column -> {
+        column.getWrite().accept(bw, entity);
+      });
+    } catch (Exception e) {
+      throw new SaveEntityFailedException(e);
+    }
+  }
+
+  private void saveEntitySynchronized(PgBinaryWriter bw, TEntity entity)
+      throws SaveEntityFailedException {
+    synchronized (bw) {
+      saveEntity(bw, entity);
+    }
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/BulkProcessor.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/BulkProcessor.java
new file mode 100644
index 0000000..df3d0c1
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/BulkProcessor.java
@@ -0,0 +1,117 @@
+package srt.cloud.framework.dbswitch.pgwriter.bulkprocessor;
+
+import srt.cloud.framework.dbswitch.pgwriter.bulkprocessor.handler.IBulkWriteHandler;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+public class BulkProcessor<TEntity> implements AutoCloseable {
+
+  private final ScheduledThreadPoolExecutor scheduler;
+
+  private final ScheduledFuture<?> scheduledFuture;
+
+  private volatile boolean closed = false;
+
+  private final IBulkWriteHandler<TEntity> handler;
+
+  private final int bulkSize;
+
+  private List<TEntity> batchedEntities;
+
+  public BulkProcessor(IBulkWriteHandler<TEntity> handler, int bulkSize) {
+    this(handler, bulkSize, null);
+  }
+
+  public BulkProcessor(IBulkWriteHandler<TEntity> handler, int bulkSize, Duration flushInterval) {
+
+    this.handler = handler;
+    this.bulkSize = bulkSize;
+
+    // Start with an empty List of batched entities:
+    this.batchedEntities = new ArrayList<>();
+
+    if (flushInterval != null) {
+      // Create a Scheduler for the time-based Flush Interval:
+      this.scheduler = new ScheduledThreadPoolExecutor(1);
+      this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
+      this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false);
+      this.scheduledFuture = this.scheduler
+          .scheduleWithFixedDelay(new Flush(), flushInterval.toMillis(), flushInterval.toMillis(),
+              TimeUnit.MILLISECONDS);
+    } else {
+      this.scheduler = null;
+      this.scheduledFuture = null;
+    }
+  }
+
+  public synchronized BulkProcessor<TEntity> add(TEntity entity) {
+    batchedEntities.add(entity);
+    executeIfNecessary();
+    return this;
+  }
+
+  @Override
+  public void close() throws Exception {
+    // If the Processor has already been closed, do not proceed:
+    if (closed) {
+      return;
+    }
+    closed = true;
+
+    // Quit the Scheduled FlushInterval Future:
+    Optional.ofNullable(this.scheduledFuture).ifPresent(future -> future.cancel(false));
+    Optional.ofNullable(this.scheduler).ifPresent(ScheduledThreadPoolExecutor::shutdown);
+
+    // Are there any entities left to write?
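+    // The scheduled flushes were cancelled above, so this final flush runs
+    // synchronously on the closing thread and no batched rows are left behind.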
+    if (!batchedEntities.isEmpty()) {
+      execute();
+    }
+  }
+
+  private void executeIfNecessary() {
+    if (batchedEntities.size() >= bulkSize) {
+      execute();
+    }
+  }
+
+  // (currently) needs to be executed under a lock
+  private void execute() {
+    // Assign to a new List:
+    final List<TEntity> entities = batchedEntities;
+    // We can restart batching entities:
+    batchedEntities = new ArrayList<>();
+    // Write the previously batched entities to PostgreSQL:
+    write(entities);
+  }
+
+  private void write(List<TEntity> entities) {
+    try {
+      handler.write(entities);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  class Flush implements Runnable {
+
+    @Override
+    public void run() {
+      synchronized (BulkProcessor.this) {
+        if (closed) {
+          return;
+        }
+        if (batchedEntities.isEmpty()) {
+          return;
+        }
+        execute();
+      }
+    }
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/handler/BulkWriteHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/handler/BulkWriteHandler.java
new file mode 100644
index 0000000..1b22829
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/handler/BulkWriteHandler.java
@@ -0,0 +1,32 @@
+package srt.cloud.framework.dbswitch.pgwriter.bulkprocessor.handler;
+
+import srt.cloud.framework.dbswitch.pgwriter.IPgBulkInsert;
+import srt.cloud.framework.dbswitch.pgwriter.util.PostgreSqlUtils;
+import org.postgresql.PGConnection;
+
+import java.sql.Connection;
+import java.util.List;
+import java.util.function.Supplier;
+
+public class BulkWriteHandler<TEntity> implements IBulkWriteHandler<TEntity> {
+
+  private final IPgBulkInsert<TEntity> client;
+
+  private final Supplier<Connection> connectionFactory;
+
+  public BulkWriteHandler(IPgBulkInsert<TEntity> client, Supplier<Connection> connectionFactory) {
+    this.client = client;
+    this.connectionFactory = connectionFactory;
+  }
+
+  @Override
+  public void write(List<TEntity> entities) throws Exception {
+    // Obtain a new Connection and execute it in a try with resources block, so it gets closed properly:
+    try (Connection connection = connectionFactory.get()) {
+      // Now get the underlying PGConnection for the COPY API wrapping:
+      final PGConnection pgConnection = PostgreSqlUtils.getPGConnection(connection);
+      // And finally save all entities by using the COPY API:
+      client.saveAll(pgConnection, entities.stream());
+    }
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/handler/IBulkWriteHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/handler/IBulkWriteHandler.java
new file mode 100644
index 0000000..d752839
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/bulkprocessor/handler/IBulkWriteHandler.java
@@ -0,0 +1,9 @@
+package srt.cloud.framework.dbswitch.pgwriter.bulkprocessor.handler;
+
+import java.util.List;
+
+public interface IBulkWriteHandler<TEntity> {
+
+  void write(List<TEntity> entities) throws Exception;
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/configuration/Configuration.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/configuration/Configuration.java
new file mode 100644
index 0000000..9908fd0
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/configuration/Configuration.java
@@ -0,0 +1,19 @@
+package srt.cloud.framework.dbswitch.pgwriter.configuration;
+
+public class Configuration implements IConfiguration {
+
+  private final int bufferSize;
+
+  public Configuration() {
+    this(65536);
+  }
+
+  public Configuration(int bufferSize) {
+    this.bufferSize = bufferSize;
+  }
+
+  @Override
+  public int getBufferSize() {
+    return bufferSize;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/configuration/IConfiguration.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/configuration/IConfiguration.java
new file mode 100644
index 0000000..c7ddcc2
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/configuration/IConfiguration.java
@@ -0,0 +1,6 @@
+package srt.cloud.framework.dbswitch.pgwriter.configuration;
+
+public interface IConfiguration {
+
+  int getBufferSize();
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/BinaryWriteFailedException.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/BinaryWriteFailedException.java
new file mode 100644
index 0000000..035a199
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/BinaryWriteFailedException.java
@@ -0,0 +1,24 @@
+package srt.cloud.framework.dbswitch.pgwriter.exceptions;
+
+public class BinaryWriteFailedException extends RuntimeException {
+
+  public BinaryWriteFailedException(String message) {
+    super(message);
+  }
+
+  public BinaryWriteFailedException() {
+  }
+
+  public BinaryWriteFailedException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public BinaryWriteFailedException(Throwable cause) {
+    super(cause);
+  }
+
+  public BinaryWriteFailedException(String message, Throwable cause, boolean enableSuppression,
+      boolean writableStackTrace) {
+    super(message, cause, enableSuppression, writableStackTrace);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/PgConnectionException.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/PgConnectionException.java
new file mode 100644
index 0000000..9a9e271
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/PgConnectionException.java
@@ -0,0 +1,24 @@
+package srt.cloud.framework.dbswitch.pgwriter.exceptions;
+
+public class PgConnectionException extends RuntimeException {
+
+  public PgConnectionException(String message) {
+    super(message);
+  }
+
+  public PgConnectionException() {
+  }
+
+  public PgConnectionException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public PgConnectionException(Throwable cause) {
+    super(cause);
+  }
+
+  public PgConnectionException(String message, Throwable cause, boolean enableSuppression,
+      boolean writableStackTrace) {
+    super(message, cause, enableSuppression, writableStackTrace);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/SaveEntityFailedException.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/SaveEntityFailedException.java
new file mode 100644
index 0000000..20b1621
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/SaveEntityFailedException.java
@@ -0,0 +1,24 @@
+package srt.cloud.framework.dbswitch.pgwriter.exceptions;
+
+public class SaveEntityFailedException extends RuntimeException {
+
+  public SaveEntityFailedException(String message) {
+    super(message);
+  }
+
+  public SaveEntityFailedException() {
+  }
+
+  public SaveEntityFailedException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public SaveEntityFailedException(Throwable cause) {
+    super(cause);
+  }
+
+  public SaveEntityFailedException(String message, Throwable cause, boolean enableSuppression,
+      boolean writableStackTrace) {
+    super(message, cause, enableSuppression, writableStackTrace);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/ValueHandlerAlreadyRegisteredException.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/ValueHandlerAlreadyRegisteredException.java
new file mode 100644
index 0000000..3868f06
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/ValueHandlerAlreadyRegisteredException.java
@@ -0,0 +1,24 @@
+package srt.cloud.framework.dbswitch.pgwriter.exceptions;
+
+public class ValueHandlerAlreadyRegisteredException extends RuntimeException {
+
+  public ValueHandlerAlreadyRegisteredException(String message) {
+    super(message);
+  }
+
+  public ValueHandlerAlreadyRegisteredException() {
+  }
+
+  public ValueHandlerAlreadyRegisteredException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public ValueHandlerAlreadyRegisteredException(Throwable cause) {
+    super(cause);
+  }
+
+  public ValueHandlerAlreadyRegisteredException(String message, Throwable cause,
+      boolean enableSuppression, boolean writableStackTrace) {
+    super(message, cause, enableSuppression, writableStackTrace);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/ValueHandlerNotRegisteredException.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/ValueHandlerNotRegisteredException.java
new file mode 100644
index 0000000..6c508aa
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/exceptions/ValueHandlerNotRegisteredException.java
@@ -0,0 +1,24 @@
+package srt.cloud.framework.dbswitch.pgwriter.exceptions;
+
+public class ValueHandlerNotRegisteredException extends RuntimeException {
+
+  public ValueHandlerNotRegisteredException(String message) {
+    super(message);
+  }
+
+  public ValueHandlerNotRegisteredException() {
+  }
+
+  public ValueHandlerNotRegisteredException(String message, Throwable cause) {
+    super(message, cause);
+  }
+
+  public ValueHandlerNotRegisteredException(Throwable cause) {
+    super(cause);
+  }
+
+  public ValueHandlerNotRegisteredException(String message, Throwable cause,
+      boolean enableSuppression, boolean writableStackTrace) {
+    super(message, cause, enableSuppression, writableStackTrace);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/function/ToBooleanFunction.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/function/ToBooleanFunction.java
new file mode 100644
index 0000000..18f265c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/function/ToBooleanFunction.java
@@ -0,0 +1,13 @@
+package srt.cloud.framework.dbswitch.pgwriter.function;
+
+@FunctionalInterface
+public interface ToBooleanFunction<T> {
+
+  /**
+   * Applies this function to the given argument.
+   *
+   * @param value the function argument
+   * @return the function result
+   */
+  boolean applyAsBoolean(T value);
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/function/ToFloatFunction.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/function/ToFloatFunction.java
new file mode 100644
index 0000000..66ff288
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/function/ToFloatFunction.java
@@ -0,0 +1,13 @@
+package srt.cloud.framework.dbswitch.pgwriter.function;
+
+@FunctionalInterface
+public interface ToFloatFunction<T> {
+
+  /**
+   * Applies this function to the given argument.
+   *
+   * @param value the function argument
+   * @return the function result
+   */
+  float applyAsFloat(T value);
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/mapping/AbstractMapping.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/mapping/AbstractMapping.java
new file mode 100644
index 0000000..192ee2d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/mapping/AbstractMapping.java
@@ -0,0 +1,404 @@
+package srt.cloud.framework.dbswitch.pgwriter.mapping;
+
+import srt.cloud.framework.dbswitch.pgwriter.function.ToBooleanFunction;
+import srt.cloud.framework.dbswitch.pgwriter.function.ToFloatFunction;
+import srt.cloud.framework.dbswitch.pgwriter.model.ColumnDefinition;
+import srt.cloud.framework.dbswitch.pgwriter.model.TableDefinition;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.PgBinaryWriter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.DataType;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.ObjectIdentifier;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.CollectionValueHandler;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.IValueHandler;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.IValueHandlerProvider;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.RangeValueHandler;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.ValueHandlerProvider;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Box;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Circle;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Line;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.LineSegment;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Path;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Point;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Polygon;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.network.MacAddress;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.range.Range;
+import srt.cloud.framework.dbswitch.pgwriter.util.PostgreSqlUtils;
+
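+// AbstractMapping is the fluent base class of the writer: a concrete subclass registers
+// one map* call per target column, in the column order of the generated COPY command.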
+import java.net.Inet4Address;
+import java.net.Inet6Address;
+import java.time.LocalDate;
+import java.time.LocalDateTime;
+import java.time.LocalTime;
+import java.time.ZonedDateTime;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import java.util.function.ToDoubleFunction;
+import java.util.function.ToIntFunction;
+import java.util.function.ToLongFunction;
+import java.util.stream.Collectors;
+
+public abstract class AbstractMapping<TEntity> {
+
+  protected boolean usePostgresQuoting;
+
+  protected final IValueHandlerProvider provider;
+
+  protected final TableDefinition table;
+
+  protected final List<ColumnDefinition<TEntity>> columns;
+
+  protected AbstractMapping(String schemaName, String tableName) {
+    this(new ValueHandlerProvider(), schemaName, tableName, false);
+  }
+
+  protected AbstractMapping(String schemaName, String tableName, boolean usePostgresQuoting) {
+    this(new ValueHandlerProvider(), schemaName, tableName, usePostgresQuoting);
+  }
+
+  protected AbstractMapping(IValueHandlerProvider provider, String schemaName, String tableName,
+      boolean usePostgresQuoting) {
+    this.provider = provider;
+    this.table = new TableDefinition(schemaName, tableName);
+    this.usePostgresQuoting = usePostgresQuoting;
+    this.columns = new ArrayList<>();
+  }
+
+  protected void usePostgresQuoting(boolean enabled) {
+    this.usePostgresQuoting = enabled;
+  }
+
+  protected <TElementType, TCollectionType extends Collection<TElementType>> void mapCollection(
+      String columnName, DataType dataType, Function<TEntity, TCollectionType> propertyGetter) {
+
+    final IValueHandler<TElementType> valueHandler = provider.resolve(dataType);
+    final int valueOID = ObjectIdentifier.mapFrom(dataType);
+
+    map(columnName, new CollectionValueHandler<>(valueOID, valueHandler), propertyGetter);
+  }
+
+  protected <TProperty> void map(String columnName, DataType dataType,
+      Function<TEntity, TProperty> propertyGetter) {
+    final IValueHandler<TProperty> valueHandler = provider.resolve(dataType);
+
+    map(columnName, valueHandler, propertyGetter);
+  }
+
+  protected <TProperty> void map(String columnName, IValueHandler<TProperty> valueHandler,
+      Function<TEntity, TProperty> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.write(valueHandler, propertyGetter.apply(entity));
+    });
+  }
+
+  // region Numeric
+
+  protected void mapBoolean(String columnName, Function<TEntity, Boolean> propertyGetter) {
+    map(columnName, DataType.Boolean, propertyGetter);
+  }
+
+  protected void mapBoolean(String columnName, ToBooleanFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.writeBoolean(propertyGetter.applyAsBoolean(entity));
+    });
+  }
+
+  protected void mapByte(String columnName, Function<TEntity, Byte> propertyGetter) {
+    map(columnName, DataType.Char, propertyGetter);
+  }
+
+  protected void mapByte(String columnName, ToIntFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.writeByte(propertyGetter.applyAsInt(entity));
+    });
+  }
+
+  protected void mapShort(String columnName, Function<TEntity, Short> propertyGetter) {
+    map(columnName, DataType.Int2, propertyGetter);
+  }
+
+  protected void mapShort(String columnName, ToIntFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.writeShort(propertyGetter.applyAsInt(entity));
+    });
+  }
+
+  protected void mapInteger(String columnName, Function<TEntity, Integer> propertyGetter) {
+    map(columnName, DataType.Int4, propertyGetter);
+  }
+
+  protected void mapInteger(String columnName, ToIntFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
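+      // Primitive fast path: writes the 4-byte length word and the int4 payload
+      // directly, without boxing through a value handler.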
+      binaryWriter.writeInt(propertyGetter.applyAsInt(entity));
+    });
+  }
+
+  protected void mapNumeric(String columnName, Function<TEntity, Number> propertyGetter) {
+    map(columnName, DataType.Numeric, propertyGetter);
+  }
+
+  protected void mapLong(String columnName, Function<TEntity, Long> propertyGetter) {
+    map(columnName, DataType.Int8, propertyGetter);
+  }
+
+  protected void mapLong(String columnName, ToLongFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.writeLong(propertyGetter.applyAsLong(entity));
+    });
+  }
+
+  protected void mapFloat(String columnName, Function<TEntity, Float> propertyGetter) {
+    map(columnName, DataType.SinglePrecision, propertyGetter);
+  }
+
+  protected void mapFloat(String columnName, ToFloatFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.writeFloat(propertyGetter.applyAsFloat(entity));
+    });
+  }
+
+  protected void mapDouble(String columnName, Function<TEntity, Double> propertyGetter) {
+    map(columnName, DataType.DoublePrecision, propertyGetter);
+  }
+
+  protected void mapDouble(String columnName, ToDoubleFunction<TEntity> propertyGetter) {
+    addColumn(columnName, (binaryWriter, entity) -> {
+      binaryWriter.writeDouble(propertyGetter.applyAsDouble(entity));
+    });
+  }
+
+  // endregion
+
+  // region Network
+  protected void mapInet4Addr(String columnName, Function<TEntity, Inet4Address> propertyGetter) {
+    map(columnName, DataType.Inet4, propertyGetter);
+  }
+
+  protected void mapInet6Addr(String columnName, Function<TEntity, Inet6Address> propertyGetter) {
+    map(columnName, DataType.Inet6, propertyGetter);
+  }
+
+  protected void mapMacAddress(String columnName, Function<TEntity, MacAddress> propertyGetter) {
+    map(columnName, DataType.MacAddress, propertyGetter);
+  }
+
+  // endregion
+
+  // region Temporal
+
+  protected void mapDate(String columnName, Function<TEntity, LocalDate> propertyGetter) {
+    map(columnName, DataType.Date, propertyGetter);
+  }
+
+  protected void mapTime(String columnName, Function<TEntity, LocalTime> propertyGetter) {
+    map(columnName, DataType.Time, propertyGetter);
+  }
+
+  protected void mapTimeStamp(String columnName, Function<TEntity, LocalDateTime> propertyGetter) {
+    map(columnName, DataType.Timestamp, propertyGetter);
+  }
+
+  protected void mapTimeStampTz(String columnName,
+      Function<TEntity, ZonedDateTime> propertyGetter) {
+    map(columnName, DataType.TimestampTz, propertyGetter);
+  }
+
+  // endregion
+
+  // region Text
+
+  protected void mapText(String columnName, Function<TEntity, String> propertyGetter) {
+    map(columnName, DataType.Text, propertyGetter);
+  }
+
+  protected void mapVarChar(String columnName, Function<TEntity, String> propertyGetter) {
+    map(columnName, DataType.Text, propertyGetter);
+  }
+
+  // endregion
+
+  // region UUID
+
+  protected void mapUUID(String columnName, Function<TEntity, UUID> propertyGetter) {
+    map(columnName, DataType.Uuid, propertyGetter);
+  }
+
+  // endregion
+
+  // region JSON
+
+  protected void mapJsonb(String columnName, Function<TEntity, String> propertyGetter) {
+    map(columnName, DataType.Jsonb, propertyGetter);
+  }
+
+  // endregion
+
+  // region hstore
+
+  protected void mapHstore(String columnName,
+      Function<TEntity, Map<String, String>> propertyGetter) {
+    map(columnName, DataType.Hstore, propertyGetter);
+  }
+
+  // endregion
+
+  // region Geo
+
+  protected void mapPoint(String columnName, Function<TEntity, Point> propertyGetter) {
+    map(columnName, DataType.Point, propertyGetter);
+  }
+
+  protected void mapBox(String columnName, Function<TEntity, Box> propertyGetter) {
+    map(columnName, DataType.Box, propertyGetter);
+  }
+
+  protected void mapPath(String columnName, Function<TEntity, Path> propertyGetter) {
+    map(columnName, DataType.Path, propertyGetter);
+  }
+
+  protected void mapPolygon(String columnName, Function<TEntity, Polygon> propertyGetter) {
+    map(columnName, DataType.Polygon, propertyGetter);
+  }
+
+  protected void mapLine(String columnName, Function<TEntity, Line> propertyGetter) {
+    map(columnName, DataType.Line, propertyGetter);
+  }
+
+  protected void mapLineSegment(String columnName, Function<TEntity, LineSegment> propertyGetter) {
+    map(columnName, DataType.LineSegment, propertyGetter);
+  }
+
+  protected void mapCircle(String columnName, Function<TEntity, Circle> propertyGetter) {
+    map(columnName, DataType.Circle, propertyGetter);
+  }
+
+  // endregion
+
+  // region Arrays
+
+  protected void mapBooleanArray(String columnName,
+      Function<TEntity, Collection<Boolean>> propertyGetter) {
+    mapCollection(columnName, DataType.Boolean, propertyGetter);
+  }
+
+  protected void mapByteArray(String columnName, Function<TEntity, byte[]> propertyGetter) {
+    map(columnName, DataType.Bytea, propertyGetter);
+  }
+
+  protected void mapShortArray(String columnName,
+      Function<TEntity, Collection<Short>> propertyGetter) {
+    mapCollection(columnName, DataType.Int2, propertyGetter);
+  }
+
+  protected void mapIntegerArray(String columnName,
+      Function<TEntity, Collection<Integer>> propertyGetter) {
+    mapCollection(columnName, DataType.Int4, propertyGetter);
+  }
+
+  protected void mapLongArray(String columnName,
+      Function<TEntity, Collection<Long>> propertyGetter) {
+    mapCollection(columnName, DataType.Int8, propertyGetter);
+  }
+
+  protected void mapTextArray(String columnName,
+      Function<TEntity, Collection<String>> propertyGetter) {
+    mapCollection(columnName, DataType.Text, propertyGetter);
+  }
+
+  protected void mapVarCharArray(String columnName,
+      Function<TEntity, Collection<String>> propertyGetter) {
+    mapCollection(columnName, DataType.VarChar, propertyGetter);
+  }
+
+  protected void mapFloatArray(String columnName,
+      Function<TEntity, Collection<Float>> propertyGetter) {
+    mapCollection(columnName, DataType.SinglePrecision, propertyGetter);
+  }
+
+  protected void mapDoubleArray(String columnName,
+      Function<TEntity, Collection<Double>> propertyGetter) {
+    mapCollection(columnName, DataType.DoublePrecision, propertyGetter);
+  }
+
+  protected void mapNumericArray(String columnName,
+      Function<TEntity, Collection<Number>> propertyGetter) {
+    mapCollection(columnName, DataType.Numeric, propertyGetter);
+  }
+
+  protected void mapUUIDArray(String columnName,
+      Function<TEntity, Collection<UUID>> propertyGetter) {
+    mapCollection(columnName, DataType.Uuid, propertyGetter);
+  }
+
+  protected void mapInet4Array(String columnName,
+      Function<TEntity, Collection<Inet4Address>> propertyGetter) {
+    mapCollection(columnName, DataType.Inet4, propertyGetter);
+  }
+
+  protected void mapInet6Array(String columnName,
+      Function<TEntity, Collection<Inet6Address>> propertyGetter) {
+    mapCollection(columnName, DataType.Inet6, propertyGetter);
+  }
+
+  // endregion
+
+  // region Ranges
+
+  protected <TElementType> void mapRange(String columnName, DataType dataType,
+      Function<TEntity, Range<TElementType>> propertyGetter) {
+    final IValueHandler<TElementType> valueHandler = provider.resolve(dataType);
+
+    map(columnName, new RangeValueHandler<>(valueHandler), propertyGetter);
+  }
+
+  protected void mapTsRange(String columnName,
+      Function<TEntity, Range<LocalDateTime>> propertyGetter) {
+    map(columnName, DataType.TsRange, propertyGetter);
+  }
+
+  protected void mapTsTzRange(String columnName,
+      Function<TEntity, Range<ZonedDateTime>> propertyGetter) {
+    map(columnName, DataType.TsTzRange, propertyGetter);
+  }
+
+  protected void mapInt4Range(String columnName, Function<TEntity, Range<Integer>> propertyGetter) {
+    map(columnName, DataType.Int4Range, propertyGetter);
+  }
+
+  protected void mapInt8Range(String columnName, Function<TEntity, Range<Long>> propertyGetter) {
+    map(columnName, DataType.Int8Range, propertyGetter);
+  }
+
+  protected void mapNumRange(String columnName, Function<TEntity, Range<Number>> propertyGetter) {
+    map(columnName, DataType.NumRange, propertyGetter);
+  }
+
+  protected void mapDateRange(String columnName,
+      Function<TEntity, Range<LocalDate>> propertyGetter) {
+    map(columnName, DataType.DateRange, propertyGetter);
+  }
+
+  // endregion
+
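+  // Every map* helper above funnels into addColumn below, which records a
+  // (columnName, write-action) pair. A minimal usage sketch, assuming a hypothetical
+  // Person entity and people stream (not part of this commit):
+  //
+  //   class PersonMapping extends AbstractMapping<Person> {
+  //     PersonMapping() {
+  //       super("sample", "person");
+  //       mapText("name", Person::getName);
+  //       mapInteger("age", Person::getAge);
+  //     }
+  //   }
+  //   new PgBulkInsert<>(new PersonMapping()).saveAll(pgConnection, people.stream());
+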
+  private void addColumn(String columnName, BiConsumer<PgBinaryWriter, TEntity> action) {
+    columns.add(new ColumnDefinition<>(columnName, action));
+  }
+
+  public List<ColumnDefinition<TEntity>> getColumns() {
+    return columns;
+  }
+
+  public String getCopyCommand() {
+    String commaSeparatedColumns = columns.stream()
+        .map(x -> x.getColumnName())
+        .map(x -> usePostgresQuoting ? PostgreSqlUtils.quoteIdentifier(x) : x)
+        .collect(Collectors.joining(", "));
+
+    return String.format("COPY %1$s(%2$s) FROM STDIN BINARY",
+        table.GetFullyQualifiedTableName(usePostgresQuoting),
+        commaSeparatedColumns);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/model/ColumnDefinition.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/model/ColumnDefinition.java
new file mode 100644
index 0000000..bace1f9
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/model/ColumnDefinition.java
@@ -0,0 +1,31 @@
+package srt.cloud.framework.dbswitch.pgwriter.model;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.PgBinaryWriter;
+
+import java.util.function.BiConsumer;
+
+public class ColumnDefinition<TEntity> {
+
+  private final String columnName;
+
+  private final BiConsumer<PgBinaryWriter, TEntity> write;
+
+  public ColumnDefinition(String columnName, BiConsumer<PgBinaryWriter, TEntity> write) {
+    this.columnName = columnName;
+    this.write = write;
+  }
+
+  public String getColumnName() {
+    return columnName;
+  }
+
+  public BiConsumer<PgBinaryWriter, TEntity> getWrite() {
+    return write;
+  }
+
+  @Override
+  public String toString() {
+    return String
+        .format("ColumnDefinition (ColumnName = {%1$s}, Serialize = {%2$s})", columnName, write);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/model/TableDefinition.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/model/TableDefinition.java
new file mode 100644
index 0000000..6ace00a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/model/TableDefinition.java
@@ -0,0 +1,37 @@
+package srt.cloud.framework.dbswitch.pgwriter.model;
+
+import srt.cloud.framework.dbswitch.pgwriter.util.PostgreSqlUtils;
+
+public class TableDefinition {
+
+  private final String schema;
+
+  private final String tableName;
+
+  public TableDefinition(String tableName) {
+    this("", tableName);
+  }
+
+  public TableDefinition(String schema, String tableName) {
+    this.schema = schema;
+    this.tableName = tableName;
+  }
+
+  public String getSchema() {
+    return schema;
+  }
+
+  public String getTableName() {
+    return tableName;
+  }
+
+  public String GetFullyQualifiedTableName(boolean usePostgresQuoting) {
+    return PostgreSqlUtils.getFullyQualifiedTableName(schema, tableName, usePostgresQuoting);
+  }
+
+  @Override
+  public String toString() {
+    return String.format("TableDefinition (Schema = {%1$s}, TableName = {%2$s})",
+        schema, tableName);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/PgBinaryWriter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/PgBinaryWriter.java
new file mode 100644
index 0000000..f84fc8c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/PgBinaryWriter.java
@@ -0,0 +1,232 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql;
+
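+// PgBinaryWriter owns the PGCOPY binary framing: a 19-byte header (11-byte signature
+// plus two 32-bit words), per-row 16-bit column counts, and the final -1 trailer.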
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.BinaryWriteFailedException;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.IValueHandler;
+
+import java.io.BufferedOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+
+public class PgBinaryWriter implements AutoCloseable {
+
+  private final transient DataOutputStream buffer;
+
+  public PgBinaryWriter(final OutputStream out) {
+    this(out, 65536);
+  }
+
+  public PgBinaryWriter(final OutputStream out, final int bufferSize) {
+    buffer = new DataOutputStream(new BufferedOutputStream(out, bufferSize));
+    writeHeader();
+  }
+
+  public void startRow(int numColumns) {
+    try {
+      buffer.writeShort(numColumns);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  public <TTargetType> void write(final IValueHandler<TTargetType> handler,
+      final TTargetType value) {
+    handler.handle(buffer, value);
+  }
+
+  /**
+   * Writes primitive boolean to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeBoolean(boolean value) {
+    try {
+      buffer.writeInt(1);
+      if (value) {
+        buffer.writeByte(1);
+      } else {
+        buffer.writeByte(0);
+      }
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes primitive byte to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeByte(int value) {
+    try {
+      buffer.writeInt(1);
+      buffer.writeByte(value);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes primitive short to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeShort(int value) {
+    try {
+      buffer.writeInt(2);
+      buffer.writeShort(value);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes primitive integer to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeInt(int value) {
+    try {
+      buffer.writeInt(4);
+      buffer.writeInt(value);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes primitive long to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeLong(long value) {
+    try {
+      buffer.writeInt(8);
+      buffer.writeLong(value);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes primitive float to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeFloat(float value) {
+    try {
+      buffer.writeInt(4);
+      buffer.writeFloat(value);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes primitive double to the output stream
+   *
+   * @param value value to write
+   */
+  public void writeDouble(double value) {
+    try {
+      buffer.writeInt(8);
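+      // 8-byte length word first, then the IEEE 754 payload.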
+      buffer.writeDouble(value);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  /**
+   * Writes a Null Value.
+   */
+  public void writeNull() {
+    try {
+      buffer.writeInt(-1);
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  @Override
+  public void close() {
+    try {
+      buffer.writeShort(-1);
+
+      buffer.flush();
+      buffer.close();
+    } catch (Exception e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  private void writeHeader() {
+    try {
+
+      // 11 bytes required header
+      buffer.writeBytes("PGCOPY\n\377\r\n\0");
+      // 32 bit integer indicating no OID
+      buffer.writeInt(0);
+      // 32 bit header extension area length
+      buffer.writeInt(0);
+
+    } catch (IOException e) {
+      Throwable t = e.getCause();
+      if (null != t) {
+        throw new BinaryWriteFailedException(t);
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/constants/DataType.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/constants/DataType.java
new file mode 100644
index 0000000..a59864e
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/constants/DataType.java
@@ -0,0 +1,59 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.constants;
+
+public enum DataType {
+
+  Boolean,
+  Bytea,
+  Char,
+  Int8,
+  Int2,
+  Int4,
+  Text,
+  Jsonb,
+  SinglePrecision,
+  DoublePrecision,
+  Cash,
+  Money,
+  MacAddress,
+  Inet4,
+  Inet6,
+  Cidr,
+  Unknown,
+  Date,
+  Timestamp,
+  Uuid,
+  Point,
+  Box,
+  Line,
+  LineSegment,
+  Circle,
+  Path,
+  Polygon,
+  Hstore,
+  VarChar,
+  Xml,
+  Name,
+  Oid,
+  Tid,
+  Xid,
+  Cid,
+  AbsTime,
+  RelTime,
+  TInterval,
+  MacAddress8,
+  CharLength,
+  Time,
+  TimestampTz,
+  Interval,
+  TimeTz,
+  Bit,
+  VarBit,
+  Record,
+  Numeric,
+  TsRange,
+  TsTzRange,
+  Int4Range,
+  Int8Range,
+  NumRange,
+  DateRange
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/constants/ObjectIdentifier.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/constants/ObjectIdentifier.java
new file mode 100644
index 0000000..7550789
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/constants/ObjectIdentifier.java
@@ -0,0 +1,255 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.constants;
+
+import java.util.HashMap;
+import java.util.Map;
+
+// https://github.com/postgres/postgres/blob/master/src/include/catalog/pg_type.h
+public class ObjectIdentifier {
+
+  // region OID 1 - 99
+
+  // boolean, 'true'/'false'
+  public static int Boolean = 16;
+
+  // variable-length string, binary values escaped
+  public static int Bytea = 17;
+
+  // single character
+  public static int Char = 18;
+
+  // 63-byte type for storing system identifiers
+  public static int Name = 19;
+
+  // ~18 digit integer, 8-byte storage
+  public static int Int8 = 20;
+
+  // -32 thousand to 32 thousand, 2-byte storage
+  public static int Int2 = 21;
+
+  // -2 billion to 2 billion integer, 4-byte storage
+  public static int Int4 = 23;
+
+  // variable-length string, no limit specified
+  public static int Text = 25;
+
+  // object identifier(oid), maximum 4 billion
+  public static int Oid = 26;
+
+  // (block, offset), physical location of tuple
+  public static int Tid = 27;
+
+  // transaction id
+  public static int Xid = 28;
+
+  // command identifier type, sequence in transaction id
+  public static int Cid = 29;
+
+  // endregion
+
+  // region OID 100 - 199
+
+  // JSON
+  public static int Jsonb = 114;
+
+  // XML content
+  public static int Xml = 115;
+
+  // endregion
+
+  // region OID 600 - 699
+
+  // geometric point '(x, y)'
+  public static int Point = 600;
+
+  // geometric line segment '(pt1, pt2)'
+  public static int LineSegment = 601;
+
+  // geometric path '(pt1,...)'
+  public static int Path = 602;
+
+  // geometric box '(lower left, upper right)'
+  public static int Box = 603;
+
+  // geometric polygon '(pt1, ...)'
+  public static int Polygon = 604;
+
+  // geometric line
+  public static int Line = 628;
+
+  // endregion
+
+  // region OID 700 - 799
+
+  // single-precision floating point number, 4-byte storage
+  public static int SinglePrecision = 700;
+
+  // double-precision floating point number, 8-byte storage
+  public static int DoublePrecision = 701;
+
+  // absolute, limited-range date and time (Unix system time)
+  public static int AbsTime = 702;
+
+  // relative, limited-range time interval (Unix delta time)
+  public static int RelTime = 703;
+
+  // (abstime, abstime), time interval
+  public static int TInterval = 704;
+
+  // unknown
+  public static int Unknown = 705;
+
+  // geometric circle '(center, radius)'
+  public static int Circle = 718;
+
+  // monetary amounts, $d,ddd.cc
+  public static int Cash = 790;
+
+  // money
+  public static int Money = 791;
+
+  // endregion
+
+  // region OID 800 - 899
+
+  // XX:XX:XX:XX:XX:XX, MAC address
+  public static int MacAddress = 829;
+
+  // IP address/netmask, host address, netmask optional
+  public static int Inet = 869;
+
+  // network IP address/netmask, network address
+  public static int Cidr = 650;
+
+  // XX:XX:XX:XX:XX:XX:XX:XX, MAC address
+  public static int MacAddress8 = 774;
+
+  // endregion
+
+  // region OIDS 1000 - 1099
+
+  // char(length), blank-padded string, fixed storage length
+  public static int CharLength = 1042;
+
+  // varchar(length), non-blank-padded string, variable storage length
+  public static int VarCharLength = 1043;
+
+  // Date
+  public static int Date = 1082;
+
+  // Time Of Day
+  public static int Time = 1083;
+
+  // endregion
+
+  // region OIDS 1100 - 1199
+
+  // date and time
+  public static int Timestamp = 1114;
+
+  // date and time with time zone
+  public static int TimestampTz = 1184;
+
+  // Interval
+  public static int Interval = 1186;
+
+  // endregion
+
+  // region OIDS 1200 - 1299
+
+  // time of day with time zone
+  public static int TimeTz = 1266;
+
+  // endregion
+
+  // region OIDS 1500 - 1599
+
+  // fixed-length bit string
+  public static int Bit = 1560;
+
+  // variable-length bit string
+  public static int VarBit = 1562;
+
+  // endregion
+
+  // region OIDS 1700 - 1799
+
+  public static int Numeric = 1700;
+
+  // endregion
+
+  // region UUID
+
+  public static int Uuid = 2950;
+
+  // endregion
+
+  // region Pseudo-Types
+
+  public static int Record = 2249;
+
+  // endregion
+
+  private static Map<DataType, Integer> mapping = buildLookupTable();
+
+  private static Map<DataType, Integer> buildLookupTable() {
+
+    final Map<DataType, Integer> mapping = new HashMap<>();
+
+    mapping.put(DataType.Boolean, Boolean);
+    mapping.put(DataType.Bytea, Bytea);
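+    // Note: Inet4 and Inet6 both resolve to the single `inet` OID (869) below,
+    // and VarChar resolves to the varchar OID (1043).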
+    mapping.put(DataType.Char, Char);
+    mapping.put(DataType.Name, Name);
+    mapping.put(DataType.Int8, Int8);
+    mapping.put(DataType.Int2, Int2);
+    mapping.put(DataType.Int4, Int4);
+    mapping.put(DataType.Text, Text);
+    mapping.put(DataType.Oid, Oid);
+    mapping.put(DataType.Tid, Tid);
+    mapping.put(DataType.Xid, Xid);
+    mapping.put(DataType.Cid, Cid);
+    mapping.put(DataType.Jsonb, Jsonb);
+    mapping.put(DataType.Xml, Xml);
+    mapping.put(DataType.Point, Point);
+    mapping.put(DataType.LineSegment, LineSegment);
+    mapping.put(DataType.Path, Path);
+    mapping.put(DataType.Box, Box);
+    mapping.put(DataType.Polygon, Polygon);
+    mapping.put(DataType.Line, Line);
+    mapping.put(DataType.SinglePrecision, SinglePrecision);
+    mapping.put(DataType.DoublePrecision, DoublePrecision);
+    mapping.put(DataType.AbsTime, AbsTime);
+    mapping.put(DataType.RelTime, RelTime);
+    mapping.put(DataType.TInterval, TInterval);
+    mapping.put(DataType.Unknown, Unknown);
+    mapping.put(DataType.Circle, Circle);
+    mapping.put(DataType.Cash, Cash);
+    mapping.put(DataType.Money, Money);
+    mapping.put(DataType.MacAddress, MacAddress);
+    mapping.put(DataType.Inet4, Inet);
+    mapping.put(DataType.Inet6, Inet);
+    mapping.put(DataType.Cidr, Cidr);
+    mapping.put(DataType.MacAddress8, MacAddress8);
+    mapping.put(DataType.CharLength, CharLength);
+    mapping.put(DataType.VarChar, VarCharLength);
+    mapping.put(DataType.Date, Date);
+    mapping.put(DataType.Time, Time);
+    mapping.put(DataType.Timestamp, Timestamp);
+    mapping.put(DataType.TimestampTz, TimestampTz);
+    mapping.put(DataType.Interval, Interval);
+    mapping.put(DataType.TimeTz, TimeTz);
+    mapping.put(DataType.Bit, Bit);
+    mapping.put(DataType.VarBit, VarBit);
+    mapping.put(DataType.Numeric, Numeric);
+    mapping.put(DataType.Uuid, Uuid);
+    mapping.put(DataType.Record, Record);
+
+    return mapping;
+  }
+
+  public static int mapFrom(DataType type) {
+    if (mapping.containsKey(type)) {
+      return mapping.get(type);
+    }
+    return Unknown;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/IValueConverter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/IValueConverter.java
new file mode 100644
index 0000000..b91b6da
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/IValueConverter.java
@@ -0,0 +1,7 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.converter;
+
+public interface IValueConverter<TSource, TTarget> {
+
+  TTarget convert(TSource source);
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalDateConverter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalDateConverter.java
new file mode 100644
index 0000000..a04cd58
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalDateConverter.java
@@ -0,0 +1,14 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.converter;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.utils.TimeStampUtils;
+
+import java.time.LocalDate;
+
+public class LocalDateConverter implements IValueConverter<LocalDate, Integer> {
+
+  @Override
+  public Integer convert(final LocalDate date) {
+    return TimeStampUtils.toPgDays(date);
+  }
+
+}
diff --git
a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalDateTimeConverter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalDateTimeConverter.java
new file mode 100644
index 0000000..db2c99b
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalDateTimeConverter.java
@@ -0,0 +1,13 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.converter;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.utils.TimeStampUtils;
+
+import java.time.LocalDateTime;
+
+public class LocalDateTimeConverter implements IValueConverter<LocalDateTime, Long> {
+
+  @Override
+  public Long convert(final LocalDateTime dateTime) {
+    return TimeStampUtils.convertToPostgresTimeStamp(dateTime);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalTimeConverter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalTimeConverter.java
new file mode 100644
index 0000000..bc385c1
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/converter/LocalTimeConverter.java
@@ -0,0 +1,12 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.converter;
+
+import java.time.LocalTime;
+
+public class LocalTimeConverter implements IValueConverter<LocalTime, Long> {
+
+  @Override
+  public Long convert(final LocalTime time) {
+    return time.toNanoOfDay() / 1000L;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BaseValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BaseValueHandler.java
new file mode 100644
index 0000000..ef55a5f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BaseValueHandler.java
@@ -0,0 +1,28 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.BinaryWriteFailedException;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public abstract class BaseValueHandler<T> implements IValueHandler<T> {
+
+  @Override
+  public void handle(DataOutputStream buffer, final T value) {
+    try {
+      if (value == null) {
+        buffer.writeInt(-1);
+        return;
+      }
+      internalHandle(buffer, value);
+    } catch (IOException e) {
+      if (null != e.getCause()) {
+        throw new BinaryWriteFailedException(e.getCause());
+      } else {
+        throw new BinaryWriteFailedException(e);
+      }
+    }
+  }
+
+  protected abstract void internalHandle(DataOutputStream buffer, final T value) throws IOException;
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BigDecimalValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BigDecimalValueHandler.java
new file mode 100644
index 0000000..16c9510
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BigDecimalValueHandler.java
@@ -0,0 +1,99 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.util.BigDecimalUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * The Algorithm for turning a BigDecimal into a Postgres Numeric is heavily inspired by the
+ * Intermine Implementation:
+ * <p>
+ * https://github.com/intermine/intermine/blob/master/intermine/objectstore/main/src/org/intermine/sql/writebatch/BatchWriterPostgresCopyImpl.java
+ */
+public class BigDecimalValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  private static final int DECIMAL_DIGITS = 4;
+  private static final BigInteger TEN_THOUSAND = new BigInteger("10000");
+
+  @Override
+  protected void internalHandle(final DataOutputStream buffer, final T value) throws IOException {
+    final BigDecimal tmpValue = getNumericAsBigDecimal(value);
+
+    // Number of fractional digits:
+    final int fractionDigits = tmpValue.scale();
+
+    // Number of Fraction Groups:
+    final int fractionGroups = fractionDigits > 0 ? (fractionDigits + 3) / 4 : 0;
+
+    final List<Integer> digits = digits(tmpValue);
+
+    buffer.writeInt(8 + (2 * digits.size()));
+    buffer.writeShort(digits.size());
+    buffer.writeShort(digits.size() - fractionGroups - 1);
+    buffer.writeShort(tmpValue.signum() == 1 ? 0x0000 : 0x4000);
+    buffer.writeShort(Math.max(fractionDigits, 0));
+
+    // Now write each digit:
+    for (int pos = digits.size() - 1; pos >= 0; pos--) {
+      final int valueToWrite = digits.get(pos);
+      buffer.writeShort(valueToWrite);
+    }
+  }
+
+  @Override
+  public int getLength(final T value) {
+    final List<Integer> digits = digits(getNumericAsBigDecimal(value));
+    return (8 + (2 * digits.size()));
+  }
+
+  private static BigDecimal getNumericAsBigDecimal(final Number source) {
+    if (!(source instanceof BigDecimal)) {
+      return BigDecimalUtils.toBigDecimal(source.doubleValue());
+    }
+
+    return (BigDecimal) source;
+  }
+
+  // Decomposes the unscaled value into base-10000 groups (four decimal digits per
+  // short), least significant group first.
+  private List<Integer> digits(final BigDecimal value) {
+    BigInteger unscaledValue = value.unscaledValue();
+
+    if (value.signum() == -1) {
+      unscaledValue = unscaledValue.negate();
+    }
+
+    final List<Integer> digits = new ArrayList<>();
+
+    if (value.scale() > 0) {
+      // The scale needs to be a multiple of 4:
+      int scaleRemainder = value.scale() % 4;
+
+      // Scale the first value:
+      if (scaleRemainder != 0) {
+        final BigInteger[] result = unscaledValue.divideAndRemainder(BigInteger.TEN.pow(scaleRemainder));
+        final int digit = result[1].intValue() * (int) Math.pow(10, DECIMAL_DIGITS - scaleRemainder);
+        digits.add(digit);
+        unscaledValue = result[0];
+      }
+
+      while (!unscaledValue.equals(BigInteger.ZERO)) {
+        final BigInteger[] result = unscaledValue.divideAndRemainder(TEN_THOUSAND);
+        digits.add(result[1].intValue());
+        unscaledValue = result[0];
+      }
+    } else {
+      BigInteger originalValue = unscaledValue.multiply(BigInteger.TEN.pow(Math.abs(value.scale())));
+      while (!originalValue.equals(BigInteger.ZERO)) {
+        final BigInteger[] result = originalValue.divideAndRemainder(TEN_THOUSAND);
+        digits.add(result[1].intValue());
+        originalValue = result[0];
+      }
+    }
+
+    return digits;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BooleanValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BooleanValueHandler.java
new file mode 100644
index 0000000..601fa91
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BooleanValueHandler.java
@@ -0,0 +1,22 @@
+package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class BooleanValueHandler extends BaseValueHandler<Boolean> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Boolean value) throws IOException {
+    buffer.writeInt(1);
+    if (value) {
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BooleanValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BooleanValueHandler.java new file mode 100644 index 0000000..601fa91 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BooleanValueHandler.java @@ -0,0 +1,22 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class BooleanValueHandler extends BaseValueHandler<Boolean> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Boolean value) throws IOException {
+    buffer.writeInt(1);
+    if (value) {
+      buffer.writeByte(1);
+    } else {
+      buffer.writeByte(0);
+    }
+  }
+
+  @Override
+  public int getLength(Boolean value) {
+    return 1;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BoxValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BoxValueHandler.java new file mode 100644 index 0000000..c9f5439 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/BoxValueHandler.java @@ -0,0 +1,23 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils.GeometricUtils;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Box;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class BoxValueHandler extends BaseValueHandler<Box> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Box value) throws IOException {
+    buffer.writeInt(32);
+
+    GeometricUtils.writePoint(buffer, value.getHigh());
+    GeometricUtils.writePoint(buffer, value.getLow());
+  }
+
+  @Override
+  public int getLength(Box value) {
+    return 32;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ByteArrayValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ByteArrayValueHandler.java new file mode 100644 index 0000000..66e9c86 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ByteArrayValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class ByteArrayValueHandler extends BaseValueHandler<byte[]> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final byte[] value) throws IOException {
+    buffer.writeInt(value.length);
+    buffer.write(value, 0, value.length);
+  }
+
+  @Override
+  public int getLength(byte[] value) {
+    return value.length;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ByteValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ByteValueHandler.java new file mode 100644 index 0000000..c8a0322 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ByteValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class ByteValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final T value) throws IOException {
+    buffer.writeInt(1);
+    buffer.writeByte(value.byteValue());
+  }
+
+  @Override
+  public int getLength(T value) {
+    return 1;
+  }
+}
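
Every handler in this package follows the same COPY BINARY framing: a 4-byte big-endian length prefix followed by the payload, with a length of -1 signaling a NULL field (the null check presumably lives in the shared `BaseValueHandler`, added earlier in this diff). An illustrative sketch, not part of the commit, framing a single boolean field:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FieldFramingDemo {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeInt(1);   // field length in bytes
    out.writeByte(1);  // payload: boolean true
    for (byte b : bytes.toByteArray()) {
      System.out.printf("%02x ", b);  // -> 00 00 00 01 01
    }
  }
}
```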
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/CircleValueHandler.java @@ -0,0 +1,24 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils.GeometricUtils;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Circle;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class CircleValueHandler extends BaseValueHandler<Circle> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Circle value) throws IOException {
+    buffer.writeInt(24);
+    // First encode the Center Point:
+    GeometricUtils.writePoint(buffer, value.getCenter());
+    // ... and then the Radius:
+    buffer.writeDouble(value.getRadius());
+  }
+
+  @Override
+  public int getLength(Circle value) {
+    return 24;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/CollectionValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/CollectionValueHandler.java new file mode 100644 index 0000000..2292db5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/CollectionValueHandler.java @@ -0,0 +1,44 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.Collection;
+
+public class CollectionValueHandler<TElementType, TCollectionType extends Collection<TElementType>> extends
+    BaseValueHandler<TCollectionType> {
+
+  private final int oid;
+  private final IValueHandler<TElementType> valueHandler;
+
+  public CollectionValueHandler(int oid, IValueHandler<TElementType> valueHandler) {
+    this.oid = oid;
+    this.valueHandler = valueHandler;
+  }
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, TCollectionType value) throws IOException {
+
+    ByteArrayOutputStream byteArrayOutput = new ByteArrayOutputStream();
+    DataOutputStream arrayOutput = new DataOutputStream(byteArrayOutput);
+
+    arrayOutput.writeInt(1); // Dimensions, use 1 for one-dimensional arrays at the moment
+    arrayOutput.writeInt(1); // The Array can contain Null Values
+    arrayOutput.writeInt(oid); // Write the Values using the OID
+    arrayOutput.writeInt(value.size()); // Write the number of elements
+    arrayOutput.writeInt(1); // Ignore Lower Bound. Use PG Default for now
+
+    // Now write the actual Collection elements using the inner handler:
+    for (TElementType element : value) {
+      valueHandler.handle(arrayOutput, element);
+    }
+
+    buffer.writeInt(byteArrayOutput.size());
+    buffer.write(byteArrayOutput.toByteArray());
+  }
+
+  @Override
+  public int getLength(TCollectionType value) {
+    throw new UnsupportedOperationException();
+  }
+}
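
The header written above matches PostgreSQL's `array_send` layout: dimension count, a "has nulls" flag word, the element type OID, then one length/lower-bound pair per dimension, followed by length-prefixed elements. A rough standalone sketch (OID 23 for `int4` is taken from `pg_type`; shown purely for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class ArrayWireFormatDemo {
  public static void main(String[] args) throws IOException {
    List<Integer> values = Arrays.asList(1, 2, 3);
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);

    out.writeInt(1);             // number of dimensions
    out.writeInt(1);             // "has nulls" flag word
    out.writeInt(23);            // element type OID (23 = int4)
    out.writeInt(values.size()); // length of dimension 1
    out.writeInt(1);             // lower bound of dimension 1
    for (int v : values) {       // each element is itself length-prefixed
      out.writeInt(4);
      out.writeInt(v);
    }
    System.out.println(bytes.size() + " bytes"); // 20 header bytes + 3 * 8 element bytes = 44
  }
}
```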
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/DoubleValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/DoubleValueHandler.java new file mode 100644 index 0000000..ba6e93d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/DoubleValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class DoubleValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final T value) throws IOException {
+    buffer.writeInt(8);
+    buffer.writeDouble(value.doubleValue());
+  }
+
+  @Override
+  public int getLength(T value) {
+    return 8;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/FloatValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/FloatValueHandler.java new file mode 100644 index 0000000..995109d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/FloatValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class FloatValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final T value) throws IOException {
+    buffer.writeInt(4);
+    buffer.writeFloat(value.floatValue());
+  }
+
+  @Override
+  public int getLength(T value) {
+    return 4;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/HstoreValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/HstoreValueHandler.java new file mode 100644 index 0000000..4843e37 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/HstoreValueHandler.java @@ -0,0 +1,61 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.util.StringUtils;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.Map;
+
+public class HstoreValueHandler extends BaseValueHandler<Map<String, String>> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Map<String, String> value)
+      throws IOException {
+
+    // Write into a Temporary ByteArrayOutputStream:
+    ByteArrayOutputStream byteArrayOutput = new ByteArrayOutputStream();
+
+    // And wrap it in a DataOutputStream:
+    DataOutputStream hstoreOutput = new DataOutputStream(byteArrayOutput);
+
+    // First the Amount of Values to write:
+    hstoreOutput.writeInt(value.size());
+
+    // Now Iterate over the Array and write each value:
+    for (Map.Entry<String, String> entry : value.entrySet()) {
+      // Write the Key:
+      writeKey(hstoreOutput, entry.getKey());
+      // The Value can be null, use a different method:
+      writeValue(hstoreOutput, entry.getValue());
+    }
+
+    // Now write the entire ByteArray to the COPY Buffer:
+    buffer.writeInt(byteArrayOutput.size());
+    buffer.write(byteArrayOutput.toByteArray());
+  }
+
+  private void writeKey(DataOutputStream buffer, String key) throws IOException {
+    writeText(buffer, key);
+  }
+
+  private void writeValue(DataOutputStream buffer, String value) throws IOException {
+    if (value == null) {
+      buffer.writeInt(-1);
+    } else {
+      writeText(buffer, value);
+    }
+  }
+
+  private void writeText(DataOutputStream buffer, String text) throws IOException {
+    byte[] textBytes = StringUtils.getUtf8Bytes(text);
+
+    buffer.writeInt(textBytes.length);
+    buffer.write(textBytes);
+  }
+
+  @Override
+  public int getLength(Map<String, String> value) {
+    throw new UnsupportedOperationException();
+  }
+}
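
The hstore payload is simply a pair count followed by length-prefixed UTF-8 keys and values, where a value length of -1 encodes NULL. A minimal standalone sketch mirroring what the handler writes (not part of the commit):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class HstoreWireFormatDemo {
  static void writeText(DataOutputStream out, String text) throws IOException {
    byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
    out.writeInt(utf8.length);
    out.write(utf8);
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeInt(2);      // number of key/value pairs
    writeText(out, "a");  // key "a"
    writeText(out, "1");  // value "1"
    writeText(out, "b");  // key "b"
    out.writeInt(-1);     // a NULL value is just length -1
    System.out.println(bytes.size() + " bytes"); // 4 + 5 + 5 + 5 + 4 = 23
  }
}
```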
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IValueHandler.java new file mode 100644 index 0000000..bf8cdb9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IValueHandler.java @@ -0,0 +1,10 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+
+public interface IValueHandler<TTargetType> extends ValueHandler {
+
+  void handle(DataOutputStream buffer, final TTargetType value);
+
+  int getLength(final TTargetType value);
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IValueHandlerProvider.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IValueHandlerProvider.java new file mode 100644 index 0000000..0543c50 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IValueHandlerProvider.java @@ -0,0 +1,8 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.DataType;
+
+public interface IValueHandlerProvider {
+
+  <TTargetType> IValueHandler<TTargetType> resolve(DataType targetType);
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/Inet4AddressValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/Inet4AddressValueHandler.java new file mode 100644 index 0000000..9ea978b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/Inet4AddressValueHandler.java @@ -0,0 +1,32 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.Inet4Address;
+
+public class Inet4AddressValueHandler extends BaseValueHandler<Inet4Address> {
+
+  private static final byte IPv4 = 2;
+  private static final byte MASK = 32;
+  private static final byte IS_CIDR = 0;
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Inet4Address value)
+      throws IOException {
+    buffer.writeInt(8);
+
+    buffer.writeByte(IPv4);
+    buffer.writeByte(MASK);
+    buffer.writeByte(IS_CIDR);
+
+    byte[] inet4AddressBytes = value.getAddress();
+
+    buffer.writeByte(inet4AddressBytes.length);
+    buffer.write(inet4AddressBytes);
+  }
+
+  @Override
+  public int getLength(Inet4Address value) {
+    return 8;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/Inet6AddressValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/Inet6AddressValueHandler.java new file mode 100644 index 0000000..70c6155 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/Inet6AddressValueHandler.java @@ -0,0 +1,31 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.Inet6Address;
+
+public class Inet6AddressValueHandler extends BaseValueHandler<Inet6Address> {
+
+  private static final byte IPv6 = 3;
+  private static final int MASK = 128;
+  private static final byte IS_CIDR = 0;
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Inet6Address value)
+      throws IOException {
+    buffer.writeInt(20);
+
+    buffer.writeByte(IPv6);
+    buffer.writeByte(MASK);
+    buffer.writeByte(IS_CIDR);
+
+    byte[] inet6AddressBytes = value.getAddress();
+    buffer.writeByte(inet6AddressBytes.length);
+    buffer.write(inet6AddressBytes);
+  }
+
+  @Override
+  public int getLength(Inet6Address value) {
+    return 20;
+  }
+}
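
The `inet` payload written by these two handlers is: address family (2 for IPv4, 3 for IPv6, matching the constants above), netmask bits, an `is_cidr` flag, the address byte count, then the raw address bytes. An illustrative standalone sketch, not part of the commit:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Inet4Address;
import java.net.InetAddress;

public class InetWireFormatDemo {
  public static void main(String[] args) throws IOException {
    // getByName with a literal address does not trigger a DNS lookup:
    Inet4Address addr = (Inet4Address) InetAddress.getByName("192.168.0.1");
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeInt(8);           // field length
    out.writeByte(2);          // address family (IPv4)
    out.writeByte(32);         // netmask bits, /32 for a single host
    out.writeByte(0);          // is_cidr flag
    byte[] raw = addr.getAddress();
    out.writeByte(raw.length); // 4 address bytes follow
    out.write(raw);
    System.out.println(bytes.size()); // 12 = 4-byte length prefix + 8 payload bytes
  }
}
```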
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IntegerValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IntegerValueHandler.java new file mode 100644 index 0000000..5f4eb0d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/IntegerValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class IntegerValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final T value) throws IOException {
+    buffer.writeInt(4);
+    buffer.writeInt(value.intValue());
+  }
+
+  @Override
+  public int getLength(T value) {
+    return 4;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/JsonbValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/JsonbValueHandler.java new file mode 100644 index 0000000..0aa95a5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/JsonbValueHandler.java @@ -0,0 +1,35 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class JsonbValueHandler extends BaseValueHandler<String> {
+
+  private final int jsonbProtocolVersion;
+
+  public JsonbValueHandler() {
+    this(1);
+  }
+
+  public JsonbValueHandler(int jsonbProtocolVersion) {
+    this.jsonbProtocolVersion = jsonbProtocolVersion;
+  }
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final String value) throws IOException {
+
+    byte[] utf8Bytes = value.getBytes("UTF-8");
+
+    // Write the Length of the Data to Copy:
+    buffer.writeInt(utf8Bytes.length + 1);
+    // Write the Jsonb Protocol Version:
+    buffer.writeByte(jsonbProtocolVersion);
+    // Copy the Data:
+    buffer.write(utf8Bytes);
+  }
+
+  @Override
+  public int getLength(String value) {
+    throw new UnsupportedOperationException();
+  }
+}
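
In binary mode, `jsonb` differs from plain `json` by exactly one leading version byte (currently 1) in front of the UTF-8 JSON text, which is why the handler adds 1 to the length prefix. A small sketch of the same framing, not part of the commit:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class JsonbWireFormatDemo {
  public static void main(String[] args) throws IOException {
    byte[] json = "{\"a\":1}".getBytes(StandardCharsets.UTF_8);
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeInt(json.length + 1); // field length: version byte + JSON text
    out.writeByte(1);              // jsonb protocol version
    out.write(json);
    System.out.println(bytes.size()); // 4 + 1 + 7 = 12
  }
}
```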
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LineSegmentValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LineSegmentValueHandler.java new file mode 100644 index 0000000..07c2a82 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LineSegmentValueHandler.java @@ -0,0 +1,24 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils.GeometricUtils;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.LineSegment;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class LineSegmentValueHandler extends BaseValueHandler<LineSegment> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final LineSegment value)
+      throws IOException {
+    buffer.writeInt(32);
+
+    GeometricUtils.writePoint(buffer, value.getP1());
+    GeometricUtils.writePoint(buffer, value.getP2());
+  }
+
+  @Override
+  public int getLength(LineSegment value) {
+    return 32;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LineValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LineValueHandler.java new file mode 100644 index 0000000..5fcc87a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LineValueHandler.java @@ -0,0 +1,23 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Line;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class LineValueHandler extends BaseValueHandler<Line> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Line value) throws IOException {
+    buffer.writeInt(24);
+
+    buffer.writeDouble(value.getA());
+    buffer.writeDouble(value.getB());
+    buffer.writeDouble(value.getC());
+  }
+
+  @Override
+  public int getLength(Line value) {
+    return 24;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalDateTimeValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalDateTimeValueHandler.java new file mode 100644 index 0000000..ebdfb0d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalDateTimeValueHandler.java @@ -0,0 +1,33 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.IValueConverter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.LocalDateTimeConverter;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.time.LocalDateTime;
+
+public class LocalDateTimeValueHandler extends BaseValueHandler<LocalDateTime> {
+
+  private IValueConverter<LocalDateTime, Long> dateTimeConverter;
+
+  public LocalDateTimeValueHandler() {
+    this(new LocalDateTimeConverter());
+  }
+
+  public LocalDateTimeValueHandler(IValueConverter<LocalDateTime, Long> dateTimeConverter) {
+    this.dateTimeConverter = dateTimeConverter;
+  }
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final LocalDateTime value)
+      throws IOException {
+    buffer.writeInt(8);
+    buffer.writeLong(dateTimeConverter.convert(value));
+  }
+
+  @Override
+  public int getLength(LocalDateTime value) {
+    return 8;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalDateValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalDateValueHandler.java new file mode 100644 index 0000000..88f70fd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalDateValueHandler.java @@ -0,0 +1,32 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.IValueConverter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.LocalDateConverter;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.time.LocalDate;
+
+public class LocalDateValueHandler extends BaseValueHandler<LocalDate> {
+
+  private IValueConverter<LocalDate, Integer> dateConverter;
+
+  public LocalDateValueHandler() {
+    this(new LocalDateConverter());
+  }
+
+  public LocalDateValueHandler(IValueConverter<LocalDate, Integer> dateTimeConverter) {
+    this.dateConverter = dateTimeConverter;
+  }
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final LocalDate value) throws IOException {
+    buffer.writeInt(4);
+    buffer.writeInt(dateConverter.convert(value));
+  }
+
+  @Override
+  public int getLength(LocalDate value) {
+    return 4;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalTimeValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalTimeValueHandler.java new file mode 100644 index 0000000..0c4edbc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LocalTimeValueHandler.java @@ -0,0 +1,32 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.IValueConverter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.LocalTimeConverter;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.time.LocalTime;
+
+public class LocalTimeValueHandler extends BaseValueHandler<LocalTime> {
+
+  private IValueConverter<LocalTime, Long> timeConverter;
+
+  public LocalTimeValueHandler() {
+    this(new LocalTimeConverter());
+  }
+
+  public LocalTimeValueHandler(IValueConverter<LocalTime, Long> timeConverter) {
+    this.timeConverter = timeConverter;
+  }
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final LocalTime value) throws IOException {
+    buffer.writeInt(8);
+    buffer.writeLong(timeConverter.convert(value));
+  }
+
+  @Override
+  public int getLength(LocalTime value) {
+    return 8;
+  }
+}
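
The date/time converters these handlers delegate to rebase Java's 1970-01-01 epoch onto PostgreSQL's 2000-01-01 epoch: binary timestamps are 8-byte microsecond counts and binary dates are 4-byte day counts from that epoch. A standalone sketch of the arithmetic (assuming those converter semantics; not part of the commit):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class PgEpochDemo {
  private static final LocalDateTime PG_EPOCH = LocalDateTime.of(2000, 1, 1, 0, 0, 0);

  public static void main(String[] args) {
    LocalDateTime ts = LocalDateTime.of(2023, 12, 8, 11, 44, 35);
    // timestamp: microseconds since 2000-01-01 00:00:00
    System.out.println(ChronoUnit.MICROS.between(PG_EPOCH, ts));
    // date: whole days since 2000-01-01
    System.out.println(ChronoUnit.DAYS.between(PG_EPOCH.toLocalDate(), LocalDate.of(2023, 12, 8)));
  }
}
```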
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/LongValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class LongValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final T value) throws IOException {
+    buffer.writeInt(8);
+    buffer.writeLong(value.longValue());
+  }
+
+  @Override
+  public int getLength(T value) {
+    return 8;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/MacAddressValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/MacAddressValueHandler.java new file mode 100644 index 0000000..3cdf996 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/MacAddressValueHandler.java @@ -0,0 +1,21 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.network.MacAddress;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class MacAddressValueHandler extends BaseValueHandler<MacAddress> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final MacAddress value)
+      throws IOException {
+    buffer.writeInt(6);
+    buffer.write(value.getAddressBytes());
+  }
+
+  @Override
+  public int getLength(MacAddress value) {
+    return 6;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PathValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PathValueHandler.java new file mode 100644 index 0000000..9bad5f7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PathValueHandler.java @@ -0,0 +1,37 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils.GeometricUtils;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Path;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Point;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class PathValueHandler extends BaseValueHandler<Path> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Path value) throws IOException {
+    // Write a Byte to indicate if a Path is closed or not:
+    byte pathIsClosed = (byte) (value.isClosed() ? 1 : 0);
+
+    // The total number of bytes to write:
+    int totalBytesToWrite = 1 + 4 + 16 * value.size();
+
+    // The Number of Bytes to follow:
+    buffer.writeInt(totalBytesToWrite);
+    // Is the Path closed?
+    buffer.writeByte(pathIsClosed);
+    // Write Points:
+    buffer.writeInt(value.getPoints().size());
+    // Write each Point in List:
+    for (Point p : value.getPoints()) {
+      GeometricUtils.writePoint(buffer, p);
+    }
+
+  }
+
+  @Override
+  public int getLength(Path value) {
+    throw new UnsupportedOperationException();
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PointValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PointValueHandler.java new file mode 100644 index 0000000..7148775 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PointValueHandler.java @@ -0,0 +1,22 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils.GeometricUtils;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Point;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class PointValueHandler extends BaseValueHandler<Point> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Point value) throws IOException {
+    buffer.writeInt(16);
+
+    GeometricUtils.writePoint(buffer, value);
+  }
+
+  @Override
+  public int getLength(Point value) {
+    return 16;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PolygonValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PolygonValueHandler.java new file mode 100644 index 0000000..8031089 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/PolygonValueHandler.java @@ -0,0 +1,34 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils.GeometricUtils;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Point;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Polygon;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class PolygonValueHandler extends BaseValueHandler<Polygon> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final Polygon value) throws IOException {
+    // The total number of bytes to write:
+    int totalBytesToWrite = 4 + 16 * value.size();
+
+    // The Number of Bytes to follow:
+    buffer.writeInt(totalBytesToWrite);
+
+    // Write Points:
+    buffer.writeInt(value.getPoints().size());
+
+    // Write each Point in List:
+    for (Point p : value.getPoints()) {
+      GeometricUtils.writePoint(buffer, p);
+    }
+
+  }
+
+  @Override
+  public int getLength(Polygon value) {
+    throw new UnsupportedOperationException();
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/RangeValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/RangeValueHandler.java new file mode 100644 index 0000000..90cdea4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/RangeValueHandler.java @@ -0,0 +1,54 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.range.Range;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class RangeValueHandler<TElementType> extends BaseValueHandler<Range<TElementType>> {
+
+  private final IValueHandler<TElementType> valueHandler;
+
+  public RangeValueHandler(IValueHandler<TElementType> valueHandler) {
+    this.valueHandler = valueHandler;
+  }
+
+  @SuppressWarnings("NullAway") // infinite bound checks only pass when bound value is not null
+  @Override
+  protected void internalHandle(DataOutputStream buffer, Range<TElementType> value)
+      throws IOException {
+    buffer.writeInt(getLength(value));
+    buffer.writeByte(value.getFlags());
+
+    if (value.isEmpty()) {
+      return;
+    }
+
+    if (!value.isLowerBoundInfinite()) {
+      valueHandler.handle(buffer, value.getLowerBound());
+    }
+
+    if (!value.isUpperBoundInfinite()) {
+      valueHandler.handle(buffer, value.getUpperBound());
+    }
+  }
+
+  @SuppressWarnings("NullAway") // infinite bound checks only pass when bound value is not null
+  @Override
+  public int getLength(Range<TElementType> value) {
+    int totalLen = 1;
+
+    if (!value.isEmpty()) {
+      if (!value.isLowerBoundInfinite()) {
+        totalLen += 4 + valueHandler.getLength(value.getLowerBound());
+      }
+
+      if (!value.isUpperBoundInfinite()) {
+        totalLen += 4 + valueHandler.getLength(value.getUpperBound());
+      }
+    }
+
+    return totalLen;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ShortValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ShortValueHandler.java new file mode 100644 index 0000000..e10ae35 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ShortValueHandler.java @@ -0,0 +1,18 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class ShortValueHandler<T extends Number> extends BaseValueHandler<T> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final T value) throws IOException {
+    buffer.writeInt(2);
+    buffer.writeShort(value.shortValue());
+  }
+
+  @Override
+  public int getLength(T value) {
+    return 2;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/StringValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/StringValueHandler.java new file mode 100644 index 0000000..808b757 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/StringValueHandler.java @@ -0,0 +1,22 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.util.StringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class StringValueHandler extends BaseValueHandler<String> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final String value) throws IOException {
+    byte[] utf8Bytes = StringUtils.getUtf8Bytes(value);
+    buffer.writeInt(utf8Bytes.length);
+    buffer.write(utf8Bytes);
+  }
+
+  @Override
+  public int getLength(String value) {
+    byte[] utf8Bytes = StringUtils.getUtf8Bytes(value);
+    return utf8Bytes.length;
+  }
+}
/dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/UUIDValueHandler.java @@ -0,0 +1,30 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.UUID;
+
+public class UUIDValueHandler extends BaseValueHandler<UUID> {
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, final UUID value) throws IOException {
+    buffer.writeInt(16);
+
+    ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
+    bb.putLong(value.getMostSignificantBits());
+    bb.putLong(value.getLeastSignificantBits());
+
+    buffer.writeInt(bb.getInt(0));
+    buffer.writeShort(bb.getShort(4));
+    buffer.writeShort(bb.getShort(6));
+
+    buffer.write(Arrays.copyOfRange(bb.array(), 8, 16));
+  }
+
+  @Override
+  public int getLength(UUID value) {
+    return 16;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ValueHandler.java new file mode 100644 index 0000000..bbb970a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ValueHandler.java @@ -0,0 +1,6 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+// Marker Interface
+public interface ValueHandler {
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ValueHandlerProvider.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ValueHandlerProvider.java new file mode 100644 index 0000000..b90321c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ValueHandlerProvider.java @@ -0,0 +1,92 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.ValueHandlerAlreadyRegisteredException;
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.ValueHandlerNotRegisteredException;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.DataType;
+
+import java.util.EnumMap;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+public class ValueHandlerProvider implements IValueHandlerProvider {
+
+  private final Map<DataType, ValueHandler> valueHandlers;
+
+  public ValueHandlerProvider() {
+    valueHandlers = new EnumMap<>(DataType.class);
+
+    add(DataType.Boolean, new BooleanValueHandler());
+    add(DataType.Char, new ByteValueHandler<>());
+    add(DataType.Numeric, new BigDecimalValueHandler<>());
+    add(DataType.DoublePrecision, new DoubleValueHandler<>());
+    add(DataType.SinglePrecision, new FloatValueHandler<>());
+    add(DataType.Date, new LocalDateValueHandler());
+    add(DataType.Time, new LocalTimeValueHandler());
+    add(DataType.Timestamp, new LocalDateTimeValueHandler());
+    add(DataType.TimestampTz, new ZonedDateTimeValueHandler());
+    add(DataType.Int2, new ShortValueHandler<>());
+    add(DataType.Int4, new IntegerValueHandler<>());
+    add(DataType.Int8, new LongValueHandler<>());
+    add(DataType.Text, new StringValueHandler());
+    add(DataType.VarChar, new StringValueHandler());
+    add(DataType.Inet4, new Inet4AddressValueHandler());
+    add(DataType.Inet6, new Inet6AddressValueHandler());
+    add(DataType.Uuid, new UUIDValueHandler());
+    add(DataType.Bytea, new ByteArrayValueHandler());
+    add(DataType.Jsonb, new JsonbValueHandler());
+    add(DataType.Hstore, new HstoreValueHandler());
+    add(DataType.Point, new PointValueHandler());
+    add(DataType.Box, new BoxValueHandler());
+    add(DataType.Line, new LineValueHandler());
+    add(DataType.LineSegment, new LineSegmentValueHandler());
+    add(DataType.Path, new PathValueHandler());
+    add(DataType.Polygon, new PolygonValueHandler());
+    add(DataType.Circle, new CircleValueHandler());
+    add(DataType.MacAddress, new MacAddressValueHandler());
+    add(DataType.TsRange, new RangeValueHandler<>(new LocalDateTimeValueHandler()));
+    add(DataType.TsTzRange, new RangeValueHandler<>(new ZonedDateTimeValueHandler()));
+    add(DataType.Int4Range, new RangeValueHandler<>(new IntegerValueHandler<>()));
+    add(DataType.Int8Range, new RangeValueHandler<>(new LongValueHandler<>()));
+    add(DataType.NumRange, new RangeValueHandler<>(new BigDecimalValueHandler<>()));
+    add(DataType.DateRange, new RangeValueHandler<>(new LocalDateValueHandler()));
+  }
+
+  public <TTargetType> ValueHandlerProvider add(DataType targetType,
+      IValueHandler<TTargetType> valueHandler) {
+    if (valueHandlers.containsKey(targetType)) {
+      throw new ValueHandlerAlreadyRegisteredException(
+          String.format("TargetType '%s' has already been registered", targetType));
+    }
+
+    valueHandlers.put(targetType, valueHandler);
+
+    return this;
+  }
+
+  @Override
+  public <TTargetType> IValueHandler<TTargetType> resolve(DataType dataType) {
+
+    @SuppressWarnings("unchecked")
+    IValueHandler<TTargetType> handler = (IValueHandler<TTargetType>) valueHandlers.get(dataType);
+    if (handler == null) {
+      throw new ValueHandlerNotRegisteredException(
+          String.format("DataType '%s' has not been registered", dataType));
+    }
+    return handler;
+  }
+
+
+  @Override
+  public String toString() {
+
+    String valueHandlersString =
+        valueHandlers.entrySet()
+            .stream()
+            .map(e -> e.getValue().toString())
+            .collect(Collectors.joining(", "));
+
+    return "ValueHandlerProvider{"
+        + "valueHandlers=[" + valueHandlersString + "]"
+        + '}';
+  }
+}
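
A rough usage sketch for the provider (assumed to run in the same `handlers` package, and relying on the shared `BaseValueHandler` added earlier in this diff to perform the actual write; not part of the commit):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.DataType;

public class ProviderUsageDemo {
  public static void main(String[] args) throws IOException {
    ValueHandlerProvider provider = new ValueHandlerProvider();
    IValueHandler<Integer> handler = provider.resolve(DataType.Int4);

    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    handler.handle(new DataOutputStream(bytes), 42);
    // Expected framing: 00 00 00 04 00 00 00 2a
    // (4-byte length prefix, then the big-endian int4 payload)
    for (byte b : bytes.toByteArray()) {
      System.out.printf("%02x ", b);
    }
  }
}
```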
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ZonedDateTimeValueHandler.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ZonedDateTimeValueHandler.java new file mode 100644 index 0000000..4638133 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/ZonedDateTimeValueHandler.java @@ -0,0 +1,44 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.IValueConverter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.converter.LocalDateTimeConverter;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.time.LocalDateTime;
+import java.time.ZoneOffset;
+import java.time.ZonedDateTime;
+
+public class ZonedDateTimeValueHandler extends BaseValueHandler<ZonedDateTime> {
+
+  private IValueConverter<ZonedDateTime, Long> dateTimeConverter;
+
+  public ZonedDateTimeValueHandler() {
+    this(new ToUTCStripTimezone());
+  }
+
+  public ZonedDateTimeValueHandler(IValueConverter<ZonedDateTime, Long> dateTimeConverter) {
+    this.dateTimeConverter = dateTimeConverter;
+  }
+
+  @Override
+  protected void internalHandle(DataOutputStream buffer, ZonedDateTime value) throws IOException {
+    buffer.writeInt(8);
+    buffer.writeLong(dateTimeConverter.convert(value));
+  }
+
+  @Override
+  public int getLength(ZonedDateTime value) {
+    return 8;
+  }
+
+  private static final class ToUTCStripTimezone implements IValueConverter<ZonedDateTime, Long> {
+
+    private final IValueConverter<LocalDateTime, Long> converter = new LocalDateTimeConverter();
+
+    @Override
+    public Long convert(final ZonedDateTime value) {
+      return converter.convert(value.withZoneSameInstant(ZoneOffset.UTC).toLocalDateTime());
+    }
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/utils/GeometricUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/utils/GeometricUtils.java new file mode 100644 index 0000000..1af0340 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/handlers/utils/GeometricUtils.java @@ -0,0 +1,15 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.utils;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Point;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class GeometricUtils {
+
+  public static void writePoint(DataOutputStream buffer, final Point value) throws IOException {
+    buffer.writeDouble(value.getX());
+    buffer.writeDouble(value.getY());
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Box.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Box.java new file mode 100644 index 0000000..40916dd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Box.java @@ -0,0 +1,27 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+public class Box {
+
+  private final Point high;
+  private final Point low;
+
+  public Box(Point high, Point low) {
+    if (high == null) {
+      throw new IllegalArgumentException("high");
+    }
+    if (low == null) {
+      throw new IllegalArgumentException("low");
+    }
+    this.high = high;
+    this.low = low;
+  }
+
+  public Point getHigh() {
+    return high;
+  }
+
+  public Point getLow() {
+    return low;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Circle.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Circle.java new file mode 100644 index 0000000..5fc487c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Circle.java @@ -0,0 +1,24 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+public class Circle {
+
+  private final Point center;
+  private final double radius;
+
+  public Circle(Point center, double radius) {
+    if (center == null) {
+      throw new IllegalArgumentException("center");
+    }
+    this.center = center;
+    this.radius = radius;
+  }
+
+  public Point getCenter() {
+    return center;
+  }
+
+  public double getRadius() {
+    return radius;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Line.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Line.java new file mode 100644 index 0000000..96cd327 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Line.java @@ -0,0 +1,27 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+public class Line {
+
+  private double a;
+  private double b;
+  private double c;
+
+  public Line(double a, double b, double c) {
+    this.a = a;
+    this.b = b;
+    this.c = c;
+  }
+
+  public double getA() {
+    return a;
+  }
+
+  public double getB() {
+    return b;
+  }
+
+  public double getC() {
+    return c;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/LineSegment.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/LineSegment.java new file mode 100644 index 0000000..f3e813e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/LineSegment.java @@ -0,0 +1,30 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+public class LineSegment {
+
+  private final Point p1;
+  private final Point p2;
+
+  public LineSegment(Point p1, Point p2) {
+
+    if (p1 == null) {
+      throw new IllegalArgumentException("p1");
+    }
+
+    if (p2 == null) {
+      throw new IllegalArgumentException("p2");
+    }
+
+    this.p1 = p1;
+    this.p2 = p2;
+  }
+
+  public Point getP1() {
+    return p1;
+  }
+
+  public Point getP2() {
+    return p2;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Path.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Path.java new file mode 100644 index 0000000..65195fb --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Path.java @@ -0,0 +1,31 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+import java.util.List;
+
+public class Path {
+
+  private final boolean isClosed;
+  private final List<Point> points;
+
+  public Path(boolean closed, List<Point> points) {
+
+    if (points == null) {
+      throw new IllegalArgumentException("points");
+    }
+
+    this.isClosed = closed;
+    this.points = points;
+  }
+
+  public boolean isClosed() {
+    return isClosed;
+  }
+
+  public List<Point> getPoints() {
+    return points;
+  }
+
+  public int size() {
+    return points.size();
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Point.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Point.java new file mode 100644 index 0000000..2f8b1d0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Point.java @@ -0,0 +1,21 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+public class Point {
+
+  private final double x;
+  private final double y;
+
+  public Point(double x, double y) {
+    this.x = x;
+    this.y = y;
+  }
+
+  public double getX() {
+    return x;
+  }
+
+  public double getY() {
+    return y;
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Polygon.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Polygon.java new file mode 100644 index 0000000..ef6b91b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/geometric/Polygon.java @@ -0,0 +1,25 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric;
+
+import java.util.List;
+
+public class Polygon {
+
+  private final List<Point> points;
+
+  public Polygon(List<Point> points) {
+
+    if (points == null) {
+      throw new IllegalArgumentException("points");
+    }
+
+    this.points = points;
+  }
+
+  public List<Point> getPoints() {
+    return points;
+  }
+
+  public int size() {
+    return points.size();
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/network/MacAddress.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/network/MacAddress.java new file mode 100644 index 0000000..20b9f26 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/network/MacAddress.java @@ -0,0 +1,39 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.network;
+
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+public class MacAddress {
+
+  private final byte[] addressBytes;
+
+  public MacAddress(byte[] addressBytes) {
+
+    if (addressBytes == null) {
+      throw new IllegalArgumentException("addressBytes");
+    }
+
+    if (addressBytes.length != 6) {
+      throw new IllegalArgumentException("addressBytes");
+    }
+
+    this.addressBytes = addressBytes;
+  }
+
+  public byte[] getAddressBytes() {
+    return addressBytes;
+  }
+
+  @Override
+  public String toString() {
+
+    List<String> bytesAsHexString = IntStream
+        .range(0, addressBytes.length)
+        .map(idx -> addressBytes[idx])
+        .mapToObj(value -> String.format("0x%x", value))
+        .collect(Collectors.toList());
+
+    return String.join("-", bytesAsHexString);
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/range/Range.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/range/Range.java new file mode 100644 index 0000000..5f14e32 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/range/Range.java @@ -0,0 +1,143 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.range;
+
+import java.util.Objects;
+
+// https://github.com/npgsql/npgsql/blob/d4132d0d546594629bcef658bcb1418b4a8624cc/src/Npgsql/NpgsqlTypes/NpgsqlRange.cs
+public class Range<TElementType> {
+
+  private int flags;
+
+  private TElementType lowerBound;
+
+  private TElementType upperBound;
+
+  public Range(TElementType lowerBound, TElementType upperBound) {
+    this(lowerBound, true, false, upperBound, true, false);
+  }
+
+  public Range(TElementType lowerBound, boolean lowerBoundIsInclusive, TElementType upperBound,
+      boolean upperBoundIsInclusive) {
+    this(lowerBound, lowerBoundIsInclusive, false, upperBound, upperBoundIsInclusive, false);
+  }
+
+  public Range(TElementType lowerBound, boolean lowerBoundIsInclusive, boolean lowerBoundInfinite,
+      TElementType upperBound, boolean upperBoundIsInclusive, boolean upperBoundInfinite) {
+    this(lowerBound, upperBound,
+        evaluateBoundaryFlags(lowerBoundIsInclusive, upperBoundIsInclusive, lowerBoundInfinite,
+            upperBoundInfinite));
+  }
+
+
+  private Range(TElementType lowerBound, TElementType upperBound, int flags) {
+    this.lowerBound = (flags & RangeFlags.LowerBoundInfinite) != 0 ? null : lowerBound;
+    this.upperBound = (flags & RangeFlags.UpperBoundInfinite) != 0 ? null : upperBound;
+    this.flags = flags;
+
+    if (lowerBound == null) {
+      this.flags |= RangeFlags.LowerBoundInfinite;
+    }
+
+    if (upperBound == null) {
+      this.flags |= RangeFlags.UpperBoundInfinite;
+    }
+
+    if (isEmptyRange(lowerBound, upperBound, flags)) {
+      this.lowerBound = null;
+      this.upperBound = null;
+      this.flags = RangeFlags.Empty;
+    }
+  }
+
+  private boolean isEmptyRange(TElementType lowerBound, TElementType upperBound, int flags) {
+    // ---------------------------------------------------------------------------------
+    // We only want to check for those conditions that are unambiguously erroneous:
+    //   1. The bounds must not be default values (including null).
+    //   2. The bounds must be definite (non-infinite).
+    //   3. The bounds must be inclusive.
+    //   4. The bounds must be considered equal.
+    //
+    // See:
+    //  - https://github.com/npgsql/npgsql/pull/1939
+    //  - https://github.com/npgsql/npgsql/issues/1943
+    // ---------------------------------------------------------------------------------
+
+    if ((flags & RangeFlags.Empty) == RangeFlags.Empty) {
+      return true;
+    }
+
+    if ((flags & RangeFlags.Infinite) == RangeFlags.Infinite) {
+      return false;
+    }
+
+    if ((flags & RangeFlags.Inclusive) == RangeFlags.Inclusive) {
+      return false;
+    }
+
+    return Objects.equals(lowerBound, upperBound);
+  }
+
+
+  private static int evaluateBoundaryFlags(boolean lowerBoundIsInclusive,
+      boolean upperBoundIsInclusive, boolean lowerBoundInfinite, boolean upperBoundInfinite) {
+
+    int result = RangeFlags.None;
+
+    // This is the only place flags are calculated.
+    if (lowerBoundIsInclusive) {
+      result |= RangeFlags.LowerBoundInclusive;
+    }
+    if (upperBoundIsInclusive) {
+      result |= RangeFlags.UpperBoundInclusive;
+    }
+    if (lowerBoundInfinite) {
+      result |= RangeFlags.LowerBoundInfinite;
+    }
+    if (upperBoundInfinite) {
+      result |= RangeFlags.UpperBoundInfinite;
+    }
+
+    // PostgreSQL automatically converts inclusive-infinities.
+    // See: https://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-INFINITE
+    if ((result & RangeFlags.LowerInclusiveInfinite) == RangeFlags.LowerInclusiveInfinite) {
+      result &= ~RangeFlags.LowerBoundInclusive;
+    }
+
+    if ((result & RangeFlags.UpperInclusiveInfinite) == RangeFlags.UpperInclusiveInfinite) {
+      result &= ~RangeFlags.UpperBoundInclusive;
+    }
+
+    return result;
+  }
+
+  public int getFlags() {
+    return flags;
+  }
+
+  public boolean isEmpty() {
+    return (flags & RangeFlags.Empty) != 0;
+  }
+
+  public boolean isLowerBoundInfinite() {
+    return (flags & RangeFlags.LowerBoundInfinite) != 0;
+  }
+
+  public boolean isUpperBoundInfinite() {
+    return (flags & RangeFlags.UpperBoundInfinite) != 0;
+  }
+
+  public TElementType getLowerBound() {
+    return lowerBound;
+  }
+
+  public void setLowerBound(TElementType lowerBound) {
+    this.lowerBound = lowerBound;
+  }
+
+  public TElementType getUpperBound() {
+    return upperBound;
+  }
+
+  public void setUpperBound(TElementType upperBound) {
+    this.upperBound = upperBound;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/range/RangeFlags.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/range/RangeFlags.java new file mode 100644 index 0000000..b77d23a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/pgsql/model/range/RangeFlags.java @@ -0,0 +1,25 @@ +package srt.cloud.framework.dbswitch.pgwriter.pgsql.model.range;
+
+// https://github.com/npgsql/npgsql/blob/d4132d0d546594629bcef658bcb1418b4a8624cc/src/Npgsql/NpgsqlTypes/NpgsqlRange.cs
+public class RangeFlags {
+
+  public static final int None = 0;
+
+  public static final int Empty = 1;
+
+  public static final int LowerBoundInclusive = 2;
+
+  public static final int UpperBoundInclusive = 4;
+
+  public static final int LowerBoundInfinite = 8;
+
+  public static final int UpperBoundInfinite = 16;
+
+  public static final int Inclusive = LowerBoundInclusive | UpperBoundInclusive;
+
+  public static final int Infinite = LowerBoundInfinite | UpperBoundInfinite;
+
+  public static final int LowerInclusiveInfinite = LowerBoundInclusive | LowerBoundInfinite;
+
+  public static final int UpperInclusiveInfinite = UpperBoundInclusive | UpperBoundInfinite;
+}
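
The `RangeFlags` bitmask mirrors Npgsql's, and the flags byte is the first payload byte a `RangeValueHandler` writes, followed by any finite bounds. A short illustration using the constructors above (assumed to run in the same `range` package; not part of the commit):

```java
public class RangeFlagsDemo {
  public static void main(String[] args) {
    // [1,10) — only the lower bound is inclusive: flags = LowerBoundInclusive = 2
    Range<Integer> halfOpen = new Range<>(1, true, 10, false);
    System.out.println(halfOpen.getFlags()); // 2

    // (-infinity, 5) — infinite lower bound: flags include LowerBoundInfinite = 8
    Range<Integer> openBelow = new Range<>(null, false, true, 5, false, false);
    System.out.println(openBelow.isLowerBoundInfinite()); // true
  }
}
```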
localDateTime) { + + if (localDateTime == null) { + throw new IllegalArgumentException("localDateTime"); + } + // Extract the Time of the Day in Nanoseconds: + long timeInNanoseconds = localDateTime + .toLocalTime() + .toNanoOfDay(); + + // Convert the Nanoseconds to Microseconds: + long timeInMicroseconds = timeInNanoseconds / 1000; + + // Now Calculate the Postgres Timestamp: + if (localDateTime.isBefore(PostgresEpoch)) { + long dateInMicroseconds = + (localDateTime.toLocalDate().toEpochDay() - DaysBetweenJavaAndPostgresEpochs) + * 86400000000L; + + return dateInMicroseconds + timeInMicroseconds; + } else { + long dateInMicroseconds = + (DaysBetweenJavaAndPostgresEpochs - localDateTime.toLocalDate().toEpochDay()) + * 86400000000L; + + return -(dateInMicroseconds - timeInMicroseconds); + } + } + + public static int toPgDays(LocalDate date) { + // Adjust TimeZone Offset: + LocalDateTime dateTime = date.atStartOfDay(); + // pg time 0 is 2000-01-01 00:00:00: + long secs = toPgSecs(getSecondsSinceJavaEpoch(dateTime)); + // Needs Days: + return (int) TimeUnit.SECONDS.toDays(secs); + } + + public static Long toPgSecs(LocalDateTime dateTime) { + // pg time 0 is 2000-01-01 00:00:00: + long secs = toPgSecs(getSecondsSinceJavaEpoch(dateTime)); + // Needs Microseconds: + return TimeUnit.SECONDS.toMicros(secs); + } + + private static long getSecondsSinceJavaEpoch(LocalDateTime localDateTime) { + // Adjust TimeZone Offset: + OffsetDateTime zdt = localDateTime.atOffset(ZoneOffset.UTC); + // Get the Epoch Milliseconds: + long milliseconds = zdt.toInstant().toEpochMilli(); + // Turn into Seconds: + return TimeUnit.MILLISECONDS.toSeconds(milliseconds); + } + + /** + * Converts the given java seconds to postgresql seconds. The conversion is valid for any year 100 + * BC onwards. + *

+   * from /org/postgresql/jdbc2/TimestampUtils.java
+   *
+   * @param seconds Java seconds.
+   * @return Postgresql seconds.
+   */
+  @SuppressWarnings("checkstyle:magicnumber")
+  private static long toPgSecs(final long seconds) {
+    long secs = seconds;
+    // java epoch to postgres epoch
+    secs -= 946684800L;
+
+    // Julian/Gregorian calendar cutoff point
+    if (secs < -13165977600L) { // October 15, 1582 -> October 4, 1582
+      secs -= 86400 * 10;
+      if (secs < -15773356800L) { // 1500-03-01 -> 1500-02-28
+        int years = (int) ((secs + 15773356800L) / -3155823050L);
+        years++;
+        years -= years / 4;
+        secs += years * 86400;
+      }
+    }
+
+    return secs;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/row/SimpleRow.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/row/SimpleRow.java
new file mode 100644
index 0000000..5deb5dd
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/row/SimpleRow.java
@@ -0,0 +1,545 @@
+package srt.cloud.framework.dbswitch.pgwriter.row;
+
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.PgBinaryWriter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.DataType;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.constants.ObjectIdentifier;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.CollectionValueHandler;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.IValueHandler;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.RangeValueHandler;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.ValueHandlerProvider;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Box;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Circle;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Line;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.LineSegment;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Path;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Point;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.geometric.Polygon;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.network.MacAddress;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.model.range.Range;
+
+import java.net.Inet4Address;
+import java.net.Inet6Address;
+import java.time.LocalDate;
+import java.time.LocalDateTime;
+import java.time.LocalTime;
+import java.time.ZonedDateTime;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+import java.util.UUID;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SimpleRow {
+
+  private final ValueHandlerProvider provider;
+  private final Map<String, Integer> lookup;
+  private final Map<Integer, Consumer<PgBinaryWriter>> actions;
+  private final Function<String, String> nullCharacterHandler;
+
+  public SimpleRow(ValueHandlerProvider provider, Map<String, Integer> lookup,
+      Function<String, String> nullCharacterHandler) {
+    this.provider = provider;
+    this.lookup = lookup;
+    this.actions = new HashMap<>();
+    this.nullCharacterHandler = nullCharacterHandler;
+  }
+
+  public <TTargetType> void setValue(String columnName, DataType type, TTargetType value) {
+    final int ordinal = Objects.requireNonNull(lookup.get(columnName), columnName + " not found");
+
+    setValue(ordinal, type, value);
+  }
+
+  public <TTargetType> void setValue(int ordinal, DataType type, TTargetType value) {
+    final IValueHandler<TTargetType> handler = provider.resolve(type);
+
+    actions.put(ordinal, (writer) -> writer.write(handler, value));
+  }
+
+  public <TTargetType> void setValue(String columnName, IValueHandler<TTargetType> handler,
+      TTargetType value) {
+    final int ordinal = Objects.requireNonNull(lookup.get(columnName), columnName + " not found");
+
+    setValue(ordinal, handler, value);
+  }
+
+  public <TTargetType> void setValue(int ordinal, IValueHandler<TTargetType> handler,
+      TTargetType value) {
+    actions.put(ordinal, (writer) -> writer.write(handler, value));
+  }
+
+  public <TElementType, TCollectionType extends Collection<TElementType>> void setCollection(
+      String columnName, DataType type, TCollectionType value) {
+    final int ordinal = Objects.requireNonNull(lookup.get(columnName), columnName + " not found");
+
+    setCollection(ordinal, type, value);
+  }
+
+  public <TElementType, TCollectionType extends Collection<TElementType>> void setCollection(
+      int ordinal, DataType type, TCollectionType value) {
+    final CollectionValueHandler<TElementType, TCollectionType> handler =
+        new CollectionValueHandler<>(ObjectIdentifier.mapFrom(type), provider.resolve(type));
+
+    actions.put(ordinal, (writer) -> writer.write(handler, value));
+  }
+
+  public void writeRow(PgBinaryWriter writer) {
+    for (int ordinalIdx = 0; ordinalIdx < lookup.keySet().size(); ordinalIdx++) {
+
+      // If this Ordinal wasn't set, we assume a NULL:
+      final Consumer<PgBinaryWriter> action = actions.get(ordinalIdx);
+      if (action == null) {
+        writer.writeNull();
+      } else {
+        action.accept(writer);
+      }
+    }
+  }
+
+  // region Numeric
+
+  public void setBoolean(String columnName, Boolean value) {
+    setValue(columnName, DataType.Boolean, value);
+  }
+
+  public void setBoolean(int ordinal, Boolean value) {
+    setValue(ordinal, DataType.Boolean, value);
+  }
+
+  public void setByte(String columnName, Byte value) {
+    setValue(columnName, DataType.Char, value);
+  }
+
+  public void setByte(int ordinal, Byte value) {
+    setValue(ordinal, DataType.Char, value);
+  }
+
+  public void setShort(String columnName, Short value) {
+    setValue(columnName, DataType.Int2, value);
+  }
+
+  public void setShort(int ordinal, Short value) {
+    setValue(ordinal, DataType.Int2, value);
+  }
+
+  public void setInteger(String columnName, Integer value) {
+    setValue(columnName, DataType.Int4, value);
+  }
+
+  public void setInteger(int ordinal, Integer value) {
+    setValue(ordinal, DataType.Int4, value);
+  }
+
+  public void setNumeric(String columnName, Number value) {
+    setValue(columnName, DataType.Numeric, value);
+  }
+
+  public void setNumeric(int ordinal, Number value) {
+    setValue(ordinal, DataType.Numeric, value);
+  }
+
+  public void setLong(String columnName, Long value) {
+    setValue(columnName, DataType.Int8, value);
+  }
+
+  public void setLong(int ordinal, Long value) {
+    setValue(ordinal, DataType.Int8, value);
+  }
+
+  public void setFloat(String columnName, Float value) {
+    setValue(columnName, DataType.SinglePrecision, value);
+  }
+
+  public void setFloat(int ordinal, Float value) {
+    setValue(ordinal, DataType.SinglePrecision, value);
+  }
+
+  public void setDouble(String columnName, Double value) {
+    setValue(columnName, DataType.DoublePrecision, value);
+  }
+
+  public void setDouble(int ordinal, Double value) {
+    setValue(ordinal, DataType.DoublePrecision, value);
+  }
+
+  // endregion
+
+  // region Temporal
+
+  public void setDate(String columnName, LocalDate value) {
+    setValue(columnName, DataType.Date, value);
+  }
+
+  public void setDate(int ordinal, LocalDate value) {
+    setValue(ordinal, DataType.Date, value);
+  }
+
+  public void setTime(String columnName, LocalTime value) {
+    setValue(columnName, DataType.Time, value);
+  }
+
+  public void setTime(int ordinal, LocalTime value) {
+    setValue(ordinal, DataType.Time, value);
+  }
+
+  public void setTimeStamp(String columnName, LocalDateTime value) {
+    setValue(columnName, DataType.Timestamp, value);
+  }
+
+  public void setTimeStamp(int ordinal, LocalDateTime value) {
+    setValue(ordinal, DataType.Timestamp, value);
+  }
+
+  public void setTimeStampTz(String columnName, ZonedDateTime value) {
+    setValue(columnName, DataType.TimestampTz, value);
+  }
+
+  public void setTimeStampTz(int ordinal, ZonedDateTime value) {
+    setValue(ordinal, DataType.TimestampTz, value);
+  }
+
+  // endregion
+
+  // region Network
+
+  public void setInet6Addr(String columnName, Inet6Address value) {
+    setValue(columnName, DataType.Inet6, value);
+  }
+
+  public void setInet6Addr(int ordinal, Inet6Address value) {
+    setValue(ordinal, DataType.Inet6, value);
+  }
+
+  public void setInet4Addr(String columnName, Inet4Address value) {
+    setValue(columnName, DataType.Inet4, value);
+  }
+
+  public void setInet4Addr(int ordinal, Inet4Address value) {
+    setValue(ordinal, DataType.Inet4, value);
+  }
+
+  public void setMacAddress(String columnName, MacAddress value) {
+    setValue(columnName, DataType.MacAddress, value);
+  }
+
+  public void setMacAddress(int ordinal, MacAddress value) {
+    setValue(ordinal, DataType.MacAddress, value);
+  }
+
+  // endregion
+
+  // region Text
+
+  public void setText(String columnName, String value) {
+    setValue(columnName, DataType.Text, nullCharacterHandler.apply(value));
+  }
+
+  public void setText(int ordinal, String value) {
+    setValue(ordinal, DataType.Text, nullCharacterHandler.apply(value));
+  }
+
+  public void setVarChar(String columnName, String value) {
+    setValue(columnName, DataType.Text, nullCharacterHandler.apply(value));
+  }
+
+  public void setVarChar(int ordinal, String value) {
+    setValue(ordinal, DataType.Text, nullCharacterHandler.apply(value));
+  }
+
+  // endregion
+
+  // region UUID
+
+  public void setUUID(String columnName, UUID value) {
+    setValue(columnName, DataType.Uuid, value);
+  }
+
+  public void setUUID(int ordinal, UUID value) {
+    setValue(ordinal, DataType.Uuid, value);
+  }
+
+  // endregion
+
+  // region JSON
+
+  public void setJsonb(String columnName, String value) {
+    setValue(columnName, DataType.Jsonb, nullCharacterHandler.apply(value));
+  }
+
+  public void setJsonb(int ordinal, String value) {
+    setValue(ordinal, DataType.Jsonb, nullCharacterHandler.apply(value));
+  }
+
+  // endregion
+
+  // region hstore
+
+  public void setHstore(String columnName, Map<String, String> value) {
+    setValue(columnName, DataType.Hstore, value);
+  }
+
+  public void setHstore(int ordinal, Map<String, String> value) {
+    setValue(ordinal, DataType.Hstore, value);
+  }
+
+  // endregion
+
+  // region Geo
+
+  public void setPoint(String columnName, Point value) {
+    setValue(columnName, DataType.Point, value);
+  }
+
+  public void setPoint(int ordinal, Point value) {
+    setValue(ordinal, DataType.Point, value);
+  }
+
+  public void setBox(String columnName, Box value) {
+    setValue(columnName, DataType.Box, value);
+  }
+
+  public void setBox(int ordinal, Box value) {
+    setValue(ordinal, DataType.Box, value);
+  }
+
+  public void setPath(String columnName, Path value) {
+    setValue(columnName, DataType.Path, value);
+  }
+
+  public void setPath(int ordinal, Path value) {
+    setValue(ordinal, DataType.Path, value);
+  }
+
+  public void setPolygon(String columnName, Polygon value) {
+    setValue(columnName, DataType.Polygon, value);
+  }
+
+  public void setPolygon(int ordinal, Polygon value) {
+    setValue(ordinal, DataType.Polygon, value);
+  }
+
+  public void setLine(String columnName, Line value) {
+    setValue(columnName, DataType.Line, value);
+  }
+
+  public void setLine(int ordinal, Line value) {
+    setValue(ordinal, DataType.Line, value);
+  }
+
+  public void setLineSegment(String columnName, LineSegment value) {
+    setValue(columnName, DataType.LineSegment, value);
+  }
+
+  public void setLineSegment(int ordinal, LineSegment value) {
+    setValue(ordinal, DataType.LineSegment, value);
+  }
+
+  public void setCircle(String columnName, Circle value) {
+    setValue(columnName, DataType.Circle, value);
+  }
+
+  public void setCircle(int ordinal, Circle value) {
+    setValue(ordinal, DataType.Circle, value);
+  }
+
+  // endregion
+
+  // region Arrays
+
+  public void setByteArray(String columnName, byte[] value) {
+    setValue(columnName, DataType.Bytea, value);
+  }
+
+  public void setByteArray(int ordinal, byte[] value) {
+    setValue(ordinal, DataType.Bytea, value);
+  }
+
+  public void setBooleanArray(String columnName, Collection<Boolean> value) {
+    setCollection(columnName, DataType.Boolean, value);
+  }
+
+  public void setBooleanArray(int ordinal, Collection<Boolean> value) {
+    setCollection(ordinal, DataType.Boolean, value);
+  }
+
+  public void setShortArray(String columnName, Collection<Short> value) {
+    setCollection(columnName, DataType.Int2, value);
+  }
+
+  public void setShortArray(int ordinal, Collection<Short> value) {
+    setCollection(ordinal, DataType.Int2, value);
+  }
+
+  public void setIntegerArray(String columnName, Collection<Integer> value) {
+    setCollection(columnName, DataType.Int4, value);
+  }
+
+  public void setIntegerArray(int ordinal, Collection<Integer> value) {
+    setCollection(ordinal, DataType.Int4, value);
+  }
+
+  public void setLongArray(String columnName, Collection<Long> value) {
+    setCollection(columnName, DataType.Int8, value);
+  }
+
+  public void setLongArray(int ordinal, Collection<Long> value) {
+    setCollection(ordinal, DataType.Int8, value);
+  }
+
+  public void setTextArray(String columnName, Collection<String> value) {
+    Collection<String> values = value.stream()
+        .map(x -> nullCharacterHandler.apply(x))
+        .collect(Collectors.toList());
+
+    setCollection(columnName, DataType.Text, values);
+  }
+
+  public void setTextArray(int ordinal, Collection<String> value) {
+    Collection<String> values = value.stream()
+        .map(x -> nullCharacterHandler.apply(x))
+        .collect(Collectors.toList());
+
+    setCollection(ordinal, DataType.Text, values);
+  }
+
+  public void setVarCharArray(String columnName, Collection<String> value) {
+    Collection<String> values = value.stream()
+        .map(x -> nullCharacterHandler.apply(x))
+        .collect(Collectors.toList());
+
+    setCollection(columnName, DataType.VarChar, values);
+  }
+
+  public void setVarCharArray(int ordinal, Collection<String> value) {
+    Collection<String> values = value.stream()
+        .map(x -> nullCharacterHandler.apply(x))
+        .collect(Collectors.toList());
+
+    setCollection(ordinal, DataType.VarChar, values);
+  }
+
+  public void setFloatArray(String columnName, Collection<Float> value) {
+    setCollection(columnName, DataType.SinglePrecision, value);
+  }
+
+  public void setFloatArray(int ordinal, Collection<Float> value) {
+    setCollection(ordinal, DataType.SinglePrecision, value);
+  }
+
+  public void setDoubleArray(String columnName, Collection<Double> value) {
+    setCollection(columnName, DataType.DoublePrecision, value);
+  }
+
+  public void setDoubleArray(int ordinal, Collection<Double> value) {
+    setCollection(ordinal, DataType.DoublePrecision, value);
+  }
+
+  public void setNumericArray(String columnName, Collection<Number> value) {
+    setCollection(columnName, DataType.Numeric, value);
+  }
+
+  public void setNumericArray(int ordinal, Collection<Number> value) {
+    setCollection(ordinal, DataType.Numeric, value);
+  }
+
+  public void setUUIDArray(String columnName, Collection<UUID> value) {
+    setCollection(columnName, DataType.Uuid, value);
+  }
+
+  public void setUUIDArray(int ordinal, Collection<UUID> value) {
+    setCollection(ordinal, DataType.Uuid, value);
+  }
+
+  public void setInet4Array(String columnName, Collection<Inet4Address> value) {
+    setCollection(columnName, DataType.Inet4, value);
+  }
+
+  public void setInet4Array(int ordinal, Collection<Inet4Address> value) {
+    setCollection(ordinal, DataType.Inet4, value);
+  }
+
+  public void setInet6Array(String columnName, Collection<Inet6Address> value) {
+    setCollection(columnName, DataType.Inet6, value);
+  }
+
+  public void setInet6Array(int ordinal, Collection<Inet6Address> value) {
+    setCollection(ordinal, DataType.Inet6, value);
+  }
+
+  // endregion
+
+  // region Ranges
+
+  public <TElementType> void setRange(String columnName, DataType dataType,
+      Range<TElementType> value) {
+
+    final IValueHandler<TElementType> valueHandler = provider.resolve(dataType);
+
+    setValue(columnName, new RangeValueHandler<>(valueHandler), value);
+  }
+
+  public <TElementType> void setRange(int ordinal, DataType dataType, Range<TElementType> value) {
+
+    final IValueHandler<TElementType> valueHandler = provider.resolve(dataType);
+
+    setValue(ordinal, new RangeValueHandler<>(valueHandler), value);
+  }
+
+  public void setTsRange(String columnName, Range<LocalDateTime> value) {
+    setValue(columnName, DataType.TsRange, value);
+  }
+
+  public void setTsRange(int ordinal, Range<LocalDateTime> value) {
+    setValue(ordinal, DataType.TsRange, value);
+  }
+
+  public void setTsTzRange(String columnName, Range<ZonedDateTime> value) {
+    setValue(columnName, DataType.TsTzRange, value);
+  }
+
+  public void setTsTzRange(int ordinal, Range<ZonedDateTime> value) {
+    setValue(ordinal, DataType.TsTzRange, value);
+  }
+
+  public void setInt4Range(String columnName, Range<Integer> value) {
+    setValue(columnName, DataType.Int4Range, value);
+  }
+
+  public void setInt4Range(int ordinal, Range<Integer> value) {
+    setValue(ordinal, DataType.Int4Range, value);
+  }
+
+  public void setInt8Range(String columnName, Range<Long> value) {
+    setValue(columnName, DataType.Int8Range, value);
+  }
+
+  public void setInt8Range(int ordinal, Range<Long> value) {
+    setValue(ordinal, DataType.Int8Range, value);
+  }
+
+  public void setNumRange(String columnName, Range<Number> value) {
+    setValue(columnName, DataType.NumRange, value);
+  }
+
+  public void setNumRange(int ordinal, Range<Number> value) {
+    setValue(ordinal, DataType.NumRange, value);
+  }
+
+  public void setDateRange(String columnName, Range<LocalDate> value) {
+    setValue(columnName, DataType.DateRange, value);
+  }
+
+  public void setDateRange(int ordinal, Range<LocalDate> value) {
+    setValue(ordinal, DataType.DateRange, value);
+  }
+
+  // endregion
+}
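One design point worth noting in SimpleRow: writeRow() walks every ordinal in the lookup map and emits a NULL for any column that was never set, so callers only touch the columns they have values for. A minimal illustration follows; the column names are made up, and "writer" is assumed to be an open PgBinaryWriter on which startRow() has already been called.

```java
// Sketch only: "id", "name", "note" are hypothetical columns, and "writer"
// is an already-positioned PgBinaryWriter; imports match SimpleRow's own.
Map<String, Integer> lookup = new HashMap<>();
lookup.put("id", 0);
lookup.put("name", 1);
lookup.put("note", 2);

SimpleRow row = new SimpleRow(new ValueHandlerProvider(), lookup, Function.identity());
row.setLong("id", 42L);
row.setText("name", "alice");

// "note" (ordinal 2) was never set, so writeRow() emits a NULL for it.
row.writeRow(writer);
```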
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/row/SimpleRowWriter.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/row/SimpleRowWriter.java
new file mode 100644
index 0000000..be24199
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/row/SimpleRowWriter.java
@@ -0,0 +1,152 @@
+package srt.cloud.framework.dbswitch.pgwriter.row;
+
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.BinaryWriteFailedException;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.PgBinaryWriter;
+import srt.cloud.framework.dbswitch.pgwriter.pgsql.handlers.ValueHandlerProvider;
+import srt.cloud.framework.dbswitch.pgwriter.util.PostgreSqlUtils;
+import srt.cloud.framework.dbswitch.pgwriter.util.StringUtils;
+import org.postgresql.PGConnection;
+import org.postgresql.copy.PGCopyOutputStream;
+
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SimpleRowWriter implements AutoCloseable {
+
+  public static class Table {
+
+    private final String schema;
+    private final String table;
+    private final String[] columns;
+
+    public Table(String table, String... columns) {
+      this(null, table, columns);
+    }
+
+    public Table(String schema, String table, String... columns) {
+      this.schema = schema;
+      this.table = table;
+      this.columns = columns;
+    }
+
+    public String getSchema() {
+      return schema;
+    }
+
+    public String getTable() {
+      return table;
+    }
+
+    public String[] getColumns() {
+      return columns;
+    }
+
+    public String getFullyQualifiedTableName(boolean usePostgresQuoting) {
+      return PostgreSqlUtils.getFullyQualifiedTableName(schema, table, usePostgresQuoting);
+    }
+
+    public String getCopyCommand(boolean usePostgresQuoting) {
+
+      String commaSeparatedColumns = Arrays.stream(columns)
+          .map(x -> usePostgresQuoting ? PostgreSqlUtils.quoteIdentifier(x) : x)
+          .collect(Collectors.joining(", "));
+
+      return String.format("COPY %1$s(%2$s) FROM STDIN BINARY",
+          getFullyQualifiedTableName(usePostgresQuoting),
+          commaSeparatedColumns);
+    }
+  }
+
+  private final Table table;
+  private final PgBinaryWriter writer;
+  private final ValueHandlerProvider provider;
+  private final Map<String, Integer> lookup;
+
+  private Function<String, String> nullCharacterHandler;
+  private boolean isOpened;
+  private boolean isClosed;
+
+  public SimpleRowWriter(final Table table, final PGConnection connection) throws SQLException {
+    this(table, connection, false);
+  }
+
+  public SimpleRowWriter(final Table table, final PGConnection connection,
+      final boolean usePostgresQuoting) throws SQLException {
+    this.table = table;
+    this.isClosed = false;
+    this.isOpened = false;
+    this.nullCharacterHandler = (val) -> val;
+
+    this.provider = new ValueHandlerProvider();
+    this.lookup = new HashMap<>();
+
+    for (int ordinal = 0; ordinal < table.columns.length; ordinal++) {
+      lookup.put(table.columns[ordinal], ordinal);
+    }
+
+    this.writer = new PgBinaryWriter(
+        new PGCopyOutputStream(connection, table.getCopyCommand(usePostgresQuoting), 1));
+
+    isClosed = false;
+    isOpened = true;
+  }
+
+  public synchronized void startRow(Consumer<SimpleRow> consumer) {
+
+    // We try to write a Row, but the underlying Stream to PostgreSQL has not
+    // been opened yet. We should not proceed and throw an Exception:
+    if (!isOpened) {
+      throw new BinaryWriteFailedException("The SimpleRowWriter has not been opened");
+    }
+
+    // We try to write a Row, but the underlying Stream to PostgreSQL has already
+    // been closed. We should not proceed and throw an Exception:
+    if (isClosed) {
+      throw new BinaryWriteFailedException("The PGCopyOutputStream has already been closed");
+    }
+
+    try {
+
+      writer.startRow(table.columns.length);
+
+      SimpleRow row = new SimpleRow(provider, lookup, nullCharacterHandler);
+
+      consumer.accept(row);
+
+      row.writeRow(writer);
+
+    } catch (Exception e) {
+
+      try {
+        close();
+      } catch (Exception ex) {
+        // There is nothing more we can do ...
+      }
+
+      throw e;
+    }
+  }
+
+  @Override
+  public void close() {
+
+    // This stream shouldn't be reused, so let's store a flag here:
+    isOpened = false;
+    isClosed = true;
+
+    writer.close();
+  }
+
+  public void enableNullCharacterHandler() {
+    this.nullCharacterHandler = (val) -> StringUtils.removeNullCharacter(val);
+  }
+
+  public void setNullCharacterHandler(Function<String, String> nullCharacterHandler) {
+    this.nullCharacterHandler = nullCharacterHandler;
+  }
+}
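Taken together, Table, startRow() and SimpleRow form a small binary COPY API. A hedged end-to-end sketch follows; the JDBC URL, credentials and the sample.person table are illustrative assumptions, not part of this commit:

```java
import org.postgresql.PGConnection;
import srt.cloud.framework.dbswitch.pgwriter.row.SimpleRowWriter;
import srt.cloud.framework.dbswitch.pgwriter.util.PostgreSqlUtils;

import java.sql.Connection;
import java.sql.DriverManager;

public class CopyDemo {
  public static void main(String[] args) throws Exception {
    // Assumptions: a reachable PostgreSQL instance and a table
    //   CREATE TABLE sample.person (first_name text, last_name text, age int4);
    try (Connection connection = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/demo", "postgres", "postgres")) {

      PGConnection pgConnection = PostgreSqlUtils.getPGConnection(connection);

      SimpleRowWriter.Table table = new SimpleRowWriter.Table(
          "sample", "person", "first_name", "last_name", "age");

      // close() ends the COPY stream, so try-with-resources flushes the batch.
      try (SimpleRowWriter writer = new SimpleRowWriter(table, pgConnection)) {
        writer.startRow((row) -> {
          row.setText("first_name", "Philipp");
          row.setText("last_name", "Wagner");
          row.setInteger("age", 33);
        });
      }
    }
  }
}
```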
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/BigDecimalUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/BigDecimalUtils.java
new file mode 100644
index 0000000..9276d86
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/BigDecimalUtils.java
@@ -0,0 +1,43 @@
+package srt.cloud.framework.dbswitch.pgwriter.util;
+
+import java.math.BigDecimal;
+import java.math.MathContext;
+
+public final class BigDecimalUtils {
+
+  private BigDecimalUtils() {
+  }
+
+  public static BigDecimal toBigDecimal(Integer intValue) {
+    return new BigDecimal(intValue.toString());
+  }
+
+  public static BigDecimal toBigDecimal(Integer intValue, MathContext mathContext) {
+    return new BigDecimal(intValue.toString(), mathContext);
+  }
+
+  public static BigDecimal toBigDecimal(Long longValue) {
+    return new BigDecimal(longValue.toString());
+  }
+
+  public static BigDecimal toBigDecimal(Long longValue, MathContext mathContext) {
+    return new BigDecimal(longValue.toString(), mathContext);
+  }
+
+  public static BigDecimal toBigDecimal(Float floatValue) {
+    return new BigDecimal(floatValue.toString());
+  }
+
+  public static BigDecimal toBigDecimal(Float floatValue, MathContext mathContext) {
+    return new BigDecimal(floatValue.toString(), mathContext);
+  }
+
+  public static BigDecimal toBigDecimal(Double doubleValue) {
+    return new BigDecimal(doubleValue.toString());
+  }
+
+  public static BigDecimal toBigDecimal(Double doubleValue, MathContext mathContext) {
+    return new BigDecimal(doubleValue.toString(), mathContext);
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/PostgreSqlUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/PostgreSqlUtils.java
new file mode 100644
index 0000000..f71af2b
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/PostgreSqlUtils.java
@@ -0,0 +1,76 @@
+package srt.cloud.framework.dbswitch.pgwriter.util;
+
+import srt.cloud.framework.dbswitch.pgwriter.exceptions.PgConnectionException;
+import org.postgresql.PGConnection;
+
+import java.sql.Connection;
+import java.util.Optional;
+
+public final class PostgreSqlUtils {
+
+  private PostgreSqlUtils() {
+  }
+
+  public static PGConnection getPGConnection(final Connection connection) {
+    return tryGetPGConnection(connection)
+        .orElseThrow(() -> new PgConnectionException("Could not obtain a PGConnection"));
+  }
+
+  public static Optional<PGConnection> tryGetPGConnection(final Connection connection) {
+    final Optional<PGConnection> fromCast = tryCastConnection(connection);
+    if (fromCast.isPresent()) {
+      return fromCast;
+    }
+    return tryUnwrapConnection(connection);
+  }
+
+  private static Optional<PGConnection> tryCastConnection(final Connection connection) {
+    if (connection instanceof PGConnection) {
+      return Optional.of((PGConnection) connection);
+    }
+    return Optional.empty();
+  }
+
+  private static Optional<PGConnection> tryUnwrapConnection(final Connection connection) {
+    try {
+      if (connection.isWrapperFor(PGConnection.class)) {
+        return Optional.of(connection.unwrap(PGConnection.class));
+      }
+    } catch (Exception e) {
+      // do nothing
+    }
+    return Optional.empty();
+  }
+
+  public static final char QuoteChar = '"';
+
+  public static String quoteIdentifier(String identifier) {
+    return requiresQuoting(identifier) ? (QuoteChar + identifier + QuoteChar) : identifier;
+  }
+
+  public static String getFullyQualifiedTableName(String schemaName, String tableName,
+      boolean usePostgresQuoting) {
+    if (usePostgresQuoting) {
+      return StringUtils.isNullOrWhiteSpace(schemaName) ? quoteIdentifier(tableName)
+          : String.format("%s.%s", quoteIdentifier(schemaName), quoteIdentifier(tableName));
+    }
+
+    if (StringUtils.isNullOrWhiteSpace(schemaName)) {
+      return tableName;
+    }
+
+    return String.format("%1$s.%2$s", schemaName, tableName);
+  }
+
+  private static boolean requiresQuoting(String identifier) {
+
+    char first = identifier.charAt(0);
+    char last = identifier.charAt(identifier.length() - 1);
+
+    if (first == QuoteChar && last == QuoteChar) {
+      return false;
+    }
+
+    return true;
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/StringUtils.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/StringUtils.java
new file mode 100644
index 0000000..061f6eb
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/pgwriter/util/StringUtils.java
@@ -0,0 +1,28 @@
+package srt.cloud.framework.dbswitch.pgwriter.util;
+
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+
+public class StringUtils {
+
+  private static Charset utf8Charset = StandardCharsets.UTF_8;
+
+  private StringUtils() {
+  }
+
+  public static boolean isNullOrWhiteSpace(String input) {
+    return input == null || input.trim().length() == 0;
+  }
+
+  public static byte[] getUtf8Bytes(String value) {
+    return value.getBytes(utf8Charset);
+  }
+
+  public static String removeNullCharacter(String data) {
+    if (data == null) {
+      return data;
+    }
+
+    return data.replaceAll("\u0000", "");
+  }
+}
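The quoting helpers above are deterministic, so their behavior can be read straight off the code; a few sample calls (results shown in comments):

```java
// Results follow directly from requiresQuoting()/quoteIdentifier() above.
PostgreSqlUtils.quoteIdentifier("person");       // -> "person" (double quotes added)
PostgreSqlUtils.quoteIdentifier("\"person\"");   // returned as-is (already quoted)

PostgreSqlUtils.getFullyQualifiedTableName("sample", "person", true);  // -> "sample"."person"
PostgreSqlUtils.getFullyQualifiedTableName(null, "person", false);     // -> person

StringUtils.removeNullCharacter("a\u0000b");     // -> "ab"
```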
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheMssqlSqlDialect.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheMssqlSqlDialect.java
new file mode 100644
index 0000000..3d638db
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheMssqlSqlDialect.java
@@ -0,0 +1,148 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.calcite;
+
+import org.apache.calcite.avatica.util.TimeUnitRange;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlDialect;
+import org.apache.calcite.sql.SqlFunction;
+import org.apache.calcite.sql.SqlFunctionCategory;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlUtil;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.dialect.MssqlSqlDialect;
+import org.apache.calcite.sql.fun.SqlRowOperator;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.sql.type.ReturnTypes;
+
+/**
+ * Overrides MssqlSqlDialect's unparseCall() method.
+ *
+ * @author jrl
+ */
+public class TheMssqlSqlDialect extends MssqlSqlDialect {
+
+  public static final SqlDialect DEFAULT = new TheMssqlSqlDialect(EMPTY_CONTEXT
+      .withDatabaseProduct(DatabaseProduct.MSSQL).withIdentifierQuoteString("[")
+      .withCaseSensitive(false));
+
+  private static final SqlFunction MSSQL_SUBSTRING = new SqlFunction("SUBSTRING",
+      SqlKind.OTHER_FUNCTION,
+      ReturnTypes.ARG0_NULLABLE_VARYING, null, null, SqlFunctionCategory.STRING);
+
+  /**
+   * Creates a MssqlSqlDialect.
+   */
+  public TheMssqlSqlDialect(Context context) {
+    super(context);
+  }
+
+  @Override
+  public void unparseCall(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
+    if (call.getOperator() == SqlStdOperatorTable.SUBSTRING) {
+      if (call.operandCount() != 3) {
+        throw new IllegalArgumentException("MSSQL SUBSTRING requires FROM and FOR arguments");
+      }
+      SqlUtil.unparseFunctionSyntax(MSSQL_SUBSTRING, writer, call);
+    } else {
+      switch (call.getKind()) {
+        case FLOOR:
+          if (call.operandCount() != 2) {
+            super.unparseCall(writer, call, leftPrec, rightPrec);
+            return;
+          }
+          unparseFloor(writer, call);
+          break;
+
+        default:
+          SqlOperator operator = call.getOperator();
+          if (operator instanceof SqlRowOperator) {
+            SqlUtil.unparseFunctionSyntax(new TheSqlRowOperator(), writer, call);
+          } else {
+            super.unparseCall(writer, call, leftPrec, rightPrec);
+          }
+          break;
+      }
+    }
+  }
+
+  /**
+   * Unparses datetime floor for Microsoft SQL Server. There is no TRUNC function, so simulate this
+   * using calls to CONVERT.
+   *
+   * @param writer Writer
+   * @param call Call
+   */
+  private void unparseFloor(SqlWriter writer, SqlCall call) {
+    SqlLiteral node = call.operand(1);
+    TimeUnitRange unit = (TimeUnitRange) node.getValue();
+
+    switch (unit) {
+      case YEAR:
+        unparseFloorWithUnit(writer, call, 4, "-01-01");
+        break;
+      case MONTH:
+        unparseFloorWithUnit(writer, call, 7, "-01");
+        break;
+      case WEEK:
+        writer.print(
+            "CONVERT(DATETIME, CONVERT(VARCHAR(10), " + "DATEADD(day, - (6 + DATEPART(weekday, ");
+        call.operand(0).unparse(writer, 0, 0);
+        writer.print(")) % 7, ");
+        call.operand(0).unparse(writer, 0, 0);
+        writer.print("), 126))");
+        break;
+      case DAY:
+        unparseFloorWithUnit(writer, call, 10, "");
+        break;
+      case HOUR:
+        unparseFloorWithUnit(writer, call, 13, ":00:00");
+        break;
+      case MINUTE:
+        unparseFloorWithUnit(writer, call, 16, ":00");
+        break;
+      case SECOND:
+        unparseFloorWithUnit(writer, call, 19, ":00");
+        break;
+      default:
+        throw new IllegalArgumentException("MSSQL does not support FLOOR for time unit: " + unit);
+    }
+  }
+
+  private void unparseFloorWithUnit(SqlWriter writer, SqlCall call, int charLen, String offset) {
+    writer.print("CONVERT");
+    SqlWriter.Frame frame = writer.startList("(", ")");
+    writer.print("DATETIME, CONVERT(VARCHAR(" + charLen + "), ");
+    call.operand(0).unparse(writer, 0, 0);
+    writer.print(", 126)");
+
+    if (offset.length() > 0) {
+      writer.print("+'" + offset + "'");
+    }
+    writer.endList(frame);
+  }
+
+  /**
+   * Appends a string literal to a buffer.
+   *
+   * @param buf Buffer
+   * @param charsetName Character set name, e.g. "utf16", or null
+   * @param val String value
+   */
+  @Override
+  public void quoteStringLiteral(StringBuilder buf, String charsetName, String val) {
+    buf.append(literalQuoteString);
+    buf.append(val.replace(literalEndQuoteString, literalEscapedQuote));
+    buf.append(literalEndQuoteString);
+  }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheMysqlSqlDialect.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheMysqlSqlDialect.java
new file mode 100644
index 0000000..45d74bf
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheMysqlSqlDialect.java
@@ -0,0 +1,142 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.calcite;
+
+import org.apache.calcite.avatica.util.Casing;
+import org.apache.calcite.avatica.util.TimeUnitRange;
+import org.apache.calcite.config.NullCollation;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlDialect;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlOrderBy;
+import org.apache.calcite.sql.SqlUtil;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.dialect.MysqlSqlDialect;
+import org.apache.calcite.sql.fun.SqlRowOperator;
+
+/**
+ * Overrides MysqlSqlDialect's unparseCall() method.
+ *
+ * @author jrl
+ */
+public class TheMysqlSqlDialect extends MysqlSqlDialect {
+
+  public static final SqlDialect DEFAULT = new TheMysqlSqlDialect(
+      EMPTY_CONTEXT.withDatabaseProduct(DatabaseProduct.MYSQL).withIdentifierQuoteString("`")
+          .withUnquotedCasing(Casing.UNCHANGED).withNullCollation(NullCollation.LOW));
+
+  public TheMysqlSqlDialect(Context context) {
+    super(context);
+  }
+
+  /**
+   * Unparses datetime floor for MySQL. There is no TRUNC function, so simulate this using calls to
+   * DATE_FORMAT.
+   *
+   * @param writer Writer
+   * @param call Call
+   */
+  private void unparseFloor(SqlWriter writer, SqlCall call) {
+    SqlLiteral node = call.operand(1);
+    TimeUnitRange unit = (TimeUnitRange) node.getValue();
+
+    if (unit == TimeUnitRange.WEEK) {
+      writer.print("STR_TO_DATE");
+      SqlWriter.Frame frame = writer.startList("(", ")");
+
+      writer.print("DATE_FORMAT(");
+      call.operand(0).unparse(writer, 0, 0);
+      writer.print(", '%x%v-1'), '%x%v-%w'");
+      writer.endList(frame);
+      return;
+    }
+
+    String format;
+    switch (unit) {
+      case YEAR:
+        format = "%Y-01-01";
+        break;
+      case MONTH:
+        format = "%Y-%m-01";
+        break;
+      case DAY:
+        format = "%Y-%m-%d";
+        break;
+      case HOUR:
+        format = "%Y-%m-%d %H:00:00";
+        break;
+      case MINUTE:
+        format = "%Y-%m-%d %H:%i:00";
+        break;
+      case SECOND:
+        format = "%Y-%m-%d %H:%i:%s";
+        break;
+      default:
+        throw new AssertionError("MYSQL does not support FLOOR for time unit: " + unit);
+    }
+
+    writer.print("DATE_FORMAT");
+    SqlWriter.Frame frame = writer.startList("(", ")");
+    call.operand(0).unparse(writer, 0, 0);
+    writer.sep(",", true);
+    writer.print("'" + format + "'");
+    writer.endList(frame);
+  }
+
+  @Override
+  public void unparseCall(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
+    switch (call.getKind()) {
+      case FLOOR:
+        if (call.operandCount() != 2) {
+          super.unparseCall(writer, call, leftPrec, rightPrec);
+          return;
+        }
+
+        unparseFloor(writer, call);
+        break;
+
+      default:
+        SqlOperator operator = call.getOperator();
+        if (operator instanceof SqlRowOperator) {
+          // Handle the ROW keyword problem in INSERT statements here
+          SqlUtil.unparseFunctionSyntax(new TheSqlRowOperator(), writer, call);
+        } else if (call instanceof SqlOrderBy) {
+          // Handle LIMIT/OFFSET pagination here
+          SqlOrderBy thecall = (SqlOrderBy) call;
+          TheSqlOrderBy newcall = new TheSqlOrderBy(call.getParserPosition(), thecall.query,
+              thecall.orderList,
+              thecall.offset, thecall.fetch);
+          newcall.getOperator().unparse(writer, newcall, leftPrec, rightPrec);
+          // TheSqlOrderBy.OPERATOR.unparse(writer, thecall, leftPrec, rightPrec);
+        } else {
+          // All other cases go through here
+          operator.unparse(writer, call, leftPrec, rightPrec);
+        }
+        break;
+
+    }
+  }
+
+  /**
+   * Appends a string literal to a buffer.
+   *
+   * @param buf Buffer
+   * @param charsetName Character set name, e.g. "utf16", or null
+   * @param val String value
+   */
+  @Override
+  public void quoteStringLiteral(StringBuilder buf, String charsetName, String val) {
+    buf.append(literalQuoteString);
+    buf.append(val.replace(literalEndQuoteString, literalEscapedQuote));
+    buf.append(literalEndQuoteString);
+  }
+
+}
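A hedged way to see the FLOOR rewrite in action is to parse a query with stock Calcite and render it through this dialect; the expected output shape below is inferred from unparseFloor() and not verified here:

```java
// Sketch: SqlParser.create(...).parseQuery() throws SqlParseException.
SqlNode node = SqlParser.create("SELECT FLOOR(ts TO MONTH) FROM t").parseQuery();
String sql = node.toSqlString(TheMysqlSqlDialect.DEFAULT).getSql();
// Expected shape per unparseFloor(): DATE_FORMAT(`ts`, '%Y-%m-01')
```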
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheOracleSqlDialect.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheOracleSqlDialect.java
new file mode 100644
index 0000000..7c3e614
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheOracleSqlDialect.java
@@ -0,0 +1,111 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.calcite;
+
+import org.apache.calcite.avatica.util.TimeUnitRange;
+import org.apache.calcite.rel.type.RelDataTypeSystem;
+import org.apache.calcite.rel.type.RelDataTypeSystemImpl;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlDialect;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlUtil;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.dialect.OracleSqlDialect;
+import org.apache.calcite.sql.fun.SqlFloorFunction;
+import org.apache.calcite.sql.fun.SqlLibraryOperators;
+import org.apache.calcite.sql.fun.SqlRowOperator;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.sql.type.SqlTypeName;
+
+/**
+ * Overrides OracleSqlDialect's unparseCall() method.
+ *
+ * @author jrl
+ */
+public class TheOracleSqlDialect extends OracleSqlDialect {
+
+  /**
+   * OracleDB type system.
+   */
+  private static final RelDataTypeSystem ORACLE_TYPE_SYSTEM = new RelDataTypeSystemImpl() {
+    @Override
+    public int getMaxPrecision(SqlTypeName typeName) {
+      switch (typeName) {
+        case VARCHAR:
+          // Maximum size of 4000 bytes for varchar2.
+          return 4000;
+        default:
+          return super.getMaxPrecision(typeName);
+      }
+    }
+  };
+
+  public static final SqlDialect DEFAULT = new TheOracleSqlDialect(
+      EMPTY_CONTEXT.withDatabaseProduct(DatabaseProduct.ORACLE).withIdentifierQuoteString("\"")
+          .withDataTypeSystem(ORACLE_TYPE_SYSTEM));
+
+  /**
+   * Creates an OracleSqlDialect.
+   */
+  public TheOracleSqlDialect(Context context) {
+    super(context);
+  }
+
+  @Override
+  public void unparseCall(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
+    if (call.getOperator() == SqlStdOperatorTable.SUBSTRING) {
+      SqlUtil.unparseFunctionSyntax(SqlLibraryOperators.SUBSTR, writer, call);
+    } else {
+      switch (call.getKind()) {
+        case FLOOR:
+          if (call.operandCount() != 2) {
+            super.unparseCall(writer, call, leftPrec, rightPrec);
+            return;
+          }
+
+          final SqlLiteral timeUnitNode = call.operand(1);
+          final TimeUnitRange timeUnit = timeUnitNode.getValueAs(TimeUnitRange.class);
+
+          SqlCall call2 = SqlFloorFunction.replaceTimeUnitOperand(call, timeUnit.name(),
+              timeUnitNode.getParserPosition());
+          SqlFloorFunction.unparseDatetimeFunction(writer, call2, "TRUNC", true);
+          break;
+
+        default:
+          SqlOperator operator = call.getOperator();
+          if (operator instanceof SqlRowOperator) {
+            SqlUtil.unparseFunctionSyntax(new TheSqlRowOperator(), writer, call);
+          } else {
+            super.unparseCall(writer, call, leftPrec, rightPrec);
+          }
+          break;
+
+      }
+    }
+  }
+
+  /**
+   * Appends a string literal to a buffer.
+   *
+   * @param buf Buffer
+   * @param charsetName Character set name, e.g. "utf16", or null
+   * @param val String value
+   */
+  @Override
+  public void quoteStringLiteral(StringBuilder buf, String charsetName, String val) {
+    buf.append(literalQuoteString);
+    buf.append(val.replace(literalEndQuoteString, literalEscapedQuote));
+    buf.append(literalEndQuoteString);
+  }
+
+}
+
+// End OracleSqlDialect.java
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/ThePostgresqlSqlDialect.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/ThePostgresqlSqlDialect.java
new file mode 100644
index 0000000..d02558c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/ThePostgresqlSqlDialect.java
@@ -0,0 +1,109 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.calcite;
+
+import org.apache.calcite.avatica.util.Casing;
+import org.apache.calcite.avatica.util.TimeUnitRange;
+import org.apache.calcite.rel.type.RelDataTypeSystem;
+import org.apache.calcite.rel.type.RelDataTypeSystemImpl;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlDialect;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlUtil;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.dialect.PostgresqlSqlDialect;
+import org.apache.calcite.sql.fun.SqlFloorFunction;
+import org.apache.calcite.sql.fun.SqlRowOperator;
+import org.apache.calcite.sql.type.SqlTypeName;
+
+/**
+ * Overrides PostgresqlSqlDialect's unparseCall() method.
+ *
+ * @author jrl
+ */
+public class ThePostgresqlSqlDialect extends PostgresqlSqlDialect {
+
+  /**
+   * PostgreSQL type system.
+   */
+  private static final RelDataTypeSystem POSTGRESQL_TYPE_SYSTEM = new RelDataTypeSystemImpl() {
+    @Override
+    public int getMaxPrecision(SqlTypeName typeName) {
+      switch (typeName) {
+        case VARCHAR:
+          // From htup_details.h in postgresql:
+          // MaxAttrSize is a somewhat arbitrary upper limit on the declared size of
+          // data fields of char(n) and similar types. It need not have anything
+          // directly to do with the *actual* upper limit of varlena values, which
+          // is currently 1Gb (see TOAST structures in postgres.h). I've set it
+          // at 10Mb which seems like a reasonable number --- tgl 8/6/00.
+          return 10 * 1024 * 1024;
+        default:
+          return super.getMaxPrecision(typeName);
+      }
+    }
+  };
+
+  public static final SqlDialect DEFAULT = new ThePostgresqlSqlDialect(
+      EMPTY_CONTEXT.withDatabaseProduct(DatabaseProduct.POSTGRESQL).withIdentifierQuoteString("\"")
+          .withUnquotedCasing(Casing.TO_LOWER).withDataTypeSystem(POSTGRESQL_TYPE_SYSTEM));
+
+  /**
+   * Creates a PostgresqlSqlDialect.
+   */
+  public ThePostgresqlSqlDialect(Context context) {
+    super(context);
+  }
+
+  @Override
+  public void unparseCall(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
+    switch (call.getKind()) {
+      case FLOOR:
+        if (call.operandCount() != 2) {
+          super.unparseCall(writer, call, leftPrec, rightPrec);
+          return;
+        }
+
+        final SqlLiteral timeUnitNode = call.operand(1);
+        final TimeUnitRange timeUnit = timeUnitNode.getValueAs(TimeUnitRange.class);
+
+        SqlCall call2 = SqlFloorFunction.replaceTimeUnitOperand(call, timeUnit.name(),
+            timeUnitNode.getParserPosition());
+        SqlFloorFunction.unparseDatetimeFunction(writer, call2, "DATE_TRUNC", false);
+        break;
+
+      default:
+        SqlOperator operator = call.getOperator();
+        if (operator instanceof SqlRowOperator) {
+          SqlUtil.unparseFunctionSyntax(new TheSqlRowOperator(), writer, call);
+        } else {
+          super.unparseCall(writer, call, leftPrec, rightPrec);
+        }
+        break;
+
+    }
+  }
+
+  /**
+   * Appends a string literal to a buffer.
+   *
+   * @param buf Buffer
+   * @param charsetName Character set name, e.g. "utf16", or null
+   * @param val String value
+   */
+  @Override
+  public void quoteStringLiteral(StringBuilder buf, String charsetName, String val) {
+    buf.append(literalQuoteString);
+    buf.append(val.replace(literalEndQuoteString, literalEscapedQuote));
+    buf.append(literalEndQuoteString);
+  }
+
+}
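Same exercise for PostgreSQL, where a two-operand FLOOR is routed to DATE_TRUNC (expected shape inferred from the code, not verified here):

```java
SqlNode node = SqlParser.create("SELECT FLOOR(ts TO DAY) FROM t").parseQuery();
String sql = node.toSqlString(ThePostgresqlSqlDialect.DEFAULT).getSql();
// Expected shape per unparseCall(): DATE_TRUNC('DAY', "ts")
```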
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheSqlOrderBy.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheSqlOrderBy.java
new file mode 100644
index 0000000..48412bd
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheSqlOrderBy.java
@@ -0,0 +1,120 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.calcite;
+
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlOrderBy;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlSyntax;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import java.util.List;
+
+/**
+ * Rewrites Calcite's SqlOrderBy.
+ *
+ * @author jrl
+ */
+public class TheSqlOrderBy extends SqlOrderBy {
+
+  public static final SqlSpecialOperator OPERATOR = new Operator() {
+    @Override
+    public SqlCall createCall(SqlLiteral functionQualifier, SqlParserPos pos, SqlNode... operands) {
+      return new TheSqlOrderBy(pos, operands[0], (SqlNodeList) operands[1], operands[2],
+          operands[3]);
+    }
+  };
+
+  public final SqlNode query;
+  public final SqlNodeList orderList;
+  public final SqlNode offset;
+  public final SqlNode fetch;
+
+  // ~ Constructors -----------------------------------------------------------
+
+  public TheSqlOrderBy(SqlParserPos pos, SqlNode query, SqlNodeList orderList, SqlNode offset,
+      SqlNode fetch) {
+    super(pos, query, orderList, offset, fetch);
+    this.query = query;
+    this.orderList = orderList;
+    this.offset = offset;
+    this.fetch = fetch;
+  }
+
+  // ~ Methods ----------------------------------------------------------------
+
+  @Override
+  public SqlKind getKind() {
+    return SqlKind.ORDER_BY;
+  }
+
+  @Override
+  public SqlOperator getOperator() {
+    return OPERATOR;
+  }
+
+  @Override
+  public List<SqlNode> getOperandList() {
+    return ImmutableNullableList.of(query, orderList, offset, fetch);
+  }
+
+  /**
+   * Definition of {@code ORDER BY} operator.
+   */
+  private static class Operator extends SqlSpecialOperator {
+
+    private Operator() {
+      // NOTE: make precedence lower than SELECT to avoid extra parens
+      super("ORDER BY", SqlKind.ORDER_BY, 0);
+    }
+
+    @Override
+    public SqlSyntax getSyntax() {
+      return SqlSyntax.POSTFIX;
+    }
+
+    @Override
+    public void unparse(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
+      SqlOrderBy orderBy = (SqlOrderBy) call;
+      final SqlWriter.Frame frame = writer.startList(SqlWriter.FrameTypeEnum.ORDER_BY);
+      orderBy.query.unparse(writer, getLeftPrec(), getRightPrec());
+      if (orderBy.orderList != SqlNodeList.EMPTY) {
+        writer.sep(getName());
+        final SqlWriter.Frame listFrame = writer.startList(SqlWriter.FrameTypeEnum.ORDER_BY_LIST);
+        unparseListClause(writer, orderBy.orderList);
+        writer.endList(listFrame);
+      }
+
+      if (orderBy.fetch != null) {
+        final SqlWriter.Frame frame3 = writer.startList(SqlWriter.FrameTypeEnum.FETCH);
+        writer.newlineAndIndent();
+        writer.keyword("LIMIT");
+        orderBy.fetch.unparse(writer, -1, -1);
+        writer.endList(frame3);
+      }
+
+      if (orderBy.offset != null) {
+        final SqlWriter.Frame frame2 = writer.startList(SqlWriter.FrameTypeEnum.OFFSET);
+        writer.keyword("OFFSET");
+        orderBy.offset.unparse(writer, -1, -1);
+        writer.endList(frame2);
+      }
+
+      writer.endList(frame);
+    }
+  }
+}
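The point of the custom Operator.unparse() above is pagination syntax: it emits LIMIT before OFFSET, MySQL style, instead of Calcite's standard OFFSET ... FETCH. A hedged sketch (assumes the default parser accepts OFFSET/FETCH input; the output shape is inferred, not verified):

```java
SqlNode node = SqlParser.create(
    "SELECT id FROM person ORDER BY id OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY").parseQuery();
String sql = node.toSqlString(TheMysqlSqlDialect.DEFAULT).getSql();
// Expected shape: SELECT `id` FROM `person` ORDER BY `id` LIMIT 10 OFFSET 20
```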
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheSqlRowOperator.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheSqlRowOperator.java
new file mode 100644
index 0000000..88cd782
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/calcite/TheSqlRowOperator.java
@@ -0,0 +1,81 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.calcite;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlOperatorBinding;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlSyntax;
+import org.apache.calcite.sql.SqlUtil;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.type.InferTypes;
+import org.apache.calcite.sql.type.OperandTypes;
+import org.apache.calcite.util.Pair;
+
+import java.util.AbstractList;
+import java.util.Map;
+
+/**
+ * Based on the code of org.apache.calcite.sql.fun.SqlRowOperator; the unparse() method is
+ * overridden here to handle the ROW keyword problem in INSERT statements.
+ *
+ * @author jrl
+ */
+public class TheSqlRowOperator extends SqlSpecialOperator {
+  // ~ Constructors -----------------------------------------------------------
+
+  public TheSqlRowOperator() {
+    super("", SqlKind.ROW, MDX_PRECEDENCE, false, null, InferTypes.RETURN_TYPE,
+        OperandTypes.VARIADIC);
+  }
+
+  // ~ Methods ----------------------------------------------------------------
+
+  // implement SqlOperator
+  @Override
+  public SqlSyntax getSyntax() {
+    // Function syntax would work too.
+    return SqlSyntax.SPECIAL;
+  }
+
+  @Override
+  public RelDataType inferReturnType(final SqlOperatorBinding opBinding) {
+    // The type of a ROW(e1,e2) expression is a record with the types
+    // {e1type,e2type}. According to the standard, field names are
+    // implementation-defined.
+    return opBinding.getTypeFactory()
+        .createStructType(new AbstractList<Map.Entry<String, RelDataType>>() {
+
+          @Override
+          public Map.Entry<String, RelDataType> get(int index) {
+            return Pair.of(SqlUtil.deriveAliasFromOrdinal(index), opBinding.getOperandType(index));
+          }
+
+          @Override
+          public int size() {
+            return opBinding.getOperandCount();
+          }
+        });
+  }
+
+  @Override
+  public void unparse(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
+    SqlUtil.unparseFunctionSyntax(this, writer, call);
+  }
+
+  // override SqlOperator
+  @Override
+  public boolean requiresDecimalExpansion() {
+    return false;
+  }
+}
+
+// End TheSqlRowOperator.java
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/constant/Const.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/constant/Const.java
new file mode 100644
index 0000000..3f7ca79
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/constant/Const.java
@@ -0,0 +1,80 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.constant;
+
+/**
+ * Constant definitions.
+ *
+ * @author jrl
+ */
+public class Const {
+
+  /**
+   * What's the file system's file separator on this operating system?
+   */
+  public static final String FILE_SEPARATOR = System.getProperty("file.separator");
+
+  /**
+   * What's the path separator on this operating system?
+   */
+  public static final String PATH_SEPARATOR = System.getProperty("path.separator");
+
+  /**
+   * CR: operating system specific carriage return.
+   */
+  public static final String CR = System.getProperty("line.separator");
+
+  /**
+   * DOSCR: MS-DOS specific carriage return.
+   */
+  public static final String DOSCR = "\n\r";
+
+  /**
+   * An empty ("") String.
+   */
+  public static final String EMPTY_STRING = "";
+
+  /**
+   * The Java runtime version.
+   */
+  public static final String JAVA_VERSION = System.getProperty("java.vm.version");
+
+  /**
+   * Create Table statement prefix string.
+   */
+  public static final String CREATE_TABLE = " CREATE TABLE ";
+
+  /**
+   * Alter Table statement prefix string.
+   */
+  public static final String ALTER_TABLE = " ALTER TABLE ";
+
+  /**
+   * Drop Table statement prefix string.
+   */
+  public static final String DROP_TABLE = " DROP TABLE ";
+
+  /**
+   * Constant keyword string.
+   */
+  public static final String IF_NOT_EXISTS = " IF NOT EXISTS ";
+
+  /**
+   * Constant keyword string.
+   */
+  public static final String IF_EXISTS = " IF EXISTS ";
+
+  /**
+   * Private constructor.
+   */
+  private Const() {
+
+  }
+}
diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/AbstractDatabaseDialect.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/AbstractDatabaseDialect.java
new file mode 100644
index 0000000..aff167e
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/AbstractDatabaseDialect.java
@@ -0,0 +1,48 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package srt.cloud.framework.dbswitch.sql.ddl;
+
+import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition;
+import org.apache.commons.lang3.StringUtils;
+
+import java.util.List;
+
+/**
+ * Abstract base class for database dialects.
+ *
+ * @author jrl
+ */
+public abstract class AbstractDatabaseDialect {
+
+  public String getSchemaTableName(String schemaName, String tableName) {
+    return String.format("\"%s\".\"%s\"", schemaName.trim(), tableName.trim());
+  }
+
+  public String getQuoteFieldName(String fieldName) {
+    return String.format("\"%s\"", fieldName.trim());
+  }
+
+  public abstract String getFieldTypeName(ColumnDefinition column);
+
+  public abstract String getFieldDefination(ColumnDefinition column);
+
+  public String getPrimaryKeyAsString(List<String> pks) {
+    if (!pks.isEmpty()) {
+      StringBuilder sb = new StringBuilder();
+      sb.append("\"");
+      sb.append(StringUtils.join(pks, "\" , \""));
+      sb.append("\"");
+      return sb.toString();
+    }
+
+    return "";
+  }
+
+}
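The dialect helpers are easy to sanity-check with any concrete subclass; PostgresDialectImpl is referenced later in this commit (its no-arg construction is an assumption here):

```java
AbstractDatabaseDialect dialect = new PostgresDialectImpl(); // assumed no-arg constructor
dialect.getSchemaTableName("public", "person");                  // -> "public"."person"
dialect.getQuoteFieldName("age");                                // -> "age" (double-quoted)
dialect.getPrimaryKeyAsString(Arrays.asList("id", "tenant_id")); // -> "id" , "tenant_id"
```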
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.pojo; + +import java.util.Objects; + +/** + * 类定义实体类 + * + * @author jrl + */ +public class ColumnDefinition { + + private String columnName; + private String columnType; + private String columnComment; + private Integer lengthOrPrecision; + private Integer scale; + private boolean primaryKey; + private boolean autoIncrement; + private boolean nullable; + private String defaultValue; + + public String getColumnName() { + return columnName; + } + + public void setColumnName(String columnName) { + this.columnName = Objects.requireNonNull(columnName); + } + + public String getColumnType() { + return columnType; + } + + public void setColumnType(String columnType) { + this.columnType = Objects.requireNonNull(columnType); + } + + public String getColumnComment() { + return columnComment; + } + + public void setColumnComment(String columnComment) { + this.columnComment = columnComment; + } + + public Integer getLengthOrPrecision() { + return lengthOrPrecision; + } + + public void setLengthOrPrecision(Integer lenOrPre) { + this.lengthOrPrecision = Objects.requireNonNull(lenOrPre); + } + + public Integer getScale() { + return scale; + } + + public void setScale(Integer scale) { + this.scale = scale; + } + + public boolean isPrimaryKey() { + return primaryKey; + } + + public void setPrimaryKey(boolean primaryKey) { + this.primaryKey = primaryKey; + } + + public boolean isAutoIncrement() { + return this.autoIncrement; + } + + public void setAutoIncrement(boolean autoIncrement) { + this.autoIncrement = autoIncrement; + } + + public boolean isNullable() { + return nullable; + } + + public void setNullable(boolean nullable) { + this.nullable = nullable; + } + + public String getDefaultValue() { + return defaultValue; + } + + public void setDefaultValue(String defaultValue) { + this.defaultValue = defaultValue; + } + + @Override + public String toString() { + return "ColumnDefinition [columnName=" + columnName + ", columnType=" + columnType + + ", columnComment=" + + columnComment + ", lengthOrPrecision=" + lengthOrPrecision + ", scale=" + scale + + ", primaryKey=" + + primaryKey + ", nullable=" + nullable + ", defaultValue=" + defaultValue + "]"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/pojo/TableDefinition.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/pojo/TableDefinition.java new file mode 100644 index 0000000..525b912 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/pojo/TableDefinition.java @@ -0,0 +1,66 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.pojo; + +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +/** + * 表定义实体类 + * + * @author jrl + */ +public class TableDefinition { + + private String schemaName; + private String tableName; + private String tableComment; + private List columns = new ArrayList<>(); + + public String getSchemaName() { + return schemaName; + } + + public void setSchemaName(String schemaName) { + this.schemaName = Objects.requireNonNull(schemaName); + } + + public String getTableName() { + return tableName; + } + + public void setTableName(String tableName) { + this.tableName = Objects.requireNonNull(tableName); + } + + public String getTableComment() { + return tableComment; + } + + public void setTableComment(String tableComment) { + this.tableComment = tableComment; + } + + public List getColumns() { + return columns; + } + + public void addColumns(ColumnDefinition column) { + columns.add(column); + } + + @Override + public String toString() { + return "TableDefinition [schemaName=" + schemaName + ", tableName=" + tableName + + ", tableComment=" + tableComment + ", columns=" + columns + "]"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlAlterTable.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlAlterTable.java new file mode 100644 index 0000000..a350e0c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlAlterTable.java @@ -0,0 +1,147 @@ +// Copyright tang. All rights reserved. 
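`ColumnDefinition` and `TableDefinition` are plain setter-driven carriers for everything the DDL builders below consume. A short population sketch (table, column, and comment values are invented for illustration):

```java
import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition;
import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition;

public class TableDefinitionDemo {
  public static void main(String[] args) {
    ColumnDefinition createTime = new ColumnDefinition();
    createTime.setColumnName("create_time");
    createTime.setColumnType("DATETIME");
    createTime.setLengthOrPrecision(0);
    createTime.setNullable(false);
    createTime.setDefaultValue("CURRENT_TIMESTAMP");
    createTime.setColumnComment("record creation time");

    TableDefinition t = new TableDefinition();
    t.setSchemaName("ods");
    t.setTableName("t_user");
    t.addColumns(createTime);   // addColumns appends one column at a time

    System.out.println(t);      // relies on the toString() overrides above
  }
}
```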
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql; + +import srt.cloud.framework.dbswitch.sql.constant.Const; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractSqlDdlOperator; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.GreenplumDialectImpl; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.PostgresDialectImpl; + +import java.util.Objects; + +/** + * Alter语句操作类 + * + * @author jrl + */ +public class DdlSqlAlterTable extends AbstractSqlDdlOperator { + + protected enum AlterTypeEnum { + /** + * 添加字段操作 + */ + ADD(1), + + /** + * 删除字段操作 + */ + DROP(2), + + /** + * 修改字段操作 + */ + MODIFY(3), + + /** + * 重命名操作 + */ + RENAME(4); + + private int index; + + AlterTypeEnum(int idx) { + this.index = idx; + } + + public int getIndex() { + return index; + } + } + + private TableDefinition table; + private AlterTypeEnum alterType; + + public DdlSqlAlterTable(TableDefinition t, String handle) { + super(Const.ALTER_TABLE); + this.table = t; + alterType = AlterTypeEnum.valueOf(handle.toUpperCase()); + } + + @Override + public String toSqlString(AbstractDatabaseDialect dialect) { + String fullTableName = dialect.getSchemaTableName(table.getSchemaName(), table.getTableName()); + + StringBuilder sb = new StringBuilder(); + sb.append(this.getName()); + sb.append(fullTableName); + + if (table.getColumns().size() < 1) { + throw new RuntimeException("Alter table need one column at least!"); + } + + if (AlterTypeEnum.ADD == alterType) { + if (dialect instanceof PostgresDialectImpl || dialect instanceof GreenplumDialectImpl) { + //PostgreSQL/Greenplum数据库的add只支持一列,不支持多列 + if (table.getColumns().size() != 1) { + throw new RuntimeException( + "Alter table for PostgreSQL/Greenplum only can add one column!"); + } + + sb.append(" ADD "); + ColumnDefinition cd = table.getColumns().get(0); + sb.append(dialect.getFieldDefination(cd)); + } else { + sb.append(" ADD ("); + for (int i = 0; i < table.getColumns().size(); ++i) { + ColumnDefinition cd = table.getColumns().get(i); + sb.append((i > 0) ? 
"," : " "); + sb.append(dialect.getFieldDefination(cd)); + } + sb.append(")"); + } + } else if (AlterTypeEnum.DROP == alterType) { + if (table.getColumns().size() != 1) { + throw new RuntimeException("Alter table only can drop one column!"); + } + + ColumnDefinition cd = table.getColumns().get(0); + sb.append(" DROP "); + sb.append(dialect.getQuoteFieldName(cd.getColumnName())); + } else if (AlterTypeEnum.MODIFY == alterType) { + if (table.getColumns().size() != 1) { + throw new RuntimeException("Alter table only can modify one column!"); + } + + ColumnDefinition cd = table.getColumns().get(0); + if (dialect instanceof PostgresDialectImpl || dialect instanceof GreenplumDialectImpl) { + //PostgreSQL/Greenplum数据库的modify需要单独拆分 + String typename = dialect.getFieldTypeName(cd); + boolean nullable = cd.isNullable(); + String defaultValue = cd.getDefaultValue(); + sb.append( + " ALTER COLUMN " + dialect.getQuoteFieldName(cd.getColumnName()) + " TYPE " + typename); + if (nullable) { + sb.append(",ALTER COLUMN " + dialect.getQuoteFieldName(cd.getColumnName()) + + " SET DEFAULT NULL"); + } else if (Objects.nonNull(defaultValue) && !defaultValue.isEmpty() && !"NULL" + .equalsIgnoreCase(defaultValue)) { + sb.append( + ",ALTER COLUMN " + dialect.getQuoteFieldName(cd.getColumnName()) + " SET DEFAULT '" + + defaultValue + "'"); + } else { + sb.append( + ",ALTER COLUMN " + dialect.getQuoteFieldName(cd.getColumnName()) + " SET NOT NULL"); + } + } else { + sb.append(" MODIFY "); + sb.append(dialect.getFieldDefination(cd)); + } + } else { + // 当前不支持rename及其他操作 + throw new RuntimeException("Alter table unsupported operation : " + alterType.name()); + } + + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlCreateTable.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlCreateTable.java new file mode 100644 index 0000000..02c8b29 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlCreateTable.java @@ -0,0 +1,78 @@ +// Copyright tang. All rights reserved. 
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql; + +import srt.cloud.framework.dbswitch.sql.constant.Const; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractSqlDdlOperator; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.MySqlDialectImpl; + +import java.util.ArrayList; +import java.util.List; + +/** + * Create语句操作类 + * + * @author jrl + */ +public class DdlSqlCreateTable extends AbstractSqlDdlOperator { + + private TableDefinition table; + + public DdlSqlCreateTable(TableDefinition t) { + super(Const.CREATE_TABLE); + this.table = t; + } + + @Override + public String toSqlString(AbstractDatabaseDialect dialect) { + StringBuilder sb = new StringBuilder(); + sb.append(this.getName()); + String fullTableName = dialect.getSchemaTableName(table.getSchemaName(), table.getTableName()); + sb.append(fullTableName); + sb.append(" ("); + sb.append(Const.CR); + + List columns = table.getColumns(); + List pks = new ArrayList<>(); + for (int i = 0; i < columns.size(); ++i) { + ColumnDefinition c = columns.get(i); + if (c.isPrimaryKey()) { + pks.add(c.getColumnName()); + } + + if (i > 0) { + sb.append(","); + } else { + sb.append(" "); + } + + String definition = dialect.getFieldDefination(c); + sb.append(definition); + sb.append(Const.CR); + } + + if (!pks.isEmpty()) { + String pk = dialect.getPrimaryKeyAsString(pks); + sb.append(", PRIMARY KEY (").append(pk).append(")").append(Const.CR); + } + + sb.append(" )"); + if (dialect instanceof MySqlDialectImpl) { + sb.append(" ENGINE=InnoDB DEFAULT CHARSET=utf8 "); + } + + sb.append(Const.CR); + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlDropTable.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlDropTable.java new file mode 100644 index 0000000..9732df3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlDropTable.java @@ -0,0 +1,40 @@ +// Copyright tang. All rights reserved. 
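`DdlSqlAlterTable` (above) resolves its `handle` argument case-insensitively against `AlterTypeEnum`, then branches per dialect: PostgreSQL/Greenplum accept only one column per `ADD`/`MODIFY` and are rewritten as per-property `ALTER COLUMN` clauses, while other dialects get the grouped `ADD (...)`/`MODIFY` form. `DdlSqlCreateTable` walks the column list once, collecting primary-key names while rendering each field, and appends the InnoDB engine clause only for MySQL. A usage sketch for both, assuming `Const.CREATE_TABLE`/`Const.ALTER_TABLE` expand to the obvious literals and `Const.CR` is a newline:

```java
import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition;
import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition;
import srt.cloud.framework.dbswitch.sql.ddl.sql.DdlSqlAlterTable;
import srt.cloud.framework.dbswitch.sql.ddl.sql.DdlSqlCreateTable;
import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.MySqlDialectImpl;

public class DdlSqlDemo {

  private static ColumnDefinition column(String name, String type, int len, boolean pk) {
    ColumnDefinition c = new ColumnDefinition();
    c.setColumnName(name);
    c.setColumnType(type);
    c.setLengthOrPrecision(len);
    c.setPrimaryKey(pk);
    c.setAutoIncrement(pk);   // demo choice: the primary key is also auto-increment
    c.setNullable(!pk);
    return c;
  }

  public static void main(String[] args) {
    TableDefinition t = new TableDefinition();
    t.setSchemaName("ods");
    t.setTableName("t_user");
    t.addColumns(column("id", "BIGINT", 20, true));
    t.addColumns(column("user_name", "VARCHAR", 64, false));

    // Roughly:
    //   CREATE TABLE `ods`.`t_user` (
    //     `id` BIGINT (20)  NOT NULL AUTO_INCREMENT
    //   , `user_name` VARCHAR (64)  DEFAULT NULL
    //   , PRIMARY KEY (`id`)
    //   ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    System.out.println(new DdlSqlCreateTable(t).toSqlString(new MySqlDialectImpl()));

    // "add" is upper-cased internally and mapped to AlterTypeEnum.ADD.
    TableDefinition alter = new TableDefinition();
    alter.setSchemaName("ods");
    alter.setTableName("t_user");
    alter.addColumns(column("age", "INT", 3, false));
    // Roughly: ALTER TABLE `ods`.`t_user` ADD ( `age` INT (3)  DEFAULT NULL)
    System.out.println(new DdlSqlAlterTable(alter, "add").toSqlString(new MySqlDialectImpl()));
  }
}
```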
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql; + +import srt.cloud.framework.dbswitch.sql.constant.Const; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractSqlDdlOperator; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition; + +/** + * Drop语句操作类 + * + * @author jrl + */ +public class DdlSqlDropTable extends AbstractSqlDdlOperator { + + private TableDefinition table; + + public DdlSqlDropTable(TableDefinition t) { + super(Const.DROP_TABLE); + this.table = t; + } + + @Override + public String toSqlString(AbstractDatabaseDialect dialect) { + StringBuilder sb = new StringBuilder(); + sb.append(this.getName()); + String fullTableName = dialect.getSchemaTableName(table.getSchemaName(), table.getTableName()); + sb.append(fullTableName); + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlTruncateTable.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlTruncateTable.java new file mode 100644 index 0000000..0c85fa7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/DdlSqlTruncateTable.java @@ -0,0 +1,39 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql; + +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractSqlDdlOperator; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition; + +/** + * Truncate语句操作类 + * + * @author jrl + */ +public class DdlSqlTruncateTable extends AbstractSqlDdlOperator { + + private TableDefinition table; + + public DdlSqlTruncateTable(TableDefinition t) { + super("TRUNCATE TABLE "); + this.table = t; + } + + @Override + public String toSqlString(AbstractDatabaseDialect dialect) { + StringBuilder sb = new StringBuilder(); + sb.append(this.getName()); + String fullTableName = dialect.getSchemaTableName(table.getSchemaName(), table.getTableName()); + sb.append(fullTableName); + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/GreenplumDialectImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/GreenplumDialectImpl.java new file mode 100644 index 0000000..755468c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/GreenplumDialectImpl.java @@ -0,0 +1,107 @@ +// Copyright tang. All rights reserved. 
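`DdlSqlDropTable` and `DdlSqlTruncateTable` only prepend their verb to the dialect-quoted table name. A fragment, reusing the `t` built in the `DdlSqlDemo` sketch above and assuming `Const.DROP_TABLE` is the literal `DROP TABLE `:

```java
String dropSql = new DdlSqlDropTable(t).toSqlString(new PostgresDialectImpl());
// e.g. DROP TABLE "ods"."t_user"
String truncateSql = new DdlSqlTruncateTable(t).toSqlString(new PostgresDialectImpl());
// e.g. TRUNCATE TABLE "ods"."t_user"  (this literal is hard-coded in the constructor)
```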
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql.impl; + +import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.type.GreenplumDataTypeEnum; + +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +/** + * Greenplum方言实现类 + * + * @author jrl + */ +public class GreenplumDialectImpl extends PostgresDialectImpl { + + protected static List integerTypes; + + static { + integerTypes = new ArrayList<>(); + integerTypes.add(GreenplumDataTypeEnum.SERIAL2); + integerTypes.add(GreenplumDataTypeEnum.SERIAL4); + integerTypes.add(GreenplumDataTypeEnum.SERIAL8); + integerTypes.add(GreenplumDataTypeEnum.SMALLSERIAL); + integerTypes.add(GreenplumDataTypeEnum.SERIAL); + integerTypes.add(GreenplumDataTypeEnum.BIGSERIAL); + } + + @Override + public String getFieldTypeName(ColumnDefinition column) { + int length = column.getLengthOrPrecision(); + int scale = column.getScale(); + + StringBuilder sb = new StringBuilder(); + GreenplumDataTypeEnum type = null; + try { + type = GreenplumDataTypeEnum.valueOf(column.getColumnType().toUpperCase()); + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("Invalid Greenplum data type: %s", column.getColumnType())); + } + + if (column.isAutoIncrement()) { + if (!GreenplumDialectImpl.integerTypes.contains(type)) { + throw new RuntimeException(String + .format("Invalid Greenplum auto increment data type: %s", column.getColumnType())); + } + } + + sb.append(type.name()); + switch (type) { + case NUMERIC: + case DECIMAL: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid Greenplum data type length: %s(%d)", column.getColumnType(), + length)); + } + + if (Objects.isNull(scale) || scale < 0) { + throw new RuntimeException( + String.format("Invalid Greenplum data type scale: %s(%d,%d)", column.getColumnType(), + length, scale)); + } + + sb.append(String.format("(%d,%d)", length, scale)); + break; + case CHAR: + case VARCHAR: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid Greenplum data type length: %s(%d)", column.getColumnType(), + length)); + } + sb.append(String.format(" (%d) ", length)); + break; + case TIMESTAMP: + if (Objects.isNull(length) || length < 0) { + sb.append(" (0) "); + } else if (0 == length || 6 == length) { + sb.append(String.format(" (%d) ", length)); + } else { + throw new RuntimeException( + String.format("Invalid Greenplum data type length: %s(%d)", column.getColumnType(), + length)); + } + break; + case DOUBLE: + sb.append(" PRECISION "); + break; + default: + break; + } + + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/MySqlDialectImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/MySqlDialectImpl.java new file mode 100644 index 0000000..91b627e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/MySqlDialectImpl.java @@ -0,0 +1,172 @@ +// Copyright tang. All rights reserved. 
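`GreenplumDialectImpl` inherits the PostgreSQL field rendering and mainly widens the set of serial types accepted for auto-increment columns. The type rendering can be exercised in isolation:

```java
ColumnDefinition amount = new ColumnDefinition();
amount.setColumnName("amount");
amount.setColumnType("numeric");   // matched case-insensitively via valueOf(toUpperCase())
amount.setLengthOrPrecision(10);
amount.setScale(2);
String typeName = new GreenplumDialectImpl().getFieldTypeName(amount);
// -> NUMERIC(10,2)
```

Note that `lengthOrPrecision` and `scale` must be set before the call: the method unboxes both to `int` on entry, so the later `Objects.isNull` guards can never actually observe a null, and a missing value surfaces as a `NullPointerException` instead.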
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql.impl; + +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.type.MySqlDataTypeEnum; +import org.apache.commons.lang3.StringUtils; + +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +/** + * 关于MySQL的的自增列问题: + *
+ * (1)一张表中,只能有一列为自增长列。
+ * (2)列的数据类型,必须为数值型。
+ * (3)不能设置默认值。
+ * (4)会自动应用not null。 + * + * @author jrl + */ +public class MySqlDialectImpl extends AbstractDatabaseDialect { + + private static List integerTypes; + + static { + integerTypes = new ArrayList<>(); + integerTypes.add(MySqlDataTypeEnum.TINYINT); + integerTypes.add(MySqlDataTypeEnum.SMALLINT); + integerTypes.add(MySqlDataTypeEnum.MEDIUMINT); + integerTypes.add(MySqlDataTypeEnum.INTEGER); + integerTypes.add(MySqlDataTypeEnum.INT); + integerTypes.add(MySqlDataTypeEnum.BIGINT); + } + + @Override + public String getSchemaTableName(String schemaName, String tableName) { + if (Objects.isNull(schemaName) || schemaName.trim().isEmpty()) { + return String.format("`%s`", tableName); + } + return String.format("`%s`.`%s`", schemaName, tableName); + } + + @Override + public String getQuoteFieldName(String fieldName) { + return String.format("`%s`", fieldName.trim()); + } + + @Override + public String getPrimaryKeyAsString(List pks) { + if (!pks.isEmpty()) { + StringBuilder sb = new StringBuilder(); + sb.append("`"); + sb.append(StringUtils.join(pks, "` , `")); + sb.append("`"); + return sb.toString(); + } + + return ""; + } + + @Override + public String getFieldTypeName(ColumnDefinition column) { + int length = column.getLengthOrPrecision(); + int scale = column.getScale(); + StringBuilder sb = new StringBuilder(); + MySqlDataTypeEnum type = null; + try { + type = MySqlDataTypeEnum.valueOf(column.getColumnType().toUpperCase()); + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("Invalid MySQL data type: %s", column.getColumnType())); + } + + if (column.isAutoIncrement()) { + if (!MySqlDialectImpl.integerTypes.contains(type)) { + throw new RuntimeException( + String.format("Invalid MySQL auto increment data type: %s", column.getColumnType())); + } + } + + sb.append(type.name()); + switch (type) { + case FLOAT: + case DOUBLE: + case DECIMAL: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid MySQL data type length: %s(%d)", column.getColumnType(), + length)); + } + + if (Objects.isNull(scale) || scale < 0) { + throw new RuntimeException( + String.format("Invalid MySQL data type scale: %s(%d,%d)", column.getColumnType(), + length, scale)); + } + + sb.append(String.format("(%d,%d)", length, scale)); + break; + case TINYINT: + case SMALLINT: + case MEDIUMINT: + case INTEGER: + case INT: + case BIGINT: + case CHAR: + case VARCHAR: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid MySQL data type length: %s(%d)", column.getColumnType(), + length)); + } + sb.append(String.format(" (%d) ", length)); + default: + break; + } + + return sb.toString(); + } + + @Override + public String getFieldDefination(ColumnDefinition column) { + String fieldname = column.getColumnName(); + boolean nullable = column.isNullable(); + String defaultValue = column.getDefaultValue(); + String comment = column.getColumnComment(); + + StringBuilder sb = new StringBuilder(); + sb.append(String.format("`%s` ", fieldname.trim())); + sb.append(this.getFieldTypeName(column)); + + if (column.isAutoIncrement() && column.isPrimaryKey()) { + //在MySQL数据库里只有主键是自增的 + sb.append(" NOT NULL AUTO_INCREMENT "); + } else { + if (nullable) { + sb.append(" DEFAULT NULL"); + } else if (Objects.nonNull(defaultValue) && !defaultValue.isEmpty()) { + if ("NULL".equalsIgnoreCase(defaultValue)) { + sb.append(" DEFAULT NULL"); + } else if (defaultValue.toUpperCase().trim().startsWith("CURRENT_TIMESTAMP")) { + // 处理时间字段的默认当前时间问题 
+ sb.append(String.format(" DEFAULT %s", defaultValue)); + } else { + sb.append(String.format(" DEFAULT '%s'", defaultValue)); + } + } else { + sb.append(" NOT NULL"); + } + } + + if (Objects.nonNull(comment) && !comment.isEmpty()) { + sb.append(String.format(" COMMENT '%s'", comment)); + } + + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/OracleDialectImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/OracleDialectImpl.java new file mode 100644 index 0000000..5cf216d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/OracleDialectImpl.java @@ -0,0 +1,135 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql.impl; + +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.type.OracleDataTypeEnum; + +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +/** + * 关于Oracle12c的自增列问题: + *
+ * (1)一张表中,只能有一列为自增长列。
+ * (2)列的数据类型,必须为数值型。
+ * (3)不能设置默认值。
+ * (4)会自动应用not null和not deferrable。
+ * (5)使用CTAS方式无法继承自增长列的属性。
+ * (6)如果执行回滚,事务会回滚,但是序列中的值不会回滚。 + * + * @author jrl + */ +public class OracleDialectImpl extends AbstractDatabaseDialect { + + private static List integerTypes; + + static { + integerTypes = new ArrayList<>(); + integerTypes.add(OracleDataTypeEnum.NUMBER); + } + + @Override + public String getFieldTypeName(ColumnDefinition column) { + int length = column.getLengthOrPrecision(); + int scale = column.getScale(); + StringBuilder sb = new StringBuilder(); + OracleDataTypeEnum type = null; + try { + type = OracleDataTypeEnum.valueOf(column.getColumnType().toUpperCase()); + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("Invalid Oracle data type: %s", column.getColumnType())); + } + + if (column.isAutoIncrement()) { + if (!OracleDialectImpl.integerTypes.contains(type)) { + throw new RuntimeException( + String.format("Invalid Oracle auto increment data type: %s", column.getColumnType())); + } + } + + sb.append(type.name()); + switch (type) { + case NUMBER: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid Oracle data type length: %s(%d)", column.getColumnType(), + length)); + } + + if (length > 0) { + sb.append(String.format("(%d)", length)); + } else { + if (Objects.isNull(scale) || scale < 0) { + throw new RuntimeException(String.format("Invalid Oracle data type scale: %s(%d,%d)", + column.getColumnType(), length, scale)); + } + + sb.append(String.format("(%d,%d)", length, scale)); + } + break; + case CHAR: + case NCHAR: + case VARCHAR: + case VARCHAR2: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException(String + .format("Invalid Oracle data type length: %s(%d)", column.getColumnType(), length)); + } + sb.append(String.format(" (%d) ", length)); + default: + break; + } + + return sb.toString(); + } + + @Override + public String getFieldDefination(ColumnDefinition column) { + String fieldname = column.getColumnName(); + boolean nullable = column.isNullable(); + String defaultValue = column.getDefaultValue(); + //String comment=column.getColumnComment(); + + StringBuilder sb = new StringBuilder(); + sb.append(String.format("\"%s\" ", fieldname.trim())); + sb.append(this.getFieldTypeName(column)); + + if (column.isAutoIncrement() && column.isPrimaryKey()) { + // 在Oracle12c数据库里只有主键是自增的 + sb.append(" GENERATED BY DEFAULT ON NULL AS IDENTITY "); + } else { + if (nullable) { + sb.append(" DEFAULT NULL"); + } else if (Objects.nonNull(defaultValue) && !defaultValue.isEmpty()) { + if (defaultValue.equalsIgnoreCase("NULL")) { + sb.append(" DEFAULT NULL"); + } else if (defaultValue.equalsIgnoreCase("SYSDATE")) { + sb.append(" DEFAULT SYSDATE"); + } else { + sb.append(String.format(" DEFAULT '%s'", defaultValue)); + } + } else { + sb.append(" NOT NULL"); + } + } + + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/PostgresDialectImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/PostgresDialectImpl.java new file mode 100644 index 0000000..e0855b6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/sql/impl/PostgresDialectImpl.java @@ -0,0 +1,141 @@ +// Copyright tang. All rights reserved. 
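In `OracleDialectImpl` the identity clause is emitted only when a column is both auto-increment and primary key, matching the in-line comment. A quick sketch of the resulting field definition:

```java
ColumnDefinition id = new ColumnDefinition();
id.setColumnName("id");
id.setColumnType("NUMBER");
id.setLengthOrPrecision(10);
id.setScale(0);
id.setPrimaryKey(true);
id.setAutoIncrement(true);
id.setNullable(false);
String fieldDdl = new OracleDialectImpl().getFieldDefination(id);
// roughly: "id" NUMBER(10) GENERATED BY DEFAULT ON NULL AS IDENTITY
```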
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.sql.impl; + +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.type.PostgresDataTypeEnum; + +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +/** + * PostgreSQL方言实现类 + * + * @author jrl + */ +public class PostgresDialectImpl extends AbstractDatabaseDialect { + + private static List integerTypes; + + static { + integerTypes = new ArrayList<>(); + integerTypes.add(PostgresDataTypeEnum.SERIAL2); + integerTypes.add(PostgresDataTypeEnum.SERIAL4); + integerTypes.add(PostgresDataTypeEnum.SERIAL8); + integerTypes.add(PostgresDataTypeEnum.SMALLSERIAL); + integerTypes.add(PostgresDataTypeEnum.SERIAL); + integerTypes.add(PostgresDataTypeEnum.BIGSERIAL); + } + + @Override + public String getFieldTypeName(ColumnDefinition column) { + int length = column.getLengthOrPrecision(); + int scale = column.getScale(); + + StringBuilder sb = new StringBuilder(); + PostgresDataTypeEnum type = null; + try { + type = PostgresDataTypeEnum.valueOf(column.getColumnType().toUpperCase()); + } catch (IllegalArgumentException e) { + throw new RuntimeException( + String.format("Invalid PostgreSQL data type: %s", column.getColumnType())); + } + + if (column.isAutoIncrement()) { + if (!PostgresDialectImpl.integerTypes.contains(type)) { + throw new RuntimeException(String + .format("Invalid PostgreSQL auto increment data type: %s", column.getColumnType())); + } + } + + sb.append(type.name()); + switch (type) { + case NUMERIC: + case DECIMAL: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid PostgreSQL data type length: %s(%d)", column.getColumnType(), + length)); + } + + if (Objects.isNull(scale) || scale < 0) { + throw new RuntimeException( + String.format("Invalid PostgreSQL data type scale: %s(%d,%d)", column.getColumnType(), + length, scale)); + } + + sb.append(String.format("(%d,%d)", length, scale)); + break; + case CHAR: + case VARCHAR: + if (Objects.isNull(length) || length < 0) { + throw new RuntimeException( + String.format("Invalid PostgreSQL data type length: %s(%d)", column.getColumnType(), + length)); + } + sb.append(String.format(" (%d) ", length)); + break; + case TIMESTAMP: + if (Objects.isNull(length) || length < 0) { + sb.append(" (0) "); + } else if (0 == length || 6 == length) { + sb.append(String.format(" (%d) ", length)); + } else { + throw new RuntimeException( + String.format("Invalid PostgreSQL data type length: %s(%d)", column.getColumnType(), + length)); + } + break; + case DOUBLE: + sb.append(" PRECISION "); + break; + default: + break; + } + + return sb.toString(); + } + + @Override + public String getFieldDefination(ColumnDefinition column) { + String fieldname = column.getColumnName(); + boolean nullable = column.isNullable(); + String defaultValue = column.getDefaultValue(); + //String comment=column.getColumnComment(); + + StringBuilder sb = new StringBuilder(); + sb.append(String.format("\"%s\" ", fieldname.trim())); + sb.append(this.getFieldTypeName(column)); + + if (column.isAutoIncrement()) { + //PostgreSQL/Greenplum数据库里可以有多个自增列 + sb.append(" "); + } else { + if (nullable) { + 
sb.append(" DEFAULT NULL"); + } else if (Objects.nonNull(defaultValue) && !defaultValue.isEmpty()) { + if (defaultValue.equalsIgnoreCase("NULL")) { + sb.append(" DEFAULT NULL"); + } else if ("now()".equalsIgnoreCase(defaultValue)) { + sb.append(" DEFAULT now() "); + } else { + sb.append(String.format(" DEFAULT '%s'", defaultValue)); + } + } else { + sb.append(" NOT NULL"); + } + } + + return sb.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/GreenplumDataTypeEnum.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/GreenplumDataTypeEnum.java new file mode 100644 index 0000000..c4dea32 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/GreenplumDataTypeEnum.java @@ -0,0 +1,73 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.type; + +import java.sql.Types; + +/** + * PostgreSQL的数据类型 + *
+ * 参考地址:https://www.yiibai.com/postgresql/postgresql-datatypes.html + * + * @author jrl + */ +public enum GreenplumDataTypeEnum { + + //~~~~~整型类型~~~~~~~~ + SMALLINT(0, Types.SMALLINT), + INT2(1, Types.SMALLINT), + INTEGER(2, Types.INTEGER), + INT4(3, Types.INTEGER), + BIGINT(4, Types.BIGINT), + INT8(5, Types.BIGINT), + DECIMAL(6, Types.DECIMAL), + NUMERIC(7, Types.NUMERIC), + REAL(8, Types.REAL),//equal float4 + FLOAT4(9, Types.FLOAT), + DOUBLE(10, Types.DOUBLE), + FLOAT8(11, Types.DOUBLE), + SMALLSERIAL(12, Types.SMALLINT), + SERIAL2(13, Types.SMALLINT), + SERIAL(14, Types.INTEGER), + SERIAL4(15, Types.INTEGER), + BIGSERIAL(16, Types.BIGINT), + SERIAL8(17, Types.BIGINT), + + //~~~~~日期和时间类型~~~~~~~~ + DATE(18, Types.DATE), + TIME(19, Types.TIME), + TIMESTAMP(20, Types.TIMESTAMP), + + //~~~~~字符串类型~~~~~~~~ + CHAR(21, Types.CHAR), + VARCHAR(22, Types.VARCHAR), + TEXT(23, Types.CLOB), + BYTEA(24, Types.BLOB), + + //~~~~~~~其他类型~~~~~~~~ + BOOL(25, Types.BOOLEAN); + + private int index; + private int jdbctype; + + GreenplumDataTypeEnum(int idx, int jdbcType) { + this.index = idx; + this.jdbctype = jdbcType; + } + + public int getIndex() { + return index; + } + + public int getJdbcType() { + return this.jdbctype; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/MySqlDataTypeEnum.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/MySqlDataTypeEnum.java new file mode 100644 index 0000000..8b3fb2f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/MySqlDataTypeEnum.java @@ -0,0 +1,69 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.type; + +import java.sql.Types; + +/** + * MySQL的数据类型 + *
+ * 参考地址:https://www.yiibai.com/mysql/data-types.html + * + * @author jrl + */ +public enum MySqlDataTypeEnum { + + //~~~~~整型类型~~~~~~~~ + TINYINT(0, Types.TINYINT), + SMALLINT(1, Types.SMALLINT), + MEDIUMINT(2, Types.INTEGER), + INTEGER(3, Types.INTEGER), + INT(4, Types.INTEGER), + BIGINT(5, Types.BIGINT), + FLOAT(6, Types.FLOAT), + DOUBLE(7, Types.DOUBLE), + DECIMAL(8, Types.DECIMAL), + + //~~~~~日期和时间类型~~~~~~~~ + DATE(9, Types.DATE), + TIME(10, Types.TIME), + YEAR(11, Types.DATE), + DATETIME(12, Types.TIMESTAMP), + TIMESTAMP(13, Types.TIMESTAMP), + + //~~~~~字符串类型~~~~~~~~ + CHAR(14, Types.CHAR), + VARCHAR(15, Types.VARCHAR), + TINYBLOB(16, Types.VARBINARY), + TINYTEXT(17, Types.CLOB), + BLOB(18, Types.VARBINARY), + TEXT(19, Types.CLOB), + MEDIUMBLOB(20, Types.LONGVARBINARY), + MEDIUMTEXT(21, Types.LONGVARCHAR), + LONGBLOB(22, Types.LONGVARBINARY), + LONGTEXT(23, Types.LONGVARCHAR); + + private int index; + private int jdbctype; + + MySqlDataTypeEnum(int idx, int jdbcType) { + this.index = idx; + this.jdbctype = jdbcType; + } + + public int getIndex() { + return index; + } + + public int getJdbcType() { + return this.jdbctype; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/OracleDataTypeEnum.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/OracleDataTypeEnum.java new file mode 100644 index 0000000..3282981 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/OracleDataTypeEnum.java @@ -0,0 +1,55 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.type; + +import java.sql.Types; + +/** + * Oracle的数据类型 + *
+ * 参考地址:http://blog.itpub.net/26736162/viewspace-2149685 + * + * @author jrl + */ +public enum OracleDataTypeEnum { + + //~~~~~整型类型~~~~~~~~ + NUMBER(1, Types.NUMERIC), + + //~~~~~日期和时间类型~~~~~~~~ + DATE(2, Types.DATE), + TIMESTAMP(3, Types.TIMESTAMP), + + //~~~~~字符串类型~~~~~~~~ + CHAR(4, Types.CHAR), + NCHAR(5, Types.CHAR), + VARCHAR(6, Types.VARCHAR), + VARCHAR2(7, Types.VARCHAR), + LONG(8, Types.LONGVARBINARY), + CLOB(9, Types.CLOB), + BLOB(10, Types.BLOB); + + private int index; + private int jdbctype; + + OracleDataTypeEnum(int idx, int jdbcType) { + this.index = idx; + this.jdbctype = jdbcType; + } + + public int getIndex() { + return index; + } + + public int getJdbcType() { + return this.jdbctype; + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/PostgresDataTypeEnum.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/PostgresDataTypeEnum.java new file mode 100644 index 0000000..67ad2cd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/ddl/type/PostgresDataTypeEnum.java @@ -0,0 +1,72 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.ddl.type; + +import java.sql.Types; + +/** + * PostgreSQL的数据类型 + *
+ * 参考地址:https://www.yiibai.com/postgresql/postgresql-datatypes.html + * + * @author jrl + */ +public enum PostgresDataTypeEnum { + + //~~~~~整型类型~~~~~~~~ + SMALLINT(0, Types.SMALLINT), + INT2(1, Types.SMALLINT), + INTEGER(2, Types.INTEGER), + INT4(3, Types.INTEGER), + BIGINT(4, Types.BIGINT), + INT8(5, Types.BIGINT), + DECIMAL(6, Types.DECIMAL), + NUMERIC(7, Types.NUMERIC), + REAL(8, Types.REAL),//equal float4 + FLOAT4(9, Types.FLOAT), + DOUBLE(10, Types.DOUBLE), + FLOAT8(11, Types.DOUBLE), + SMALLSERIAL(12, Types.SMALLINT), + SERIAL2(13, Types.SMALLINT), + SERIAL(14, Types.INTEGER), + SERIAL4(15, Types.INTEGER), + BIGSERIAL(16, Types.BIGINT), + SERIAL8(17, Types.BIGINT), + + //~~~~~日期和时间类型~~~~~~~~ + DATE(18, Types.DATE), + TIME(19, Types.TIME), + TIMESTAMP(20, Types.TIMESTAMP), + + //~~~~~字符串类型~~~~~~~~ + CHAR(21, Types.CHAR), + VARCHAR(22, Types.VARCHAR), + TEXT(23, Types.CLOB), + BYTEA(24, Types.BLOB), + + //~~~~~~~其他类型~~~~~~~~ + BOOL(25, Types.BOOLEAN); + + private int index; + private int jdbctype; + + PostgresDataTypeEnum(int idx, int jdbcType) { + this.index = idx; + this.jdbctype = jdbcType; + } + + public int getIndex() { + return index; + } + + public int getJdbcType() { + return this.jdbctype; + } +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/ISqlConvertService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/ISqlConvertService.java new file mode 100644 index 0000000..71b0890 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/ISqlConvertService.java @@ -0,0 +1,93 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.service; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; + +import java.util.Map; + +/** + * SQL语言共分为四大类:数据查询语言DQL,数据操纵语言DML,数据定义语言DDL,数据控制语言DCL + * + * @author jrl + * + */ +public interface ISqlConvertService { + + /** + * 标准DQL/DML类SQL的转换 + * + * @param sql 待转换的SQL语句 + * @return 转换为三种数据库Oracle/MySQL/PostgreSQL数据库类型后的SQL语句 + */ + public Map dmlSentence(String sql); + + /** + * 标准DQL/DML类SQL的转换 + * + * @param sql 待转换的SQL语句 + * @return 转换为指定数据库类型后的SQL语句 + */ + public String dmlSentence(String sql, ProductTypeEnum target); + + /** + * 指定源数据库到目的数据库的DQL/DML类SQL的转换 + * + * @param source 源数据库类型 + * @param target 目的数据库类型 + * @param sql 待转换的SQL语句 + * @return 转换为目的数据库类型后的SQL语句 + */ + public String dmlSentence(ProductTypeEnum source, ProductTypeEnum target, String sql); + + /** + * 标准DDL类SQL的转换 + * + * @param sql 待转换的SQL语句 + * @return 转换为三种数据库Oracle/MySQL/PostgreSQL数据库类型后的SQL语句 + */ + public Map ddlSentence(String sql); + + /** + * 标准DDL类SQL的转换 + * + * @param sql 待转换的SQL语句 + * @return 转换为指定数据库类型后的SQL语句 + */ + public String ddlSentence(String sql, ProductTypeEnum target); + + /** + * 指定源数据库到目的数据库的DDL类SQL的转换 + * + * @param source 源数据库类型 + * @param target 目的数据库类型 + * @param sql 待转换的SQL语句 + * @return 转换为目的数据库类型后的SQL语句 + */ + public String ddlSentence(ProductTypeEnum source, ProductTypeEnum target, String sql); + + /** + * 标准DCL类SQL的转换 + * + * @param sql 待转换的SQL语句 + * @return 转换为三种数据库Oracle/MySQL/PostgreSQL数据库类型后的SQL语句 + */ + public Map dclSentence(String sql); + + /** + * 指定源数据库到目的数据库的DCL类SQL的转换 + * + 
* @param source 源数据库类型 + * @param target 目的数据库类型 + * @param sql 待转换的SQL语句 + * @return 转换为目的数据库类型后的SQL语句 + */ + public String dclSentence(ProductTypeEnum source, ProductTypeEnum target, String sql); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/ISqlGeneratorService.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/ISqlGeneratorService.java new file mode 100644 index 0000000..f91bd6a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/ISqlGeneratorService.java @@ -0,0 +1,58 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.service; + +import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition; + +/** + * SQL生成接口类 + * + * @author jrl + * + */ +public interface ISqlGeneratorService { + + /** + * 生成建表语句 + * + * @param dbtype 数据库类型 + * @param t 表描述 + * @return 建表语句 + */ + public String createTable(String dbtype, TableDefinition t); + + /** + * 生成改表语句 + * + * @param dbtype 数据库类型 + * @param handle 操作类型 + * @param t 表描述 + * @return 建表语句 + */ + public String alterTable(String dbtype, String handle, TableDefinition t); + + /** + * 生成删表语句 + * + * @param dbtype 数据库类型 + * @param t 表描述 + * @return 建表语句 + */ + public String dropTable(String dbtype, TableDefinition t); + + /** + * 生成清表语句 + * + * @param dbtype 数据库类型 + * @param t 表描述 + * @return 建表语句 + */ + public String truncateTable(String dbtype, TableDefinition t); +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/impl/CalciteSqlConvertServiceImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/impl/CalciteSqlConvertServiceImpl.java new file mode 100644 index 0000000..3165e91 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/impl/CalciteSqlConvertServiceImpl.java @@ -0,0 +1,201 @@ +// Copyright tang. All rights reserved. 
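The two service interfaces above split the work: `ISqlGeneratorService` builds DDL from `TableDefinition` objects, while `ISqlConvertService` rewrites existing SQL text between dialects. The Calcite-based converter implementation follows immediately below, and the generator implementation after it. A combined usage sketch:

```java
import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum;
import srt.cloud.framework.dbswitch.sql.ddl.pojo.ColumnDefinition;
import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition;
import srt.cloud.framework.dbswitch.sql.service.ISqlConvertService;
import srt.cloud.framework.dbswitch.sql.service.ISqlGeneratorService;
import srt.cloud.framework.dbswitch.sql.service.impl.CalciteSqlConvertServiceImpl;
import srt.cloud.framework.dbswitch.sql.service.impl.MyselfSqlGeneratorServiceImpl;

public class SqlServiceDemo {
  public static void main(String[] args) {
    ColumnDefinition id = new ColumnDefinition();
    id.setColumnName("id");
    id.setColumnType("BIGINT");
    id.setLengthOrPrecision(20);
    id.setNullable(false);

    TableDefinition t = new TableDefinition();
    t.setSchemaName("ods");
    t.setTableName("t_user");
    t.addColumns(id);

    // DDL generation: dbtype is resolved via ProductTypeEnum.valueOf(dbType.toUpperCase()).
    ISqlGeneratorService generator = new MyselfSqlGeneratorServiceImpl();
    System.out.println(generator.createTable("mysql", t));

    // Dialect translation of existing SQL text via Calcite.
    ISqlConvertService converter = new CalciteSqlConvertServiceImpl();
    System.out.println(converter.dmlSentence(
        "SELECT id FROM t_user WHERE id = 1", ProductTypeEnum.POSTGRESQL));
  }
}
```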
+// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.service.impl; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.sql.calcite.TheMssqlSqlDialect; +import srt.cloud.framework.dbswitch.sql.calcite.TheMysqlSqlDialect; +import srt.cloud.framework.dbswitch.sql.calcite.TheOracleSqlDialect; +import srt.cloud.framework.dbswitch.sql.calcite.ThePostgresqlSqlDialect; +import srt.cloud.framework.dbswitch.sql.service.ISqlConvertService; +import org.apache.calcite.config.Lex; +import org.apache.calcite.sql.SqlDialect; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.parser.SqlParseException; +import org.apache.calcite.sql.parser.SqlParser; +import org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.HashMap; +import java.util.Map; + +/** + * SQL语法格式转换 + * + * @author jrl + * + * DDL—数据定义语言(CREATE,ALTER,DROP,DECLARE) + * DML—数据操纵语言(SELECT,DELETE,UPDATE,INSERT) + * DCL—数据控制语言(GRANT,REVOKE,COMMIT,ROLLBACK) + * + */ +public class CalciteSqlConvertServiceImpl implements ISqlConvertService { + + private static final Logger logger = LoggerFactory.getLogger(CalciteSqlConvertServiceImpl.class); + + private Lex getDatabaseLex(ProductTypeEnum type) { + switch (type) { + case MYSQL: + return Lex.MYSQL; + case ORACLE: + return Lex.ORACLE; + case SQLSERVER: + return Lex.SQL_SERVER; + case POSTGRESQL: + return Lex.MYSQL_ANSI; + default: + throw new RuntimeException(String.format("Unkown database type (%s)",type.name())); + } + } + + private SqlDialect getDatabaseDialect(ProductTypeEnum type) { + switch (type) { + case MYSQL: + return TheMysqlSqlDialect.DEFAULT; + case ORACLE: + return TheOracleSqlDialect.DEFAULT; + case SQLSERVER: + return TheMssqlSqlDialect.DEFAULT; + case POSTGRESQL: + return ThePostgresqlSqlDialect.DEFAULT; + case GREENPLUM: + return ThePostgresqlSqlDialect.DEFAULT; + default: + throw new RuntimeException(String.format("Unkown database type (%s)",type.name())); + } + } + + @Override + public Map dmlSentence(String sql){ + SqlParser.Config config = SqlParser.configBuilder() + //.setCaseSensitive(true) + .build(); + SqlParser parser = SqlParser.create(sql, config); + SqlNode sqlNode; + try { + sqlNode = parser.parseStmt(); + } catch (SqlParseException e) { + throw new RuntimeException(e); + } + + Mapret=new HashMap<>(); + ret.put("oracle",sqlNode.toSqlString(TheOracleSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + ret.put("postgresql",sqlNode.toSqlString(ThePostgresqlSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + ret.put("mysql",sqlNode.toSqlString(TheMysqlSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + ret.put("sqlserver",sqlNode.toSqlString(TheMssqlSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + return ret; + } + + @Override + public String dmlSentence(String sql, ProductTypeEnum target) { + logger.info("DML SQL: [{}] {} ", target.name(), sql); + SqlParser.Config config = SqlParser.configBuilder().build(); + SqlParser parser = SqlParser.create(sql, config); + SqlNode sqlNode; + try { + sqlNode = parser.parseStmt(); + } catch (SqlParseException e) { + throw new RuntimeException(e); + } + + 
return sqlNode.toSqlString(this.getDatabaseDialect(target)).toString().replace("\r\n", " "); + } + + @Override + public String dmlSentence(ProductTypeEnum source, ProductTypeEnum target, String sql) { + logger.info("DML SQL: [{}->{}] {} ", source.name(), target.name(), sql); + SqlParser.Config config = SqlParser.configBuilder().setLex(this.getDatabaseLex(source)).build(); + SqlParser parser = SqlParser.create(sql, config); + SqlNode sqlNode; + try { + sqlNode = parser.parseStmt(); + } catch (SqlParseException e) { + throw new RuntimeException(e); + } + + return sqlNode.toSqlString(this.getDatabaseDialect(target)).toString().replace("\r\n", " "); + } + + @Override + public Map ddlSentence(String sql){ + logger.info("DDL Sentence SQL:{}",sql); + SqlParser.Config config = SqlParser.configBuilder() + .setParserFactory(SqlDdlParserImpl.FACTORY) + // .setCaseSensitive(true) + .build(); + + SqlParser parser = SqlParser.create(sql, config); + SqlNode sqlNode; + try { + sqlNode = parser.parseStmt(); + } catch (SqlParseException e) { + logger.error("ERROR: Invalid SQL format:{} --->",sql,e); + throw new RuntimeException(e); + } + + Mapret=new HashMap(); + ret.put("oracle",sqlNode.toSqlString(TheOracleSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + ret.put("postgresql",sqlNode.toSqlString(ThePostgresqlSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + ret.put("mysql",sqlNode.toSqlString(TheMysqlSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + ret.put("sqlserver",sqlNode.toSqlString(TheMssqlSqlDialect.DEFAULT).toString().replace("\r\n", " ").replace("\n", " ")); + return ret; + } + + @Override + public String ddlSentence(String sql, ProductTypeEnum target) { + logger.info("DDL SQL: [{}] {} ", target.name(), sql); + SqlParser.Config config = SqlParser.configBuilder() + .setParserFactory(SqlDdlParserImpl.FACTORY) + // .setCaseSensitive(true) + .build(); + + SqlParser parser = SqlParser.create(sql, config); + SqlNode sqlNode; + try { + sqlNode = parser.parseStmt(); + } catch (SqlParseException e) { + throw new RuntimeException(e); + } + + return sqlNode.toSqlString(this.getDatabaseDialect(target)).toString().replace("\r\n", " "); + } + + @Override + public String ddlSentence(ProductTypeEnum source, ProductTypeEnum target, String sql) { + logger.info("DDL SQL: [{}->{}] {} ", source.name(), target.name(), sql); + SqlParser.Config config = SqlParser.configBuilder() + .setParserFactory(SqlDdlParserImpl.FACTORY) + .setLex(this.getDatabaseLex(source)) + // .setCaseSensitive(true) + .build(); + + SqlParser parser = SqlParser.create(sql, config); + SqlNode sqlNode; + try { + sqlNode = parser.parseStmt(); + } catch (SqlParseException e) { + throw new RuntimeException(e); + } + + return sqlNode.toSqlString(this.getDatabaseDialect(target)).toString().replace("\r\n", " "); + } + + @Override + public Map dclSentence(String sql){ + throw new RuntimeException("Unimplement!"); + } + + @Override + public String dclSentence(ProductTypeEnum source, ProductTypeEnum target, String sql) { + throw new RuntimeException("Unimplement!"); + } + +} diff --git a/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/impl/MyselfSqlGeneratorServiceImpl.java b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/impl/MyselfSqlGeneratorServiceImpl.java new file mode 100644 index 0000000..4bfc3fc --- /dev/null +++ 
b/srt-cloud-framework/srt-cloud-dbswitch/src/main/java/srt/cloud/framework/dbswitch/sql/service/impl/MyselfSqlGeneratorServiceImpl.java @@ -0,0 +1,91 @@ +// Copyright tang. All rights reserved. +// https://gitee.com/inrgihc/dbswitch +// +// Use of this source code is governed by a BSD-style license +// +// Author: tang (inrgihc@126.com) +// Date : 2020/1/2 +// Location: beijing , china +///////////////////////////////////////////////////////////// +package srt.cloud.framework.dbswitch.sql.service.impl; + +import srt.cloud.framework.dbswitch.common.type.ProductTypeEnum; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractDatabaseDialect; +import srt.cloud.framework.dbswitch.sql.ddl.AbstractSqlDdlOperator; +import srt.cloud.framework.dbswitch.sql.ddl.pojo.TableDefinition; +import srt.cloud.framework.dbswitch.sql.ddl.sql.DdlSqlAlterTable; +import srt.cloud.framework.dbswitch.sql.ddl.sql.DdlSqlCreateTable; +import srt.cloud.framework.dbswitch.sql.ddl.sql.DdlSqlDropTable; +import srt.cloud.framework.dbswitch.sql.ddl.sql.DdlSqlTruncateTable; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.GreenplumDialectImpl; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.MySqlDialectImpl; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.OracleDialectImpl; +import srt.cloud.framework.dbswitch.sql.ddl.sql.impl.PostgresDialectImpl; +import srt.cloud.framework.dbswitch.sql.service.ISqlGeneratorService; + +import java.util.HashMap; +import java.util.Map; + +/** + * 拼接生成SQL实现类 + * + * @author jrl + * + */ +public class MyselfSqlGeneratorServiceImpl implements ISqlGeneratorService { + + private static final Map DATABASE_MAPPER = new HashMap(); + + static { + DATABASE_MAPPER.put(ProductTypeEnum.MYSQL, MySqlDialectImpl.class.getName()); + DATABASE_MAPPER.put(ProductTypeEnum.ORACLE, OracleDialectImpl.class.getName()); + DATABASE_MAPPER.put(ProductTypeEnum.POSTGRESQL, PostgresDialectImpl.class.getName()); + DATABASE_MAPPER.put(ProductTypeEnum.GREENPLUM, GreenplumDialectImpl.class.getName()); + } + + public static AbstractDatabaseDialect getDatabaseInstance(ProductTypeEnum type) { + if (DATABASE_MAPPER.containsKey(type)) { + String className = DATABASE_MAPPER.get(type); + try { + return (AbstractDatabaseDialect) Class.forName(className).newInstance(); + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + throw new RuntimeException(String.format("Unkown database type (%s)", type.name())); + } + + @Override + public String createTable(String dbType, TableDefinition t) { + ProductTypeEnum type = ProductTypeEnum.valueOf(dbType.toUpperCase()); + AbstractDatabaseDialect dialect = getDatabaseInstance(type); + AbstractSqlDdlOperator operator = new DdlSqlCreateTable(t); + return operator.toSqlString(dialect); + } + + @Override + public String alterTable(String dbType, String handle, TableDefinition t){ + ProductTypeEnum type = ProductTypeEnum.valueOf(dbType.toUpperCase()); + AbstractDatabaseDialect dialect = getDatabaseInstance(type); + AbstractSqlDdlOperator operator = new DdlSqlAlterTable(t,handle); + return operator.toSqlString(dialect); + } + + @Override + public String dropTable(String dbType, TableDefinition t) { + ProductTypeEnum type = ProductTypeEnum.valueOf(dbType.toUpperCase()); + AbstractDatabaseDialect dialect = getDatabaseInstance(type); + AbstractSqlDdlOperator operator = new DdlSqlDropTable(t); + return operator.toSqlString(dialect); + } + + @Override + public String truncateTable(String dbType, TableDefinition t) { + ProductTypeEnum type = 
ProductTypeEnum.valueOf(dbType.toUpperCase()); + AbstractDatabaseDialect dialect = getDatabaseInstance(type); + AbstractSqlDdlOperator operator = new DdlSqlTruncateTable(t); + return operator.toSqlString(dialect); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/build/app/flink-app-1.14-2.0.0-jar-with-dependencies.jar b/srt-cloud-framework/srt-cloud-flink/build/app/flink-app-1.14-2.0.0-jar-with-dependencies.jar new file mode 100644 index 0000000..7d8adb7 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-flink/build/app/flink-app-1.14-2.0.0-jar-with-dependencies.jar differ diff --git a/srt-cloud-framework/srt-cloud-flink/build/app/flink-app.jar b/srt-cloud-framework/srt-cloud-flink/build/app/flink-app.jar new file mode 100644 index 0000000..9c3c393 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-flink/build/app/flink-app.jar differ diff --git a/srt-cloud-framework/srt-cloud-flink/build/extends/flink-catalog-mysql-1.14-2.0.0.jar b/srt-cloud-framework/srt-cloud-flink/build/extends/flink-catalog-mysql-1.14-2.0.0.jar new file mode 100644 index 0000000..2041572 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-flink/build/extends/flink-catalog-mysql-1.14-2.0.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-flink/build/extends/flink-client-1.14-2.0.0.jar b/srt-cloud-framework/srt-cloud-flink/build/extends/flink-client-1.14-2.0.0.jar new file mode 100644 index 0000000..f55b038 Binary files /dev/null and b/srt-cloud-framework/srt-cloud-flink/build/extends/flink-client-1.14-2.0.0.jar differ diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/pom.xml new file mode 100644 index 0000000..42124ad --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/pom.xml @@ -0,0 +1,23 @@ + + + + flink-alert + net.srt + 2.0.0 + + 4.0.0 + + flink-alert-base + + + + net.srt + flink-common + ${project.version} + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AbstractAlert.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AbstractAlert.java new file mode 100644 index 0000000..6bc1a85 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AbstractAlert.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.alert.base; + +/** + * AbstractAlert + * + * @author zrx + * @since 2022/2/23 19:22 + **/ +public abstract class AbstractAlert implements Alert { + private AlertConfig config; + + public AlertConfig getConfig() { + return config; + } + + @Override + public Alert setConfig(AlertConfig config) { + this.config = config; + return this; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/Alert.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/Alert.java new file mode 100644 index 0000000..ee9b85b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/Alert.java @@ -0,0 +1,78 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.base; + + +import net.srt.flink.common.assertion.Asserts; + +import java.util.Optional; +import java.util.ServiceLoader; + +/** + * Alert + * + * @author zrx + * @since 2022/2/23 19:05 + **/ +public interface Alert { + + static Optional get(AlertConfig config) { + Asserts.checkNotNull(config, "报警组件配置不能为空"); + ServiceLoader alerts = ServiceLoader.load(Alert.class); + for (Alert alert : alerts) { + if (alert.canHandle(config.getType())) { + return Optional.of(alert.setConfig(config)); + } + } + return Optional.empty(); + } + + static Alert build(AlertConfig config) { + String key = config.getName(); + if (AlertPool.exist(key)) { + return AlertPool.get(key); + } + Optional optionalDriver = Alert.get(config); + if (!optionalDriver.isPresent()) { + throw new AlertException("不支持报警组件类型【" + config.getType() + "】,请在 lib 下添加扩展依赖"); + } + Alert driver = optionalDriver.get(); + AlertPool.push(key, driver); + return driver; + } + + static Alert buildTest(AlertConfig config) { + Optional optionalDriver = Alert.get(config); + if (!optionalDriver.isPresent()) { + throw new AlertException("不支持报警组件类型【" + config.getType() + "】,请在 lib 下添加扩展依赖"); + } + return optionalDriver.get(); + } + + Alert setConfig(AlertConfig config); + + default boolean canHandle(String type) { + return Asserts.isEqualsIgnoreCase(getType(), type); + } + + String getType(); + + AlertResult send(String title, String content); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertConfig.java new file mode 100644 index 0000000..c638c18 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertConfig.java @@ -0,0 +1,71 @@ +/* + * + * Licensed to 
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertConfig.java
new file mode 100644
index 0000000..c638c18
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertConfig.java
@@ -0,0 +1,71 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.alert.base;
+
+import java.util.Map;
+
+/**
+ * AlertConfig
+ *
+ * @author zrx
+ * @since 2022/2/23 19:09
+ **/
+public class AlertConfig {
+    private String name;
+    private String type;
+    private Map<String, String> param;
+
+    public AlertConfig() {
+    }
+
+    public AlertConfig(String name, String type, Map<String, String> param) {
+        this.name = name;
+        this.type = type;
+        this.param = param;
+    }
+
+    public static AlertConfig build(String name, String type, Map<String, String> param) {
+        return new AlertConfig(name, type, param);
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public String getType() {
+        return type;
+    }
+
+    public void setType(String type) {
+        this.type = type;
+    }
+
+    public Map<String, String> getParam() {
+        return param;
+    }
+
+    public void setParam(Map<String, String> param) {
+        this.param = param;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertException.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertException.java
new file mode 100644
index 0000000..1b277c2
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertException.java
@@ -0,0 +1,37 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.alert.base; + +/** + * AlertException + * + * @author zrx + * @since 2022/2/23 19:19 + **/ +public class AlertException extends RuntimeException { + + public AlertException(String message, Throwable cause) { + super(message, cause); + } + + public AlertException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertMsg.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertMsg.java new file mode 100644 index 0000000..58d01e6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertMsg.java @@ -0,0 +1,105 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.base; + +import lombok.Data; + +/** + * AlertMsg + * + * @author zrx + * @since 2022/3/7 18:30 + **/ +@Data +public class AlertMsg { + + private String alertType; + private String alertTime; + private String jobID; + private String jobName; + private String jobType; + private String jobStatus; + private String jobStartTime; + private String jobEndTime; + private String jobDuration; + + /** + * Flink WebUI link url + */ + private String linkUrl; + + /** + * Flink job Root Exception url + */ + private String exceptionUrl; + + public AlertMsg() { + } + + public AlertMsg(String alertType, + String alertTime, + String jobID, + String jobName, + String jobType, + String jobStatus, + String jobStartTime, + String jobEndTime, + String jobDuration, + String linkUrl, + String exceptionUrl) { + + this.alertType = alertType; + this.alertTime = alertTime; + this.jobID = jobID; + this.jobName = jobName; + this.jobType = jobType; + this.jobStatus = jobStatus; + this.jobStartTime = jobStartTime; + this.jobEndTime = jobEndTime; + this.jobDuration = jobDuration; + this.linkUrl = linkUrl; + this.exceptionUrl = exceptionUrl; + } + + public String toString() { + return "[{ \"Alert Type\":\"" + alertType + "\"," + + + "\"Alert Time\":\"" + alertTime + "\"," + + + "\"Job ID\":\"" + jobID + "\"," + + + "\"Job Name\":\"" + jobName + "\"," + + + "\"Job Type\":\"" + jobType + "\"," + + + "\"Job Status\":\"" + jobStatus + "\"," + + + "\"Job StartTime\": \"" + jobStartTime + "\"," + + + "\"Job EndTime\": \"" + jobEndTime + "\"," + + + "\"Job Duration\": \"" + jobDuration + "\"," + + + "\"Exception Log\" :\"" + exceptionUrl + "\"" + + + "}]"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertPool.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertPool.java new file 
mode 100644
index 0000000..2481565
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertPool.java
@@ -0,0 +1,55 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.alert.base;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * AlertPool
+ *
+ * @author zrx
+ * @since 2022/2/23 19:16
+ **/
+public class AlertPool {
+
+    private static volatile Map<String, Alert> alertMap = new ConcurrentHashMap<>();
+
+    public static boolean exist(String key) {
+        return alertMap.containsKey(key);
+    }
+
+    public static Integer push(String key, Alert alert) {
+        alertMap.put(key, alert);
+        return alertMap.size();
+    }
+
+    public static Integer remove(String key) {
+        alertMap.remove(key);
+        return alertMap.size();
+    }
+
+    public static Alert get(String key) {
+        return alertMap.get(key);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertResult.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertResult.java
new file mode 100644
index 0000000..16b9bf6
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertResult.java
@@ -0,0 +1,59 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.alert.base;
+
+/**
+ * AlertResult
+ *
+ * @author zrx
+ * @since 2022/2/23 20:20
+ **/
+public class AlertResult {
+    private boolean success;
+    private String message;
+
+    public AlertResult() {
+    }
+
+    public AlertResult(boolean success, String message) {
+        this.success = success;
+        this.message = message;
+    }
+
+    public boolean getSuccess() {
+        return success;
+    }
+
+    public Integer getSuccessCode() {
+        return success ?
1 : 0; + } + + public void setSuccess(boolean success) { + this.success = success; + } + + public String getMessage() { + return message; + } + + public void setMessage(String message) { + this.message = message; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertSendResponse.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertSendResponse.java new file mode 100644 index 0000000..6eedfe3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/AlertSendResponse.java @@ -0,0 +1,50 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.base; + +/** + * AlertSendResponse + * + * @author zrx + * @since 2022/2/23 20:23 + **/ +public class AlertSendResponse { + private Integer errcode; + private String errmsg; + + public AlertSendResponse() { + } + + public Integer getErrcode() { + return errcode; + } + + public void setErrcode(Integer errcode) { + this.errcode = errcode; + } + + public String getErrmsg() { + return errmsg; + } + + public void setErrmsg(String errmsg) { + this.errmsg = errmsg; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/ShowType.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/ShowType.java new file mode 100644 index 0000000..d6df14e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-base/src/main/java/net/srt/flink/alert/base/ShowType.java @@ -0,0 +1,52 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.alert.base; + +/** + * ShowType + * + * @author zrx + * @since 2022/2/23 21:32 + **/ +public enum ShowType { + + MARKDOWN(0, "markdown"), // 通用markdown格式 + TEXT(1, "text"), //通用文本格式 + POST(2, "post"), // 飞书的富文本msgType + TABLE(0, "table"), // table格式 + ATTACHMENT(3, "attachment"), // 邮件相关 只发送附件 + TABLE_ATTACHMENT(4, "table attachment"); // 邮件相关 邮件表格+附件 + + private int code; + private String value; + + ShowType(int code, String value) { + this.code = code; + this.value = value; + } + + public int getCode() { + return code; + } + + public String getValue() { + return value; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/pom.xml new file mode 100644 index 0000000..10d589c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/pom.xml @@ -0,0 +1,31 @@ + + + + flink-alert + net.srt + 2.0.0 + + 4.0.0 + + flink-alert-dingtalk + + + + net.srt + flink-alert-base + ${project.version} + + + org.apache.httpcomponents + httpclient + + + junit + junit + provided + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkAlert.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkAlert.java new file mode 100644 index 0000000..13ba21b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkAlert.java @@ -0,0 +1,44 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.alert.dingtalk; + + +import net.srt.flink.alert.base.AbstractAlert; +import net.srt.flink.alert.base.AlertResult; + +/** + * DingTalkAlert + * + * @author zrx + * @since 2022/2/23 19:28 + **/ +public class DingTalkAlert extends AbstractAlert { + + @Override + public String getType() { + return DingTalkConstants.TYPE; + } + + @Override + public AlertResult send(String title, String content) { + DingTalkSender sender = new DingTalkSender(getConfig().getParam()); + return sender.send(title, content); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkConstants.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkConstants.java new file mode 100644 index 0000000..edcda4b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkConstants.java @@ -0,0 +1,61 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.dingtalk; + +/** + * DingTalkConstants + * + * @author zrx + * @since 2022/2/23 19:37 + **/ +public final class DingTalkConstants { + + static final String TYPE = "DingTalk"; + + static final String MARKDOWN_QUOTE = "- "; + + static final String MARKDOWN_ENTER = "\n"; + + static final String PROXY_ENABLE = "isEnableProxy"; + + static final String WEB_HOOK = "webhook"; + + static final String KEYWORD = "keyword"; + + static final String SECRET = "secret"; + + static final String MSG_TYPE = "msgtype"; + + static final String AT_MOBILES = "atMobiles"; + + static final String AT_USERIDS = "atUserIds"; + + static final String AT_ALL = "isAtAll"; + + static final String PROXY = "proxy"; + + static final String PORT = "port"; + + static final String USER = "user"; + + static final String PASSWORD = "password"; + + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkSender.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkSender.java new file mode 100644 index 0000000..d38f892 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/java/net/srt/flink/alert/dingtalk/DingTalkSender.java @@ -0,0 +1,290 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
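The DingTalkConstants above double as the expected keys in AlertConfig's parameter map, which the sender below reads in its constructor. For reference, a hedged config sketch wired for a keyword-and-secret-secured DingTalk robot (all values are placeholders):

```java
import java.util.HashMap;
import java.util.Map;

import net.srt.flink.alert.base.AlertConfig;

public class DingTalkConfigExample {
    public static AlertConfig dingTalkConfig() {
        Map<String, String> params = new HashMap<>();
        params.put("webhook", "https://oapi.dingtalk.com/robot/send?access_token=xxx"); // WEB_HOOK
        params.put("msgtype", "markdown");    // MSG_TYPE: "markdown" or "text"
        params.put("keyword", "SRT-ALERT");   // KEYWORD: prepended if the robot filters by keyword
        params.put("secret", "SEC000000");    // SECRET: enables the HMAC-signed webhook URL
        params.put("atMobiles", "13800000000,13900000000"); // AT_MOBILES: comma-separated
        params.put("isAtAll", "false");       // AT_ALL
        params.put("isEnableProxy", "false"); // PROXY_ENABLE; if "true", also set proxy/port/user/password
        return AlertConfig.build("dingTalkAlert", "DingTalk", params);
    }
}
```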
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.dingtalk; + +import net.srt.flink.alert.base.AlertResult; +import net.srt.flink.alert.base.AlertSendResponse; +import net.srt.flink.alert.base.ShowType; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.utils.JSONUtil; +import org.apache.commons.codec.binary.Base64; +import org.apache.commons.codec.binary.StringUtils; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.client.CredentialsProvider; +import org.apache.http.client.config.RequestConfig; +import org.apache.http.client.methods.CloseableHttpResponse; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.client.CloseableHttpClient; +import org.apache.http.impl.client.HttpClients; +import org.apache.http.util.EntityUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.crypto.Mac; +import javax.crypto.spec.SecretKeySpec; +import java.io.IOException; +import java.net.URLEncoder; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Iterator; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; + +/** + * DingTalkSender + * + * @author zrx + * @since 2022/2/23 19:34 + **/ +public class DingTalkSender { + + private static final Logger logger = LoggerFactory.getLogger(DingTalkSender.class); + private final String url; + private final String keyword; + private final String secret; + private String msgType; + private final String atMobiles; + private final String atUserIds; + private final Boolean atAll; + private final Boolean enableProxy; + private String proxy; + private Integer port; + private String user; + private String password; + + DingTalkSender(Map config) { + url = config.get(DingTalkConstants.WEB_HOOK); + keyword = config.get(DingTalkConstants.KEYWORD); + secret = config.get(DingTalkConstants.SECRET); + msgType = config.get(DingTalkConstants.MSG_TYPE); + atMobiles = config.get(DingTalkConstants.AT_MOBILES); + atUserIds = config.get(DingTalkConstants.AT_USERIDS); + atAll = Boolean.valueOf(config.get(DingTalkConstants.AT_ALL)); + enableProxy = Boolean.valueOf(config.get(DingTalkConstants.PROXY_ENABLE)); + if (Boolean.TRUE.equals(enableProxy)) { + port = Integer.parseInt(config.get(DingTalkConstants.PORT)); + proxy = config.get(DingTalkConstants.PROXY); + user = config.get(DingTalkConstants.USER); + password = config.get(DingTalkConstants.PASSWORD); + } + } + + public AlertResult send(String title, String content) { + AlertResult alertResult; + try { + String resp = sendMsg(title, content); + return checkMsgResult(resp); + } catch (Exception e) { + 
logger.info("send ding talk alert msg exception : {}", e.getMessage()); + alertResult = new AlertResult(); + alertResult.setSuccess(false); + alertResult.setMessage("send ding talk alert fail."); + } + return alertResult; + } + + private String sendMsg(String title, String content) throws IOException { + String msg = generateMsgJson(title, content); + String httpUrl = url; + if (Asserts.isNotNullString(secret)) { + httpUrl = generateSignedUrl(); + } + HttpPost httpPost = new HttpPost(httpUrl); + StringEntity stringEntity = new StringEntity(msg, StandardCharsets.UTF_8); + httpPost.setEntity(stringEntity); + httpPost.addHeader("Content-Type", "application/json; charset=utf-8"); + CloseableHttpClient httpClient; + if (Boolean.TRUE.equals(enableProxy)) { + HttpHost httpProxy = new HttpHost(proxy, port); + CredentialsProvider provider = new BasicCredentialsProvider(); + provider.setCredentials(new AuthScope(httpProxy), new UsernamePasswordCredentials(user, password)); + httpClient = HttpClients.custom().setDefaultCredentialsProvider(provider).build(); + RequestConfig rcf = RequestConfig.custom().setProxy(httpProxy).build(); + httpPost.setConfig(rcf); + } else { + httpClient = HttpClients.createDefault(); + } + try { + CloseableHttpResponse response = httpClient.execute(httpPost); + String resp; + try { + HttpEntity httpEntity = response.getEntity(); + resp = EntityUtils.toString(httpEntity, "UTF-8"); + EntityUtils.consume(httpEntity); + } finally { + response.close(); + } + return resp; + } finally { + httpClient.close(); + } + } + + private String generateMsgJson(String title, String content) { + if (Asserts.isNullString(msgType)) { + msgType = ShowType.TEXT.getValue(); + } + Map items = new HashMap<>(); + items.put("msgtype", msgType); + Map text = new HashMap<>(); + items.put(msgType, text); + if (ShowType.MARKDOWN.getValue().equals(msgType)) { + generateMarkdownMsg(title, content, text); + } else { + generateTextMsg(title, content, text); + } + setMsgAt(items); + return JSONUtil.toJsonString(items); + } + + private void generateTextMsg(String title, String content, Map text) { + StringBuilder builder = new StringBuilder(); + if (Asserts.isNotNullString(keyword)) { + builder.append(keyword); + builder.append("\n"); + } + String txt = genrateResultMsg(title, content, builder); + text.put("content", txt); + } + + private void generateMarkdownMsg(String title, String content, Map text) { + StringBuilder builder = new StringBuilder("# "); + if (Asserts.isNotNullString(keyword)) { + builder.append(" "); + builder.append(keyword); + } + builder.append("\n\n"); + if (Asserts.isNotNullString(atMobiles)) { + Arrays.stream(atMobiles.split(",")).forEach(value -> { + builder.append("@"); + builder.append(value); + builder.append(" "); + }); + } + if (Asserts.isNotNullString(atUserIds)) { + Arrays.stream(atUserIds.split(",")).forEach(value -> { + builder.append("@"); + builder.append(value); + builder.append(" "); + }); + } + builder.append("\n\n"); + String txt = genrateResultMsg(title, content, builder); + text.put("title", title); + text.put("text", txt); + } + + /** + * 公共生成 markdown 和 text 消息 + * + * @param title 标题 + * @param content 内容 + * @param builder 拼接字符串 + * @return + */ + private String genrateResultMsg(String title, String content, StringBuilder builder) { + List mapSendResultItemsList = JSONUtil.toList(content, LinkedHashMap.class); + if (null == mapSendResultItemsList || mapSendResultItemsList.isEmpty()) { + logger.error("itemsList is null"); + throw new RuntimeException("itemsList is 
null"); + } + for (LinkedHashMap mapItems : mapSendResultItemsList) { + Set> entries = mapItems.entrySet(); + Iterator> iterator = entries.iterator(); + StringBuilder t = new StringBuilder(String.format("`%s`%s", title, DingTalkConstants.MARKDOWN_ENTER)); + + while (iterator.hasNext()) { + + Map.Entry entry = iterator.next(); + t.append(DingTalkConstants.MARKDOWN_QUOTE); + t.append(entry.getKey()).append(":").append(entry.getValue()); + t.append(DingTalkConstants.MARKDOWN_ENTER); + } + builder.append(t); + } + byte[] byt = StringUtils.getBytesUtf8(builder.toString()); + String txt = StringUtils.newStringUtf8(byt); + return txt; + } + + private String generateSignedUrl() { + Long timestamp = System.currentTimeMillis(); + String stringToSign = timestamp + "\n" + secret; + String sign = ""; + try { + Mac mac = Mac.getInstance("HmacSHA256"); + mac.init(new SecretKeySpec(secret.getBytes("UTF-8"), "HmacSHA256")); + byte[] signData = mac.doFinal(stringToSign.getBytes("UTF-8")); + sign = URLEncoder.encode(new String(Base64.encodeBase64(signData)), "UTF-8"); + } catch (Exception e) { + logger.error("generate sign error, message:{}", e); + } + return url + "×tamp=" + timestamp + "&sign=" + sign; + } + + private void setMsgAt(Map items) { + Map at = new HashMap<>(); + String[] atMobileArray = Asserts.isNotNullString(atMobiles) ? atMobiles.split(",") : new String[0]; + String[] atUserArray = Asserts.isNotNullString(atUserIds) ? atUserIds.split(",") : new String[0]; + boolean isAtAll = Objects.isNull(atAll) ? false : atAll; + at.put("isAtAll", isAtAll); + if (atMobileArray.length > 0) { + at.put("atMobiles", atMobileArray); + } + if (atMobileArray.length > 0) { + at.put("atUserIds", atUserArray); + } + items.put("at", at); + } + + private AlertResult checkMsgResult(String result) { + AlertResult alertResult = new AlertResult(); + alertResult.setSuccess(false); + + if (null == result) { + alertResult.setMessage("send ding talk msg error"); + logger.info("send ding talk msg error,ding talk server resp is null"); + return alertResult; + } + AlertSendResponse response = JSONUtil.parseObject(result, AlertSendResponse.class); + if (null == response) { + alertResult.setMessage("send ding talk msg fail"); + logger.info("send ding talk msg error,resp error"); + return alertResult; + } + if (response.getErrcode() == 0) { + alertResult.setSuccess(true); + alertResult.setMessage("send ding talk msg success"); + return alertResult; + } + alertResult.setMessage(String.format("alert send ding talk msg error : %s", response.getErrmsg())); + logger.info("alert send ding talk msg error : {}", response.getErrmsg()); + return alertResult; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert new file mode 100644 index 0000000..e028ea8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-dingtalk/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert @@ -0,0 +1 @@ +net.srt.flink.alert.dingtalk.DingTalkAlert diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/pom.xml new file mode 100644 index 0000000..13feb9a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/pom.xml @@ -0,0 +1,46 @@ + + + + flink-alert + 
net.srt + 2.0.0 + + 4.0.0 + + flink-alert-email + + + + net.srt + flink-alert-base + ${project.version} + + + org.apache.poi + poi + + + org.apache.poi + poi-ooxml + + + com.sun.mail + javax.mail + + + org.apache.commons + commons-email + + + org.apache.commons + commons-lang3 + + + junit + junit + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailAlert.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailAlert.java new file mode 100644 index 0000000..5cdc8cc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailAlert.java @@ -0,0 +1,43 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.email; + + +import net.srt.flink.alert.base.AbstractAlert; +import net.srt.flink.alert.base.AlertResult; + +/** + * EmailAlert + * @author zhumingye + * @date: 2022/4/2 + **/ +public class EmailAlert extends AbstractAlert { + + @Override + public String getType() { + return EmailConstants.TYPE; + } + + @Override + public AlertResult send(String title, String content) { + MailSender mailSender = new MailSender(getConfig().getParam()); + return mailSender.send(title, content); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailConstants.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailConstants.java new file mode 100644 index 0000000..88eca54 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailConstants.java @@ -0,0 +1,121 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
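The email module follows the same SPI pattern; a META-INF/services entry further down registers EmailAlert, so a config whose type is "Email" resolves through the same ServiceLoader lookup. A minimal sketch, assuming a params map built from the EmailConstants keys defined in the next file:

```java
// Hypothetical wiring; "Email" matches EmailAlert#getType()
AlertConfig config = AlertConfig.build("emailAlert", "Email", params);
AlertResult result = Alert.build(config).send("Job Failed", "[{\"Job Name\":\"demo\"}]");
```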
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailConstants.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailConstants.java
new file mode 100644
index 0000000..88eca54
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/EmailConstants.java
@@ -0,0 +1,121 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.alert.email;
+
+/**
+ * EmailConstants email constants
+ * @author zhumingye
+ * @date: 2022/4/3
+ **/
+public final class EmailConstants {
+
+    public static final String TYPE = "Email";
+
+    public static final String PLUGIN_DEFAULT_EMAIL_RECEIVERS = "receiver.name";
+    public static final String NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERS = "receivers";
+
+    public static final String PLUGIN_DEFAULT_EMAIL_RECEIVERCCS = "receiverCcs";
+    public static final String NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERCCS = "receiverCcs";
+
+    public static final String NAME_MAIL_PROTOCOL = "mail.protocol";
+
+    public static final String MAIL_SMTP_HOST = "mail.smtp.host";
+    public static final String NAME_MAIL_SMTP_HOST = "serverHost";
+
+    public static final String MAIL_SMTP_PORT = "mail.smtp.port";
+    public static final String NAME_MAIL_SMTP_PORT = "serverPort";
+
+    public static final String MAIL_SENDER = "sender.name";
+    public static final String NAME_MAIL_SENDER = "sender";
+
+    public static final String MAIL_SMTP_AUTH = "mail.smtp.auth";
+    public static final String NAME_MAIL_SMTP_AUTH = "enableSmtpAuth";
+
+    public static final String MAIL_USER = "mail.smtp.user";
+    public static final String NAME_MAIL_USER = "User";
+
+    public static final String MAIL_PASSWD = "mail.smtp.passwd";
+    public static final String NAME_MAIL_PASSWD = "Password";
+
+    public static final String MAIL_SMTP_STARTTLS_ENABLE = "mail.smtp.starttls.enable";
+    public static final String NAME_MAIL_SMTP_STARTTLS_ENABLE = "starttlsEnable";
+
+    public static final String MAIL_SMTP_SSL_ENABLE = "mail.smtp.ssl.enable";
+    public static final String NAME_MAIL_SMTP_SSL_ENABLE = "sslEnable";
+
+    public static final String MAIL_SMTP_SSL_TRUST = "mail.smtp.ssl.trust";
+    public static final String NAME_MAIL_SMTP_SSL_TRUST = "smtpSslTrust";
+
+    public static final String XLS_FILE_PATH = "xls.file.path";
+
+    public static final String NAME_SHOW_TYPE = "msgtype";
+
+    public static final String MAIL_TRANSPORT_PROTOCOL = "mail.transport.protocol";
+
+    public static final String TEXT_HTML_CHARSET_UTF_8 = "text/html;charset=utf-8";
+
+    public static final int NUMBER_1000 = 1000;
+
+    public static final String TR = "<tr>";
+
+    public static final String TD = "<td>";
+
+    public static final String TR_END = "</tr>";
+
+    public static final String TH = "<th>";
+
+    public static final String TAB = "\t";
+
+    public static final String LINE = "\n";
+
+    public static final String LEFT = ">";
+
+    public static final String HTML_HEADER_PREFIX = "<html>"
+            + "<head>"
+            + "<title>"
+            + "srt-cloud"
+            + "</title>"
+            + "</head>"
+            + "<body style=\"margin:0;padding:0\">"
+            + "<table border=\"1px\" cellpadding=\"5px\" cellspacing=\"-10px\">";
+
+    public static final String TD_END = "</td>";
+
+    public static final String TH_COLSPAN = "<th colspan=2>";
+
+    public static final String TH_END = "</th>";
+
+    public static final String TABLE_BODY_HTML_TAIL = "</table></body></html>
"; + + public static final String UTF_8 = "UTF-8"; + + public static final String EXCEL_SUFFIX_XLSX = ".xlsx"; + + public static final String SINGLE_SLASH = "/"; + + private EmailConstants() { + throw new UnsupportedOperationException("This is a utility class and cannot be instantiated"); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/ExcelUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/ExcelUtils.java new file mode 100644 index 0000000..dd6af07 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/ExcelUtils.java @@ -0,0 +1,133 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.email; + +import net.srt.flink.alert.base.AlertException; +import net.srt.flink.common.utils.JSONUtil; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.poi.ss.usermodel.Cell; +import org.apache.poi.ss.usermodel.CellStyle; +import org.apache.poi.ss.usermodel.HorizontalAlignment; +import org.apache.poi.ss.usermodel.Row; +import org.apache.poi.ss.usermodel.Sheet; +import org.apache.poi.xssf.streaming.SXSSFWorkbook; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.io.FileOutputStream; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +/** + * ExcelUtils excel工具类 + * @author zhumingye + * @date: 2022/4/3 + **/ +public final class ExcelUtils { + private static final int XLSX_WINDOW_ROW = 10000; + private static final Logger logger = LoggerFactory.getLogger(ExcelUtils.class); + + private ExcelUtils() { + throw new UnsupportedOperationException("This is a utility class and cannot be instantiated"); + } + + /** + * generate excel file + * + * @param content the content + * @param title the title + * @param xlsFilePath the xls path + */ + public static void genExcelFile(String content, String title, String xlsFilePath) { + File file = new File(xlsFilePath); + if (!file.exists() && !file.mkdirs()) { + logger.error("Create xlsx directory error, path:{}", xlsFilePath); + throw new AlertException("Create xlsx directory error"); + } + + List itemsList = JSONUtil.toList(content, LinkedHashMap.class); + + if (CollectionUtils.isEmpty(itemsList)) { + logger.error("itemsList is null"); + throw new AlertException("itemsList is null"); + } + + LinkedHashMap headerMap = itemsList.get(0); + + List headerList = new ArrayList<>(); + + for (Map.Entry en : headerMap.entrySet()) { + headerList.add(en.getKey()); + } + try (SXSSFWorkbook wb = new SXSSFWorkbook(XLSX_WINDOW_ROW); + FileOutputStream fos = new 
FileOutputStream(String.format("%s/%s.xlsx", xlsFilePath, title))) { + // declare a workbook + // generate a table + Sheet sheet = wb.createSheet(); + Row row = sheet.createRow(0); + //set the height of the first line + row.setHeight((short) 500); + + //set Horizontal right + CellStyle cellStyle = wb.createCellStyle(); + cellStyle.setAlignment(HorizontalAlignment.RIGHT); + + //setting excel headers + for (int i = 0; i < headerList.size(); i++) { + Cell cell = row.createCell(i); + cell.setCellStyle(cellStyle); + cell.setCellValue(headerList.get(i)); + } + + //setting excel body + int rowIndex = 1; + for (LinkedHashMap itemsMap : itemsList) { + Object[] values = itemsMap.values().toArray(); + row = sheet.createRow(rowIndex); + //setting excel body height + row.setHeight((short) 500); + rowIndex++; + for (int j = 0; j < values.length; j++) { + Cell cell1 = row.createCell(j); + cell1.setCellStyle(cellStyle); + if (values[j] instanceof Number) { + cell1.setCellValue(Double.parseDouble(String.valueOf(values[j]))); + } else { + cell1.setCellValue(String.valueOf(values[j])); + } + } + } + + for (int i = 0; i < headerList.size(); i++) { + sheet.setColumnWidth(i, headerList.get(i).length() * 800); + } + + //setting file output + wb.write(fos); + wb.dispose(); + } catch (Exception e) { + throw new AlertException("generate excel error", e); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/MailSender.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/MailSender.java new file mode 100644 index 0000000..9d1ece0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/MailSender.java @@ -0,0 +1,413 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
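ExcelUtils.genExcelFile expects content to be a JSON array of flat objects, one array element per row, with the first element's keys used as the header. A hedged usage sketch (path and data are placeholders):

```java
public class ExcelUtilsExample {
    public static void main(String[] args) {
        // Each array element becomes one row; keys of the first element become the header
        String content = "[{\"Job Name\":\"demo\",\"Job Status\":\"FAILED\",\"Duration\":120},"
                + "{\"Job Name\":\"etl\",\"Job Status\":\"FINISHED\",\"Duration\":30}]";
        // Writes /tmp/xls/alert-report.xlsx (the directory is created if missing)
        net.srt.flink.alert.email.ExcelUtils.genExcelFile(content, "alert-report", "/tmp/xls");
    }
}
```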
+ * + */ + +package net.srt.flink.alert.email; + +import com.sun.mail.smtp.SMTPProvider; +import net.srt.flink.alert.base.AlertException; +import net.srt.flink.alert.base.AlertResult; +import net.srt.flink.alert.base.ShowType; +import net.srt.flink.alert.email.template.AlertTemplate; +import net.srt.flink.alert.email.template.DefaultHTMLTemplate; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.commons.lang3.StringUtils; +import org.apache.commons.mail.EmailException; +import org.apache.commons.mail.HtmlEmail; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.activation.CommandMap; +import javax.activation.MailcapCommandMap; +import javax.mail.Authenticator; +import javax.mail.Message; +import javax.mail.MessagingException; +import javax.mail.PasswordAuthentication; +import javax.mail.Session; +import javax.mail.Transport; +import javax.mail.internet.InternetAddress; +import javax.mail.internet.MimeBodyPart; +import javax.mail.internet.MimeMessage; +import javax.mail.internet.MimeMultipart; +import javax.mail.internet.MimeUtility; +import java.io.File; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +import static java.util.Objects.requireNonNull; + +/** + * MailSender 邮件发送器 + * @author zhumingye + * @date: 2022/4/3 + **/ +public final class MailSender { + private static final Logger logger = LoggerFactory.getLogger(MailSender.class); + + private final List receivers; + private final List receiverCcs; + private final String mailProtocol = "SMTP"; + private final String mailSmtpHost; + private final String mailSmtpPort; + private final String mailSenderNickName; + private final String enableSmtpAuth; + private final String mailUser; + private final String mailPasswd; + private final String mailUseStartTLS; + private final String mailUseSSL; + private final String sslTrust; + private final String showType; + private final AlertTemplate alertTemplate; + private final String mustNotNull = " must not be null"; + private String xlsFilePath; + + public MailSender(Map config) { + String receiversConfig = config.get(EmailConstants.NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERS); + if (receiversConfig == null || "".equals(receiversConfig)) { + throw new AlertException(EmailConstants.NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERS + mustNotNull); + } + + receivers = Arrays.asList(receiversConfig.split(",")); + + String receiverCcsConfig = config.get(EmailConstants.NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERCCS); + + receiverCcs = new ArrayList<>(); + if (receiverCcsConfig != null && !"".equals(receiverCcsConfig)) { + receiverCcs.addAll(Arrays.asList(receiverCcsConfig.split(","))); + } + + mailSmtpHost = config.get(EmailConstants.NAME_MAIL_SMTP_HOST); + requireNonNull(mailSmtpHost, EmailConstants.NAME_MAIL_SMTP_HOST + mustNotNull); + + mailSmtpPort = config.get(EmailConstants.NAME_MAIL_SMTP_PORT); + requireNonNull(mailSmtpPort, EmailConstants.NAME_MAIL_SMTP_PORT + mustNotNull); + + mailSenderNickName = config.get(EmailConstants.NAME_MAIL_SENDER); + requireNonNull(mailSenderNickName, EmailConstants.NAME_MAIL_SENDER + mustNotNull); + + enableSmtpAuth = config.get(EmailConstants.NAME_MAIL_SMTP_AUTH); + + mailUser = config.get(EmailConstants.NAME_MAIL_USER); + requireNonNull(mailUser, EmailConstants.NAME_MAIL_USER + mustNotNull); + + mailPasswd = config.get(EmailConstants.NAME_MAIL_PASSWD); + requireNonNull(mailPasswd, EmailConstants.NAME_MAIL_PASSWD + mustNotNull); + + 
mailUseStartTLS = config.get(EmailConstants.NAME_MAIL_SMTP_STARTTLS_ENABLE); + + mailUseSSL = config.get(EmailConstants.NAME_MAIL_SMTP_SSL_ENABLE); + + sslTrust = config.get(EmailConstants.NAME_MAIL_SMTP_SSL_TRUST); + + showType = config.get(EmailConstants.NAME_SHOW_TYPE); + requireNonNull(showType, EmailConstants.NAME_SHOW_TYPE + mustNotNull); + + xlsFilePath = config.get(EmailConstants.XLS_FILE_PATH); + if (StringUtils.isBlank(xlsFilePath)) { + xlsFilePath = "/tmp/xls"; + } + + alertTemplate = new DefaultHTMLTemplate(); + } + + /** + * send mail to receivers + * @param title title + * @param content content + */ + public AlertResult send(String title, String content) { + return send(this.receivers, this.receiverCcs, title, content); + } + + /** + * send mail + * + * @param receivers receivers + * @param receiverCcs receiverCcs + * @param title title + * @param content content + */ + public AlertResult send(List receivers, List receiverCcs, String title, String content) { + AlertResult alertResult = new AlertResult(); + alertResult.setSuccess(false); + + // if there is no receivers && no receiversCc, no need to process + if (CollectionUtils.isEmpty(receivers) && CollectionUtils.isEmpty(receiverCcs)) { + return alertResult; + } + + receivers.removeIf(StringUtils::isEmpty); + Thread.currentThread().setContextClassLoader(getClass().getClassLoader()); + + if (showType.equals(ShowType.TABLE.getValue()) || showType.equals(ShowType.TEXT.getValue())) { + // send email + HtmlEmail email = new HtmlEmail(); + + try { + Session session = getSession(); + email.setMailSession(session); + email.setFrom(mailUser, mailSenderNickName); + email.setCharset(EmailConstants.UTF_8); + if (CollectionUtils.isNotEmpty(receivers)) { + // receivers mail + for (String receiver : receivers) { + email.addTo(receiver); + } + } + + if (CollectionUtils.isNotEmpty(receiverCcs)) { + //cc + for (String receiverCc : receiverCcs) { + email.addCc(receiverCc); + } + } + // sender mail + return getStringObjectMap(title, content, alertResult, email); + } catch (Exception e) { + handleException(alertResult, e); + } + } else if (showType.equals(ShowType.ATTACHMENT.getValue()) || showType.equals(ShowType.TABLE_ATTACHMENT.getValue())) { + try { + + String partContent = (showType.equals(ShowType.ATTACHMENT.getValue()) + ? 
"Please see the attachment " + title + EmailConstants.EXCEL_SUFFIX_XLSX + : htmlTable(title, content, false)); + + attachment(title, content, partContent); + + alertResult.setSuccess(true); + return alertResult; + } catch (Exception e) { + handleException(alertResult, e); + return alertResult; + } + } + return alertResult; + + } + + /** + * html table content + * + * @param content the content + * @param showAll if show the whole content + * @return the html table form + */ + private String htmlTable(String title, String content, boolean showAll) { + return alertTemplate.getMessageFromTemplate(title, content, ShowType.TABLE, showAll); + } + + /** + * html table content + * + * @param content the content + * @return the html table form + */ + private String htmlTable(String title, String content) { + return htmlTable(title,content, true); + } + + /** + * html text content + * + * @param content the content + * @return text in html form + */ + private String htmlText(String title, String content) { + return alertTemplate.getMessageFromTemplate(title, content, ShowType.TEXT); + } + + /** + * send mail as Excel attachment + */ + private void attachment(String title, String content, String partContent) throws Exception { + MimeMessage msg = getMimeMessage(); + + attachContent(title, content, partContent, msg); + } + + /** + * get MimeMessage + */ + private MimeMessage getMimeMessage() throws MessagingException { + + // 1. The first step in creating mail: creating session + Session session = getSession(); + // Setting debug mode, can be turned off + session.setDebug(false); + + // 2. creating mail: Creating a MimeMessage + MimeMessage msg = new MimeMessage(session); + // 3. set sender + msg.setFrom(new InternetAddress(mailUser)); + // 4. set receivers + for (String receiver : receivers) { + msg.addRecipients(Message.RecipientType.TO, InternetAddress.parse(receiver)); + } + return msg; + } + + /** + * get session + * + * @return the new Session + */ + private Session getSession() { + // support multilple email format + MailcapCommandMap mc = (MailcapCommandMap) CommandMap.getDefaultCommandMap(); + mc.addMailcap("text/html;; x-java-content-handler=com.sun.mail.handlers.text_html"); + mc.addMailcap("text/xml;; x-java-content-handler=com.sun.mail.handlers.text_xml"); + mc.addMailcap("text/plain;; x-java-content-handler=com.sun.mail.handlers.text_plain"); + mc.addMailcap("multipart/*;; x-java-content-handler=com.sun.mail.handlers.multipart_mixed"); + mc.addMailcap("message/rfc822;; x-java-content-handler=com.sun.mail.handlers.message_rfc822"); + CommandMap.setDefaultCommandMap(mc); + + Properties props = new Properties(); + props.setProperty(EmailConstants.MAIL_SMTP_HOST, mailSmtpHost); + props.setProperty(EmailConstants.MAIL_SMTP_PORT, mailSmtpPort); + + if (StringUtils.isNotEmpty(enableSmtpAuth)) { + props.setProperty(EmailConstants.MAIL_SMTP_AUTH, enableSmtpAuth); + } + if (StringUtils.isNotEmpty(mailProtocol)) { + props.setProperty(EmailConstants.MAIL_TRANSPORT_PROTOCOL, mailProtocol); + } + + if (StringUtils.isNotEmpty(mailUseSSL)) { + props.setProperty(EmailConstants.MAIL_SMTP_SSL_ENABLE, mailUseSSL); + } + + if (StringUtils.isNotEmpty(mailUseStartTLS)) { + props.setProperty(EmailConstants.MAIL_SMTP_STARTTLS_ENABLE, mailUseStartTLS); + } + + if (StringUtils.isNotEmpty(sslTrust)) { + props.setProperty(EmailConstants.MAIL_SMTP_SSL_TRUST, sslTrust); + } + + Authenticator auth = new Authenticator() { + @Override + protected PasswordAuthentication getPasswordAuthentication() { + // mail username and 
password + return new PasswordAuthentication(mailUser, mailPasswd); + } + }; + + Session session = Session.getInstance(props, auth); + session.addProvider(new SMTPProvider()); + return session; + } + + /** + * attach content + */ + private void attachContent(String title, String content, String partContent, MimeMessage msg) throws MessagingException, IOException { + /* + * set receiverCc + */ + if (CollectionUtils.isNotEmpty(receiverCcs)) { + for (String receiverCc : receiverCcs) { + msg.addRecipients(Message.RecipientType.CC, InternetAddress.parse(receiverCc)); + } + } + + // set subject + msg.setSubject(title); + MimeMultipart partList = new MimeMultipart(); + // set signature + MimeBodyPart part1 = new MimeBodyPart(); + part1.setContent(partContent, EmailConstants.TEXT_HTML_CHARSET_UTF_8); + // set attach file + MimeBodyPart part2 = new MimeBodyPart(); + File file = new File(xlsFilePath + EmailConstants.SINGLE_SLASH + title + EmailConstants.EXCEL_SUFFIX_XLSX); + if (!file.getParentFile().exists()) { + file.getParentFile().mkdirs(); + } + // make excel file + + ExcelUtils.genExcelFile(content, title, xlsFilePath); + + part2.attachFile(file); + part2.setFileName(MimeUtility.encodeText(title + EmailConstants.EXCEL_SUFFIX_XLSX, EmailConstants.UTF_8, "B")); + // add components to collection + partList.addBodyPart(part1); + partList.addBodyPart(part2); + msg.setContent(partList); + // 5. send Transport + Transport.send(msg); + // 6. delete saved file + deleteFile(file); + } + + /** + * the string object map + */ + private AlertResult getStringObjectMap(String title, String content, AlertResult alertResult, HtmlEmail email) throws EmailException { + + /* + * the subject of the message to be sent + */ + email.setSubject(title); + /* + * to send information, you can use HTML tags in mail content because of the use of HtmlEmail + */ + if (showType.equals(ShowType.TABLE.getValue())) { + email.setMsg(htmlTable(title, content)); + } else if (showType.equals(ShowType.TEXT.getValue())) { + email.setMsg(htmlText(title, content)); + } + + // send + email.setDebug(true); + email.send(); + + alertResult.setSuccess(true); + + return alertResult; + } + + /** + * file delete + * + * @param file the file to delete + */ + public void deleteFile(File file) { + if (file.exists()) { + if (file.delete()) { + logger.info("delete success: {}", file.getAbsolutePath()); + } else { + logger.info("delete fail: {}", file.getAbsolutePath()); + } + } else { + logger.info("file not exists: {}", file.getAbsolutePath()); + } + } + + /** + * handle exception + */ + private void handleException(AlertResult alertResult, Exception e) { + logger.error("Send email to {} failed", receivers, e); + alertResult.setMessage("Send email to {" + String.join(",", receivers) + "} failed," + e.toString()); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/template/AlertTemplate.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/template/AlertTemplate.java new file mode 100644 index 0000000..f3e8408 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/template/AlertTemplate.java @@ -0,0 +1,43 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
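MailSender reads its settings from the same parameter map, keyed by the NAME_* constants in EmailConstants; the msgtype value selects between inline HTML (table/text) and Excel-attachment delivery. A sketch of an SMTP configuration for the Email alert type (host, credentials and recipients are placeholders):

```java
import java.util.HashMap;
import java.util.Map;

import net.srt.flink.alert.base.AlertConfig;

public class EmailConfigExample {
    public static AlertConfig emailConfig() {
        Map<String, String> params = new HashMap<>();
        params.put("receivers", "ops@example.com,dev@example.com"); // NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERS
        params.put("receiverCcs", "lead@example.com");              // NAME_PLUGIN_DEFAULT_EMAIL_RECEIVERCCS
        params.put("serverHost", "smtp.example.com");               // NAME_MAIL_SMTP_HOST
        params.put("serverPort", "465");                            // NAME_MAIL_SMTP_PORT
        params.put("sender", "srt-cloud");                          // NAME_MAIL_SENDER (display name)
        params.put("enableSmtpAuth", "true");                       // NAME_MAIL_SMTP_AUTH
        params.put("User", "alert@example.com");                    // NAME_MAIL_USER
        params.put("Password", "app-password");                     // NAME_MAIL_PASSWD
        params.put("sslEnable", "true");                            // NAME_MAIL_SMTP_SSL_ENABLE
        params.put("smtpSslTrust", "smtp.example.com");             // NAME_MAIL_SMTP_SSL_TRUST
        params.put("msgtype", "table");                             // NAME_SHOW_TYPE: table/text/attachment/table attachment
        return AlertConfig.build("emailAlert", "Email", params);
    }
}
```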
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.email.template; + + +import net.srt.flink.alert.base.ShowType; + +/** + * @Author: zhumingye + * @date: 2022/4/3 + * @Description: 邮件告警模板接口 + */ +public interface AlertTemplate { + + String getMessageFromTemplate(String title, String content, ShowType showType, boolean showAll); + + /** + * default showAll is true + * @param content alert message content + * @param showType show type + * @return a message from a specified alert template + */ + default String getMessageFromTemplate(String title, String content, ShowType showType) { + return getMessageFromTemplate(title,content, showType, true); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/template/DefaultHTMLTemplate.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/template/DefaultHTMLTemplate.java new file mode 100644 index 0000000..836f403 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/java/net/srt/flink/alert/email/template/DefaultHTMLTemplate.java @@ -0,0 +1,154 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.alert.email.template;
+
+import net.srt.flink.alert.base.ShowType;
+import net.srt.flink.alert.email.EmailConstants;
+import net.srt.flink.common.utils.JSONUtil;
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * @Author: zhumingye
+ * @date: 2022/4/3
+ * @Description: default HTML template for email alerts
+ */
+public class DefaultHTMLTemplate implements AlertTemplate {
+
+    public static final Logger logger = LoggerFactory.getLogger(DefaultHTMLTemplate.class);
+
+    @Override
+    public String getMessageFromTemplate(String title, String content, ShowType showType, boolean showAll) {
+        switch (showType) {
+            case TABLE:
+                return getTableTypeMessage(title, content, showAll);
+            case TEXT:
+                return getTextTypeMessage(title, content);
+            default:
+                throw new IllegalArgumentException(String.format("unsupported showType: %s in DefaultHTMLTemplate", showType));
+        }
+    }
+
+    /**
+     * get alert message which type is TABLE
+     *
+     * @param content message content
+     * @param showAll whether to show all rows
+     * @return alert message
+     */
+    private String getTableTypeMessage(String title, String content, boolean showAll) {
+        if (StringUtils.isNotEmpty(content)) {
+            List<LinkedHashMap> mapItemsList = JSONUtil.toList(content, LinkedHashMap.class);
+            if (!showAll && mapItemsList.size() > EmailConstants.NUMBER_1000) {
+                mapItemsList = mapItemsList.subList(0, EmailConstants.NUMBER_1000);
+            }
+
+            StringBuilder contents = new StringBuilder(200);
+            boolean flag = true;
+            for (LinkedHashMap mapItems : mapItemsList) {
+                Set<Map.Entry<String, Object>> entries = mapItems.entrySet();
+                Iterator<Map.Entry<String, Object>> iterator = entries.iterator();
+                StringBuilder t = new StringBuilder(EmailConstants.TR);
+                StringBuilder cs = new StringBuilder(EmailConstants.TR);
+                while (iterator.hasNext()) {
+                    Map.Entry<String, Object> entry = iterator.next();
+                    t.append(EmailConstants.TH).append(entry.getKey()).append(EmailConstants.TH_END);
+                    cs.append(EmailConstants.TD).append(entry.getValue()).append(EmailConstants.TD_END);
+                }
+                t.append(EmailConstants.TR_END);
+                cs.append(EmailConstants.TR_END);
+                if (flag) {
+                    // the keys of the first row become the table header
+                    title = t.toString();
+                }
+                flag = false;
+                contents.append(cs);
+            }
+            return getMessageFromHtmlTemplate(title, contents.toString());
+        }
+        return content;
+    }
+
+    /**
+     * get alert message which type is TEXT
+     *
+     * @param content message content
+     * @return alert message
+     */
+    private String getTextTypeMessage(String title, String content) {
+        StringBuilder stringBuilder = new StringBuilder(100);
+        if (StringUtils.isNotEmpty(content)) {
+            List<LinkedHashMap> linkedHashMaps = JSONUtil.toList(content, LinkedHashMap.class);
+            if (linkedHashMaps.size() > EmailConstants.NUMBER_1000) {
+                linkedHashMaps = linkedHashMaps.subList(0, EmailConstants.NUMBER_1000);
+            }
+            stringBuilder.append(EmailConstants.TR).append(EmailConstants.TH_COLSPAN).append(title).append(EmailConstants.TH_END).append(EmailConstants.TR_END);
+            for (LinkedHashMap mapItems : linkedHashMaps) {
+                Set<Map.Entry<String, Object>> entries = mapItems.entrySet();
+                Iterator<Map.Entry<String, Object>> iterator = entries.iterator();
+                while (iterator.hasNext()) {
+                    Map.Entry<String, Object> entry = iterator.next();
+                    stringBuilder.append(EmailConstants.TR);
+                    stringBuilder.append(EmailConstants.TD).append(entry.getKey()).append(EmailConstants.TD_END);
+                    stringBuilder.append(EmailConstants.TD).append(entry.getValue()).append(EmailConstants.TD_END);
+                    stringBuilder.append(EmailConstants.TR_END);
+                }
+            }
+            return getMessageFromHtmlTemplate(title, stringBuilder.toString());
+        }
+        return stringBuilder.toString();
+    }
+
+    /**
+     * get alert message from a html template
+     *
+     * @param title   message title
+     * @param content message content
+     * @return alert message which uses the html template
+     */
+    private String getMessageFromHtmlTemplate(String title, String content) {
+        requireNonNull(content, "content must not be null");
+        String htmlTableThead = StringUtils.isEmpty(title) ? "" : String.format("<thead>%s</thead>%n", title);
+        return EmailConstants.HTML_HEADER_PREFIX + htmlTableThead + content + EmailConstants.TABLE_BODY_HTML_TAIL;
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert
new file mode 100644
index 0000000..d72db2f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-email/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert
@@ -0,0 +1 @@
+net.srt.flink.alert.email.EmailAlert
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/pom.xml
new file mode 100644
index 0000000..4f44dba
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/pom.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>flink-alert</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-alert-feishu</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-alert-base</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.httpcomponents</groupId>
+            <artifactId>httpclient</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.google.guava</groupId>
+            <artifactId>guava</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-annotations</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuAlert.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuAlert.java
new file mode 100644
index 0000000..8ffb8cc
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuAlert.java
@@ -0,0 +1,43 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
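The META-INF/services entry above registers EmailAlert under the net.srt.flink.alert.base.Alert SPI, and the feishu and wechat modules below do the same. Assuming Alert is the base interface from flink-alert-base (its source is not part of this diff), implementations can be discovered with the standard ServiceLoader:

```java
import java.util.ServiceLoader;

import net.srt.flink.alert.base.Alert;

public class AlertDiscoverySketch {
    public static void main(String[] args) {
        // picks up every implementation registered under META-INF/services/net.srt.flink.alert.base.Alert
        for (Alert alert : ServiceLoader.load(Alert.class)) {
            System.out.println("found alert channel: " + alert.getType());
        }
    }
}
```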
+ * + */ + +package net.srt.flink.alert.feishu; + + +import net.srt.flink.alert.base.AbstractAlert; +import net.srt.flink.alert.base.AlertResult; + +/** + * FeiShuAlert + * @author zhumingye + * @date: 2022/4/2 + **/ +public class FeiShuAlert extends AbstractAlert { + + @Override + public String getType() { + return FeiShuConstants.TYPE; + } + + @Override + public AlertResult send(String title, String content) { + FeiShuSender sender = new FeiShuSender(getConfig().getParam()); + return sender.send(title,content); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuConstants.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuConstants.java new file mode 100644 index 0000000..dceac84 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuConstants.java @@ -0,0 +1,49 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
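FeiShuAlert above only hands the configured param map to a FeiShuSender; on the wire, that sender POSTs JSON to a FeiShu bot webhook. A minimal sketch of the same HttpClient 4.x call; the webhook URL and payload are placeholders, not real credentials:

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class FeiShuWebhookSketch {
    public static void main(String[] args) throws Exception {
        String webhook = "https://open.feishu.cn/open-apis/bot/v2/hook/xxx"; // placeholder bot token
        String payload = "{\"msg_type\":\"text\",\"content\":{\"text\":\"flink job failed\"}}";
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(webhook);
            post.setEntity(new StringEntity(payload, ContentType.APPLICATION_JSON));
            try (CloseableHttpResponse resp = client.execute(post)) {
                // FeiShu answers with a JSON body carrying StatusCode/StatusMessage
                System.out.println(EntityUtils.toString(resp.getEntity(), "utf-8"));
            }
        }
    }
}
```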
+ * + */ + +package net.srt.flink.alert.feishu; + +/** + * @Author: zhumingye + * @date: 2022/4/2 + * @Description: 参数常量 + */ +public final class FeiShuConstants { + static final String TYPE = "FeiShu"; + static final String MARKDOWN_QUOTE = "> "; + static final String MARKDOWN_ENTER = "/n"; + static final String WEB_HOOK = "webhook"; + static final String KEY_WORD = "keyword"; + static final String SECRET = "secret"; + static final String FEI_SHU_PROXY_ENABLE = "isEnableProxy"; + static final String FEI_SHU_PROXY = "proxy"; + static final String FEI_SHU_PORT = "port"; + static final String FEI_SHU_USER = "user"; + static final String FEI_SHU_PASSWORD = "password"; + static final String MSG_TYPE = "msgtype"; + static final String AT_ALL = "isAtAll"; + static final String AT_USERS = "users"; + static final String FEI_SHU_TEXT_TEMPLATE = "{\"msg_type\":\"{msg_type}\",\"content\":{\"{msg_type}\":\"{msg} {users} \" }}"; + static final String FEI_SHU_POST_TEMPLATE = "{\"msg_type\":\"{msg_type}\",\"content\":{\"{msg_type}\":{\"zh_cn\":{\"title\":\"{keyword}\"," + + "\"content\":[[{\"tag\":\"text\",\"un_escape\": true,\"text\":\"{msg}\"},{users}]]}}}}"; + + private FeiShuConstants() { + throw new UnsupportedOperationException("This is a utility class and cannot be instantiated"); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuSender.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuSender.java new file mode 100644 index 0000000..69a5286 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/FeiShuSender.java @@ -0,0 +1,298 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
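Note that MARKDOWN_ENTER is deliberately the two-character sequence "/n": FeiShuSender joins lines with it and only rewrites it to the JSON escape \n at the very end, so no raw newline ever lands inside the JSON string. A self-contained demo of the placeholder substitution used on FEI_SHU_TEXT_TEMPLATE (the template string is copied here so the snippet runs on its own):

```java
public class FeiShuTemplateSketch {

    // copy of FeiShuConstants.FEI_SHU_TEXT_TEMPLATE for a standalone demo
    static final String TEXT_TEMPLATE =
            "{\"msg_type\":\"{msg_type}\",\"content\":{\"{msg_type}\":\"{msg} {users} \" }}";

    public static void main(String[] args) {
        String json = TEXT_TEMPLATE
                .replace("{msg_type}", "text")
                .replace("{msg}", "`job failed` /n> status:FAILED/n")
                .replace("{users}", "<at user_id=\"all\"></at>")
                // same trick as toJsonSendMsg: "/n" pseudo-newlines become the JSON escape \n
                .replaceAll("/n", "\\\\n");
        System.out.println(json);
    }
}
```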
+ * + */ + +package net.srt.flink.alert.feishu; + +import com.fasterxml.jackson.annotation.JsonProperty; +import net.srt.flink.alert.base.AlertResult; +import net.srt.flink.alert.base.ShowType; +import net.srt.flink.common.utils.JSONUtil; +import org.apache.commons.codec.binary.StringUtils; +import org.apache.http.HttpEntity; +import org.apache.http.HttpStatus; +import org.apache.http.client.methods.CloseableHttpResponse; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.impl.client.CloseableHttpClient; +import org.apache.http.util.EntityUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.util.Iterator; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; + +/** + * @Author: zhumingye + * @date: 2022/4/2 + * @Description: 飞书消息发送器 + */ +public final class FeiShuSender { + private static final Logger logger = LoggerFactory.getLogger(FeiShuSender.class); + + static final String FEI_SHU_PROXY_ENABLE_REGX = "{isEnableProxy}"; + static final String FEI_SHU_PROXY_REGX = "{proxy}"; + static final String FEI_SHU_PORT_REGX = "{port}"; + static final String FEI_SHU_USER_REGX = "{users}"; + static final String FEI_SHU_PASSWORD_REGX = "{password}"; + static final String MSG_RESULT_REGX = "{msg}"; + static final String MSG_TYPE_REGX = "{msg_type}"; + static final String FEI_SHU_MSG_TYPE_REGX = "{keyword}"; + + private final String url; + private final String msgType; + private final Boolean enableProxy; + private final String secret; + private final String keyword; + private String proxy; + private Integer port; + private String user; + private String password; + private final Boolean atAll; + private String atUserIds; + + FeiShuSender(Map config) { + url = config.get(FeiShuConstants.WEB_HOOK); + msgType = config.get(FeiShuConstants.MSG_TYPE); + keyword = config.get(FeiShuConstants.KEY_WORD) != null ? config.get(FeiShuConstants.KEY_WORD).replace("\r\n", "") : ""; + enableProxy = Boolean.valueOf(config.get(FeiShuConstants.FEI_SHU_PROXY_ENABLE)); + secret = config.get(FeiShuConstants.SECRET); + if (Boolean.TRUE.equals(enableProxy)) { + proxy = config.get(FeiShuConstants.FEI_SHU_PROXY); + port = Integer.parseInt(config.get(FeiShuConstants.FEI_SHU_PORT)); + user = config.get(FeiShuConstants.FEI_SHU_USER); + password = config.get(FeiShuConstants.FEI_SHU_PASSWORD); + } + atAll = Boolean.valueOf(config.get(FeiShuConstants.AT_ALL)); + if (Boolean.FALSE.equals(atAll)) { + atUserIds = config.get(FeiShuConstants.AT_USERS); + } + } + + private String toJsonSendMsg(String title, String content) { + String jsonResult = ""; + byte[] byt = StringUtils.getBytesUtf8(formatContent(title,content)); + String contentResult = StringUtils.newStringUtf8(byt); + String userIdsToText = mkUserIds(org.apache.commons.lang3.StringUtils.isBlank(atUserIds) ? 
"all" : atUserIds); + if (StringUtils.equals(ShowType.TEXT.getValue(), msgType)) { + jsonResult = FeiShuConstants.FEI_SHU_TEXT_TEMPLATE.replace(MSG_TYPE_REGX, msgType) + .replace(MSG_RESULT_REGX, contentResult).replace(FEI_SHU_USER_REGX, userIdsToText).replaceAll("/n", "\\\\n"); + } else { + jsonResult = FeiShuConstants.FEI_SHU_POST_TEMPLATE.replace(MSG_TYPE_REGX, msgType) + .replace(FEI_SHU_MSG_TYPE_REGX, keyword).replace(MSG_RESULT_REGX, contentResult) + .replace(FEI_SHU_USER_REGX, userIdsToText).replaceAll("/n", "\\\\n"); + } + return jsonResult; + } + + private String mkUserIds(String users) { + String userIdsToText = ""; + String[] userList = users.split(","); + if (msgType.equals(ShowType.TEXT.getValue())) { + StringBuilder sb = new StringBuilder(); + for (String user : userList) { + sb.append(""); + } + userIdsToText = sb.toString(); + } else { + StringBuilder sb = new StringBuilder(); + for (String user : userList) { + sb.append("{\"tag\":\"at\",\"user_id\":\"").append(user).append("\"},"); + } + sb.deleteCharAt(sb.length() - 1); + userIdsToText = sb.toString(); + } + return userIdsToText; + } + + public static AlertResult checkSendFeiShuSendMsgResult(String result) { + AlertResult alertResult = new AlertResult(); + alertResult.setSuccess(false); + + if (org.apache.commons.lang3.StringUtils.isBlank(result)) { + alertResult.setMessage("send fei shu msg error"); + logger.info("send fei shu msg error,fei shu server resp is null"); + return alertResult; + } + FeiShuSendMsgResponse sendMsgResponse = JSONUtil.parseObject(result, FeiShuSendMsgResponse.class); + + if (null == sendMsgResponse) { + alertResult.setMessage("send fei shu msg fail"); + logger.info("send fei shu msg error,resp error"); + return alertResult; + } + if (sendMsgResponse.statusCode == 0) { + alertResult.setSuccess(true); + alertResult.setMessage("send fei shu msg success"); + return alertResult; + } + alertResult.setMessage(String.format("alert send fei shu msg error : %s", sendMsgResponse.getStatusMessage())); + logger.info("alert send fei shu msg error : {} ,Extra : {} ", sendMsgResponse.getStatusMessage(), sendMsgResponse.getExtra()); + return alertResult; + } + + public static String formatContent(String title, String content) { + List mapSendResultItemsList = JSONUtil.toList(content, LinkedHashMap.class); + if (null == mapSendResultItemsList || mapSendResultItemsList.isEmpty()) { + logger.error("itemsList is null"); + throw new RuntimeException("itemsList is null"); + } + StringBuilder contents = new StringBuilder(100); + contents.append(String.format("`%s` %s",title,FeiShuConstants.MARKDOWN_ENTER)); + for (LinkedHashMap mapItems : mapSendResultItemsList) { + Set> entries = mapItems.entrySet(); + Iterator> iterator = entries.iterator(); + while (iterator.hasNext()) { + Entry entry = iterator.next(); + String key = entry.getKey(); + String value = entry.getValue().toString(); + contents.append(FeiShuConstants.MARKDOWN_QUOTE); + contents.append(key + ":" + value).append(FeiShuConstants.MARKDOWN_ENTER); + } + return contents.toString(); + } + return null; + } + + public AlertResult send(String title,String content) { + AlertResult alertResult; + try { + String resp = sendMsg(title, content); + return checkSendFeiShuSendMsgResult(resp); + } catch (Exception e) { + logger.info("send fei shu alert msg exception : {}", e.getMessage()); + alertResult = new AlertResult(); + alertResult.setSuccess(false); + alertResult.setMessage("send fei shu alert fail."); + } + return alertResult; + } + + private String sendMsg(String 
title,String content) throws IOException { + + String msgToJson = toJsonSendMsg(title,content); + HttpPost httpPost = HttpRequestUtil.constructHttpPost(url, msgToJson); + CloseableHttpClient httpClient; + httpClient = HttpRequestUtil.getHttpClient(enableProxy, proxy, port, user, password); + try { + CloseableHttpResponse response = httpClient.execute(httpPost); + + int statusCode = response.getStatusLine().getStatusCode(); + if (statusCode != HttpStatus.SC_OK) { + logger.error("send feishu message error, return http status code: {} ", statusCode); + } + String resp; + try { + HttpEntity entity = response.getEntity(); + resp = EntityUtils.toString(entity, "utf-8"); + EntityUtils.consume(entity); + } finally { + response.close(); + } + logger.info("Fei Shu send title :{} ,content :{}, resp: {}", title, content, resp); + return resp; + } finally { + httpClient.close(); + } + } + + static final class FeiShuSendMsgResponse { + @JsonProperty("Extra") + private String extra; + @JsonProperty("StatusCode") + private Integer statusCode; + @JsonProperty("StatusMessage") + private String statusMessage; + + public FeiShuSendMsgResponse() { + } + + public String getExtra() { + return this.extra; + } + + @JsonProperty("Extra") + public void setExtra(String extra) { + this.extra = extra; + } + + public Integer getStatusCode() { + return this.statusCode; + } + + @JsonProperty("StatusCode") + public void setStatusCode(Integer statusCode) { + this.statusCode = statusCode; + } + + public String getStatusMessage() { + return this.statusMessage; + } + + @JsonProperty("StatusMessage") + public void setStatusMessage(String statusMessage) { + this.statusMessage = statusMessage; + } + + public boolean equals(final Object o) { + if (o == this) { + return true; + } + if (!(o instanceof FeiShuSendMsgResponse)) { + return false; + } + final FeiShuSendMsgResponse other = (FeiShuSendMsgResponse) o; + final Object this$extra = this.getExtra(); + final Object other$extra = other.getExtra(); + if (this$extra == null ? other$extra != null : !this$extra.equals(other$extra)) { + return false; + } + final Object this$statusCode = this.getStatusCode(); + final Object other$statusCode = other.getStatusCode(); + if (this$statusCode == null ? other$statusCode != null : !this$statusCode.equals(other$statusCode)) { + return false; + } + final Object this$statusMessage = this.getStatusMessage(); + final Object other$statusMessage = other.getStatusMessage(); + if (this$statusMessage == null ? other$statusMessage != null : !this$statusMessage.equals(other$statusMessage)) { + return false; + } + return true; + } + + public int hashCode() { + final int PRIME = 59; + int result = 1; + final Object $extra = this.getExtra(); + result = result * PRIME + ($extra == null ? 43 : $extra.hashCode()); + final Object $statusCode = this.getStatusCode(); + result = result * PRIME + ($statusCode == null ? 43 : $statusCode.hashCode()); + final Object $statusMessage = this.getStatusMessage(); + result = result * PRIME + ($statusMessage == null ? 
43 : $statusMessage.hashCode()); + return result; + } + + public String toString() { + return "FeiShuSender.FeiShuSendMsgResponse(extra=" + this.getExtra() + ", statusCode=" + this.getStatusCode() + ", statusMessage=" + this.getStatusMessage() + ")"; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/HttpRequestUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/HttpRequestUtil.java new file mode 100644 index 0000000..461c851 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/java/net/srt/flink/alert/feishu/HttpRequestUtil.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.feishu; + +import org.apache.http.HttpHost; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.client.CredentialsProvider; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.client.CloseableHttpClient; +import org.apache.http.impl.client.HttpClients; + +public final class HttpRequestUtil { + private HttpRequestUtil() { + throw new UnsupportedOperationException("This is a utility class and cannot be instantiated"); + } + + public static CloseableHttpClient getHttpClient(boolean enableProxy, String proxy, Integer port, String user, String password) { + if (enableProxy) { + HttpHost httpProxy = new HttpHost(proxy, port); + CredentialsProvider provider = new BasicCredentialsProvider(); + provider.setCredentials(new AuthScope(httpProxy), new UsernamePasswordCredentials(user, password)); + return HttpClients.custom().setDefaultCredentialsProvider(provider).build(); + } else { + return HttpClients.createDefault(); + } + } + + public static HttpPost constructHttpPost(String url, String msg) { + HttpPost post = new HttpPost(url); + StringEntity entity = new StringEntity(msg, ContentType.APPLICATION_JSON); + post.setEntity(entity); + return post; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert new file mode 100644 index 0000000..1fd894a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-feishu/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert @@ -0,0 +1 @@ +net.srt.flink.alert.feishu.FeiShuAlert diff 
--git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/pom.xml new file mode 100644 index 0000000..96b0c58 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/pom.xml @@ -0,0 +1,30 @@ + + + + flink-alert + net.srt + 2.0.0 + + 4.0.0 + + flink-alert-wechat + + + + net.srt + flink-alert-base + ${project.version} + + + org.apache.httpcomponents + httpclient + + + junit + junit + provided + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatAlert.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatAlert.java new file mode 100644 index 0000000..035894d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatAlert.java @@ -0,0 +1,43 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.wechat; + + +import net.srt.flink.alert.base.AbstractAlert; +import net.srt.flink.alert.base.AlertResult; + +/** + * WeChatAlert + * + * @author zrx + * @since 2022/2/23 21:09 + **/ +public class WeChatAlert extends AbstractAlert { + @Override + public String getType() { + return WeChatConstants.TYPE; + } + + @Override + public AlertResult send(String title, String content) { + WeChatSender sender = new WeChatSender(getConfig().getParam()); + return sender.send(title, content); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatConstants.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatConstants.java new file mode 100644 index 0000000..8cb40a3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatConstants.java @@ -0,0 +1,68 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
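HttpRequestUtil a few files up registers the proxy credentials on the client but does not appear to set the proxy itself as the request route, so requests would still go direct. A sketch of the fuller pattern; the proxy host and credentials are placeholders:

```java
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class ProxyClientSketch {
    public static void main(String[] args) throws Exception {
        HttpHost proxy = new HttpHost("proxy.example.com", 3128); // placeholder proxy
        CredentialsProvider provider = new BasicCredentialsProvider();
        provider.setCredentials(new AuthScope(proxy), new UsernamePasswordCredentials("user", "pass"));
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultCredentialsProvider(provider)
                .setProxy(proxy) // actually routes requests through the proxy
                .build()) {
            System.out.println("proxy-aware client ready: " + client);
        }
    }
}
```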
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.wechat; + +/** + * WeChatConstants + * + * @author zrx + * @since 2022/2/23 21:10 + **/ +public class WeChatConstants { + + static final String TYPE = "WeChat"; + + static final String MARKDOWN_QUOTE = ">"; + + static final String MARKDOWN_ENTER = "\n"; + + static final String CHARSET = "UTF-8"; + + static final String PUSH_URL = "https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token={token}"; + + static final String APP_CHAT_PUSH_URL = "https://qyapi.weixin.qq.com/cgi-bin/appchat/send?access_token={token}"; + + static final String TOKEN_URL = "https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid={corpId}&corpsecret={secret}"; + + static final String WEBHOOK = "webhook"; + + static final String WEBHOOK_TEMPLATE = "{\"msgtype\":\"{msgtype}\",\"{msgtype}\":{\"content\":\"{msg} \"}}"; + + static final String KEYWORD = "keyword"; + + static final String AT_ALL = "isAtAll"; + + static final String CORP_ID = "corpId"; + + static final String SECRET = "secret"; + + static final String TEAM_SEND_MSG = "teamSendMsg"; + + static final String USER_SEND_MSG = "{\"touser\":\"{toUser}\",\"agentid\":{agentId},\"msgtype\":\"{msgtype}\",\"{msgtype}\":{\"content\":\"{msg}\"}}"; + + static final String AGENT_ID = "agentId"; + + static final String USERS = "users"; + + static final String SEND_TYPE = "sendType"; + + static final String SHOW_TYPE = "msgtype"; + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatSender.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatSender.java new file mode 100644 index 0000000..5e7fce8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatSender.java @@ -0,0 +1,320 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
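WeChatConstants above defines two payload shapes: USER_SEND_MSG for enterprise app messages (touser/agentid) and WEBHOOK_TEMPLATE for group-robot messages, both filled by plain String.replace in the WeChatSender that follows. A standalone demo of the webhook variant (the template string is copied so the snippet runs on its own):

```java
public class WeChatPayloadSketch {

    // copy of WeChatConstants.WEBHOOK_TEMPLATE for a standalone demo
    static final String WEBHOOK_TEMPLATE =
            "{\"msgtype\":\"{msgtype}\",\"{msgtype}\":{\"content\":\"{msg} \"}}";

    public static void main(String[] args) {
        // "\\n" keeps the newline as a JSON escape inside the payload string
        String payload = WEBHOOK_TEMPLATE
                .replace("{msgtype}", "markdown")
                .replace("{msg}", "`job failed`\\n>status:FAILED");
        System.out.println(payload);
        // POST this to the group robot URL configured under the "webhook" key
    }
}
```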
+ * + */ + +package net.srt.flink.alert.wechat; + +import net.srt.flink.alert.base.AlertResult; +import net.srt.flink.alert.base.AlertSendResponse; +import net.srt.flink.alert.base.ShowType; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.utils.JSONUtil; +import org.apache.http.HttpEntity; +import org.apache.http.client.methods.CloseableHttpResponse; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.client.methods.HttpPost; +import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.CloseableHttpClient; +import org.apache.http.impl.client.HttpClients; +import org.apache.http.util.EntityUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Iterator; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import static java.util.Objects.requireNonNull; + +/** + * WeChatSender + * + * @author zrx + * @since 2022/2/23 21:11 + **/ +public class WeChatSender { + private static final Logger logger = LoggerFactory.getLogger(WeChatSender.class); + private static final String ALERT_STATUS = "false"; + private static final String AGENT_ID_REG_EXP = "{agentId}"; + private static final String MSG_REG_EXP = "{msg}"; + private static final String USER_REG_EXP = "{toUser}"; + private static final String CORP_ID_REGEX = "{corpId}"; + private static final String SECRET_REGEX = "{secret}"; + private static final String TOKEN_REGEX = "{token}"; + private static final String SHOW_TYPE_REGEX = "{msgtype}"; + private final String weChatAgentId; + private final String weChatUsers; + private final String weChatUserSendMsg; + private final String weChatTokenUrlReplace; + private final String weChatToken; + private final String sendType; + private static String showType; + private final String webhookUrl; + private final String keyWord; + private final Boolean atAll; + + WeChatSender(Map config) { + sendType = config.get(WeChatConstants.SEND_TYPE); + weChatAgentId = sendType.equals(WeChatType.CHAT.getValue()) ? "" : config.get(WeChatConstants.AGENT_ID); + atAll = Boolean.valueOf(config.get(WeChatConstants.AT_ALL)); + weChatUsers = sendType.equals(WeChatType.CHAT.getValue()) ? (atAll && config.get(WeChatConstants.USERS) == null ? "" : config.get(WeChatConstants.USERS)) : config.get(WeChatConstants.USERS); + String weChatCorpId = sendType.equals(WeChatType.CHAT.getValue()) ? "" : config.get(WeChatConstants.CORP_ID); + String weChatSecret = sendType.equals(WeChatType.CHAT.getValue()) ? "" : config.get(WeChatConstants.SECRET); + String weChatTokenUrl = sendType.equals(WeChatType.CHAT.getValue()) ? 
"" : WeChatConstants.TOKEN_URL; + weChatUserSendMsg = WeChatConstants.USER_SEND_MSG; + showType = config.get(WeChatConstants.SHOW_TYPE); + requireNonNull(showType, WeChatConstants.SHOW_TYPE + " must not null"); + webhookUrl = config.get(WeChatConstants.WEBHOOK); + keyWord = config.get(WeChatConstants.KEYWORD); + if (sendType.equals(WeChatType.CHAT.getValue())) { + requireNonNull(webhookUrl, WeChatConstants.WEBHOOK + " must not null"); + } + weChatTokenUrlReplace = weChatTokenUrl + .replace(CORP_ID_REGEX, weChatCorpId) + .replace(SECRET_REGEX, weChatSecret); + weChatToken = getToken(); + } + + public AlertResult send(String title, String content) { + AlertResult alertResult = new AlertResult(); + List userList = new ArrayList<>(); + if (Asserts.isNotNullString(weChatUsers)) { + userList = Arrays.asList(weChatUsers.split(",")); + } + if (atAll) { + userList.add("所有人"); + } + + String data = ""; + if (sendType.equals(WeChatType.CHAT.getValue())) { + data = markdownByAlert(title, content, userList); + } else { + data = markdownByAlert(title, content, userList); + } + String msg = ""; + if (sendType.equals(WeChatType.APP.getValue())) { + msg = weChatUserSendMsg.replace(USER_REG_EXP, mkUserString(userList)) + .replace(AGENT_ID_REG_EXP, weChatAgentId).replace(MSG_REG_EXP, data) + .replace(SHOW_TYPE_REGEX, showType); + } else { + msg = WeChatConstants.WEBHOOK_TEMPLATE.replace(SHOW_TYPE_REGEX, showType) + .replace(MSG_REG_EXP, data); + } + + if (sendType.equals(WeChatType.APP.getValue()) && Asserts.isNullString(weChatToken)) { + alertResult.setMessage("send we chat alert fail,get weChat token error"); + alertResult.setSuccess(false); + return alertResult; + } + String enterpriseWeChatPushUrlReplace = ""; + if (sendType.equals(WeChatType.APP.getValue())) { + enterpriseWeChatPushUrlReplace = WeChatConstants.PUSH_URL.replace(TOKEN_REGEX, weChatToken); + } else if (sendType.equals(WeChatType.CHAT.getValue())) { + enterpriseWeChatPushUrlReplace = webhookUrl; + } + try { + return checkWeChatSendMsgResult(post(enterpriseWeChatPushUrlReplace, msg)); + } catch (Exception e) { + logger.info("send we chat alert msg exception : {}", e.getMessage()); + alertResult.setMessage("send we chat alert fail"); + alertResult.setSuccess(false); + } + return alertResult; + } + + private static String post(String url, String data) throws IOException { + try (CloseableHttpClient httpClient = HttpClients.createDefault()) { + HttpPost httpPost = new HttpPost(url); + httpPost.setEntity(new StringEntity(data, WeChatConstants.CHARSET)); + CloseableHttpResponse response = httpClient.execute(httpPost); + String resp; + try { + HttpEntity entity = response.getEntity(); + resp = EntityUtils.toString(entity, WeChatConstants.CHARSET); + EntityUtils.consume(entity); + } finally { + response.close(); + } + // logger.info("Enterprise WeChat send [{}], param:{}, resp:{}", url, data, resp); + return resp; + } + } + + private static String mkUserList(Iterable list) { + StringBuilder sb = new StringBuilder("["); + for (String name : list) { + sb.append("\"").append(name).append("\","); + } + sb.deleteCharAt(sb.length() - 1); + sb.append("]"); + return sb.toString(); + } + + private static String mkUserString(Iterable list) { + if (Asserts.isNull(list)) { + return null; + } + StringBuilder sb = new StringBuilder(); + boolean first = true; + for (String item : list) { + if (first) { + first = false; + } else { + sb.append("|"); + } + sb.append(item); + } + return sb.toString(); + } + + /** + *@Author: zhumingye + *@date: 2022/3/26 + 
*@Description: 将用户列表转换为 <@用户名> 的格式 + * @param userList + * @return java.lang.String + */ + private static String mkMarkDownAtUsers(List userList) { + + StringBuilder builder = new StringBuilder("\n"); + if (Asserts.isNotNull(userList)) { + userList.forEach(value -> { + if (value.equals("所有人") && showType.equals(ShowType.TEXT.getValue())) { + builder.append("@所有人 "); + } else { + builder.append("<@").append(value).append("> "); + } + }); + } + return builder.toString(); + } + + private String markdownByAlert(String title, String content,List userList) { + String result = ""; + if (showType.equals(ShowType.MARKDOWN.getValue())) { + result = markdownTable(title, content,userList,sendType); + } else if (showType.equals(ShowType.TEXT.getValue())) { + result = markdownText(title, content,userList,sendType); + } + return result; + } + + private static String markdownTable(String title, String content, List userList, String sendType) { + return getMsgResult(title, content,userList,sendType); + } + + private static String markdownText(String title, String content, List userList, String sendType) { + return getMsgResult(title, content,userList,sendType); + } + + /** + *@Author: zhumingye + *@date: 2022/3/25 + *@Description: 创建公共方法 用于创建发送消息文本 + * @param title 发送标题 + * @param content 发送内容 + * @param sendType + * @return java.lang.String + *@throws + */ + private static String getMsgResult(String title, String content, List userList, String sendType) { + + List mapItemsList = JSONUtil.toList(content, LinkedHashMap.class); + if (null == mapItemsList || mapItemsList.isEmpty()) { + logger.error("itemsList is null"); + throw new RuntimeException("itemsList is null"); + } + String markDownAtUsers = mkMarkDownAtUsers(userList); + StringBuilder contents = new StringBuilder(200); + for (LinkedHashMap mapItems : mapItemsList) { + Set> entries = mapItems.entrySet(); + Iterator> iterator = entries.iterator(); + StringBuilder t = new StringBuilder(String.format("`%s`%s", title, WeChatConstants.MARKDOWN_ENTER)); + + while (iterator.hasNext()) { + + Map.Entry entry = iterator.next(); + t.append(WeChatConstants.MARKDOWN_QUOTE); + t.append(entry.getKey()).append(":").append(entry.getValue()); + t.append(WeChatConstants.MARKDOWN_ENTER); + } + contents.append(t); + } + if (sendType.equals(WeChatType.CHAT.getValue())) { + contents.append(markDownAtUsers); + } + return contents.toString(); + } + + private String getToken() { + try { + return get(weChatTokenUrlReplace); + } catch (IOException e) { + logger.info("we chat alert get token error{}", e.getMessage()); + } + return null; + } + + private static String get(String url) throws IOException { + String resp; + try (CloseableHttpClient httpClient = HttpClients.createDefault()) { + HttpGet httpGet = new HttpGet(url); + try (CloseableHttpResponse response = httpClient.execute(httpGet)) { + HttpEntity entity = response.getEntity(); + resp = EntityUtils.toString(entity, WeChatConstants.CHARSET); + EntityUtils.consume(entity); + } + HashMap map = JSONUtil.parseObject(resp, HashMap.class); + if (map != null && null != map.get("access_token")) { + return map.get("access_token").toString(); + } else { + return null; + } + } + } + + private static AlertResult checkWeChatSendMsgResult(String result) { + AlertResult alertResult = new AlertResult(); + alertResult.setSuccess(false); + if (null == result) { + alertResult.setMessage("we chat send fail"); + logger.info("send we chat msg error,resp is null"); + return alertResult; + } + AlertSendResponse sendMsgResponse = 
JSONUtil.parseObject(result, AlertSendResponse.class); + if (null == sendMsgResponse) { + alertResult.setMessage("we chat send fail"); + logger.info("send we chat msg error,resp error"); + return alertResult; + } + if (sendMsgResponse.getErrcode() == 0) { + alertResult.setSuccess(true); + alertResult.setMessage("we chat alert send success"); + return alertResult; + } + alertResult.setSuccess(false); + alertResult.setMessage(sendMsgResponse.getErrmsg()); + return alertResult; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatType.java b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatType.java new file mode 100644 index 0000000..457bedc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/java/net/srt/flink/alert/wechat/WeChatType.java @@ -0,0 +1,48 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.alert.wechat; + +/** + * WeChatType + * + * @author zrx + * @since 2022/2/23 21:36 + **/ +public enum WeChatType { + + APP(1, "应用"), + CHAT(2, "群聊"); + + private final int code; + private final String value; + + WeChatType(int code, String value) { + this.code = code; + this.value = value; + } + + public int getCode() { + return code; + } + + public String getValue() { + return value; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert new file mode 100644 index 0000000..65c5e05 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/flink-alert-wechat/src/main/resources/META-INF/services/net.srt.flink.alert.base.Alert @@ -0,0 +1 @@ +net.srt.flink.alert.wechat.WeChatAlert diff --git a/srt-cloud-framework/srt-cloud-flink/flink-alert/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-alert/pom.xml new file mode 100644 index 0000000..5352e5d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-alert/pom.xml @@ -0,0 +1,23 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-alert + pom + + flink-alert-base + flink-alert-dingtalk + flink-alert-email + flink-alert-feishu + flink-alert-wechat + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.14/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.14/pom.xml new file mode 100644 index 0000000..c0ea8d3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.14/pom.xml @@ -0,0 +1,88 @@ + + + + flink-app + net.srt + 2.0.0 + + 4.0.0 + + flink-app-1.14 + 
+ + net.srt.app.MainApp + UTF-8 + + + + + + net.srt + flink-app-base + ${project.version} + + + mysql + mysql-connector-java + + + net.srt + flink-client-1.14 + + + net.srt + flink-1.14 + provided + + + net.srt + flink-client-base + + + net.srt + flink-executor + + + + + + + src/main/resources + + *.properties + + + + + + + org.apache.maven.plugins + maven-assembly-plugin + 2.6 + + + jar-with-dependencies + + + + + net.srt.flink.app.MainApp + + + ${project.parent.parent.basedir}/build/app + + + + make-assembly + + single + + package + + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.14/src/main/java/net/srt/flink/app/MainApp.java b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.14/src/main/java/net/srt/flink/app/MainApp.java new file mode 100644 index 0000000..adbbab2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.14/src/main/java/net/srt/flink/app/MainApp.java @@ -0,0 +1,45 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.app; + +import net.srt.flink.app.base.db.DBConfig; +import net.srt.flink.app.base.flinksql.Submiter; +import net.srt.flink.client.base.constant.FlinkParamConstant; +import net.srt.flink.client.base.utils.FlinkBaseUtil; +import net.srt.flink.common.assertion.Asserts; + +import java.util.Map; + +/** + * MainApp + * + * @author wenmo + * @since 2021/10/27 + **/ +public class MainApp { + + public static void main(String[] args) { + Map params = FlinkBaseUtil.getParamsFromArgs(args); + String id = params.get(FlinkParamConstant.ID); + Asserts.checkNullString(id, "请配置入参 id "); + DBConfig dbConfig = DBConfig.build(params); + Submiter.submit(Integer.valueOf(id), dbConfig, params.get(FlinkParamConstant.FLINKY_ADDR)); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.16/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.16/pom.xml new file mode 100644 index 0000000..dcf50fd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.16/pom.xml @@ -0,0 +1,86 @@ + + + + flink-app + net.srt + 2.0.0 + + 4.0.0 + + flink-app-1.16 + + + + net.srt + flink-app-base + ${project.version} + + + mysql + mysql-connector-java + + + net.srt + flink-client-1.16 + ${project.version} + + + net.srt + flink-1.16 + provided + + + net.srt + flink-client-base + ${project.version} + + + net.srt + flink-executor + ${project.version} + + + + + + + src/main/resources + + *.properties + + + + + + + org.apache.maven.plugins + maven-assembly-plugin + 2.6 + + + jar-with-dependencies + + + + + net.srt.flink.app.MainApp + + + ${project.parent.parent.basedir}/build/app + + + + make-assembly + + single + + package + + + + + + + diff --git 
a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.16/src/main/java/net/srt/flink/app/MainApp.java b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.16/src/main/java/net/srt/flink/app/MainApp.java new file mode 100644 index 0000000..b01b476 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-1.16/src/main/java/net/srt/flink/app/MainApp.java @@ -0,0 +1,46 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.app; + + +import net.srt.flink.app.base.db.DBConfig; +import net.srt.flink.app.base.flinksql.Submiter; +import net.srt.flink.client.base.constant.FlinkParamConstant; +import net.srt.flink.client.base.utils.FlinkBaseUtil; +import net.srt.flink.common.assertion.Asserts; + +import java.util.Map; + +/** + * MainApp + * + * @author zrx + * @since 2022/11/05 + **/ +public class MainApp { + + public static void main(String[] args) { + Map params = FlinkBaseUtil.getParamsFromArgs(args); + String id = params.get(FlinkParamConstant.ID); + Asserts.checkNullString(id, "请配置入参 id "); + DBConfig dbConfig = DBConfig.build(params); + Submiter.submit(Integer.valueOf(id), dbConfig, params.get(FlinkParamConstant.FLINKY_ADDR)); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/pom.xml new file mode 100644 index 0000000..ba3e78d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/pom.xml @@ -0,0 +1,68 @@ + + + + flink-app + net.srt + 2.0.0 + + 4.0.0 + + flink-app-base + + + + mysql + mysql-connector-java + + + net.srt + flink-client-base + ${project.version} + + + net.srt + flink-executor + ${project.version} + + + + + + flink-1.16 + + + net.srt + flink-client-1.16 + ${project.version} + provided + + + net.srt + flink-1.16 + ${project.version} + provided + + + + + flink-1.14 + + + net.srt + flink-client-1.14 + ${project.version} + provided + + + net.srt + flink-1.14 + ${project.version} + provided + + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/db/DBConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/db/DBConfig.java new file mode 100644 index 0000000..bf71557 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/db/DBConfig.java @@ -0,0 +1,83 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.app.base.db; + + +import net.srt.flink.client.base.constant.FlinkParamConstant; + +import java.util.Map; + +/** + * DBConfig + * + * @author zrx + * @since 2021/10/27 + **/ +public class DBConfig { + + private String driver; + private String url; + private String username; + private String password; + + public DBConfig(String driver, String url, String username, String password) { + this.driver = driver; + this.url = url; + this.username = username; + this.password = password; + } + + public static DBConfig build(String driver, String url, String username, String password) { + return new DBConfig(driver, url, username, password); + } + + public static DBConfig build(Map params) { + return new DBConfig(params.get(FlinkParamConstant.DRIVER), + params.get(FlinkParamConstant.URL), + params.get(FlinkParamConstant.USERNAME), + params.get(FlinkParamConstant.PASSWORD)); + } + + public String getDriver() { + return driver; + } + + public String getUrl() { + return url; + } + + public String getUsername() { + return username; + } + + public String getPassword() { + return password; + } + + @Override + public String toString() { + return "DBConfig{" + + "driver='" + driver + '\'' + + ", url='" + url + '\'' + + ", username='" + username + '\'' + + ", password='" + password + '\'' + + '}'; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/db/DBUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/db/DBUtil.java new file mode 100644 index 0000000..12ad289 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/db/DBUtil.java @@ -0,0 +1,139 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
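DBConfig above is the small carrier that MainApp builds from the Flink program arguments; it can also be constructed directly through the four-argument factory. A sketch with placeholder connection values; the commented line shows how it pairs with the DBUtil helper that follows:

```java
import net.srt.flink.app.base.db.DBConfig;

public class DBConfigSketch {
    public static void main(String[] args) {
        // placeholder connection values; MainApp normally fills these from args via FlinkParamConstant
        DBConfig config = DBConfig.build(
                "com.mysql.cj.jdbc.Driver",
                "jdbc:mysql://127.0.0.1:3306/srt_cloud?useSSL=false&serverTimezone=UTC",
                "root",
                "root");
        System.out.println(config);
        // with a reachable database, a single value can then be read via
        // DBUtil.getOneByID("select statement from data_production_task_statement where task_id = 1", config);
    }
}
```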
+ *
+ */
+
+package net.srt.flink.app.base.db;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * DBUtil
+ *
+ * @author zrx
+ * @since 2021/10/27
+ **/
+public class DBUtil {
+
+    private static Connection getConnection(DBConfig config) throws IOException {
+        Connection conn = null;
+        try {
+            Class.forName(config.getDriver());
+            conn = DriverManager.getConnection(config.getUrl(), config.getUsername(), config.getPassword());
+        } catch (SQLException | ClassNotFoundException e) {
+            e.printStackTrace();
+            close(conn);
+        }
+        return conn;
+    }
+
+    private static void close(Connection conn) {
+        try {
+            if (conn != null) {
+                conn.close();
+            }
+        } catch (SQLException e) {
+            e.printStackTrace();
+        }
+    }
+
+    public static String getOneByID(String sql, DBConfig config) throws SQLException, IOException {
+        Connection conn = getConnection(config);
+        String result = null;
+        try (Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery(sql)) {
+            if (rs.next()) {
+                result = rs.getString(1);
+            }
+        }
+        close(conn);
+        return result;
+    }
+
+    public static Map<String, String> getMapByID(String sql, DBConfig config) throws SQLException, IOException {
+        Connection conn = getConnection(config);
+        Map<String, String> map = new HashMap<>();
+        try (Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery(sql)) {
+            List<String> columnList = new ArrayList<>();
+            for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
+                columnList.add(rs.getMetaData().getColumnLabel(i + 1));
+            }
+            if (rs.next()) {
+                for (int i = 0; i < columnList.size(); i++) {
+                    map.put(columnList.get(i), rs.getString(i + 1));
+                }
+            }
+        }
+        close(conn);
+        return map;
+    }
+
+    /**
+     * Fetch data source connection info to be used as global variables.
+     *
+     * @param sql    query that must return exactly two columns: the database name and its configuration
+     * @param config core database config
+     */
+    public static String getDbSourceSQLStatement(String sql, DBConfig config) throws SQLException, IOException {
+        Connection conn = getConnection(config);
+        StringBuilder sqlStatements = new StringBuilder();
+        try (Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery(sql)) {
+            while (rs.next()) {
+                sqlStatements.append(rs.getString(1)).append(":=").append(rs.getString(2)).append("\n;\n");
+            }
+        }
+        close(conn);
+        return sqlStatements.toString();
+    }
+
+    public static List<Map<String, String>> getListByID(String sql, DBConfig config) throws SQLException, IOException {
+        Connection conn = getConnection(config);
+        List<Map<String, String>> list = new ArrayList<>();
+        try (Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery(sql)) {
+            List<String> columnList = new ArrayList<>();
+            for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
+                // JDBC metadata and ResultSet columns are 1-indexed
+                columnList.add(rs.getMetaData().getColumnName(i + 1));
+            }
+            while (rs.next()) {
+                Map<String, String> map = new HashMap<>();
+                for (int i = 0; i < columnList.size(); i++) {
+                    map.put(columnList.get(i), rs.getString(i + 1));
+                }
+                list.add(map);
+            }
+        }
+        close(conn);
+        return list;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/flinksql/StatementParam.java b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/flinksql/StatementParam.java
new file mode 100644
index
0000000..30bf27d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/flinksql/StatementParam.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.app.base.flinksql; + + +import net.srt.flink.executor.parser.SqlType; + +/** + * StatementParam + * + * @author zrx + * @since 2021/11/16 + */ +public class StatementParam { + private String value; + private SqlType type; + + public StatementParam(String value, SqlType type) { + this.value = value; + this.type = type; + } + + public String getValue() { + return value; + } + + public void setValue(String value) { + this.value = value; + } + + public SqlType getType() { + return type; + } + + public void setType(SqlType type) { + this.type = type; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/flinksql/Submiter.java b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/flinksql/Submiter.java new file mode 100644 index 0000000..26a620a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-app/flink-app-base/src/main/java/net/srt/flink/app/base/flinksql/Submiter.java @@ -0,0 +1,298 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
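
Taken together, DBConfig and DBUtil form the small JDBC layer the Flink application uses to read its own task configuration. A minimal usage sketch (the connection values are placeholders, the column list is illustrative, and the queried tables are the ones Submiter below actually reads):

```java
import net.srt.flink.app.base.db.DBConfig;
import net.srt.flink.app.base.db.DBUtil;

import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import java.util.Map;

public class DbUtilSketch {
	public static void main(String[] args) throws IOException, SQLException {
		// placeholder connection values for a local MySQL instance
		DBConfig config = DBConfig.build(
				"com.mysql.cj.jdbc.Driver",
				"jdbc:mysql://127.0.0.1:3306/srt_cloud",
				"root",
				"root");
		// single-value lookup, as Submiter does to fetch a task's FlinkSQL statement
		String statement = DBUtil.getOneByID(
				"select statement from data_production_task_statement where task_id = 1", config);
		System.out.println(statement);
		// row lookups come back as column-name -> value maps
		List<Map<String, String>> rows = DBUtil.getListByID("select id, name from data_database", config);
		rows.forEach(System.out::println);
	}
}
```
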
+ * + */ + +package net.srt.flink.app.base.flinksql; + +import net.srt.flink.app.base.db.DBConfig; +import net.srt.flink.app.base.db.DBUtil; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.SystemConfiguration; +import net.srt.flink.common.utils.SqlUtil; +import net.srt.flink.executor.constant.FlinkSQLConstant; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.executor.ExecutorSetting; +import net.srt.flink.executor.interceptor.FlinkInterceptor; +import net.srt.flink.executor.parser.SqlType; +import net.srt.flink.executor.trans.Operations; +import org.apache.commons.io.FileUtils; +import org.apache.commons.lang3.StringUtils; +import org.apache.flink.configuration.CheckpointingOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.util.FlinkUserCodeClassLoader; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.net.HttpURLConnection; +import java.net.URL; +import java.net.URLClassLoader; +import java.sql.SQLException; +import java.time.LocalDateTime; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.UUID; + +/** + * FlinkSQLFactory + * + * @author wenmo + * @since 2021/10/27 + **/ +public class Submiter { + + private static final Logger logger = LoggerFactory.getLogger(Submiter.class); + private static final String NULL = "null"; + + private static String getQuerySQL(Integer id) throws SQLException { + if (id == null) { + throw new SQLException("请指定任务ID"); + } + return "select statement from data_production_task_statement where task_id = " + id; + } + + private static String getTaskInfo(Integer id) throws SQLException { + if (id == null) { + throw new SQLException("请指定任务ID"); + } + return "select id, project_id as projectId, name, alias as jobName, type,check_point as checkpoint," + + "save_point_path as savePointPath, parallelism,fragment as useSqlFragment,statement_set as useStatementSet,config_json as config," + + " env_id as envId,batch_model AS useBatchModel from data_production_task where id = " + id; + } + + private static String getFlinkSQLStatement(Integer id, DBConfig config) { + String statement = ""; + try { + statement = DBUtil.getOneByID(getQuerySQL(id), config); + } catch (IOException | SQLException e) { + logger.error("{} --> 获取 FlinkSQL 配置异常,ID 为 {}, 连接信息为:{} ,异常信息为:{} ", LocalDateTime.now(), id, + config.toString(), e.getMessage(), e); + } + return statement; + } + + public static Map getTaskConfig(Integer id, DBConfig config) { + Map task = new HashMap<>(); + try { + task = DBUtil.getMapByID(getTaskInfo(id), config); + } catch (IOException | SQLException e) { + logger.error("{} --> 获取 FlinkSQL 配置异常,ID 为 {}, 连接信息为:{} ,异常信息为:{} ", LocalDateTime.now(), id, + config.toString(), e.getMessage(), e); + } + return task; + } + + public static List getStatements(String sql, String sqlSeparator) { + return Arrays.asList(SqlUtil.getStatements(sql, sqlSeparator)); + } + + public static String getDbSourceSqlStatements(DBConfig dbConfig, Integer id) { + String sql = "select name,null as flink_config from data_database"; + String sqlCheck = "select fragment from data_production_task where id = " + id; + try { + // 首先判断是否开启了全局变量 + String fragment = DBUtil.getOneByID(sqlCheck, dbConfig); + if ("1".equals(fragment)) { + return DBUtil.getDbSourceSQLStatement(sql, 
dbConfig); + } else { + // 全局变量未开启,返回空字符串 + logger.info("任务 {} 未开启全局变量,不进行变量加载。", id); + return ""; + } + } catch (IOException | SQLException e) { + logger.error("{} --> 获取 数据源信息异常,请检查数据库连接,连接信息为:{} ,异常信息为:{}", LocalDateTime.now(), + dbConfig.toString(), e.getMessage(), e); + } + + return ""; + } + + public static void submit(Integer id, DBConfig dbConfig, String flinkyAddr) { + logger.info(LocalDateTime.now() + "开始提交作业 -- " + id); + if (NULL.equals(flinkyAddr)) { + flinkyAddr = ""; + } + StringBuilder sb = new StringBuilder(); + Map taskConfig = Submiter.getTaskConfig(id, dbConfig); + + if (Asserts.isNotNull(taskConfig.get("envId"))) { + String envId = getFlinkSQLStatement(Integer.valueOf(taskConfig.get("envId")), dbConfig); + if (Asserts.isNotNullString(envId)) { + sb.append(envId); + } + sb.append("\n"); + } + // 添加数据源全局变量 + //sb.append(getDbSourceSqlStatements(dbConfig, id)); + sb.append(getFlinkSQLStatement(id, dbConfig)); + String sqlSeparator = Submiter.getSqlseparator(Long.parseLong(taskConfig.get("projectId")), dbConfig); + List statements = Submiter.getStatements(sb.toString(), sqlSeparator); + ExecutorSetting executorSetting = ExecutorSetting.build(taskConfig); + + // 加载第三方jar k8s application 才有效 TODO + loadDep(taskConfig.get("type"), id, flinkyAddr, executorSetting); + + String uuid = UUID.randomUUID().toString().replace("-", ""); + if (executorSetting.getConfig().containsKey(CheckpointingOptions.CHECKPOINTS_DIRECTORY.key())) { + executorSetting.getConfig().put(CheckpointingOptions.CHECKPOINTS_DIRECTORY.key(), + executorSetting.getConfig().get(CheckpointingOptions.CHECKPOINTS_DIRECTORY.key()) + "/" + uuid); + } + if (executorSetting.getConfig().containsKey(CheckpointingOptions.SAVEPOINT_DIRECTORY.key())) { + executorSetting.getConfig().put(CheckpointingOptions.SAVEPOINT_DIRECTORY.key(), + executorSetting.getConfig().get(CheckpointingOptions.SAVEPOINT_DIRECTORY.key()) + "/" + uuid); + } + logger.info("作业配置如下: {}", executorSetting); + Executor executor = Executor.buildAppStreamExecutor(executorSetting); + List ddl = new ArrayList<>(); + List trans = new ArrayList<>(); + List execute = new ArrayList<>(); + for (String item : statements) { + String statement = FlinkInterceptor.pretreatStatement(executor, item, sqlSeparator); + if (statement.isEmpty()) { + continue; + } + SqlType operationType = Operations.getOperationType(statement); + if (operationType.equals(SqlType.INSERT) || operationType.equals(SqlType.SELECT) || operationType.equals(SqlType.SHOW) + || operationType.equals(SqlType.DESCRIBE) || operationType.equals(SqlType.DESC)) { + trans.add(new StatementParam(statement, operationType)); + // zrx + /*if (!executorSetting.isUseStatementSet()) { + break; + }*/ + } else if (operationType.equals(SqlType.EXECUTE)) { + execute.add(new StatementParam(statement, operationType)); + // zrx + /*if (!executorSetting.isUseStatementSet()) { + break; + }*/ + } else { + ddl.add(new StatementParam(statement, operationType)); + } + } + for (StatementParam item : ddl) { + logger.info("正在执行 FlinkSQL: " + item.getValue()); + executor.submitSql(item.getValue(), sqlSeparator); + logger.info("执行成功"); + } + if (trans.size() > 0) { + if (executorSetting.isUseStatementSet()) { + List inserts = new ArrayList<>(); + for (StatementParam item : trans) { + if (item.getType().equals(SqlType.INSERT)) { + inserts.add(item.getValue()); + } + } + logger.info("正在执行 FlinkSQL 语句集: " + String.join(FlinkSQLConstant.SEPARATOR, inserts)); + executor.submitStatementSet(inserts); + logger.info("执行成功"); + } else { + for 
(StatementParam item : trans) {
+					logger.info("正在执行 FlinkSQL: " + item.getValue());
+					executor.submitSql(item.getValue(), sqlSeparator);
+					logger.info("执行成功");
+					// without a statement set, only the first INSERT is submitted
+					break;
+				}
+			}
+		}
+		if (execute.size() > 0) {
+			List<String> executes = new ArrayList<>();
+			for (StatementParam item : execute) {
+				executes.add(item.getValue());
+				executor.executeSql(item.getValue(), sqlSeparator);
+				// zrx
+				/*if (!executorSetting.isUseStatementSet()) {
+					break;
+				}*/
+			}
+			logger.info("正在执行 FlinkSQL 语句集: " + String.join(FlinkSQLConstant.SEPARATOR, executes));
+			try {
+				executor.execute(executorSetting.getJobName());
+				logger.info("执行成功");
+			} catch (Exception e) {
+				logger.error("执行失败, {}", e.getMessage(), e);
+			}
+		}
+		logger.info("{}任务提交成功", LocalDateTime.now());
+	}
+
+	private static String getSqlseparator(Long projectId, DBConfig dbConfig) {
+		String sql = "select value from data_production_sys_config where name='sqlSeparator' and project_id=" + projectId;
+		String sqlseparator = null;
+		try {
+			sqlseparator = DBUtil.getOneByID(sql, dbConfig);
+		} catch (IOException | SQLException e) {
+			logger.error("{} --> 获取 数据源信息异常,请检查数据库连接,连接信息为:{} ,异常信息为:{}", LocalDateTime.now(),
+					dbConfig.toString(), e.getMessage(), e);
+		}
+		// fall back to the system-wide separator when the project does not configure one
+		return sqlseparator == null ? SystemConfiguration.getInstances().getSqlSeparator() : sqlseparator;
+	}
+
+	private static void loadDep(String type, Integer taskId, String flinkyAddr, ExecutorSetting executorSetting) {
+		if (StringUtils.isBlank(flinkyAddr)) {
+			return;
+		}
+		// kubernetes-application
+		if ("6".equals(type)) {
+			try {
+				String httpJar = "http://" + flinkyAddr + "/download/downloadDepJar/" + taskId;
+				logger.info("下载依赖 http-url为:{}", httpJar);
+				URLClassLoader urlClassLoader =
+						FlinkUserCodeClassLoader.newInstance(new URL[]{new URL(httpJar)}, Thread.currentThread().getContextClassLoader());
+				String flinkHome = System.getenv("FLINK_HOME");
+				String usrlib = flinkHome + "/usrlib";
+				FileUtils.forceMkdir(new File(usrlib));
+				String udfJarPath = usrlib + "/udf.jar";
+				downloadFile(httpJar, udfJarPath);
+				executorSetting.getConfig().put(PipelineOptions.JARS.key(), "file://" + udfJarPath);
+				Thread.currentThread().setContextClassLoader(urlClassLoader);
+
+				// download python_udf.zip
+				String httpPythonZip = "http://" + flinkyAddr + "/download/downloadPythonUDF/" + taskId;
+				downloadFile(httpPythonZip, flinkHome + "/python_udf.zip");
+			} catch (IOException e) {
+				logger.error("下载依赖失败:{}", e.getMessage(), e);
+				throw new RuntimeException(e);
+			}
+		}
+		// python_udf.zip is referenced for every cluster type, not only kubernetes-application
+		executorSetting.getConfig().put("python.files", "./python_udf.zip");
+	}
+
+	public static void downloadFile(String url, String path) throws IOException {
+		HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
+		// 3-second connect timeout
+		conn.setConnectTimeout(3 * 1000);
+		// stream the response to the target file in 1 KB chunks
+		try (InputStream inputStream = conn.getInputStream();
+			FileOutputStream outputStream = new FileOutputStream(path)) {
+			byte[] b = new byte[1024];
+			int len;
+			while ((len = inputStream.read(b)) != -1) {
+				outputStream.write(b, 0, len);
+			}
+		}
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-app/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-app/pom.xml
new file mode 100644
index 0000000..d601bf9
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-app/pom.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>srt-cloud-flink</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-app</artifactId>
+    <packaging>pom</packaging>
+    <modules>
+        <module>flink-app-base</module>
+    </modules>
+
+    <profiles>
+        <profile>
+            <id>flink-1.14</id>
+            <modules>
+                <module>flink-app-1.14</module>
+            </modules>
+        </profile>
+        <profile>
+            <id>flink-1.16</id>
+            <modules>
+                <module>flink-app-1.16</module>
+            </modules>
+        </profile>
+    </profiles>
+
+</project>
diff --git
a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/pom.xml
new file mode 100644
index 0000000..7691781
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/pom.xml
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>flink-catalog-mysql</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-catalog-mysql-1.14</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-1.14</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <version>4.13.2</version>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <configuration>
+                    <outputDirectory>${project.parent.parent.parent.basedir}/build/extends</outputDirectory>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/DlinkMysqlCatalog.java b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/DlinkMysqlCatalog.java
new file mode 100644
index 0000000..aea17b8
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/DlinkMysqlCatalog.java
@@ -0,0 +1,1182 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
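
Note the maven-jar-plugin override in the pom above: the catalog jar is written to build/extends at the repository root rather than to target/, presumably the directory from which the packaged Flink runtime loads these extension jars. Which Flink line it is compiled against follows the flink-1.14/flink-1.16 profiles declared in the flink-app pom, e.g. `mvn clean install -P flink-1.14` (the same profile the README has you tick in IDEA).
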
+ * + */ + +package net.srt.flink.catalog.mysql; + +import net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactoryOptions; +import org.apache.flink.table.api.Schema; +import org.apache.flink.table.catalog.AbstractCatalog; +import org.apache.flink.table.catalog.CatalogBaseTable; +import org.apache.flink.table.catalog.CatalogDatabase; +import org.apache.flink.table.catalog.CatalogDatabaseImpl; +import org.apache.flink.table.catalog.CatalogFunction; +import org.apache.flink.table.catalog.CatalogFunctionImpl; +import org.apache.flink.table.catalog.CatalogPartition; +import org.apache.flink.table.catalog.CatalogPartitionSpec; +import org.apache.flink.table.catalog.CatalogTable; +import org.apache.flink.table.catalog.CatalogView; +import org.apache.flink.table.catalog.FunctionLanguage; +import org.apache.flink.table.catalog.ObjectPath; +import org.apache.flink.table.catalog.ResolvedCatalogBaseTable; +import org.apache.flink.table.catalog.ResolvedCatalogTable; +import org.apache.flink.table.catalog.ResolvedCatalogView; +import org.apache.flink.table.catalog.exceptions.CatalogException; +import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException; +import org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException; +import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException; +import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException; +import org.apache.flink.table.catalog.exceptions.FunctionNotExistException; +import org.apache.flink.table.catalog.exceptions.PartitionAlreadyExistsException; +import org.apache.flink.table.catalog.exceptions.PartitionNotExistException; +import org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException; +import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException; +import org.apache.flink.table.catalog.exceptions.TableNotExistException; +import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException; +import org.apache.flink.table.catalog.exceptions.TablePartitionedException; +import org.apache.flink.table.catalog.stats.CatalogColumnStatistics; +import org.apache.flink.table.catalog.stats.CatalogTableStatistics; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.types.DataType; +import org.apache.flink.util.StringUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static org.apache.flink.util.Preconditions.checkArgument; +import static org.apache.flink.util.Preconditions.checkNotNull; + +/** + * 自定义 catalog + * 检查connection done. 
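+ * Metadata is persisted in plain MySQL tables (metadata_database, metadata_table,
+ * metadata_column, metadata_table_property, metadata_function), so catalog objects
+ * survive across Flink sessions.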
+ * 默认db,会被强制指定,不管输入的是什么,都会指定为 default_database + * 可以读取配置文件信息来获取数据库连接,而不是在sql语句中强制指定。 + */ +public class DlinkMysqlCatalog extends AbstractCatalog { + + private final Logger logger = LoggerFactory.getLogger(this.getClass()); + + public static final String MYSQL_DRIVER = "com.mysql.cj.jdbc.Driver"; + + public static final String DEFAULT_DATABASE = "default_database"; + + static { + try { + Class.forName(MYSQL_DRIVER); + } catch (ClassNotFoundException e) { + throw new CatalogException("未加载 mysql 驱动!", e); + } + } + + private static final String COMMENT = "comment"; + /** + * 判断是否发生过SQL异常,如果发生过,那么conn可能失效。要注意判断 + */ + private boolean sqlExceptionHappened = false; + + /** + * 对象类型,例如 库、表、视图等 + */ + protected static class ObjectType { + + /** + * 数据库 + */ + public static final String DATABASE = "database"; + + /** + * 数据表 + */ + public static final String TABLE = "TABLE"; + + /** + * 视图 + */ + public static final String VIEW = "VIEW"; + } + + /** + * 对象类型,例如 库、表、视图等 + */ + protected static class ColumnType { + + /** + * 物理字段 + */ + public static final String PHYSICAL = "physical"; + + /** + * 计算字段 + */ + public static final String COMPUTED = "computed"; + + /** + * 元数据字段 + */ + public static final String METADATA = "metadata"; + + /** + * 水印 + */ + public static final String WATERMARK = "watermark"; + } + + /** + * 数据库用户名 + */ + private final String user; + /** + * 数据库密码 + */ + private final String pwd; + /** + * 数据库连接 + */ + private final String url; + + /** + * 默认database + */ + private static final String defaultDatabase = "default_database"; + + /** + * 数据库用户名 + * + * @return 数据库用户名 + */ + public String getUser() { + return user; + } + + /** + * 数据库密码 + * + * @return 数据库密码 + */ + public String getPwd() { + return pwd; + } + + /** + * 数据库用户名 + * + * @return 数据库用户名 + */ + public String getUrl() { + return url; + } + + public DlinkMysqlCatalog(String name, + String url, + String user, + String pwd) { + super(name, defaultDatabase); + this.url = url; + this.user = user; + this.pwd = pwd; + } + + public DlinkMysqlCatalog(String name) { + super(name, defaultDatabase); + this.url = DlinkMysqlCatalogFactoryOptions.URL.defaultValue(); + this.user = DlinkMysqlCatalogFactoryOptions.USERNAME.defaultValue(); + this.pwd = DlinkMysqlCatalogFactoryOptions.PASSWORD.defaultValue(); + } + + @Override + public void open() throws CatalogException { + // 验证连接是否有效 + // 获取默认db看看是否存在 + Integer defaultDbId = getDatabaseId(defaultDatabase); + if (defaultDbId == null) { + try { + createDatabase(defaultDatabase, new CatalogDatabaseImpl(new HashMap<>(), ""), true); + } catch (DatabaseAlreadyExistException a) { + logger.info("重复创建默认库"); + } + } + } + + @Override + public void close() throws CatalogException { + if (connection != null) { + try { + connection.close(); + connection = null; + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("Fail to close connection.", e); + } + } + } + + private Connection connection; + + protected Connection getConnection() throws CatalogException { + try { + // todo: 包装一个方法用于获取连接,方便后续改造使用其他的连接生成。 + // Class.forName(MYSQL_DRIVER); + if (connection == null) { + connection = DriverManager.getConnection(url, user, pwd); + } + if (sqlExceptionHappened) { + sqlExceptionHappened = false; + if (!connection.isValid(10)) { + connection.close(); + } + if (connection.isClosed()) { + connection = null; + return getConnection(); + } + connection = null; + return getConnection(); + } + + return connection; + } catch (Exception e) { + throw new 
CatalogException("Fail to get connection.", e);
+		}
+	}
+
+	@Override
+	public List<String> listDatabases() throws CatalogException {
+		List<String> myDatabases = new ArrayList<>();
+		String querySql = "SELECT database_name FROM metadata_database";
+		Connection conn = getConnection();
+		try (PreparedStatement ps = conn.prepareStatement(querySql)) {
+
+			ResultSet rs = ps.executeQuery();
+			while (rs.next()) {
+				String dbName = rs.getString(1);
+				myDatabases.add(dbName);
+			}
+
+			return myDatabases;
+		} catch (Exception e) {
+			throw new CatalogException(
+					String.format("Failed listing database in catalog %s", getName()), e);
+		}
+	}
+
+	@Override
+	public CatalogDatabase getDatabase(String databaseName) throws DatabaseNotExistException, CatalogException {
+		String querySql = "SELECT id, database_name,description "
+				+ " FROM metadata_database where database_name=?";
+		Connection conn = getConnection();
+		try (PreparedStatement ps = conn.prepareStatement(querySql)) {
+			ps.setString(1, databaseName);
+			ResultSet rs = ps.executeQuery();
+
+			if (rs.next()) {
+				int id = rs.getInt("id");
+				String description = rs.getString("description");
+
+				Map<String, String> map = new HashMap<>();
+
+				String sql = "select `key`,`value` "
+						+ "from metadata_database_property "
+						+ "where database_id=? ";
+				try (PreparedStatement pStat = conn.prepareStatement(sql)) {
+					pStat.setInt(1, id);
+					ResultSet prs = pStat.executeQuery();
+					while (prs.next()) {
+						// the property columns come from prs, the inner result set
+						map.put(prs.getString("key"), prs.getString("value"));
+					}
+				} catch (SQLException e) {
+					sqlExceptionHappened = true;
+					throw new CatalogException(
+							String.format("Failed get database properties in catalog %s", getName()), e);
+				}
+
+				return new CatalogDatabaseImpl(map, description);
+			} else {
+				throw new DatabaseNotExistException(getName(), databaseName);
+			}
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException(
+					String.format("Failed get database in catalog %s", getName()), e);
+		}
+	}
+
+	@Override
+	public boolean databaseExists(String databaseName) throws CatalogException {
+		return getDatabaseId(databaseName) != null;
+	}
+
+	private Integer getDatabaseId(String databaseName) throws CatalogException {
+		String querySql = "select id from metadata_database where database_name=?";
+		Connection conn = getConnection();
+		try (PreparedStatement ps = conn.prepareStatement(querySql)) {
+			ps.setString(1, databaseName);
+			ResultSet rs = ps.executeQuery();
+			boolean multiDB = false;
+			Integer id = null;
+			while (rs.next()) {
+				if (!multiDB) {
+					id = rs.getInt(1);
+					multiDB = true;
+				} else {
+					throw new CatalogException("存在多个同名database: " + databaseName);
+				}
+			}
+			return id;
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException(String.format("获取 database 信息失败:%s.%s", getName(), databaseName), e);
+		}
+	}
+
+	@Override
+	public void createDatabase(String databaseName, CatalogDatabase db,
+							boolean ignoreIfExists) throws DatabaseAlreadyExistException, CatalogException {
+
+		checkArgument(!StringUtils.isNullOrWhitespaceOnly(databaseName));
+		checkNotNull(db);
+		if (databaseExists(databaseName)) {
+			if (!ignoreIfExists) {
+				throw new DatabaseAlreadyExistException(getName(), databaseName);
+			}
+		} else {
+			// create the database row here, inside a transaction
+			Connection conn = getConnection();
+			String insertSql = "insert into metadata_database(database_name, description) values(?, ?)";
+
+			try (PreparedStatement stat = conn.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS)) {
+				conn.setAutoCommit(false);
+				stat.setString(1, databaseName);
+				stat.setString(2, db.getComment());
+				stat.executeUpdate();
+				ResultSet idRs = stat.getGeneratedKeys();
+				if (idRs.next() && db.getProperties() != null && db.getProperties().size() > 0) {
+					int id = idRs.getInt(1);
+					String propInsertSql = "insert into metadata_database_property(database_id, "
+							+ "`key`,`value`) values (?,?,?)";
+					PreparedStatement pstat = conn.prepareStatement(propInsertSql);
+					for (Map.Entry<String, String> entry : db.getProperties().entrySet()) {
+						pstat.setInt(1, id);
+						pstat.setString(2, entry.getKey());
+						pstat.setString(3, entry.getValue());
+						pstat.addBatch();
+					}
+					pstat.executeBatch();
+					pstat.close();
+				}
+				conn.commit();
+			} catch (SQLException e) {
+				sqlExceptionHappened = true;
+				logger.error("创建 database 信息失败:", e);
+				// propagate the failure instead of swallowing it, matching dropDatabase/alterDatabase
+				throw new CatalogException("创建 database 信息失败:", e);
+			}
+		}
+	}
+
+	@Override
+	public void dropDatabase(String name, boolean ignoreIfNotExists,
+							boolean cascade) throws DatabaseNotExistException, DatabaseNotEmptyException, CatalogException {
+		if (name.equals(defaultDatabase)) {
+			throw new CatalogException("默认 database 不可以删除");
+		}
+		// 1. fetch the db id
+		Integer id = getDatabaseId(name);
+		if (id == null) {
+			if (!ignoreIfNotExists) {
+				throw new DatabaseNotExistException(getName(), name);
+			}
+			return;
+		}
+		Connection conn = getConnection();
+		try {
+			conn.setAutoCommit(false);
+			// check whether the database still holds tables
+			List<String> tables = listTables(name);
+			if (tables.size() > 0) {
+				if (!cascade) {
+					// tables exist and cascading is off: refuse to drop
+					throw new DatabaseNotEmptyException(getName(), name);
+				}
+				// cascade: drop the contained tables first
+				for (String table : tables) {
+					try {
+						dropTable(new ObjectPath(name, table), true);
+					} catch (TableNotExistException t) {
+						logger.warn("表{}不存在", name + "." + table);
+					}
+				}
+			}
+			// todo: this is a hard delete; decide later whether to keep a tombstone record
+			String deletePropSql = "delete from metadata_database_property where database_id=?";
+			PreparedStatement dStat = conn.prepareStatement(deletePropSql);
+			dStat.setInt(1, id);
+			dStat.executeUpdate();
+			dStat.close();
+			// the primary key of metadata_database is `id` (see getDatabaseId), not `database_id`
+			String deleteDbSql = "delete from metadata_database where id=?";
+			dStat = conn.prepareStatement(deleteDbSql);
+			dStat.setInt(1, id);
+			dStat.executeUpdate();
+			dStat.close();
+			conn.commit();
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException("删除 database 信息失败:", e);
+		}
+	}
+
+	@Override
+	public void alterDatabase(String name, CatalogDatabase newDb,
+							boolean ignoreIfNotExists) throws DatabaseNotExistException, CatalogException {
+		if (name.equals(defaultDatabase)) {
+			throw new CatalogException("默认 database 不可以修改");
+		}
+		// 1. fetch the db id
+		Integer id = getDatabaseId(name);
+		if (id == null) {
+			if (!ignoreIfNotExists) {
+				throw new DatabaseNotExistException(getName(), name);
+			}
+			return;
+		}
+		Connection conn = getConnection();
+		try {
+			conn.setAutoCommit(false);
+			// 1. name and type are immutable; only the comment can change
+			String updateCommentSql = "update metadata_database set description=? where id=?";
+			PreparedStatement uState = conn.prepareStatement(updateCommentSql);
+			uState.setString(1, newDb.getComment());
+			uState.setInt(2, id);
+			uState.executeUpdate();
+			uState.close();
+			if (newDb.getProperties() != null && newDb.getProperties().size() > 0) {
+				String upsertSql = "insert into metadata_database_property (database_id, `key`,`value`) \n"
+						+ "values (?,?,?)\n"
+						+ "on duplicate key update `value` =?, update_time = sysdate()\n";
+				PreparedStatement pstat = conn.prepareStatement(upsertSql);
+				for (Map.Entry<String, String> entry : newDb.getProperties().entrySet()) {
+					pstat.setInt(1, id);
+					pstat.setString(2, entry.getKey());
+					pstat.setString(3, entry.getValue());
+					pstat.setString(4, entry.getValue());
+					pstat.addBatch();
+				}
+
+				pstat.executeBatch();
+			}
+			conn.commit();
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException("修改 database 信息失败:", e);
+		}
+	}
+
+	@Override
+	public List<String> listTables(String databaseName) throws DatabaseNotExistException, CatalogException {
+		return listTablesViews(databaseName, ObjectType.TABLE);
+	}
+
+	@Override
+	public List<String> listViews(String databaseName) throws DatabaseNotExistException, CatalogException {
+		return listTablesViews(databaseName, ObjectType.VIEW);
+	}
+
+	protected List<String> listTablesViews(String databaseName,
+										String tableType) throws DatabaseNotExistException, CatalogException {
+		Integer databaseId = getDatabaseId(databaseName);
+		if (null == databaseId) {
+			throw new DatabaseNotExistException(getName(), databaseName);
+		}
+
+		// get all schemas; tableType selects TABLE or VIEW
+		String querySql = "SELECT table_name FROM metadata_table where table_type=? and database_id = ?";
+		Connection conn = getConnection();
+		try (PreparedStatement ps = conn.prepareStatement(querySql)) {
+			ps.setString(1, tableType);
+			ps.setInt(2, databaseId);
+			ResultSet rs = ps.executeQuery();
+
+			List<String> tables = new ArrayList<>();
+
+			while (rs.next()) {
+				String table = rs.getString(1);
+				tables.add(table);
+			}
+			return tables;
+		} catch (Exception e) {
+			throw new CatalogException(
+					String.format("Failed listing %s in catalog %s", tableType, getName()), e);
+		}
+	}
+
+	@Override
+	public CatalogBaseTable getTable(ObjectPath tablePath) throws TableNotExistException, CatalogException {
+		// step by step:
+		// 1. fetch the row, which may be a table or a view
+		// 2. fetch the columns
+		// 3. fetch the properties
+		Integer id = getTableId(tablePath);
+
+		if (id == null) {
+			throw new TableNotExistException(getName(), tablePath);
+		}
+
+		Connection conn = getConnection();
+		try {
+			String queryTable = "SELECT table_name "
+					+ " ,description, table_type "
+					+ " FROM metadata_table "
+					+ " where id=?";
+			PreparedStatement ps = conn.prepareStatement(queryTable);
+			ps.setInt(1, id);
+			ResultSet rs = ps.executeQuery();
+			String description;
+			String tableType;
+			if (rs.next()) {
+				description = rs.getString("description");
+				tableType = rs.getString("table_type");
+				ps.close();
+			} else {
+				ps.close();
+				throw new TableNotExistException(getName(), tablePath);
+			}
+			if (tableType.equals(ObjectType.TABLE)) {
+				// plain table: round-trip through its serialized properties
+				String propSql = "SELECT `key`, `value` from metadata_table_property "
+						+ "WHERE table_id=?";
+				PreparedStatement pState = conn.prepareStatement(propSql);
+				pState.setInt(1, id);
+				ResultSet prs = pState.executeQuery();
+				Map<String, String> props = new HashMap<>();
+				while (prs.next()) {
+					String key = prs.getString("key");
+					String value = prs.getString("value");
+					props.put(key, value);
+				}
+				pState.close();
+				props.put(COMMENT, description);
+				return CatalogTable.fromProperties(props);
+			}
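+			// the VIEW branch below rebuilds the schema from metadata_column and restores the
+			// original/expanded query text from metadata_table_property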
else if (tableType.equals(ObjectType.VIEW)) { + // 1、从库中取出table信息。(前面已做) + // 2、取出字段。 + String colSql = "SELECT column_name, column_type, data_type, description " + + " FROM metadata_column WHERE " + + " table_id=?"; + PreparedStatement cStat = conn.prepareStatement(colSql); + cStat.setInt(1, id); + ResultSet crs = cStat.executeQuery(); + + Schema.Builder builder = Schema.newBuilder(); + while (crs.next()) { + String colName = crs.getString("column_name"); + String dataType = crs.getString("data_type"); + + builder.column(colName, dataType); + String cDesc = crs.getString("description"); + if (null != cDesc && cDesc.length() > 0) { + builder.withComment(cDesc); + } + } + cStat.close(); + // 3、取出query + String qSql = "SELECT `key`, value FROM metadata_table_property" + + " WHERE table_id=? "; + PreparedStatement qStat = conn.prepareStatement(qSql); + qStat.setInt(1, id); + ResultSet qrs = qStat.executeQuery(); + String originalQuery = ""; + String expandedQuery = ""; + Map options = new HashMap<>(); + while (qrs.next()) { + String key = qrs.getString("key"); + String value = qrs.getString("value"); + if ("OriginalQuery".equals(key)) { + originalQuery = value; + } else if ("ExpandedQuery".equals(key)) { + expandedQuery = value; + } else { + options.put(key, value); + } + } + // 合成view + return CatalogView.of(builder.build(), description, originalQuery, expandedQuery, options); + } else { + throw new CatalogException("不支持的数据类型。" + tableType); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("获取 表信息失败。", e); + } + + } + + @Override + public boolean tableExists(ObjectPath tablePath) throws CatalogException { + Integer id = getTableId(tablePath); + return id != null; + } + + private Integer getTableId(ObjectPath tablePath) { + Integer dbId = getDatabaseId(tablePath.getDatabaseName()); + if (dbId == null) { + return null; + } + // 获取id + String getIdSql = "select id from metadata_table " + + " where table_name=? 
and database_id=?"; + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(getIdSql)) { + gStat.setString(1, tablePath.getObjectName()); + gStat.setInt(2, dbId); + ResultSet rs = gStat.executeQuery(); + if (rs.next()) { + return rs.getInt(1); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + logger.error("get table fail", e); + throw new CatalogException("get table fail.", e); + } + return null; + } + + @Override + public void dropTable(ObjectPath tablePath, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException { + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + Connection conn = getConnection(); + try { + // todo: 现在是真实删除,后续设计是否做记录保留。 + conn.setAutoCommit(false); + String deletePropSql = "delete from metadata_table_property " + + " where table_id=?"; + PreparedStatement dStat = conn.prepareStatement(deletePropSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + String deleteColSql = "delete from metadata_column " + + " where table_id=?"; + dStat = conn.prepareStatement(deleteColSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + String deleteDbSql = "delete from metadata_table " + + " where id=?"; + dStat = conn.prepareStatement(deleteDbSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + conn.commit(); + } catch (SQLException e) { + sqlExceptionHappened = true; + logger.error("drop table fail", e); + throw new CatalogException("drop table fail.", e); + } + } + + @Override + public void renameTable(ObjectPath tablePath, String newTableName, + boolean ignoreIfNotExists) throws TableNotExistException, TableAlreadyExistException, CatalogException { + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + ObjectPath newPath = new ObjectPath(tablePath.getDatabaseName(), newTableName); + if (tableExists(newPath)) { + throw new TableAlreadyExistException(getName(), newPath); + } + String updateSql = "UPDATE metadata_table SET table_name=? 
WHERE id=?"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(updateSql)) { + ps.setString(1, newTableName); + ps.setInt(2, id); + ps.executeUpdate(); + } catch (SQLException ex) { + sqlExceptionHappened = true; + throw new CatalogException("修改表名失败", ex); + } + } + + @Override + public void createTable(ObjectPath tablePath, CatalogBaseTable table, + boolean ignoreIfExists) throws TableAlreadyExistException, DatabaseNotExistException, CatalogException { + Integer dbId = getDatabaseId(tablePath.getDatabaseName()); + if (null == dbId) { + throw new DatabaseNotExistException(getName(), tablePath.getDatabaseName()); + } + if (tableExists(tablePath)) { + if (!ignoreIfExists) { + throw new TableAlreadyExistException(getName(), tablePath); + } + return; + } + // 插入表 + // 插入到table表。这里,它可能是table也可能是view + // 如果是一个table,我们认为它是一个 resolved table,就可以使用properties方式来进行序列化并保存。 + // 如果是一个view,我们认为它只能有物理字段 + if (!(table instanceof ResolvedCatalogBaseTable)) { + throw new UnsupportedOperationException("暂时不支持输入非 ResolvedCatalogBaseTable 类型的表"); + } + Connection conn = getConnection(); + try { + conn.setAutoCommit(false); + // 首先插入表信息 + CatalogBaseTable.TableKind kind = table.getTableKind(); + + String insertSql = "insert into metadata_table(\n" + + " table_name," + + " table_type," + + " database_id," + + " description)" + + " values(?,?,?,?)"; + PreparedStatement iStat = conn.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS); + iStat.setString(1, tablePath.getObjectName()); + iStat.setString(2, kind.toString()); + iStat.setInt(3, dbId); + iStat.setString(4, table.getComment()); + iStat.executeUpdate(); + ResultSet idRs = iStat.getGeneratedKeys(); + if (!idRs.next()) { + iStat.close(); + throw new CatalogException("插入元数据表信息失败"); + } + int id = idRs.getInt(1); + iStat.close(); + // 插入属性和列 + if (table instanceof ResolvedCatalogTable) { + // table 就可以直接拿properties了。 + Map props = ((ResolvedCatalogTable) table).toProperties(); + String propInsertSql = "insert into metadata_table_property(table_id," + + "`key`,`value`) values (?,?,?)"; + PreparedStatement pStat = conn.prepareStatement(propInsertSql); + for (Map.Entry entry : props.entrySet()) { + pStat.setInt(1, id); + pStat.setString(2, entry.getKey()); + pStat.setString(3, entry.getValue()); + pStat.addBatch(); + } + pStat.executeBatch(); + pStat.close(); + } else { + // view,咱先假定它只有物理字段 + // view 还需要保存:query,expanded query + // 插入属性和列 + ResolvedCatalogView view = (ResolvedCatalogView) table; + List cols = view.getUnresolvedSchema().getColumns(); + if (cols.size() > 0) { + String colInsertSql = "insert into metadata_column(" + + " column_name, column_type, data_type" + + " , `expr`" + + " , description" + + " , table_id" + + " , `primary`) " + + " values(?,?,?,?,?,?,?)"; + PreparedStatement colIStat = conn.prepareStatement(colInsertSql); + for (Schema.UnresolvedColumn col : cols) { + if (col instanceof Schema.UnresolvedPhysicalColumn) { + Schema.UnresolvedPhysicalColumn pCol = (Schema.UnresolvedPhysicalColumn) col; + if (!(pCol.getDataType() instanceof DataType)) { + throw new UnsupportedOperationException(String.format( + "类型识别失败,该列不是有效类型:%s.%s.%s : %s", tablePath.getDatabaseName(), + tablePath.getObjectName(), pCol.getName(), + pCol.getDataType())); + } + DataType dataType = (DataType) pCol.getDataType(); + + colIStat.setString(1, pCol.getName()); + colIStat.setString(2, ColumnType.PHYSICAL); + colIStat.setString(3, + dataType.getLogicalType().asSerializableString()); + colIStat.setObject(4, null); + 
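// parameter 4 is the `expr` column; a physical column carries no computed expression
+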
colIStat.setString(5, pCol.getComment().orElse("")); + colIStat.setInt(6, id); + colIStat.setObject(7, null); // view没有主键 + colIStat.addBatch(); + } else { + throw new UnsupportedOperationException("暂时认为view 不会出现 非物理字段"); + } + } + colIStat.executeBatch(); + colIStat.close(); + + // 写 query等信息到数据库 + Map option = view.getOptions(); + if (option == null) { + option = new HashMap<>(); + } + option.put("OriginalQuery", view.getOriginalQuery()); + option.put("ExpandedQuery", view.getExpandedQuery()); + String propInsertSql = "insert into metadata_table_property(table_id," + + "`key`,`value`) values (?,?,?)"; + PreparedStatement pStat = conn.prepareStatement(propInsertSql); + for (Map.Entry entry : option.entrySet()) { + pStat.setInt(1, id); + pStat.setString(2, entry.getKey()); + pStat.setString(3, entry.getValue()); + pStat.addBatch(); + } + pStat.executeBatch(); + pStat.close(); + } + } + conn.commit(); + } catch (SQLException ex) { + sqlExceptionHappened = true; + logger.error("插入数据库失败", ex); + throw new CatalogException("插入数据库失败", ex); + } + } + + @Override + public void alterTable(ObjectPath tablePath, CatalogBaseTable newTable, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException { + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + + Map opts = newTable.getOptions(); + if (opts != null && opts.size() > 0) { + String updateSql = "INSERT INTO metadata_table_property(table_id," + + "`key`,`value`) values (?,?,?) " + + "on duplicate key update `value` =?, update_time = sysdate()"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(updateSql)) { + for (Map.Entry entry : opts.entrySet()) { + ps.setInt(1, id); + ps.setString(2, entry.getKey()); + ps.setString(3, entry.getValue()); + ps.setString(4, entry.getValue()); + ps.addBatch(); + } + ps.executeBatch(); + } catch (SQLException ex) { + sqlExceptionHappened = true; + throw new CatalogException("修改表名失败", ex); + } + } + } + + /************************ partition *************************/ + @Override + public List listPartitions(ObjectPath tablePath) throws TableNotExistException, TableNotPartitionedException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public List listPartitions(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws TableNotExistException, TableNotPartitionedException, PartitionSpecInvalidException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public List listPartitionsByFilter(ObjectPath tablePath, + List filters) throws TableNotExistException, TableNotPartitionedException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public CatalogPartition getPartition(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public boolean partitionExists(ObjectPath tablePath, CatalogPartitionSpec partitionSpec) throws CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void createPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition partition, + boolean ignoreIfExists) throws TableNotExistException, TableNotPartitionedException, 
PartitionSpecInvalidException, PartitionAlreadyExistsException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void dropPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition newPartition, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + /***********************Functions**********************/ + + @Override + public List listFunctions(String dbName) throws DatabaseNotExistException, CatalogException { + Integer dbId = getDatabaseId(dbName); + if (null == dbId) { + throw new DatabaseNotExistException(getName(), dbName); + } + String querySql = "SELECT function_name from metadata_function " + + " WHERE database_id=?"; + + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(querySql)) { + gStat.setInt(1, dbId); + ResultSet rs = gStat.executeQuery(); + List functions = new ArrayList<>(); + while (rs.next()) { + String n = rs.getString("function_name"); + functions.add(n); + } + return functions; + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("获取 UDF 列表失败"); + } + } + + @Override + public CatalogFunction getFunction(ObjectPath functionPath) throws FunctionNotExistException, CatalogException { + Integer id = getFunctionId(functionPath); + if (null == id) { + throw new FunctionNotExistException(getName(), functionPath); + } + + String querySql = "SELECT class_name,function_language from metadata_function " + + " WHERE id=?"; + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(querySql)) { + gStat.setInt(1, id); + ResultSet rs = gStat.executeQuery(); + if (rs.next()) { + String className = rs.getString("class_name"); + String language = rs.getString("function_language"); + CatalogFunctionImpl func = new CatalogFunctionImpl(className, FunctionLanguage.valueOf(language)); + return func; + } else { + throw new FunctionNotExistException(getName(), functionPath); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("获取 UDF 失败:" + + functionPath.getDatabaseName() + "." + + functionPath.getObjectName()); + } + } + + @Override + public boolean functionExists(ObjectPath functionPath) throws CatalogException { + Integer id = getFunctionId(functionPath); + return id != null; + } + + private Integer getFunctionId(ObjectPath functionPath) { + Integer dbId = getDatabaseId(functionPath.getDatabaseName()); + if (dbId == null) { + return null; + } + // 获取id + String getIdSql = "select id from metadata_function " + + " where function_name=? 
and database_id=?";
+		Connection conn = getConnection();
+		try (PreparedStatement gStat = conn.prepareStatement(getIdSql)) {
+			gStat.setString(1, functionPath.getObjectName());
+			gStat.setInt(2, dbId);
+			ResultSet rs = gStat.executeQuery();
+			if (rs.next()) {
+				int id = rs.getInt(1);
+				return id;
+			}
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			logger.error("get function fail", e);
+			throw new CatalogException("get function fail.", e);
+		}
+		return null;
+	}
+
+	@Override
+	public void createFunction(ObjectPath functionPath, CatalogFunction function,
+							boolean ignoreIfExists) throws FunctionAlreadyExistException, DatabaseNotExistException, CatalogException {
+		Integer dbId = getDatabaseId(functionPath.getDatabaseName());
+		if (null == dbId) {
+			throw new DatabaseNotExistException(getName(), functionPath.getDatabaseName());
+		}
+		if (functionExists(functionPath)) {
+			if (!ignoreIfExists) {
+				throw new FunctionAlreadyExistException(getName(), functionPath);
+			}
+			// nothing to do: the function exists and ignoreIfExists is set
+			return;
+		}
+
+		Connection conn = getConnection();
+		String insertSql = "Insert into metadata_function "
+				+ "(function_name,class_name,database_id,function_language) "
+				+ " values (?,?,?,?)";
+		try (PreparedStatement ps = conn.prepareStatement(insertSql)) {
+			ps.setString(1, functionPath.getObjectName());
+			ps.setString(2, function.getClassName());
+			ps.setInt(3, dbId);
+			ps.setString(4, function.getFunctionLanguage().toString());
+			ps.executeUpdate();
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException("创建 函数 失败", e);
+		}
+	}
+
+	@Override
+	public void alterFunction(ObjectPath functionPath, CatalogFunction newFunction,
+							boolean ignoreIfNotExists) throws FunctionNotExistException, CatalogException {
+		Integer id = getFunctionId(functionPath);
+		if (null == id) {
+			if (!ignoreIfNotExists) {
+				throw new FunctionNotExistException(getName(), functionPath);
+			}
+			return;
+		}
+
+		Connection conn = getConnection();
+		// MySQL does not allow parentheses around the SET list
+		String updateSql = "update metadata_function "
+				+ "set class_name=?, function_language=? "
+				+ " where id=?";
+		try (PreparedStatement ps = conn.prepareStatement(updateSql)) {
+			ps.setString(1, newFunction.getClassName());
+			ps.setString(2, newFunction.getFunctionLanguage().toString());
+			ps.setInt(3, id);
+			ps.executeUpdate();
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException("修改 函数 失败", e);
+		}
+	}
+
+	@Override
+	public void dropFunction(ObjectPath functionPath,
+							boolean ignoreIfNotExists) throws FunctionNotExistException, CatalogException {
+		Integer id = getFunctionId(functionPath);
+		if (null == id) {
+			if (!ignoreIfNotExists) {
+				throw new FunctionNotExistException(getName(), functionPath);
+			}
+			return;
+		}
+
+		Connection conn = getConnection();
+		String deleteSql = "delete from metadata_function "
+				+ " where id=?";
+		try (PreparedStatement ps = conn.prepareStatement(deleteSql)) {
+			ps.setInt(1, id);
+			ps.executeUpdate();
+		} catch (SQLException e) {
+			sqlExceptionHappened = true;
+			throw new CatalogException("删除 函数 失败", e);
+		}
+	}
+
+	@Override
+	public CatalogTableStatistics getTableStatistics(ObjectPath tablePath) throws TableNotExistException, CatalogException {
+		// todo: complete this method
+		checkNotNull(tablePath);
+
+		if (!tableExists(tablePath)) {
+			throw new TableNotExistException(getName(), tablePath);
+		}
+		/*
+		 * if (!isPartitionedTable(tablePath)) { CatalogTableStatistics result = tableStats.get(tablePath); return
result.copy() : CatalogTableStatistics.UNKNOWN; } else { return + * CatalogTableStatistics.UNKNOWN; } + */ + return CatalogTableStatistics.UNKNOWN; + } + + @Override + public CatalogColumnStatistics getTableColumnStatistics(ObjectPath tablePath) throws TableNotExistException, CatalogException { + // todo: 补充完成该方法。 + checkNotNull(tablePath); + + if (!tableExists(tablePath)) { + throw new TableNotExistException(getName(), tablePath); + } + + // CatalogColumnStatistics result = tableColumnStats.get(tablePath); + // return result != null ? result.copy() : CatalogColumnStatistics.UNKNOWN; + return CatalogColumnStatistics.UNKNOWN; + } + + @Override + public CatalogTableStatistics getPartitionStatistics(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public CatalogColumnStatistics getPartitionColumnStatistics(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterTableStatistics(ObjectPath tablePath, CatalogTableStatistics tableStatistics, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterTableColumnStatistics(ObjectPath tablePath, CatalogColumnStatistics columnStatistics, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException, TablePartitionedException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterPartitionStatistics(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, + CatalogTableStatistics partitionStatistics, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterPartitionColumnStatistics(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, + CatalogColumnStatistics columnStatistics, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactory.java new file mode 100644 index 0000000..c6c6c2a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactory.java @@ -0,0 +1,74 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.catalog.mysql.factory; + +import net.srt.flink.catalog.mysql.DlinkMysqlCatalog; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.table.catalog.Catalog; +import org.apache.flink.table.factories.CatalogFactory; +import org.apache.flink.table.factories.FactoryUtil; + +import java.util.HashSet; +import java.util.Set; + +import static net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactoryOptions.PASSWORD; +import static net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactoryOptions.URL; +import static net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactoryOptions.USERNAME; +import static org.apache.flink.table.factories.FactoryUtil.PROPERTY_VERSION; + +/** + * Factory for {@link DlinkMysqlCatalog}. + */ +public class DlinkMysqlCatalogFactory implements CatalogFactory { + + @Override + public String factoryIdentifier() { + return DlinkMysqlCatalogFactoryOptions.IDENTIFIER; + } + + @Override + public Set> requiredOptions() { + final Set> options = new HashSet<>(); + return options; + } + + @Override + public Set> optionalOptions() { + final Set> options = new HashSet<>(); + options.add(USERNAME); + options.add(PASSWORD); + options.add(URL); + options.add(PROPERTY_VERSION); + return options; + } + + @Override + public Catalog createCatalog(Context context) { + final FactoryUtil.CatalogFactoryHelper helper = + FactoryUtil.createCatalogFactoryHelper(this, context); + helper.validate(); + + return new DlinkMysqlCatalog( + context.getName(), + helper.getOptions().get(URL), + helper.getOptions().get(USERNAME), + helper.getOptions().get(PASSWORD)); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactoryOptions.java b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactoryOptions.java new file mode 100644 index 0000000..eba717a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactoryOptions.java @@ -0,0 +1,43 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
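
With the factory in place, the catalog can be registered from a table program by its `dlink_mysql` identifier (defined in DlinkMysqlCatalogFactoryOptions below). A minimal sketch, not project-provided code: the connection values are placeholders, and it assumes the metadata tables already exist in the target MySQL schema:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CatalogSmokeTest {
	public static void main(String[] args) {
		TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
		// 'type' selects the factory by identifier; url/username/password map to
		// the options declared in DlinkMysqlCatalogFactoryOptions
		tEnv.executeSql(
				"CREATE CATALOG metadata_catalog WITH ("
						+ " 'type' = 'dlink_mysql',"
						+ " 'url' = 'jdbc:mysql://127.0.0.1:3306/srt_cloud',"
						+ " 'username' = 'root',"
						+ " 'password' = 'root')");
		tEnv.executeSql("USE CATALOG metadata_catalog");
		tEnv.executeSql("SHOW DATABASES").print();
	}
}
```
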
+ * + */ + +package net.srt.flink.catalog.mysql.factory; + +import net.srt.flink.catalog.mysql.DlinkMysqlCatalog; +import org.apache.flink.annotation.Internal; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ConfigOptions; + +/** + * {@link ConfigOption}s for {@link DlinkMysqlCatalog}. + */ +@Internal +public class DlinkMysqlCatalogFactoryOptions { + + public static final String IDENTIFIER = "dlink_mysql"; + + public static final ConfigOption USERNAME = ConfigOptions.key("username").stringType().noDefaultValue(); + + public static final ConfigOption PASSWORD = ConfigOptions.key("password").stringType().noDefaultValue(); + + public static final ConfigOption URL = ConfigOptions.key("url").stringType().noDefaultValue(); + + private DlinkMysqlCatalogFactoryOptions() { + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory new file mode 100644 index 0000000..cbaf9d5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.14/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory @@ -0,0 +1,16 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
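+# The single entry below registers DlinkMysqlCatalogFactory with Flink's factory
+# discovery (java.util.ServiceLoader), which is what makes 'type' = 'dlink_mysql' resolvable.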
+ +net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactory diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/pom.xml new file mode 100644 index 0000000..0bd2cb1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/pom.xml @@ -0,0 +1,46 @@ + + + + flink-catalog-mysql + net.srt + 2.0.0 + + 4.0.0 + + flink-catalog-mysql-1.16 + + + + net.srt + flink-common + ${project.version} + + + net.srt + flink-1.16 + ${project.version} + + + junit + junit + 4.13.2 + test + + + + + + + org.apache.maven.plugins + maven-jar-plugin + + + ${project.parent.parent.parent.basedir}/build/extends + + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/DlinkMysqlCatalog.java b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/DlinkMysqlCatalog.java new file mode 100644 index 0000000..853c0bb --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/DlinkMysqlCatalog.java @@ -0,0 +1,1182 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
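The `DlinkMysqlCatalog` implementation that follows persists every catalog object in plain MySQL tables (`metadata_database`, `metadata_database_property`, `metadata_table`, `metadata_table_property`, `metadata_column`, `metadata_function`). The commit does not include their DDL, so the sketch below is reconstructed purely from the queries inside the class; column types and key choices are assumptions, and the MySQL driver is expected on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MetadataSchemaSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection values; point at the database the catalog uses.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/srt_cloud", "root", "root");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS metadata_database ("
                    + "id INT AUTO_INCREMENT PRIMARY KEY, "
                    + "database_name VARCHAR(128) NOT NULL, "
                    + "description VARCHAR(512))");
            // alterDatabase/alterTable upsert with ON DUPLICATE KEY UPDATE and touch
            // update_time, so the property tables need a unique key over (owner id, key).
            st.execute("CREATE TABLE IF NOT EXISTS metadata_database_property ("
                    + "database_id INT NOT NULL, "
                    + "`key` VARCHAR(128) NOT NULL, "
                    + "`value` VARCHAR(1024), "
                    + "update_time DATETIME, "
                    + "PRIMARY KEY (database_id, `key`))");
            st.execute("CREATE TABLE IF NOT EXISTS metadata_table ("
                    + "id INT AUTO_INCREMENT PRIMARY KEY, "
                    + "table_name VARCHAR(128) NOT NULL, "
                    + "table_type VARCHAR(32) NOT NULL, " // 'TABLE' or 'VIEW'
                    + "database_id INT NOT NULL, "
                    + "description VARCHAR(512))");
            st.execute("CREATE TABLE IF NOT EXISTS metadata_table_property ("
                    + "table_id INT NOT NULL, "
                    + "`key` VARCHAR(128) NOT NULL, "
                    + "`value` TEXT, "
                    + "update_time DATETIME, "
                    + "PRIMARY KEY (table_id, `key`))");
            st.execute("CREATE TABLE IF NOT EXISTS metadata_column ("
                    + "column_name VARCHAR(128) NOT NULL, "
                    + "column_type VARCHAR(32) NOT NULL, " // physical/computed/metadata/watermark
                    + "data_type VARCHAR(256), "
                    + "`expr` VARCHAR(1024), "
                    + "description VARCHAR(512), "
                    + "table_id INT NOT NULL, "
                    + "`primary` VARCHAR(8))");
            st.execute("CREATE TABLE IF NOT EXISTS metadata_function ("
                    + "id INT AUTO_INCREMENT PRIMARY KEY, "
                    + "function_name VARCHAR(128) NOT NULL, "
                    + "class_name VARCHAR(256) NOT NULL, "
                    + "database_id INT NOT NULL, "
                    + "function_language VARCHAR(32))");
        }
    }
}
```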
+ * + */ + +package net.srt.flink.catalog.mysql; + +import net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactoryOptions; +import org.apache.flink.table.api.Schema; +import org.apache.flink.table.catalog.AbstractCatalog; +import org.apache.flink.table.catalog.CatalogBaseTable; +import org.apache.flink.table.catalog.CatalogDatabase; +import org.apache.flink.table.catalog.CatalogDatabaseImpl; +import org.apache.flink.table.catalog.CatalogFunction; +import org.apache.flink.table.catalog.CatalogFunctionImpl; +import org.apache.flink.table.catalog.CatalogPartition; +import org.apache.flink.table.catalog.CatalogPartitionSpec; +import org.apache.flink.table.catalog.CatalogTable; +import org.apache.flink.table.catalog.CatalogView; +import org.apache.flink.table.catalog.FunctionLanguage; +import org.apache.flink.table.catalog.ObjectPath; +import org.apache.flink.table.catalog.ResolvedCatalogBaseTable; +import org.apache.flink.table.catalog.ResolvedCatalogTable; +import org.apache.flink.table.catalog.ResolvedCatalogView; +import org.apache.flink.table.catalog.exceptions.CatalogException; +import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException; +import org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException; +import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException; +import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException; +import org.apache.flink.table.catalog.exceptions.FunctionNotExistException; +import org.apache.flink.table.catalog.exceptions.PartitionAlreadyExistsException; +import org.apache.flink.table.catalog.exceptions.PartitionNotExistException; +import org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException; +import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException; +import org.apache.flink.table.catalog.exceptions.TableNotExistException; +import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException; +import org.apache.flink.table.catalog.exceptions.TablePartitionedException; +import org.apache.flink.table.catalog.stats.CatalogColumnStatistics; +import org.apache.flink.table.catalog.stats.CatalogTableStatistics; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.types.DataType; +import org.apache.flink.util.StringUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static org.apache.flink.util.Preconditions.checkArgument; +import static org.apache.flink.util.Preconditions.checkNotNull; + +/** + * 自定义 catalog + * 检查connection done. 
+ * 默认db,会被强制指定,不管输入的是什么,都会指定为 default_database + * 可以读取配置文件信息来获取数据库连接,而不是在sql语句中强制指定。 + */ +public class DlinkMysqlCatalog extends AbstractCatalog { + + private final Logger logger = LoggerFactory.getLogger(this.getClass()); + + public static final String MYSQL_DRIVER = "com.mysql.cj.jdbc.Driver"; + + public static final String DEFAULT_DATABASE = "default_database"; + + static { + try { + Class.forName(MYSQL_DRIVER); + } catch (ClassNotFoundException e) { + throw new CatalogException("未加载 mysql 驱动!", e); + } + } + + private static final String COMMENT = "comment"; + /** + * 判断是否发生过SQL异常,如果发生过,那么conn可能失效。要注意判断 + */ + private boolean sqlExceptionHappened = false; + + /** + * 对象类型,例如 库、表、视图等 + */ + protected static class ObjectType { + + /** + * 数据库 + */ + public static final String DATABASE = "database"; + + /** + * 数据表 + */ + public static final String TABLE = "TABLE"; + + /** + * 视图 + */ + public static final String VIEW = "VIEW"; + } + + /** + * 对象类型,例如 库、表、视图等 + */ + protected static class ColumnType { + + /** + * 物理字段 + */ + public static final String PHYSICAL = "physical"; + + /** + * 计算字段 + */ + public static final String COMPUTED = "computed"; + + /** + * 元数据字段 + */ + public static final String METADATA = "metadata"; + + /** + * 水印 + */ + public static final String WATERMARK = "watermark"; + } + + /** + * 数据库用户名 + */ + private final String user; + /** + * 数据库密码 + */ + private final String pwd; + /** + * 数据库连接 + */ + private final String url; + + /** + * 默认database + */ + private static final String defaultDatabase = "default_database"; + + /** + * 数据库用户名 + * + * @return 数据库用户名 + */ + public String getUser() { + return user; + } + + /** + * 数据库密码 + * + * @return 数据库密码 + */ + public String getPwd() { + return pwd; + } + + /** + * 数据库用户名 + * + * @return 数据库用户名 + */ + public String getUrl() { + return url; + } + + public DlinkMysqlCatalog(String name, + String url, + String user, + String pwd) { + super(name, defaultDatabase); + this.url = url; + this.user = user; + this.pwd = pwd; + } + + public DlinkMysqlCatalog(String name) { + super(name, defaultDatabase); + this.url = DlinkMysqlCatalogFactoryOptions.URL.defaultValue(); + this.user = DlinkMysqlCatalogFactoryOptions.USERNAME.defaultValue(); + this.pwd = DlinkMysqlCatalogFactoryOptions.PASSWORD.defaultValue(); + } + + @Override + public void open() throws CatalogException { + // 验证连接是否有效 + // 获取默认db看看是否存在 + Integer defaultDbId = getDatabaseId(defaultDatabase); + if (defaultDbId == null) { + try { + createDatabase(defaultDatabase, new CatalogDatabaseImpl(new HashMap<>(), ""), true); + } catch (DatabaseAlreadyExistException a) { + logger.info("重复创建默认库"); + } + } + } + + @Override + public void close() throws CatalogException { + if (connection != null) { + try { + connection.close(); + connection = null; + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("Fail to close connection.", e); + } + } + } + + private Connection connection; + + protected Connection getConnection() throws CatalogException { + try { + // todo: 包装一个方法用于获取连接,方便后续改造使用其他的连接生成。 + // Class.forName(MYSQL_DRIVER); + if (connection == null) { + connection = DriverManager.getConnection(url, user, pwd); + } + if (sqlExceptionHappened) { + sqlExceptionHappened = false; + if (!connection.isValid(10)) { + connection.close(); + } + if (connection.isClosed()) { + connection = null; + return getConnection(); + } + connection = null; + return getConnection(); + } + + return connection; + } catch (Exception e) { + throw new 
CatalogException("Fail to get connection.", e); + } + } + + @Override + public List listDatabases() throws CatalogException { + List myDatabases = new ArrayList<>(); + String querySql = "SELECT database_name FROM metadata_database"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(querySql)) { + + ResultSet rs = ps.executeQuery(); + while (rs.next()) { + String dbName = rs.getString(1); + myDatabases.add(dbName); + } + + return myDatabases; + } catch (Exception e) { + throw new CatalogException( + String.format("Failed listing database in catalog %s", getName()), e); + } + } + + @Override + public CatalogDatabase getDatabase(String databaseName) throws DatabaseNotExistException, CatalogException { + String querySql = "SELECT id, database_name,description " + + " FROM metadata_database where database_name=?"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(querySql)) { + ps.setString(1, databaseName); + ResultSet rs = ps.executeQuery(); + + if (rs.next()) { + int id = rs.getInt("id"); + String description = rs.getString("description"); + + Map map = new HashMap<>(); + + String sql = "select `key`,`value` " + + "from metadata_database_property " + + "where database_id=? "; + try (PreparedStatement pStat = conn.prepareStatement(sql)) { + pStat.setInt(1, id); + ResultSet prs = pStat.executeQuery(); + while (prs.next()) { + map.put(rs.getString("key"), rs.getString("value")); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException( + String.format("Failed get database properties in catalog %s", getName()), e); + } + + return new CatalogDatabaseImpl(map, description); + } else { + throw new DatabaseNotExistException(getName(), databaseName); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException( + String.format("Failed get database in catalog %s", getName()), e); + } + } + + @Override + public boolean databaseExists(String databaseName) throws CatalogException { + return getDatabaseId(databaseName) != null; + } + + private Integer getDatabaseId(String databaseName) throws CatalogException { + String querySql = "select id from metadata_database where database_name=?"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(querySql)) { + ps.setString(1, databaseName); + ResultSet rs = ps.executeQuery(); + boolean multiDB = false; + Integer id = null; + while (rs.next()) { + if (!multiDB) { + id = rs.getInt(1); + multiDB = true; + } else { + throw new CatalogException("存在多个同名database: " + databaseName); + } + } + return id; + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException(String.format("获取 database 信息失败:%s.%s", getName(), databaseName), e); + } + } + + @Override + public void createDatabase(String databaseName, CatalogDatabase db, + boolean ignoreIfExists) throws DatabaseAlreadyExistException, CatalogException { + + checkArgument(!StringUtils.isNullOrWhitespaceOnly(databaseName)); + checkNotNull(db); + if (databaseExists(databaseName)) { + if (!ignoreIfExists) { + throw new DatabaseAlreadyExistException(getName(), databaseName); + } + } else { + // 在这里实现创建库的代码 + Connection conn = getConnection(); + // 启动事务 + String insertSql = "insert into metadata_database(database_name, description) values(?, ?)"; + + try (PreparedStatement stat = conn.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS)) { + conn.setAutoCommit(false); + stat.setString(1, databaseName); + 
stat.setString(2, db.getComment()); + stat.executeUpdate(); + ResultSet idRs = stat.getGeneratedKeys(); + if (idRs.next() && db.getProperties() != null && db.getProperties().size() > 0) { + int id = idRs.getInt(1); + String propInsertSql = "insert into metadata_database_property(database_id, " + + "`key`,`value`) values (?,?,?)"; + PreparedStatement pstat = conn.prepareStatement(propInsertSql); + for (Map.Entry entry : db.getProperties().entrySet()) { + pstat.setInt(1, id); + pstat.setString(2, entry.getKey()); + pstat.setString(3, entry.getValue()); + pstat.addBatch(); + } + pstat.executeBatch(); + pstat.close(); + } + conn.commit(); + } catch (SQLException e) { + sqlExceptionHappened = true; + logger.error("创建 database 信息失败:", e); + } + } + } + + @Override + public void dropDatabase(String name, boolean ignoreIfNotExists, + boolean cascade) throws DatabaseNotExistException, DatabaseNotEmptyException, CatalogException { + if (name.equals(defaultDatabase)) { + throw new CatalogException("默认 database 不可以删除"); + } + // 1、取出db id, + Integer id = getDatabaseId(name); + if (id == null) { + if (!ignoreIfNotExists) { + throw new DatabaseNotExistException(getName(), name); + } + return; + } + Connection conn = getConnection(); + try { + conn.setAutoCommit(false); + // 查询是否有表 + List tables = listTables(name); + if (tables.size() > 0) { + if (!cascade) { + // 有表,不做级联删除。 + throw new DatabaseNotEmptyException(getName(), name); + } + // 做级联删除 + for (String table : tables) { + try { + dropTable(new ObjectPath(name, table), true); + } catch (TableNotExistException t) { + logger.warn("表{}不存在", name + "." + table); + } + } + } + // todo: 现在是真实删除,后续设计是否做记录保留。 + String deletePropSql = "delete from metadata_database_property where database_id=?"; + PreparedStatement dStat = conn.prepareStatement(deletePropSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + String deleteDbSql = "delete from metadata_database where database_id=?"; + dStat = conn.prepareStatement(deleteDbSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + conn.commit(); + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("删除 database 信息失败:", e); + } + } + + @Override + public void alterDatabase(String name, CatalogDatabase newDb, + boolean ignoreIfNotExists) throws DatabaseNotExistException, CatalogException { + if (name.equals(defaultDatabase)) { + throw new CatalogException("默认 database 不可以修改"); + } + // 1、取出db id, + Integer id = getDatabaseId(name); + if (id == null) { + if (!ignoreIfNotExists) { + throw new DatabaseNotExistException(getName(), name); + } + return; + } + Connection conn = getConnection(); + try { + conn.setAutoCommit(false); + // 1、名称不能改,类型不能改。只能改备注 + String updateCommentSql = "update metadata_database set description=? 
where database_id=?"; + PreparedStatement uState = conn.prepareStatement(updateCommentSql); + uState.setString(1, newDb.getComment()); + uState.setInt(2, id); + uState.executeUpdate(); + uState.close(); + if (newDb.getProperties() != null && newDb.getProperties().size() > 0) { + String upsertSql = "insert into metadata_database_property (database_id, `key`,`value`) \n" + + "values (?,?,?)\n" + + "on duplicate key update `value` =?, update_time = sysdate()\n"; + PreparedStatement pstat = conn.prepareStatement(upsertSql); + for (Map.Entry entry : newDb.getProperties().entrySet()) { + pstat.setInt(1, id); + pstat.setString(2, entry.getKey()); + pstat.setString(3, entry.getValue()); + pstat.setString(4, entry.getValue()); + pstat.addBatch(); + } + + pstat.executeBatch(); + } + conn.commit(); + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("修改 database 信息失败:", e); + } + } + + @Override + public List listTables(String databaseName) throws DatabaseNotExistException, CatalogException { + return listTablesViews(databaseName, ObjectType.TABLE); + } + + @Override + public List listViews(String databaseName) throws DatabaseNotExistException, CatalogException { + return listTablesViews(databaseName, ObjectType.VIEW); + } + + protected List listTablesViews(String databaseName, + String tableType) throws DatabaseNotExistException, CatalogException { + Integer databaseId = getDatabaseId(databaseName); + if (null == databaseId) { + throw new DatabaseNotExistException(getName(), databaseName); + } + + // get all schemas + // 要给出table 或 view + String querySql = "SELECT table_name FROM metadata_table where table_type=? and database_id = ?"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(querySql)) { + ps.setString(1, tableType); + ps.setInt(2, databaseId); + ResultSet rs = ps.executeQuery(); + + List tables = new ArrayList<>(); + + while (rs.next()) { + String table = rs.getString(1); + tables.add(table); + } + return tables; + } catch (Exception e) { + throw new CatalogException( + String.format("Failed listing %s in catalog %s", tableType, getName()), e); + } + } + + @Override + public CatalogBaseTable getTable(ObjectPath tablePath) throws TableNotExistException, CatalogException { + // 还是分步骤来 + // 1、先取出表 这可能是view也可能是table + // 2、取出列 + // 3、取出属性 + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + + Connection conn = getConnection(); + try { + String queryTable = "SELECT table_name " + + " ,description, table_type " + + " FROM metadata_table " + + " where id=?"; + PreparedStatement ps = conn.prepareStatement(queryTable); + ps.setInt(1, id); + ResultSet rs = ps.executeQuery(); + String description; + String tableType; + if (rs.next()) { + description = rs.getString("description"); + tableType = rs.getString("table_type"); + ps.close(); + } else { + ps.close(); + throw new TableNotExistException(getName(), tablePath); + } + if (tableType.equals(ObjectType.TABLE)) { + // 这个是 table + String propSql = "SELECT `key`, `value` from metadata_table_property " + + "WHERE table_id=?"; + PreparedStatement pState = conn.prepareStatement(propSql); + pState.setInt(1, id); + ResultSet prs = pState.executeQuery(); + Map props = new HashMap<>(); + while (prs.next()) { + String key = prs.getString("key"); + String value = prs.getString("value"); + props.put(key, value); + } + pState.close(); + props.put(COMMENT, description); + return CatalogTable.fromProperties(props); + } 
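// Note: the TABLE branch above restores the table purely from its serialized
// properties (CatalogTable.fromProperties), while the VIEW branch below
// rebuilds the Schema from metadata_column rows and then re-attaches the
// OriginalQuery/ExpandedQuery options persisted in metadata_table_property.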
else if (tableType.equals(ObjectType.VIEW)) { + // 1、从库中取出table信息。(前面已做) + // 2、取出字段。 + String colSql = "SELECT column_name, column_type, data_type, description " + + " FROM metadata_column WHERE " + + " table_id=?"; + PreparedStatement cStat = conn.prepareStatement(colSql); + cStat.setInt(1, id); + ResultSet crs = cStat.executeQuery(); + + Schema.Builder builder = Schema.newBuilder(); + while (crs.next()) { + String colName = crs.getString("column_name"); + String dataType = crs.getString("data_type"); + + builder.column(colName, dataType); + String cDesc = crs.getString("description"); + if (null != cDesc && cDesc.length() > 0) { + builder.withComment(cDesc); + } + } + cStat.close(); + // 3、取出query + String qSql = "SELECT `key`, value FROM metadata_table_property" + + " WHERE table_id=? "; + PreparedStatement qStat = conn.prepareStatement(qSql); + qStat.setInt(1, id); + ResultSet qrs = qStat.executeQuery(); + String originalQuery = ""; + String expandedQuery = ""; + Map options = new HashMap<>(); + while (qrs.next()) { + String key = qrs.getString("key"); + String value = qrs.getString("value"); + if ("OriginalQuery".equals(key)) { + originalQuery = value; + } else if ("ExpandedQuery".equals(key)) { + expandedQuery = value; + } else { + options.put(key, value); + } + } + // 合成view + return CatalogView.of(builder.build(), description, originalQuery, expandedQuery, options); + } else { + throw new CatalogException("不支持的数据类型。" + tableType); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("获取 表信息失败。", e); + } + + } + + @Override + public boolean tableExists(ObjectPath tablePath) throws CatalogException { + Integer id = getTableId(tablePath); + return id != null; + } + + private Integer getTableId(ObjectPath tablePath) { + Integer dbId = getDatabaseId(tablePath.getDatabaseName()); + if (dbId == null) { + return null; + } + // 获取id + String getIdSql = "select id from metadata_table " + + " where table_name=? 
and database_id=?"; + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(getIdSql)) { + gStat.setString(1, tablePath.getObjectName()); + gStat.setInt(2, dbId); + ResultSet rs = gStat.executeQuery(); + if (rs.next()) { + return rs.getInt(1); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + logger.error("get table fail", e); + throw new CatalogException("get table fail.", e); + } + return null; + } + + @Override + public void dropTable(ObjectPath tablePath, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException { + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + Connection conn = getConnection(); + try { + // todo: 现在是真实删除,后续设计是否做记录保留。 + conn.setAutoCommit(false); + String deletePropSql = "delete from metadata_table_property " + + " where table_id=?"; + PreparedStatement dStat = conn.prepareStatement(deletePropSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + String deleteColSql = "delete from metadata_column " + + " where table_id=?"; + dStat = conn.prepareStatement(deleteColSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + String deleteDbSql = "delete from metadata_table " + + " where id=?"; + dStat = conn.prepareStatement(deleteDbSql); + dStat.setInt(1, id); + dStat.executeUpdate(); + dStat.close(); + conn.commit(); + } catch (SQLException e) { + sqlExceptionHappened = true; + logger.error("drop table fail", e); + throw new CatalogException("drop table fail.", e); + } + } + + @Override + public void renameTable(ObjectPath tablePath, String newTableName, + boolean ignoreIfNotExists) throws TableNotExistException, TableAlreadyExistException, CatalogException { + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + ObjectPath newPath = new ObjectPath(tablePath.getDatabaseName(), newTableName); + if (tableExists(newPath)) { + throw new TableAlreadyExistException(getName(), newPath); + } + String updateSql = "UPDATE metadata_table SET table_name=? 
WHERE id=?"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(updateSql)) { + ps.setString(1, newTableName); + ps.setInt(2, id); + ps.executeUpdate(); + } catch (SQLException ex) { + sqlExceptionHappened = true; + throw new CatalogException("修改表名失败", ex); + } + } + + @Override + public void createTable(ObjectPath tablePath, CatalogBaseTable table, + boolean ignoreIfExists) throws TableAlreadyExistException, DatabaseNotExistException, CatalogException { + Integer dbId = getDatabaseId(tablePath.getDatabaseName()); + if (null == dbId) { + throw new DatabaseNotExistException(getName(), tablePath.getDatabaseName()); + } + if (tableExists(tablePath)) { + if (!ignoreIfExists) { + throw new TableAlreadyExistException(getName(), tablePath); + } + return; + } + // 插入表 + // 插入到table表。这里,它可能是table也可能是view + // 如果是一个table,我们认为它是一个 resolved table,就可以使用properties方式来进行序列化并保存。 + // 如果是一个view,我们认为它只能有物理字段 + if (!(table instanceof ResolvedCatalogBaseTable)) { + throw new UnsupportedOperationException("暂时不支持输入非 ResolvedCatalogBaseTable 类型的表"); + } + Connection conn = getConnection(); + try { + conn.setAutoCommit(false); + // 首先插入表信息 + CatalogBaseTable.TableKind kind = table.getTableKind(); + + String insertSql = "insert into metadata_table(\n" + + " table_name," + + " table_type," + + " database_id," + + " description)" + + " values(?,?,?,?)"; + PreparedStatement iStat = conn.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS); + iStat.setString(1, tablePath.getObjectName()); + iStat.setString(2, kind.toString()); + iStat.setInt(3, dbId); + iStat.setString(4, table.getComment()); + iStat.executeUpdate(); + ResultSet idRs = iStat.getGeneratedKeys(); + if (!idRs.next()) { + iStat.close(); + throw new CatalogException("插入元数据表信息失败"); + } + int id = idRs.getInt(1); + iStat.close(); + // 插入属性和列 + if (table instanceof ResolvedCatalogTable) { + // table 就可以直接拿properties了。 + Map props = ((ResolvedCatalogTable) table).toProperties(); + String propInsertSql = "insert into metadata_table_property(table_id," + + "`key`,`value`) values (?,?,?)"; + PreparedStatement pStat = conn.prepareStatement(propInsertSql); + for (Map.Entry entry : props.entrySet()) { + pStat.setInt(1, id); + pStat.setString(2, entry.getKey()); + pStat.setString(3, entry.getValue()); + pStat.addBatch(); + } + pStat.executeBatch(); + pStat.close(); + } else { + // view,咱先假定它只有物理字段 + // view 还需要保存:query,expanded query + // 插入属性和列 + ResolvedCatalogView view = (ResolvedCatalogView) table; + List cols = view.getUnresolvedSchema().getColumns(); + if (cols.size() > 0) { + String colInsertSql = "insert into metadata_column(" + + " column_name, column_type, data_type" + + " , `expr`" + + " , description" + + " , table_id" + + " , `primary`) " + + " values(?,?,?,?,?,?,?)"; + PreparedStatement colIStat = conn.prepareStatement(colInsertSql); + for (Schema.UnresolvedColumn col : cols) { + if (col instanceof Schema.UnresolvedPhysicalColumn) { + Schema.UnresolvedPhysicalColumn pCol = (Schema.UnresolvedPhysicalColumn) col; + if (!(pCol.getDataType() instanceof DataType)) { + throw new UnsupportedOperationException(String.format( + "类型识别失败,该列不是有效类型:%s.%s.%s : %s", tablePath.getDatabaseName(), + tablePath.getObjectName(), pCol.getName(), + pCol.getDataType())); + } + DataType dataType = (DataType) pCol.getDataType(); + + colIStat.setString(1, pCol.getName()); + colIStat.setString(2, ColumnType.PHYSICAL); + colIStat.setString(3, + dataType.getLogicalType().asSerializableString()); + colIStat.setObject(4, null); + 
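// Parameter 4 binds the `expr` column, which stays null for physical columns;
// only computed columns would carry an expression here.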
colIStat.setString(5, pCol.getComment().orElse("")); + colIStat.setInt(6, id); + colIStat.setObject(7, null); // view没有主键 + colIStat.addBatch(); + } else { + throw new UnsupportedOperationException("暂时认为view 不会出现 非物理字段"); + } + } + colIStat.executeBatch(); + colIStat.close(); + + // 写 query等信息到数据库 + Map option = view.getOptions(); + if (option == null) { + option = new HashMap<>(); + } + option.put("OriginalQuery", view.getOriginalQuery()); + option.put("ExpandedQuery", view.getExpandedQuery()); + String propInsertSql = "insert into metadata_table_property(table_id," + + "`key`,`value`) values (?,?,?)"; + PreparedStatement pStat = conn.prepareStatement(propInsertSql); + for (Map.Entry entry : option.entrySet()) { + pStat.setInt(1, id); + pStat.setString(2, entry.getKey()); + pStat.setString(3, entry.getValue()); + pStat.addBatch(); + } + pStat.executeBatch(); + pStat.close(); + } + } + conn.commit(); + } catch (SQLException ex) { + sqlExceptionHappened = true; + logger.error("插入数据库失败", ex); + throw new CatalogException("插入数据库失败", ex); + } + } + + @Override + public void alterTable(ObjectPath tablePath, CatalogBaseTable newTable, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException { + Integer id = getTableId(tablePath); + + if (id == null) { + throw new TableNotExistException(getName(), tablePath); + } + + Map opts = newTable.getOptions(); + if (opts != null && opts.size() > 0) { + String updateSql = "INSERT INTO metadata_table_property(table_id," + + "`key`,`value`) values (?,?,?) " + + "on duplicate key update `value` =?, update_time = sysdate()"; + Connection conn = getConnection(); + try (PreparedStatement ps = conn.prepareStatement(updateSql)) { + for (Map.Entry entry : opts.entrySet()) { + ps.setInt(1, id); + ps.setString(2, entry.getKey()); + ps.setString(3, entry.getValue()); + ps.setString(4, entry.getValue()); + ps.addBatch(); + } + ps.executeBatch(); + } catch (SQLException ex) { + sqlExceptionHappened = true; + throw new CatalogException("修改表名失败", ex); + } + } + } + + /************************ partition *************************/ + @Override + public List listPartitions(ObjectPath tablePath) throws TableNotExistException, TableNotPartitionedException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public List listPartitions(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws TableNotExistException, TableNotPartitionedException, PartitionSpecInvalidException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public List listPartitionsByFilter(ObjectPath tablePath, + List filters) throws TableNotExistException, TableNotPartitionedException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public CatalogPartition getPartition(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public boolean partitionExists(ObjectPath tablePath, CatalogPartitionSpec partitionSpec) throws CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void createPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition partition, + boolean ignoreIfExists) throws TableNotExistException, TableNotPartitionedException, 
PartitionSpecInvalidException, PartitionAlreadyExistsException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void dropPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterPartition(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, CatalogPartition newPartition, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + /***********************Functions**********************/ + + @Override + public List listFunctions(String dbName) throws DatabaseNotExistException, CatalogException { + Integer dbId = getDatabaseId(dbName); + if (null == dbId) { + throw new DatabaseNotExistException(getName(), dbName); + } + String querySql = "SELECT function_name from metadata_function " + + " WHERE database_id=?"; + + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(querySql)) { + gStat.setInt(1, dbId); + ResultSet rs = gStat.executeQuery(); + List functions = new ArrayList<>(); + while (rs.next()) { + String n = rs.getString("function_name"); + functions.add(n); + } + return functions; + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("获取 UDF 列表失败"); + } + } + + @Override + public CatalogFunction getFunction(ObjectPath functionPath) throws FunctionNotExistException, CatalogException { + Integer id = getFunctionId(functionPath); + if (null == id) { + throw new FunctionNotExistException(getName(), functionPath); + } + + String querySql = "SELECT class_name,function_language from metadata_function " + + " WHERE id=?"; + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(querySql)) { + gStat.setInt(1, id); + ResultSet rs = gStat.executeQuery(); + if (rs.next()) { + String className = rs.getString("class_name"); + String language = rs.getString("function_language"); + CatalogFunctionImpl func = new CatalogFunctionImpl(className, FunctionLanguage.valueOf(language)); + return func; + } else { + throw new FunctionNotExistException(getName(), functionPath); + } + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("获取 UDF 失败:" + + functionPath.getDatabaseName() + "." + + functionPath.getObjectName()); + } + } + + @Override + public boolean functionExists(ObjectPath functionPath) throws CatalogException { + Integer id = getFunctionId(functionPath); + return id != null; + } + + private Integer getFunctionId(ObjectPath functionPath) { + Integer dbId = getDatabaseId(functionPath.getDatabaseName()); + if (dbId == null) { + return null; + } + // 获取id + String getIdSql = "select id from metadata_function " + + " where function_name=? 
and database_id=?"; + Connection conn = getConnection(); + try (PreparedStatement gStat = conn.prepareStatement(getIdSql)) { + gStat.setString(1, functionPath.getObjectName()); + gStat.setInt(2, dbId); + ResultSet rs = gStat.executeQuery(); + if (rs.next()) { + int id = rs.getInt(1); + return id; + } + } catch (SQLException e) { + sqlExceptionHappened = true; + logger.error("get function fail", e); + throw new CatalogException("get function fail.", e); + } + return null; + } + + @Override + public void createFunction(ObjectPath functionPath, CatalogFunction function, + boolean ignoreIfExists) throws FunctionAlreadyExistException, DatabaseNotExistException, CatalogException { + Integer dbId = getDatabaseId(functionPath.getDatabaseName()); + if (null == dbId) { + throw new DatabaseNotExistException(getName(), functionPath.getDatabaseName()); + } + if (functionExists(functionPath)) { + if (!ignoreIfExists) { + throw new FunctionAlreadyExistException(getName(), functionPath); + } + } + + Connection conn = getConnection(); + String insertSql = "Insert into metadata_function " + + "(function_name,class_name,database_id,function_language) " + + " values (?,?,?,?)"; + try (PreparedStatement ps = conn.prepareStatement(insertSql)) { + ps.setString(1, functionPath.getObjectName()); + ps.setString(2, function.getClassName()); + ps.setInt(3, dbId); + ps.setString(4, function.getFunctionLanguage().toString()); + ps.executeUpdate(); + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("创建 函数 失败", e); + } + } + + @Override + public void alterFunction(ObjectPath functionPath, CatalogFunction newFunction, + boolean ignoreIfNotExists) throws FunctionNotExistException, CatalogException { + Integer id = getFunctionId(functionPath); + if (null == id) { + if (!ignoreIfNotExists) { + throw new FunctionNotExistException(getName(), functionPath); + } + return; + } + + Connection conn = getConnection(); + String insertSql = "update metadata_function " + + "set (class_name =?, function_language=?) " + + " where id=?"; + try (PreparedStatement ps = conn.prepareStatement(insertSql)) { + ps.setString(1, newFunction.getClassName()); + ps.setString(2, newFunction.getFunctionLanguage().toString()); + ps.setInt(3, id); + ps.executeUpdate(); + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("修改 函数 失败", e); + } + } + + @Override + public void dropFunction(ObjectPath functionPath, + boolean ignoreIfNotExists) throws FunctionNotExistException, CatalogException { + Integer id = getFunctionId(functionPath); + if (null == id) { + if (!ignoreIfNotExists) { + throw new FunctionNotExistException(getName(), functionPath); + } + return; + } + + Connection conn = getConnection(); + String insertSql = "delete from metadata_function " + + " where id=?"; + try (PreparedStatement ps = conn.prepareStatement(insertSql)) { + ps.setInt(1, id); + ps.executeUpdate(); + } catch (SQLException e) { + sqlExceptionHappened = true; + throw new CatalogException("删除 函数 失败", e); + } + } + + @Override + public CatalogTableStatistics getTableStatistics(ObjectPath tablePath) throws TableNotExistException, CatalogException { + // todo: 补充完成该方法。 + checkNotNull(tablePath); + + if (!tableExists(tablePath)) { + throw new TableNotExistException(getName(), tablePath); + } + /* + * if (!isPartitionedTable(tablePath)) { CatalogTableStatistics result = tableStats.get(tablePath); return + * result != null ? 
result.copy() : CatalogTableStatistics.UNKNOWN; } else { return + * CatalogTableStatistics.UNKNOWN; } + */ + return CatalogTableStatistics.UNKNOWN; + } + + @Override + public CatalogColumnStatistics getTableColumnStatistics(ObjectPath tablePath) throws TableNotExistException, CatalogException { + // todo: 补充完成该方法。 + checkNotNull(tablePath); + + if (!tableExists(tablePath)) { + throw new TableNotExistException(getName(), tablePath); + } + + // CatalogColumnStatistics result = tableColumnStats.get(tablePath); + // return result != null ? result.copy() : CatalogColumnStatistics.UNKNOWN; + return CatalogColumnStatistics.UNKNOWN; + } + + @Override + public CatalogTableStatistics getPartitionStatistics(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public CatalogColumnStatistics getPartitionColumnStatistics(ObjectPath tablePath, + CatalogPartitionSpec partitionSpec) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterTableStatistics(ObjectPath tablePath, CatalogTableStatistics tableStatistics, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterTableColumnStatistics(ObjectPath tablePath, CatalogColumnStatistics columnStatistics, + boolean ignoreIfNotExists) throws TableNotExistException, CatalogException, TablePartitionedException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterPartitionStatistics(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, + CatalogTableStatistics partitionStatistics, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } + + @Override + public void alterPartitionColumnStatistics(ObjectPath tablePath, CatalogPartitionSpec partitionSpec, + CatalogColumnStatistics columnStatistics, + boolean ignoreIfNotExists) throws PartitionNotExistException, CatalogException { + // todo: 补充完成该方法。 + throw new UnsupportedOperationException("该方法尚未完成"); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactory.java new file mode 100644 index 0000000..a3fb304 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactory.java @@ -0,0 +1,71 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
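Three defects in the class above are worth flagging: `getDatabase` reads the property rows from the outer `ResultSet` `rs` instead of the inner `prs`, so database properties come back from the wrong cursor; `dropDatabase` and `alterDatabase` address `metadata_database` through a `database_id` column while every other query keys that table on `id`; and `alterFunction` wraps its SET list in parentheses, which MySQL rejects. A hedged correction for the last one, shown as a stand-alone helper that keeps the same JDBC calls:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class AlterFunctionFix {
    private AlterFunctionFix() {
    }

    /** Same statement as DlinkMysqlCatalog.alterFunction, minus the invalid parentheses. */
    public static void alterFunction(Connection conn, int id,
                                     String className, String language) throws SQLException {
        String updateSql = "update metadata_function "
                + "set class_name=?, function_language=? "
                + "where id=?";
        try (PreparedStatement ps = conn.prepareStatement(updateSql)) {
            ps.setString(1, className);
            ps.setString(2, language);
            ps.setInt(3, id);
            ps.executeUpdate();
        }
    }
}
```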
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.catalog.mysql.factory; + +import net.srt.flink.catalog.mysql.DlinkMysqlCatalog; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.table.catalog.Catalog; +import org.apache.flink.table.factories.CatalogFactory; +import org.apache.flink.table.factories.FactoryUtil; + +import java.util.HashSet; +import java.util.Set; + +import static org.apache.flink.table.factories.FactoryUtil.PROPERTY_VERSION; + +/** + * Factory for {@link DlinkMysqlCatalog}. + */ +public class DlinkMysqlCatalogFactory implements CatalogFactory { + + @Override + public String factoryIdentifier() { + return DlinkMysqlCatalogFactoryOptions.IDENTIFIER; + } + + @Override + public Set> requiredOptions() { + final Set> options = new HashSet<>(); + return options; + } + + @Override + public Set> optionalOptions() { + final Set> options = new HashSet<>(); + options.add(DlinkMysqlCatalogFactoryOptions.USERNAME); + options.add(DlinkMysqlCatalogFactoryOptions.PASSWORD); + options.add(DlinkMysqlCatalogFactoryOptions.URL); + options.add(PROPERTY_VERSION); + return options; + } + + @Override + public Catalog createCatalog(Context context) { + final FactoryUtil.CatalogFactoryHelper helper = + FactoryUtil.createCatalogFactoryHelper(this, context); + helper.validate(); + + return new DlinkMysqlCatalog( + context.getName(), + helper.getOptions().get(DlinkMysqlCatalogFactoryOptions.URL), + helper.getOptions().get(DlinkMysqlCatalogFactoryOptions.USERNAME), + helper.getOptions().get(DlinkMysqlCatalogFactoryOptions.PASSWORD)); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactoryOptions.java b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactoryOptions.java new file mode 100644 index 0000000..5d9f3b2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/java/net/srt/flink/catalog/mysql/factory/DlinkMysqlCatalogFactoryOptions.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
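The options class that follows declares all three options with `noDefaultValue()`, so `ConfigOption#defaultValue()` returns null and the single-argument `DlinkMysqlCatalog(String name)` constructor above ends up with a null url/user/password and can never open a connection. Declaring the options in `requiredOptions()` instead of `optionalOptions()` would let `helper.validate()` reject such a catalog at creation time; alternatively, a minimal guard, sketched under that assumption:

```java
import org.apache.flink.table.catalog.exceptions.CatalogException;

public final class OptionGuard {
    private OptionGuard() {
    }

    /** Fails fast when a connection option resolved to null or blank. */
    public static String require(String value, String name) {
        if (value == null || value.trim().isEmpty()) {
            throw new CatalogException("Missing required catalog option: " + name);
        }
        return value;
    }
}
```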
+ * + */ + +package net.srt.flink.catalog.mysql.factory; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ConfigOptions; + +/** + * {@link ConfigOption}s for the dlink_mysql catalog. + */ +@Internal +public class DlinkMysqlCatalogFactoryOptions { + + public static final String IDENTIFIER = "dlink_mysql"; + + public static final ConfigOption<String> USERNAME = ConfigOptions.key("username").stringType().noDefaultValue(); + + public static final ConfigOption<String> PASSWORD = ConfigOptions.key("password").stringType().noDefaultValue(); + + public static final ConfigOption<String> URL = ConfigOptions.key("url").stringType().noDefaultValue(); + + private DlinkMysqlCatalogFactoryOptions() { + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory new file mode 100644 index 0000000..cbaf9d5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/flink-catalog-mysql-1.16/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory @@ -0,0 +1,16 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
+ +net.srt.flink.catalog.mysql.factory.DlinkMysqlCatalogFactory diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/pom.xml new file mode 100644 index 0000000..5cfbef6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/flink-catalog-mysql/pom.xml @@ -0,0 +1,29 @@ + + + + flink-catalog + net.srt + 2.0.0 + + 4.0.0 + + flink-catalog-mysql + pom + + + + flink-1.16 + + flink-catalog-mysql-1.16 + + + + flink-1.14 + + flink-catalog-mysql-1.14 + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-catalog/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-catalog/pom.xml new file mode 100644 index 0000000..31af9dc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-catalog/pom.xml @@ -0,0 +1,19 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-catalog + pom + + flink-catalog-mysql + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/dependency-reduced-pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/dependency-reduced-pom.xml new file mode 100644 index 0000000..5dd15ae --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/dependency-reduced-pom.xml @@ -0,0 +1,106 @@ + + + + flink-client + net.srt + 2.0.0 + + 4.0.0 + flink-client-1.14 + + + + maven-compiler-plugin + + + maven-shade-plugin + + + package + + shade + + + + + + + + + net.srt + flink-client-base + 2.0.0 + provided + + + net.srt + flink-common + 2.0.0 + provided + + + net.srt + flink-1.14 + 2.0.0 + provided + + + com.fasterxml.jackson.datatype + jackson-datatype-jsr310 + 2.13.3 + provided + + + javax.xml.bind + jaxb-api + 2.3.0 + provided + + + com.sun.xml.bind + jaxb-impl + 2.3.0 + provided + + + com.sun.xml.bind + jaxb-core + 2.3.0 + provided + + + javax.activation + activation + 1.1.1 + provided + + + org.projectlombok + lombok + 1.18.24 + provided + true + + + org.mapstruct + mapstruct + 1.4.2.Final + provided + + + org.mapstruct + mapstruct-jdk8 + 1.4.2.Final + provided + + + org.mapstruct + mapstruct-processor + 1.4.2.Final + provided + + + + UTF-8 + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/pom.xml new file mode 100644 index 0000000..8c050a3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/pom.xml @@ -0,0 +1,69 @@ + + + + flink-client + net.srt + 2.0.0 + + 4.0.0 + + flink-client-1.14 + + + UTF-8 + + + + + + net.srt + flink-client-base + + + net.srt + flink-common + + + net.srt + flink-1.14 + provided + + + com.fasterxml.jackson.datatype + jackson-datatype-jsr310 + provided + + + javax.xml.bind + jaxb-api + + + com.sun.xml.bind + jaxb-impl + + + com.sun.xml.bind + jaxb-core + + + javax.activation + activation + + + + + + + org.apache.maven.plugins + maven-jar-plugin + + + ${project.parent.parent.basedir}/build/extends + + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/AbstractCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/AbstractCDCBuilder.java new file mode 100644 index 0000000..c23df86 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/AbstractCDCBuilder.java @@ -0,0 +1,92 @@ +/* + * + * 
Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc; + + +import net.srt.flink.client.base.constant.FlinkParamConstant; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.common.assertion.Asserts; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +/** + * AbstractCDCBuilder + * + * @author wenmo + * @since 2022/4/12 21:28 + **/ +public abstract class AbstractCDCBuilder { + + protected FlinkCDCConfig config; + + public AbstractCDCBuilder() { + } + + public AbstractCDCBuilder(FlinkCDCConfig config) { + this.config = config; + } + + public FlinkCDCConfig getConfig() { + return config; + } + + public void setConfig(FlinkCDCConfig config) { + this.config = config; + } + + public List getSchemaList() { + List schemaList = new ArrayList<>(); + String schema = getSchema(); + if (Asserts.isNotNullString(schema)) { + String[] schemas = schema.split(FlinkParamConstant.SPLIT); + Collections.addAll(schemaList, schemas); + } + List tableList = getTableList(); + for (String tableName : tableList) { + tableName = tableName.trim(); + if (Asserts.isNotNullString(tableName) && tableName.contains(".")) { + String[] names = tableName.split("\\\\."); + if (!schemaList.contains(names[0])) { + schemaList.add(names[0]); + } + } + } + return schemaList; + } + + public List getTableList() { + List tableList = new ArrayList<>(); + String table = config.getTable(); + if (Asserts.isNullString(table)) { + return tableList; + } + String[] tables = table.split(FlinkParamConstant.SPLIT); + Collections.addAll(tableList, tables); + return tableList; + } + + public String getSchemaFieldName() { + return "schema"; + } + + public abstract String getSchema(); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/AbstractSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/AbstractSinkBuilder.java new file mode 100644 index 0000000..60076d2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/AbstractSinkBuilder.java @@ -0,0 +1,405 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
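In `AbstractCDCBuilder.getSchemaList()` above, qualified table names are split with `tableName.split("\\\\.")`. As printed, that string literal is the regex `\\.`, which matches a backslash followed by any character, so a name like `db.orders` is never split and its schema is never collected. If the doubled backslashes are not merely an artifact of this dump, the intended regex for a literal dot is `"\\."`, as in this sketch:

```java
public final class TableNameSplit {
    private TableNameSplit() {
    }

    /** Returns the schema part of "schema.table", or null if unqualified. */
    public static String schemaOf(String qualifiedTable) {
        // "\\." is the regex for a literal dot; "\\\\." (as in getSchemaList)
        // matches a backslash followed by any character and never fires here.
        String[] names = qualifiedTable.split("\\.");
        return names.length > 1 ? names[0] : null;
    }

    public static void main(String[] args) {
        System.out.println(schemaOf("mydb.orders")); // prints "mydb"
    }
}
```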
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.JSONUtil; +import org.apache.flink.api.common.functions.FilterFunction; +import org.apache.flink.api.common.functions.FlatMapFunction; +import org.apache.flink.api.common.functions.MapFunction; +import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.table.data.DecimalData; +import org.apache.flink.table.data.GenericRowData; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.StringData; +import org.apache.flink.table.data.TimestampData; +import org.apache.flink.table.operations.ModifyOperation; +import org.apache.flink.table.types.logical.BigIntType; +import org.apache.flink.table.types.logical.BooleanType; +import org.apache.flink.table.types.logical.DateType; +import org.apache.flink.table.types.logical.DecimalType; +import org.apache.flink.table.types.logical.DoubleType; +import org.apache.flink.table.types.logical.FloatType; +import org.apache.flink.table.types.logical.IntType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.SmallIntType; +import org.apache.flink.table.types.logical.TimestampType; +import org.apache.flink.table.types.logical.TinyIntType; +import org.apache.flink.table.types.logical.VarBinaryType; +import org.apache.flink.table.types.logical.VarCharType; +import org.apache.flink.types.RowKind; +import org.apache.flink.util.Collector; +import org.apache.flink.util.OutputTag; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.xml.bind.DatatypeConverter; +import java.math.BigDecimal; +import java.time.Instant; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +/** + * AbstractCDCBuilder + * + * @author wenmo + * @since 2022/4/12 21:28 + **/ +public abstract class AbstractSinkBuilder implements SinkBuilder { + + protected static final Logger logger = LoggerFactory.getLogger(AbstractSinkBuilder.class); + + protected FlinkCDCConfig config; + protected List modifyOperations = new ArrayList(); + private ZoneId sinkTimeZone = ZoneId.of("UTC"); + + public AbstractSinkBuilder() { + } + + public AbstractSinkBuilder(FlinkCDCConfig config) { + this.config = config; + } + + public FlinkCDCConfig getConfig() { + return config; + } + + public void setConfig(FlinkCDCConfig config) { + this.config = config; + } + + protected Properties 
getProperties() {
+        Properties properties = new Properties();
+        Map<String, String> sink = config.getSink();
+        for (Map.Entry<String, String> entry : sink.entrySet()) {
+            if (Asserts.isNotNullString(entry.getKey()) && entry.getKey().startsWith("properties")
+                    && Asserts.isNotNullString(entry.getValue())) {
+                // Strip the "properties." prefix and pass the rest through as a raw connector property.
+                properties.setProperty(entry.getKey().replace("properties.", ""), entry.getValue());
+            }
+        }
+        return properties;
+    }
+
+    protected SingleOutputStreamOperator<Map> deserialize(DataStreamSource<String> dataStreamSource) {
+        return dataStreamSource.map(new MapFunction<String, Map>() {
+
+            @Override
+            public Map map(String value) throws Exception {
+                ObjectMapper objectMapper = new ObjectMapper();
+                return objectMapper.readValue(value, Map.class);
+            }
+        });
+    }
+
+    protected SingleOutputStreamOperator<Map> shunt(
+            SingleOutputStreamOperator<Map> mapOperator,
+            Table table,
+            String schemaFieldName) {
+        final String tableName = table.getName();
+        final String schemaName = table.getSchema();
+        return mapOperator.filter(new FilterFunction<Map>() {
+
+            @Override
+            public boolean filter(Map value) throws Exception {
+                LinkedHashMap source = (LinkedHashMap) value.get("source");
+                return tableName.equals(source.get("table").toString())
+                        && schemaName.equals(source.get(schemaFieldName).toString());
+            }
+        });
+    }
+
+    protected DataStream<Map> shunt(
+            SingleOutputStreamOperator<Map> processOperator,
+            Table table,
+            OutputTag<Map> tag) {
+
+        return processOperator.getSideOutput(tag);
+    }
+
+    protected DataStream<RowData> buildRowData(
+            SingleOutputStreamOperator<Map> filterOperator,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList,
+            String schemaTableName) {
+        return filterOperator
+                .flatMap(new FlatMapFunction<Map, RowData>() {
+
+                    @Override
+                    public void flatMap(Map value, Collector<RowData> out) throws Exception {
+                        try {
+                            switch (value.get("op").toString()) {
+                                case "r":
+                                case "c":
+                                    // Snapshot read ("r") and create ("c") both emit an INSERT row from "after".
+                                    GenericRowData igenericRowData = new GenericRowData(columnNameList.size());
+                                    igenericRowData.setRowKind(RowKind.INSERT);
+                                    Map idata = (Map) value.get("after");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        igenericRowData.setField(i,
+                                                convertValue(idata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(igenericRowData);
+                                    break;
+                                case "d":
+                                    // Delete emits a DELETE row from "before".
+                                    GenericRowData dgenericRowData = new GenericRowData(columnNameList.size());
+                                    dgenericRowData.setRowKind(RowKind.DELETE);
+                                    Map ddata = (Map) value.get("before");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        dgenericRowData.setField(i,
+                                                convertValue(ddata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(dgenericRowData);
+                                    break;
+                                case "u":
+                                    // Update emits UPDATE_BEFORE (from "before") followed by UPDATE_AFTER (from "after").
+                                    GenericRowData ubgenericRowData = new GenericRowData(columnNameList.size());
+                                    ubgenericRowData.setRowKind(RowKind.UPDATE_BEFORE);
+                                    Map ubdata = (Map) value.get("before");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        ubgenericRowData.setField(i,
+                                                convertValue(ubdata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(ubgenericRowData);
+                                    GenericRowData uagenericRowData = new GenericRowData(columnNameList.size());
+                                    uagenericRowData.setRowKind(RowKind.UPDATE_AFTER);
+                                    Map uadata = (Map) value.get("after");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        uagenericRowData.setField(i,
+                                                convertValue(uadata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(uagenericRowData);
+                                    break;
+                                default:
+                            }
+                        } catch (Exception e) {
+                            logger.error("SchemaTable: {} - Row: {} - Exception: {}", schemaTableName,
+                                    JSONUtil.toJsonString(value), e);
+                            throw e;
+                        }
+                    }
+                });
+    }
+
+    public abstract void addSink(
+            StreamExecutionEnvironment env,
+            DataStream<RowData> rowDataDataStream,
+            Table table,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList);
+
+    public DataStreamSource<String> build(
+            CDCBuilder cdcBuilder,
+            StreamExecutionEnvironment env,
+            CustomTableEnvironment customTableEnvironment,
+            DataStreamSource<String> dataStreamSource) {
+
+        final String timeZone = config.getSink().get("timezone");
+        config.getSink().remove("timezone");
+        if (Asserts.isNotNullString(timeZone)) {
+            sinkTimeZone = ZoneId.of(timeZone);
+        }
+
+        final List<Schema> schemaList = config.getSchemaList();
+        final String schemaFieldName = config.getSchemaFieldName();
+
+        if (Asserts.isNotNullCollection(schemaList)) {
+            SingleOutputStreamOperator<Map> mapOperator = deserialize(dataStreamSource);
+            for (Schema schema : schemaList) {
+                for (Table table : schema.getTables()) {
+                    SingleOutputStreamOperator<Map> filterOperator = shunt(mapOperator, table, schemaFieldName);
+
+                    List<String> columnNameList = new ArrayList<>();
+                    List<LogicalType> columnTypeList = new ArrayList<>();
+
+                    buildColumn(columnNameList, columnTypeList, table.getColumns());
+
+                    DataStream<RowData> rowDataDataStream =
+                            buildRowData(filterOperator, columnNameList, columnTypeList, table.getSchemaTableName());
+
+                    addSink(env, rowDataDataStream, table, columnNameList, columnTypeList);
+                }
+            }
+        }
+        return dataStreamSource;
+    }
+
+    protected void buildColumn(List<String> columnNameList, List<LogicalType> columnTypeList, List<Column> columns) {
+        for (Column column : columns) {
+            columnNameList.add(column.getName());
+            columnTypeList.add(getLogicalType(column));
+        }
+    }
+
+    public LogicalType getLogicalType(Column column) {
+        switch (column.getJavaType()) {
+            case STRING:
+                return new VarCharType();
+            case BOOLEAN:
+            case JAVA_LANG_BOOLEAN:
+                return new BooleanType();
+            case BYTE:
+            case JAVA_LANG_BYTE:
+                return new TinyIntType();
+            case SHORT:
+            case JAVA_LANG_SHORT:
+                return new SmallIntType();
+            case LONG:
+            case JAVA_LANG_LONG:
+                return new BigIntType();
+            case FLOAT:
+            case JAVA_LANG_FLOAT:
+                return new FloatType();
+            case DOUBLE:
+            case JAVA_LANG_DOUBLE:
+                return new DoubleType();
+            case DECIMAL:
+                if (column.getPrecision() == null || column.getPrecision() == 0) {
+                    return new DecimalType(38, column.getScale());
+                } else {
+                    return new DecimalType(column.getPrecision(), column.getScale());
+                }
+            case INT:
+            case INTEGER:
+                return new IntType();
+            case DATE:
+            case LOCALDATE:
+                return new DateType();
+            case LOCALDATETIME:
+            case TIMESTAMP:
+                return new TimestampType();
+            case BYTES:
+                return new VarBinaryType(Integer.MAX_VALUE);
+            default:
+                return new VarCharType();
+        }
+    }
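+
+    // A quick reading of the mapping above (inferred from the switch itself): DECIMAL
+    // columns with no declared precision fall back to DecimalType(38, scale), i.e. the
+    // maximum Flink precision; BYTES becomes VarBinaryType(Integer.MAX_VALUE); and any
+    // unmatched Java type degrades to VarCharType, so unknown values are sunk as strings.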
+
+    protected Object convertValue(Object value, LogicalType logicalType) {
+        if (value == null) {
+            return null;
+        }
+        if (logicalType instanceof VarCharType) {
+            return StringData.fromString((String) value);
+        } else if (logicalType instanceof DateType) {
+            return value;
+        } else if (logicalType instanceof TimestampType) {
+            if (value instanceof Integer) {
+                return TimestampData.fromLocalDateTime(
+                        Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDateTime());
+            } else if (value instanceof Long) {
+                return TimestampData
+                        .fromLocalDateTime(Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDateTime());
+            } else {
+                return TimestampData
+                        .fromLocalDateTime(Instant.parse(value.toString()).atZone(sinkTimeZone).toLocalDateTime());
+            }
+        } else if (logicalType instanceof DecimalType) {
+            final DecimalType decimalType = ((DecimalType) logicalType);
+            final int precision = decimalType.getPrecision();
+            final int scale = decimalType.getScale();
+            return DecimalData.fromBigDecimal(new BigDecimal((String) value), precision, scale);
+        } else if (logicalType instanceof FloatType) {
+            if (value instanceof Float) {
+                return value;
+            } else if (value instanceof Double) {
+                return ((Double) value).floatValue();
+            } else {
+                return Float.parseFloat(value.toString());
+            }
+        } else if (logicalType instanceof BigIntType) {
+            if (value instanceof Integer) {
+                return ((Integer) value).longValue();
+            } else {
+                return value;
+            }
+        } else if (logicalType instanceof VarBinaryType) {
+            // VARBINARY and BINARY values arrive from Flink CDC as base64-encoded Strings.
+            if (value instanceof String) {
+                return DatatypeConverter.parseBase64Binary(value.toString());
+            } else {
+                return value;
+            }
+        } else {
+            return value;
+        }
+    }
+
+    public String getSinkSchemaName(Table table) {
+        String schemaName = table.getSchema();
+        if (config.getSink().containsKey("sink.db")) {
+            schemaName = config.getSink().get("sink.db");
+        }
+        return schemaName;
+    }
+
+    public String getSinkTableName(Table table) {
+        String tableName = table.getName();
+        if (config.getSink().containsKey("table.prefix.schema")) {
+            if (Boolean.valueOf(config.getSink().get("table.prefix.schema"))) {
+                tableName = table.getSchema() + "_" + tableName;
+            }
+        }
+        if (config.getSink().containsKey("table.prefix")) {
+            tableName = config.getSink().get("table.prefix") + tableName;
+        }
+        if (config.getSink().containsKey("table.suffix")) {
+            tableName = tableName + config.getSink().get("table.suffix");
+        }
+        if (config.getSink().containsKey("table.lower")) {
+            if (Boolean.valueOf(config.getSink().get("table.lower"))) {
+                tableName = tableName.toLowerCase();
+            }
+        }
+        if (config.getSink().containsKey("table.upper")) {
+            if (Boolean.valueOf(config.getSink().get("table.upper"))) {
+                tableName = tableName.toUpperCase();
+            }
+        }
+        return tableName;
+    }
+
+    protected List<String> getPKList(Table table) {
+        List<String> pks = new ArrayList<>();
+        if (Asserts.isNullCollection(table.getColumns())) {
+            return pks;
+        }
+        for (Column column : table.getColumns()) {
+            if (column.isKeyFlag()) {
+                pks.add(column.getName());
+            }
+        }
+        return pks;
+    }
+
+    protected ZoneId getSinkTimeZone() {
+        return this.sinkTimeZone;
+    }
+}
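+
+// A minimal subclass sketch (hypothetical "datastream-print" sink, for illustration
+// only): addSink() is the one hook a concrete builder must implement; build() above
+// drives deserialization, shunting and RowData conversion before calling it.
+//
+//   public class PrintSinkBuilder extends AbstractSinkBuilder implements Serializable {
+//       public static final String KEY_WORD = "datastream-print";
+//
+//       public PrintSinkBuilder() {
+//       }
+//
+//       public PrintSinkBuilder(FlinkCDCConfig config) {
+//           super(config);
+//       }
+//
+//       @Override
+//       public String getHandle() {
+//           return KEY_WORD;
+//       }
+//
+//       @Override
+//       public SinkBuilder create(FlinkCDCConfig config) {
+//           return new PrintSinkBuilder(config);
+//       }
+//
+//       @Override
+//       public void addSink(StreamExecutionEnvironment env, DataStream<RowData> rowDataDataStream,
+//               Table table, List<String> columnNameList, List<LogicalType> columnTypeList) {
+//           // Each RowData already carries its change kind (INSERT / UPDATE_BEFORE /
+//           // UPDATE_AFTER / DELETE), so printing is enough for debugging.
+//           rowDataDataStream.print(getSinkSchemaName(table) + "." + getSinkTableName(table));
+//       }
+//   }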
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/CDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/CDCBuilder.java
new file mode 100644
index 0000000..95e0870
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/CDCBuilder.java
@@ -0,0 +1,56 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.common.exception.SplitTableException;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * CDCBuilder
+ *
+ * @author wenmo
+ * @since 2022/4/12 21:09
+ **/
+public interface CDCBuilder {
+
+    String getHandle();
+
+    CDCBuilder create(FlinkCDCConfig config);
+
+    DataStreamSource<String> build(StreamExecutionEnvironment env);
+
+    List<String> getSchemaList();
+
+    List<String> getTableList();
+
+    Map<String, Map<String, String>> parseMetaDataConfigs();
+
+    String getSchemaFieldName();
+
+    default Map<String, String> parseMetaDataConfig() {
+        throw new SplitTableException("Split database/table (sharding) is not implemented for this data source");
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/CDCBuilderFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/CDCBuilderFactory.java
new file mode 100644
index 0000000..ffacb2d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/CDCBuilderFactory.java
@@ -0,0 +1,60 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+
+import net.srt.flink.client.base.exception.FlinkClientException;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.mysql.MysqlCDCBuilder;
+import net.srt.flink.client.cdc.oracle.OracleCDCBuilder;
+import net.srt.flink.client.cdc.postgres.PostgresCDCBuilder;
+import net.srt.flink.client.cdc.sqlserver.SqlServerCDCBuilder;
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+
+/**
+ * CDCBuilderFactory
+ *
+ * @author wenmo
+ * @since 2022/4/12 21:12
+ **/
+public class CDCBuilderFactory {
+
+    private static final Map<String, Supplier<CDCBuilder>> CDC_BUILDER_MAP = new HashMap<String, Supplier<CDCBuilder>>() {
+        {
+            put(MysqlCDCBuilder.KEY_WORD, MysqlCDCBuilder::new);
+            put(OracleCDCBuilder.KEY_WORD, OracleCDCBuilder::new);
+            put(SqlServerCDCBuilder.KEY_WORD, SqlServerCDCBuilder::new);
+            put(PostgresCDCBuilder.KEY_WORD, PostgresCDCBuilder::new);
+        }
+    };
+
+    public static CDCBuilder buildCDCBuilder(FlinkCDCConfig config) {
+        if (Asserts.isNull(config) || Asserts.isNullString(config.getType())) {
+            throw new FlinkClientException("Please specify the CDC Source type.");
+        }
+        return CDC_BUILDER_MAP.getOrDefault(config.getType(), () -> {
+            throw new FlinkClientException("No matching CDC Source type for [" + config.getType() + "].");
+        }).get().create(config);
+    }
+}
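+
+// Illustrative wiring (hypothetical config; in practice it comes from the submitted
+// CDC source definition): the factory resolves the KEY_WORD each builder registers,
+// then create() returns a configured instance.
+//
+//   CDCBuilder cdcBuilder = CDCBuilderFactory.buildCDCBuilder(config);
+//   DataStreamSource<String> source = cdcBuilder.build(env);   // raw JSON change events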
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/SinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/SinkBuilder.java
new file mode 100644
index 0000000..0bb0450
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/SinkBuilder.java
@@ -0,0 +1,45 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.executor.CustomTableEnvironment;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.common.model.Table;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+/**
+ * SinkBuilder
+ *
+ * @author wenmo
+ * @since 2022/4/12 21:09
+ **/
+public interface SinkBuilder {
+
+    String getHandle();
+
+    SinkBuilder create(FlinkCDCConfig config);
+
+    DataStreamSource<String> build(CDCBuilder cdcBuilder, StreamExecutionEnvironment env, CustomTableEnvironment customTableEnvironment, DataStreamSource<String> dataStreamSource);
+
+    String getSinkSchemaName(Table table);
+
+    String getSinkTableName(Table table);
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/SinkBuilderFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/SinkBuilderFactory.java
new file mode 100644
index 0000000..9d92691
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/SinkBuilderFactory.java
@@ -0,0 +1,65 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+
+import net.srt.flink.client.base.exception.FlinkClientException;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.doris.DorisExtendSinkBuilder;
+import net.srt.flink.client.cdc.doris.DorisSchemaEvolutionSinkBuilder;
+import net.srt.flink.client.cdc.doris.DorisSinkBuilder;
+import net.srt.flink.client.cdc.kafka.KafkaSinkBuilder;
+import net.srt.flink.client.cdc.kafka.KafkaSinkJsonBuilder;
+import net.srt.flink.client.cdc.sql.SQLSinkBuilder;
+import net.srt.flink.client.cdc.starrocks.StarrocksSinkBuilder;
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+
+/**
+ * SinkBuilderFactory
+ *
+ * @author wenmo
+ * @since 2022/4/12 21:12
+ **/
+public class SinkBuilderFactory {
+
+    private static final Map<String, Supplier<SinkBuilder>> SINK_BUILDER_MAP = new HashMap<String, Supplier<SinkBuilder>>() {
+        {
+            put(KafkaSinkBuilder.KEY_WORD, KafkaSinkBuilder::new);
+            put(KafkaSinkJsonBuilder.KEY_WORD, KafkaSinkJsonBuilder::new);
+            put(DorisSinkBuilder.KEY_WORD, DorisSinkBuilder::new);
+            put(StarrocksSinkBuilder.KEY_WORD, StarrocksSinkBuilder::new);
+            put(SQLSinkBuilder.KEY_WORD, SQLSinkBuilder::new);
+            put(DorisExtendSinkBuilder.KEY_WORD, DorisExtendSinkBuilder::new);
+            put(DorisSchemaEvolutionSinkBuilder.KEY_WORD, DorisSchemaEvolutionSinkBuilder::new);
+        }
+    };
+
+    public static SinkBuilder buildSinkBuilder(FlinkCDCConfig config) {
+        if (Asserts.isNull(config) || Asserts.isNullString(config.getSink().get("connector"))) {
+            throw new FlinkClientException("Please specify the Sink connector.");
+        }
+        return SINK_BUILDER_MAP.getOrDefault(config.getSink().get("connector"), SQLSinkBuilder::new).get()
+                .create(config);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/AdditionalColumnEntry.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/AdditionalColumnEntry.java
new file mode 100644
index 0000000..35495aa
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/AdditionalColumnEntry.java
@@ -0,0 +1,59 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.doris;
+
+import java.io.Serializable;
+import java.util.Map;
+
+/**
+ * @author BillyXing
+ * @since 2022/11/10 19:37
+ */
+public class AdditionalColumnEntry<K, V> implements Map.Entry<K, V>, Serializable {
+
+    private static final long serialVersionUID = 45678994131L;
+
+    private final K k;
+    private final V v;
+
+    private AdditionalColumnEntry(K k, V v) {
+        this.k = k;
+        this.v = v;
+    }
+
+    public static <K, V> AdditionalColumnEntry<K, V> of(K k, V v) {
+        return new AdditionalColumnEntry<>(k, v);
+    }
+
+    @Override
+    public K getKey() {
+        return k;
+    }
+
+    @Override
+    public V getValue() {
+        return v;
+    }
+
+    @Override
+    public V setValue(V value) {
+        throw new UnsupportedOperationException();
+    }
+}
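+
+// Illustrative use: an immutable key/value pair, e.g.
+//   AdditionalColumnEntry<String, String> meta = AdditionalColumnEntry.of("META", "op_ts");
+//   meta.setValue("x");   // throws UnsupportedOperationException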
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisExtendSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisExtendSinkBuilder.java
new file mode 100644
index 0000000..b4dbca9
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisExtendSinkBuilder.java
@@ -0,0 +1,310 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.doris;
+
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.SinkBuilder;
+import net.srt.flink.common.model.Column;
+import net.srt.flink.common.utils.JSONUtil;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.BooleanType;
+import org.apache.flink.table.types.logical.CharType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.FloatType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.TinyIntType;
+import org.apache.flink.table.types.logical.VarCharType;
+import org.apache.flink.types.RowKind;
+import org.apache.flink.util.Collector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Serializable;
+import java.time.Instant;
+import java.time.ZoneId;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * @author BillyXing
+ * @since 2022/11/10 19:37
+ */
+public class DorisExtendSinkBuilder extends DorisSinkBuilder implements Serializable {
+
+    public static final String KEY_WORD = "datastream-doris-ext";
+    private static final long serialVersionUID = 8430362249137471854L;
+
+    protected static final Logger logger = LoggerFactory.getLogger(DorisExtendSinkBuilder.class);
+
+    private Map<String, AdditionalColumnEntry<String, String>> additionalColumnConfigList = null;
+
+    public DorisExtendSinkBuilder() {
+    }
+
+    public DorisExtendSinkBuilder(FlinkCDCConfig config) {
+        super(config);
+        additionalColumnConfigList = buildAdditionalColumnsConfig();
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public SinkBuilder create(FlinkCDCConfig config) {
+        return new DorisExtendSinkBuilder(config);
+    }
+
+    protected Object buildRowDataValues(Map value, Map rowData, String columnName, LogicalType columnType,
+            Map<String, AdditionalColumnEntry<String, String>> aColumnConfigList,
+            ZoneId opTimeZone) {
+        if (aColumnConfigList != null && aColumnConfigList.size() > 0
+                && aColumnConfigList.containsKey(columnName)) {
+            AdditionalColumnEntry<String, String> col = aColumnConfigList.get(columnName);
+            if (col != null) {
+                if ("META".equals(col.getKey())) {
+                    switch (col.getValue()) {
+                        case "op_ts":
+                            Object opVal = ((Map) value.get("source")).get("ts_ms");
+                            if (opVal instanceof Integer) {
+                                return TimestampData
+                                        .fromLocalDateTime(Instant.ofEpochMilli(((Integer) opVal).longValue())
+                                                .atZone(opTimeZone).toLocalDateTime());
+                            } else if (opVal instanceof Long) {
+                                return TimestampData.fromLocalDateTime(
+                                        Instant.ofEpochMilli((long) opVal).atZone(opTimeZone).toLocalDateTime());
+                            } else {
+                                return TimestampData.fromLocalDateTime(
+                                        Instant.parse(opVal.toString()).atZone(opTimeZone).toLocalDateTime());
+                            }
+                        case "database_name":
+                            return convertValue(((Map) value.get("source")).get("db"), columnType);
+                        case "table_name":
+                            return convertValue(((Map) value.get("source")).get("table"), columnType);
+                        case "schema_name":
+                            return convertValue(((Map) value.get("source")).get("schema"), columnType);
+                        default:
+                            logger.warn("Unsupported meta field:" + col.getValue());
+                            return null;
+                    }
+                } else {
+                    return convertValue(col.getValue(), columnType);
+                }
+            }
+        }
+        return convertValue(rowData.get(columnName), columnType);
+    }
+
+    @Override
+    protected DataStream<RowData> buildRowData(
+            SingleOutputStreamOperator<Map> filterOperator,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList,
+            String schemaTableName) {
+        final Map<String, AdditionalColumnEntry<String, String>> aColumnConfigList = this.additionalColumnConfigList;
+        final ZoneId opTimeZone = this.getSinkTimeZone();
+        logger.info("sinkTimeZone:" + this.getSinkTimeZone().toString());
+        return filterOperator
+                .flatMap(new FlatMapFunction<Map, RowData>() {
+
+                    @Override
+                    public void flatMap(Map value, Collector<RowData> out) throws Exception {
+                        try {
+                            switch (value.get("op").toString()) {
+                                case "r":
+                                case "c":
+                                    GenericRowData igenericRowData = new GenericRowData(columnNameList.size());
+                                    igenericRowData.setRowKind(RowKind.INSERT);
+                                    Map idata = (Map) value.get("after");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        igenericRowData.setField(i,
+                                                buildRowDataValues(value, idata, columnNameList.get(i),
+                                                        columnTypeList.get(i), aColumnConfigList, opTimeZone));
+                                    }
+                                    out.collect(igenericRowData);
+                                    break;
+                                case "d":
+                                    GenericRowData dgenericRowData = new GenericRowData(columnNameList.size());
+                                    dgenericRowData.setRowKind(RowKind.DELETE);
+                                    Map ddata = (Map) value.get("before");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        dgenericRowData.setField(i,
+                                                buildRowDataValues(value, ddata, columnNameList.get(i),
+                                                        columnTypeList.get(i), aColumnConfigList, opTimeZone));
+                                    }
+                                    out.collect(dgenericRowData);
+                                    break;
+                                case "u":
+                                    GenericRowData ubgenericRowData = new GenericRowData(columnNameList.size());
+                                    ubgenericRowData.setRowKind(RowKind.UPDATE_BEFORE);
+                                    Map ubdata = (Map) value.get("before");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        ubgenericRowData.setField(i,
+                                                buildRowDataValues(value, ubdata, columnNameList.get(i),
+                                                        columnTypeList.get(i), aColumnConfigList, opTimeZone));
+                                    }
+                                    out.collect(ubgenericRowData);
+                                    GenericRowData uagenericRowData = new GenericRowData(columnNameList.size());
+                                    uagenericRowData.setRowKind(RowKind.UPDATE_AFTER);
+                                    Map uadata = (Map) value.get("after");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        uagenericRowData.setField(i,
+                                                buildRowDataValues(value, uadata, columnNameList.get(i),
+                                                        columnTypeList.get(i), aColumnConfigList, opTimeZone));
+                                    }
+                                    out.collect(uagenericRowData);
+                                    break;
+                                default:
+                            }
+                        } catch (Exception e) {
+                            logger.error("SchemaTable: {} - Row: {} - Exception: {}", schemaTableName,
+                                    JSONUtil.toJsonString(value), e);
+                            throw e;
+                        }
+                    }
+                });
+    }
+
+    @Override
+    protected void buildColumn(List<String> columnNameList, List<LogicalType> columnTypeList, List<Column> columns) {
+        for (Column column : columns) {
+            columnNameList.add(column.getName());
+            columnTypeList.add(getLogicalType(column));
+        }
+        if (this.additionalColumnConfigList != null && this.additionalColumnConfigList.size() > 0) {
+            logger.info("Start adding additional columns");
+            for (Map.Entry<String, AdditionalColumnEntry<String, String>> col : this.additionalColumnConfigList.entrySet()) {
+                String colName = col.getKey();
+                AdditionalColumnEntry<String, String> kv = col.getValue();
+                logger.info("col: { name: " + colName + ", type:" + kv.getKey() + ", val: " + kv.getValue() + "}");
+
+                switch (kv.getKey()) {
+                    case "META":
+                        switch (kv.getValue().toLowerCase()) {
+                            case "op_ts":
+                                columnNameList.add(colName);
+                                columnTypeList.add(new TimestampType());
+                                break;
+                            case "database_name":
+                            case "table_name":
+                            case "schema_name":
+                                columnNameList.add(colName);
+                                columnTypeList.add(new VarCharType());
+                                break;
+                            default:
+                                logger.warn("Unsupported meta field:" + kv.getValue());
+                        }
+                        break;
+                    case "BOOLEAN":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new BooleanType());
+                        break;
+                    case "INT":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new IntType());
+                        break;
+                    case "TINYINT":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new TinyIntType());
+                        break;
+                    case "BIGINT":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new BigIntType());
+                        break;
+                    case "DECIMAL":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new DecimalType());
+                        break;
+                    case "FLOAT":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new FloatType());
+                        break;
+                    case "DATE":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new DateType());
+                        break;
+                    case "TIMESTAMP":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new TimestampType());
+                        break;
+                    case "CHAR":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new CharType());
+                        break;
+                    case "VARCHAR":
+                    case "STRING":
+                        columnNameList.add(colName);
+                        columnTypeList.add(new VarCharType());
+                        break;
+                    default:
+                        logger.warn("Unsupported additional column type:" + kv.getKey());
+                        break;
+                }
+            }
+            logger.info("Additional columns added");
+        }
+
+    }
+
+    protected Map<String, AdditionalColumnEntry<String, String>> buildAdditionalColumnsConfig() {
+        if (!config.getSink().containsKey(DorisExtendSinkOptions.AdditionalColumns.key())) {
+            return null;
+        }
+
+        String additionalColumnConfig = config.getSink().get(DorisExtendSinkOptions.AdditionalColumns.key());
+        if (additionalColumnConfig == null || additionalColumnConfig.length() == 0) {
+            return null;
+        }
+
+        Map<String, AdditionalColumnEntry<String, String>> cfg = new HashMap<>();
+        logger.info("AdditionalColumns: " + additionalColumnConfig);
+        String[] cols = additionalColumnConfig.split(",");
+
+        for (String col : cols) {
+            String[] kv = col.split(":");
+            if (kv.length != 2) {
+                logger.warn("additional-columns format invalid. col=" + col);
+                return null;
+            }
+
+            String[] strs = kv[1].split("@");
+            if (strs.length != 2) {
+                logger.warn("additional-columns format invalid. val=" + kv[1]);
+                return null;
+            }
+
+            AdditionalColumnEntry<String, String> item =
+                    AdditionalColumnEntry.of(strs[0].trim().toUpperCase(), strs[1]);
+            cfg.put(kv[0].trim(), item);
+        }
+        return cfg;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisExtendSinkOptions.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisExtendSinkOptions.java
new file mode 100644
index 0000000..a8db873
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisExtendSinkOptions.java
@@ -0,0 +1,36 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.doris;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+
+/**
+ * @author BillyXing
+ * @since 2022/11/10 19:37
+ */
+public class DorisExtendSinkOptions extends DorisSinkOptions {
+
+    public static final ConfigOption<String> AdditionalColumns = ConfigOptions.key("additional-columns").stringType()
+            .noDefaultValue()
+            .withDescription("Additional columns for sink, support meta column and fix value column."
+                    + "(meta: op_ts,database_name,schema_name,table_name; "
+                    + "fix value column type:BOOLEAN,INT,TINYINT,BIGINT,DECIMAL,FLOAT,DATE,TIMESTAMP,CHAR,VARCHAR,STRING)");
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSchemaEvolutionSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSchemaEvolutionSinkBuilder.java
new file mode 100644
index 0000000..e731470
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSchemaEvolutionSinkBuilder.java
@@ -0,0 +1,197 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.client.cdc.doris; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.AbstractSinkBuilder; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import org.apache.doris.flink.cfg.DorisExecutionOptions; +import org.apache.doris.flink.cfg.DorisOptions; +import org.apache.doris.flink.cfg.DorisReadOptions; +import org.apache.doris.flink.sink.DorisSink; +import org.apache.doris.flink.sink.writer.JsonDebeziumSchemaSerializer; +import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.functions.ProcessFunction; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.util.Collector; +import org.apache.flink.util.OutputTag; + +import java.io.Serializable; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Properties; +import java.util.UUID; + +/** + * DorisSchemaEvolutionSinkBuilder + * + * @author wenmo + * @since 2022/12/6 + **/ +public class DorisSchemaEvolutionSinkBuilder extends AbstractSinkBuilder implements Serializable { + + public static final String KEY_WORD = "datastream-doris-schema-evolution"; + + public DorisSchemaEvolutionSinkBuilder() { + } + + public DorisSchemaEvolutionSinkBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public void addSink( + StreamExecutionEnvironment env, + DataStream rowDataDataStream, + Table table, + List columnNameList, + List columnTypeList) { + + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public SinkBuilder create(FlinkCDCConfig config) { + return new DorisSchemaEvolutionSinkBuilder(config); + } + + @Override + public DataStreamSource build( + CDCBuilder cdcBuilder, + StreamExecutionEnvironment env, + CustomTableEnvironment customTableEnvironment, + DataStreamSource dataStreamSource) { + + Map sink = config.getSink(); + + Properties properties = getProperties(); + // schema evolution need json format + properties.setProperty("format", "json"); + properties.setProperty("read_json_by_line", "true"); + + Map> tagMap = new HashMap<>(); + Map tableMap = new HashMap<>(); + ObjectMapper objectMapper = new ObjectMapper(); + + SingleOutputStreamOperator mapOperator = dataStreamSource.map(x -> objectMapper.readValue(x, Map.class)) + .returns(Map.class); + final List schemaList = config.getSchemaList(); + final String schemaFieldName = config.getSchemaFieldName(); + if (Asserts.isNotNullCollection(schemaList)) { + for (Schema schema : schemaList) { + for (Table table : schema.getTables()) { + String sinkTableName = getSinkTableName(table); + OutputTag outputTag = new OutputTag(sinkTableName) { + }; + tagMap.put(table, outputTag); + tableMap.put(table.getSchemaTableName(), table); + } + } + SingleOutputStreamOperator process = mapOperator.process(new ProcessFunction() { + + @Override + public void 
processElement(Map map, Context ctx, Collector out) throws Exception { + LinkedHashMap source = (LinkedHashMap) map.get("source"); + try { + String result = objectMapper.writeValueAsString(map); + Table table = tableMap + .get(source.get(schemaFieldName).toString() + "." + source.get("table").toString()); + OutputTag outputTag = tagMap.get(table); + ctx.output(outputTag, result); + } catch (Exception e) { + out.collect(objectMapper.writeValueAsString(map)); + } + } + }); + tagMap.forEach((table, v) -> { + DorisOptions dorisOptions = DorisOptions.builder() + .setFenodes(config.getSink().get(DorisSinkOptions.FENODES.key())) + .setTableIdentifier(getSinkSchemaName(table) + "." + getSinkTableName(table)) + .setUsername(config.getSink().get(DorisSinkOptions.USERNAME.key())) + .setPassword(config.getSink().get(DorisSinkOptions.PASSWORD.key())).build(); + + DorisExecutionOptions.Builder executionBuilder = DorisExecutionOptions.builder(); + if (sink.containsKey(DorisSinkOptions.SINK_BUFFER_COUNT.key())) { + executionBuilder + .setBufferCount(Integer.valueOf(sink.get(DorisSinkOptions.SINK_BUFFER_COUNT.key()))); + } + if (sink.containsKey(DorisSinkOptions.SINK_BUFFER_SIZE.key())) { + executionBuilder.setBufferSize(Integer.valueOf(sink.get(DorisSinkOptions.SINK_BUFFER_SIZE.key()))); + } + if (sink.containsKey(DorisSinkOptions.SINK_ENABLE_DELETE.key())) { + executionBuilder.setDeletable(Boolean.valueOf(sink.get(DorisSinkOptions.SINK_ENABLE_DELETE.key()))); + } else { + executionBuilder.setDeletable(true); + } + if (sink.containsKey(DorisSinkOptions.SINK_LABEL_PREFIX.key())) { + executionBuilder.setLabelPrefix(sink.get(DorisSinkOptions.SINK_LABEL_PREFIX.key()) + "-" + + getSinkSchemaName(table) + "_" + getSinkTableName(table)); + } else { + executionBuilder.setLabelPrefix( + "dlink-" + getSinkSchemaName(table) + "_" + getSinkTableName(table) + UUID.randomUUID()); + } + if (sink.containsKey(DorisSinkOptions.SINK_MAX_RETRIES.key())) { + executionBuilder.setMaxRetries(Integer.valueOf(sink.get(DorisSinkOptions.SINK_MAX_RETRIES.key()))); + } + + executionBuilder.setStreamLoadProp(properties).setDeletable(true); + + DorisSink.Builder builder = DorisSink.builder(); + builder.setDorisReadOptions(DorisReadOptions.builder().build()) + .setDorisExecutionOptions(executionBuilder.build()) + .setDorisOptions(dorisOptions) + .setSerializer(JsonDebeziumSchemaSerializer.builder().setDorisOptions(dorisOptions).build()); + + process.getSideOutput(v).rebalance().sinkTo(builder.build()).name("Doris Schema Evolution Sink(table=[" + + getSinkSchemaName(table) + "." 
+ getSinkTableName(table) + "])"); + }); + } + return dataStreamSource; + } + + @Override + protected Properties getProperties() { + Properties properties = new Properties(); + Map sink = config.getSink(); + for (Map.Entry entry : sink.entrySet()) { + if (Asserts.isNotNullString(entry.getKey()) && entry.getKey().startsWith("sink.properties") + && Asserts.isNotNullString(entry.getValue())) { + properties.setProperty(entry.getKey().replace("sink.properties.", ""), entry.getValue()); + } + } + return properties; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSinkBuilder.java new file mode 100644 index 0000000..c0f7e46 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSinkBuilder.java @@ -0,0 +1,185 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.client.cdc.doris; + +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.AbstractSinkBuilder; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Table; +import org.apache.doris.flink.cfg.DorisExecutionOptions; +import org.apache.doris.flink.cfg.DorisOptions; +import org.apache.doris.flink.cfg.DorisReadOptions; +import org.apache.doris.flink.sink.DorisSink; +import org.apache.doris.flink.sink.writer.RowDataSerializer; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.utils.TypeConversions; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Properties; +import java.util.UUID; + +/** + * DorisSinkBuilder + **/ +public class DorisSinkBuilder extends AbstractSinkBuilder implements Serializable { + + public static final String KEY_WORD = "datastream-doris"; + private static final long serialVersionUID = 8330362249137471854L; + + public DorisSinkBuilder() { + } + + public DorisSinkBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public SinkBuilder create(FlinkCDCConfig config) { + return new DorisSinkBuilder(config); + } + + @Override + public void addSink( + StreamExecutionEnvironment env, + DataStream rowDataDataStream, + Table table, + List columnNameList, + List columnTypeList) { + + Map sink = config.getSink(); + + // Create FieldNames and FieldType for RowDataSerializer. + final String[] columnNames = columnNameList.toArray(new String[columnNameList.size()]); + final List dataTypeList = new ArrayList<>(); + for (LogicalType logicalType : columnTypeList) { + dataTypeList.add(TypeConversions.fromLogicalToDataType(logicalType)); + } + final DataType[] columnTypes = dataTypeList.toArray(new DataType[dataTypeList.size()]); + + // Create DorisReadOptions for DorisSink. 
+ final DorisReadOptions.Builder readOptionBuilder = DorisReadOptions.builder(); + if (sink.containsKey(DorisSinkOptions.DORIS_DESERIALIZE_ARROW_ASYNC.key())) { + readOptionBuilder.setDeserializeArrowAsync(Boolean.valueOf(sink.get(DorisSinkOptions.DORIS_DESERIALIZE_ARROW_ASYNC.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_DESERIALIZE_QUEUE_SIZE.key())) { + readOptionBuilder.setDeserializeQueueSize(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_DESERIALIZE_QUEUE_SIZE.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_EXEC_MEM_LIMIT.key())) { + readOptionBuilder.setExecMemLimit(Long.valueOf(sink.get(DorisSinkOptions.DORIS_EXEC_MEM_LIMIT.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_FILTER_QUERY.key())) { + readOptionBuilder.setFilterQuery(String.valueOf(sink.get(DorisSinkOptions.DORIS_FILTER_QUERY.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_READ_FIELD.key())) { + readOptionBuilder.setReadFields(sink.get(DorisSinkOptions.DORIS_READ_FIELD.key())); + } + if (sink.containsKey(DorisSinkOptions.DORIS_BATCH_SIZE.key())) { + readOptionBuilder.setRequestBatchSize(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_BATCH_SIZE.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_REQUEST_CONNECT_TIMEOUT_MS.key())) { + readOptionBuilder.setRequestConnectTimeoutMs(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_REQUEST_CONNECT_TIMEOUT_MS.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_REQUEST_QUERY_TIMEOUT_S.key())) { + readOptionBuilder.setRequestQueryTimeoutS(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_REQUEST_QUERY_TIMEOUT_S.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_REQUEST_READ_TIMEOUT_MS.key())) { + readOptionBuilder.setRequestReadTimeoutMs(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_REQUEST_READ_TIMEOUT_MS.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_REQUEST_RETRIES.key())) { + readOptionBuilder.setRequestRetries(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_REQUEST_RETRIES.key()))); + } + if (sink.containsKey(DorisSinkOptions.DORIS_REQUEST_TABLET_SIZE.key())) { + readOptionBuilder.setRequestTabletSize(Integer.valueOf(sink.get(DorisSinkOptions.DORIS_REQUEST_TABLET_SIZE.key()))); + } + + // Create DorisOptions for DorisSink. + DorisOptions.Builder dorisBuilder = DorisOptions.builder(); + dorisBuilder.setFenodes(config.getSink().get(DorisSinkOptions.FENODES.key())) + .setTableIdentifier(getSinkSchemaName(table) + "." + getSinkTableName(table)) + .setUsername(config.getSink().get(DorisSinkOptions.USERNAME.key())) + .setPassword(config.getSink().get(DorisSinkOptions.PASSWORD.key())).build(); + + // Create DorisExecutionOptions for DorisSink. 
+ DorisExecutionOptions.Builder executionBuilder = DorisExecutionOptions.builder(); + if (sink.containsKey(DorisSinkOptions.SINK_BUFFER_COUNT.key())) { + executionBuilder.setBufferCount(Integer.valueOf(sink.get(DorisSinkOptions.SINK_BUFFER_COUNT.key()))); + } + if (sink.containsKey(DorisSinkOptions.SINK_BUFFER_SIZE.key())) { + executionBuilder.setBufferSize(Integer.valueOf(sink.get(DorisSinkOptions.SINK_BUFFER_SIZE.key()))); + } + if (sink.containsKey(DorisSinkOptions.SINK_ENABLE_DELETE.key())) { + executionBuilder.setDeletable(Boolean.valueOf(sink.get(DorisSinkOptions.SINK_ENABLE_DELETE.key()))); + } else { + executionBuilder.setDeletable(true); + } + if (sink.containsKey(DorisSinkOptions.SINK_LABEL_PREFIX.key())) { + executionBuilder.setLabelPrefix(sink.get(DorisSinkOptions.SINK_LABEL_PREFIX.key()) + "-" + getSinkSchemaName(table) + "_" + getSinkTableName(table)); + } else { + executionBuilder.setLabelPrefix("dlink-" + getSinkSchemaName(table) + "_" + getSinkTableName(table) + UUID.randomUUID()); + } + if (sink.containsKey(DorisSinkOptions.SINK_MAX_RETRIES.key())) { + executionBuilder.setMaxRetries(Integer.valueOf(sink.get(DorisSinkOptions.SINK_MAX_RETRIES.key()))); + } + + Properties properties = getProperties(); + // Doris 1.1 need to this para to support delete + properties.setProperty("columns", String.join(",", columnNameList) + ",__DORIS_DELETE_SIGN__"); + + executionBuilder.setStreamLoadProp(properties); + + // Create DorisSink. + DorisSink.Builder builder = DorisSink.builder(); + builder.setDorisReadOptions(readOptionBuilder.build()) + .setDorisExecutionOptions(executionBuilder.build()) + .setSerializer(RowDataSerializer.builder() + .setFieldNames(columnNames) + .setType("json") + .enableDelete(true) + .setFieldType(columnTypes).build()) + .setDorisOptions(dorisBuilder.build()); + + rowDataDataStream.sinkTo(builder.build()).name("Doris Sink(table=[" + getSinkSchemaName(table) + "." + getSinkTableName(table) + "])"); + } + + @Override + protected Properties getProperties() { + Properties properties = new Properties(); + Map sink = config.getSink(); + for (Map.Entry entry : sink.entrySet()) { + if (Asserts.isNotNullString(entry.getKey()) && entry.getKey().startsWith("sink.properties") && Asserts.isNotNullString(entry.getValue())) { + properties.setProperty(entry.getKey().replace("sink.properties.", ""), entry.getValue()); + } + } + return properties; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSinkOptions.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSinkOptions.java new file mode 100644 index 0000000..0097134 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/doris/DorisSinkOptions.java @@ -0,0 +1,75 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc.doris; + +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ConfigOptions; + +/** + * DorisSinkOptions + **/ +public class DorisSinkOptions { + + public static final ConfigOption FENODES = ConfigOptions.key("fenodes").stringType().noDefaultValue() + .withDescription("Doris FE http address, support multiple addresses, separated by commas."); + public static final ConfigOption TABLE_IDENTIFIER = ConfigOptions.key("table.identifier").stringType().noDefaultValue() + .withDescription("Doris table identifier, eg, db1.tbl1."); + public static final ConfigOption USERNAME = ConfigOptions.key("username").stringType().noDefaultValue() + .withDescription("Doris username."); + public static final ConfigOption PASSWORD = ConfigOptions.key("password").stringType().noDefaultValue() + .withDescription("Doris password."); + + public static final ConfigOption DORIS_DESERIALIZE_ARROW_ASYNC = ConfigOptions.key("doris.deserialize.arrow.async").booleanType().defaultValue(false) + .withDescription("Whether to support asynchronous conversion of Arrow format to RowBatch required for flink-doris-connector iteration."); + public static final ConfigOption DORIS_DESERIALIZE_QUEUE_SIZE = ConfigOptions.key("doris.deserialize.queue.size").intType().defaultValue(64) + .withDescription("Asynchronous conversion of the internal processing queue in Arrow format takes effect when doris.deserialize.arrow.async is true."); + public static final ConfigOption DORIS_EXEC_MEM_LIMIT = ConfigOptions.key("doris.exec.mem.limit").longType().defaultValue(2147483648L) + .withDescription("Memory limit for a single query. The default is 2GB, in bytes."); + public static final ConfigOption DORIS_FILTER_QUERY = ConfigOptions.key("doris.filter.query").stringType().noDefaultValue() + .withDescription("Filter expression of the query, which is transparently transmitted to Doris. Doris uses this expression to complete source-side data filtering."); + public static final ConfigOption DORIS_READ_FIELD = ConfigOptions.key("doris.read.field").stringType().noDefaultValue() + .withDescription("List of column names in the Doris table, separated by commas."); + public static final ConfigOption DORIS_BATCH_SIZE = ConfigOptions.key("doris.batch.size").intType().defaultValue(1024) + .withDescription("The maximum number of rows to read data from BE at one time. Increasing this value can reduce the number of connections between Flink and Doris." 
+ + " Thereby reducing the extra time overhead caused by network delay."); + public static final ConfigOption DORIS_REQUEST_CONNECT_TIMEOUT_MS = ConfigOptions.key("doris.request.connect.timeout.ms").intType().defaultValue(30000) + .withDescription("Connection timeout for sending requests to Doris."); + public static final ConfigOption DORIS_REQUEST_QUERY_TIMEOUT_S = ConfigOptions.key("doris.request.query.timeout.s").intType().defaultValue(3600) + .withDescription("Query the timeout time of doris, the default is 1 hour, -1 means no timeout limit."); + public static final ConfigOption DORIS_REQUEST_READ_TIMEOUT_MS = ConfigOptions.key("doris.request.read.timeout.ms").intType().defaultValue(30000) + .withDescription("Read timeout for sending request to Doris."); + public static final ConfigOption DORIS_REQUEST_RETRIES = ConfigOptions.key("doris.request.retries").intType().defaultValue(3) + .withDescription("Number of retries to send requests to Doris."); + public static final ConfigOption DORIS_REQUEST_TABLET_SIZE = ConfigOptions.key("doris.request.tablet.size").intType().defaultValue(Integer.MAX_VALUE) + .withDescription("The number of Doris Tablets corresponding to an Partition. The smaller this value is set, the more partitions will be generated. " + + "This will increase the parallelism on the flink side, but at the same time will cause greater pressure on Doris."); + + public static final ConfigOption SINK_BUFFER_COUNT = ConfigOptions.key("sink.buffer-count").intType().defaultValue(3) + .withDescription("The number of write data cache buffers, it is not recommended to modify, the default configuration is sufficient."); + public static final ConfigOption SINK_BUFFER_SIZE = ConfigOptions.key("sink.buffer-size").intType().defaultValue(1048576) + .withDescription("Write data cache buffer size, in bytes. It is not recommended to modify, the default configuration is sufficient."); + public static final ConfigOption SINK_ENABLE_DELETE = ConfigOptions.key("sink.enable-delete").booleanType().defaultValue(true) + .withDescription("Whether to enable deletion. This option requires Doris table to enable batch delete function (0.15+ version is enabled by default), and only supports Uniq model."); + public static final ConfigOption SINK_LABEL_PREFIX = ConfigOptions.key("sink.label-prefix").stringType().noDefaultValue() + .withDescription("The label prefix used by stream load imports. In the 2pc scenario, global uniqueness is required to ensure the EOS semantics of Flink."); + public static final ConfigOption SINK_MAX_RETRIES = ConfigOptions.key("sink.max-retries").intType().defaultValue(1) + .withDescription("In the 2pc scenario, the number of retries after the commit phase fails."); + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/kafka/KafkaSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/kafka/KafkaSinkBuilder.java new file mode 100644 index 0000000..bb4b9c8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/kafka/KafkaSinkBuilder.java @@ -0,0 +1,160 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.kafka;
+
+import net.srt.flink.client.base.executor.CustomTableEnvironment;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.AbstractSinkBuilder;
+import net.srt.flink.client.cdc.CDCBuilder;
+import net.srt.flink.client.cdc.SinkBuilder;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.model.Schema;
+import net.srt.flink.common.model.Table;
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.ProcessFunction;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.OutputTag;
+
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * KafkaSinkBuilder
+ *
+ * @author wenmo
+ * @since 2022/4/12 21:29
+ **/
+public class KafkaSinkBuilder extends AbstractSinkBuilder implements Serializable {
+
+    public static final String KEY_WORD = "datastream-kafka";
+
+    public KafkaSinkBuilder() {
+    }
+
+    public KafkaSinkBuilder(FlinkCDCConfig config) {
+        super(config);
+    }
+
+    @Override
+    public void addSink(
+            StreamExecutionEnvironment env,
+            DataStream<RowData> rowDataDataStream,
+            Table table,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList) {
+
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public SinkBuilder create(FlinkCDCConfig config) {
+        return new KafkaSinkBuilder(config);
+    }
+
+    @Override
+    public DataStreamSource<String> build(
+            CDCBuilder cdcBuilder,
+            StreamExecutionEnvironment env,
+            CustomTableEnvironment customTableEnvironment,
+            DataStreamSource<String> dataStreamSource) {
+        // Make sure the Kafka producer properties from the sink config are actually applied
+        Properties kafkaProducerConfig = getProperties();
+        if (Asserts.isNotNullString(config.getSink().get("topic"))) {
+            KafkaSink<String> kafkaSink = KafkaSink.<String>builder().setBootstrapServers(config.getSink().get("brokers"))
+                    .setRecordSerializer(KafkaRecordSerializationSchema.builder()
+                            .setTopic(config.getSink().get("topic"))
+                            .setValueSerializationSchema(new SimpleStringSchema())
+                            .build()
+                    )
+                    .setDeliverGuarantee(DeliveryGuarantee.valueOf(env.getCheckpointingMode().name()))
+                    .setKafkaProducerConfig(kafkaProducerConfig)
+
.setTransactionalIdPrefix(kafkaProducerConfig.getProperty("transactional.id")) + .build(); + dataStreamSource.sinkTo(kafkaSink); + } else { + Map> tagMap = new HashMap<>(); + Map tableMap = new HashMap<>(); + ObjectMapper objectMapper = new ObjectMapper(); + SingleOutputStreamOperator mapOperator = dataStreamSource.map(x -> objectMapper.readValue(x,Map.class)).returns(Map.class); + final List schemaList = config.getSchemaList(); + final String schemaFieldName = config.getSchemaFieldName(); + if (Asserts.isNotNullCollection(schemaList)) { + + for (Schema schema : schemaList) { + for (Table table : schema.getTables()) { + String sinkTableName = getSinkTableName(table); + OutputTag outputTag = new OutputTag(sinkTableName) { + }; + tagMap.put(table, outputTag); + tableMap.put(table.getSchemaTableName(), table); + } + } + + SingleOutputStreamOperator process = mapOperator.process(new ProcessFunction() { + @Override + public void processElement(Map map, ProcessFunction.Context ctx, Collector out) throws Exception { + LinkedHashMap source = (LinkedHashMap) map.get("source"); + try { + String result = objectMapper.writeValueAsString(map); + Table table = tableMap.get(source.get(schemaFieldName).toString() + "." + source.get("table").toString()); + OutputTag outputTag = tagMap.get(table); + ctx.output(outputTag, result); + } catch (Exception e) { + out.collect(objectMapper.writeValueAsString(map)); + } + } + }); + tagMap.forEach((k, v) -> { + String topic = getSinkTableName(k); + KafkaSink kafkaSink = KafkaSink.builder().setBootstrapServers(config.getSink().get("brokers")) + .setRecordSerializer(KafkaRecordSerializationSchema.builder() + .setTopic(topic) + .setValueSerializationSchema(new SimpleStringSchema()) + .build() + ) + .setDeliverGuarantee(DeliveryGuarantee.valueOf(env.getCheckpointingMode().name())) + .setKafkaProducerConfig(kafkaProducerConfig) + .setTransactionalIdPrefix(kafkaProducerConfig.getProperty("transactional.id") + "-" + topic) + .build(); + process.getSideOutput(v).rebalance().sinkTo(kafkaSink).name(topic); + }); + } + } + return dataStreamSource; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/kafka/KafkaSinkJsonBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/kafka/KafkaSinkJsonBuilder.java new file mode 100644 index 0000000..5b62992 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/kafka/KafkaSinkJsonBuilder.java @@ -0,0 +1,188 @@ +package net.srt.flink.client.cdc.kafka; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.SerializationFeature; +import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule; +import com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer; +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.AbstractSinkBuilder; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.client.utils.ObjectConvertUtil; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import org.apache.flink.api.common.functions.FilterFunction; +import org.apache.flink.api.common.functions.MapFunction; +import 
org.apache.flink.api.common.serialization.SimpleStringSchema; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.functions.ProcessFunction; +import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.util.Collector; + +import java.io.Serializable; +import java.time.LocalDateTime; +import java.time.format.DateTimeFormatter; +import java.util.LinkedHashMap; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; + +/** + * @className: com.dlink.cdc.kafka.KafkaSinkSimpleBuilder + */ +public class KafkaSinkJsonBuilder extends AbstractSinkBuilder implements Serializable { + + public static final String KEY_WORD = "datastream-kafka-json"; + private transient ObjectMapper objectMapper; + + public KafkaSinkJsonBuilder() { + } + + public KafkaSinkJsonBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public SinkBuilder create(FlinkCDCConfig config) { + return new KafkaSinkJsonBuilder(config); + } + + @Override + public DataStreamSource build( + CDCBuilder cdcBuilder, + StreamExecutionEnvironment env, + CustomTableEnvironment customTableEnvironment, + DataStreamSource dataStreamSource) { + try { + SingleOutputStreamOperator mapOperator = dataStreamSource.map(new MapFunction() { + @Override + public Map map(String value) throws Exception { + ObjectMapper objectMapper = new ObjectMapper(); + return objectMapper.readValue(value, Map.class); + } + }); + final List schemaList = config.getSchemaList(); + final String schemaFieldName = config.getSchemaFieldName(); + if (Asserts.isNotNullCollection(schemaList)) { + for (Schema schema : schemaList) { + for (Table table : schema.getTables()) { + final String tableName = table.getName(); + final String schemaName = table.getSchema(); + SingleOutputStreamOperator filterOperator = mapOperator.filter(new FilterFunction() { + @Override + public boolean filter(Map value) throws Exception { + LinkedHashMap source = (LinkedHashMap) value.get("source"); + return tableName.equals(source.get("table").toString()) + && schemaName.equals(source.get(schemaFieldName).toString()); + } + }); + String topic = getSinkTableName(table); + if (Asserts.isNotNullString(config.getSink().get("topic"))) { + topic = config.getSink().get("topic"); + } + List columnNameList = new LinkedList<>(); + List columnTypeList = new LinkedList<>(); + buildColumn(columnNameList, columnTypeList, table.getColumns()); + SingleOutputStreamOperator stringOperator = filterOperator.process(new ProcessFunction() { + @Override + public void processElement(Map value, Context context, Collector collector) throws Exception { + Map after = null; + Map before = null; + String tsMs = value.get("ts_ms").toString(); + try { + switch (value.get("op").toString()) { + case "r": + case "c": + after = (Map) value.get("after"); + convertAttr(columnNameList, columnTypeList, after,value.get("op").toString(), 0, schemaName, tableName, tsMs); + break; + case "u": + before = (Map) value.get("before"); + convertAttr(columnNameList, columnTypeList, before,value.get("op").toString(), 1, schemaName, tableName, tsMs); + 
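+                                                // An update emits both row images: first the "before" image flagged
+                                                // is_deleted = 1, then the "after" image flagged is_deleted = 0
+                                                // (the flags are attached by convertAttr below, together with
+                                                // __op, db, table and ts_ms), so downstream consumers can treat
+                                                // the update as a delete followed by an insert, e.g. (illustrative):
+                                                //   {"id":1,"name":"a","__op":"u","is_deleted":1,"db":"...","table":"...","ts_ms":"..."}
+                                                //   {"id":1,"name":"b","__op":"u","is_deleted":0,"db":"...","table":"...","ts_ms":"..."}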
+
+                                                after = (Map) value.get("after");
+                                                convertAttr(columnNameList, columnTypeList, after, value.get("op").toString(), 0, schemaName, tableName, tsMs);
+                                                break;
+                                            case "d":
+                                                before = (Map) value.get("before");
+                                                convertAttr(columnNameList, columnTypeList, before, value.get("op").toString(), 1, schemaName, tableName, tsMs);
+                                                break;
+                                            default:
+                                        }
+                                    } catch (Exception e) {
+                                        logger.error("SchemaTable: {}.{} - Exception:", schemaName, tableName, e);
+                                        throw e;
+                                    }
+                                    if (objectMapper == null) {
+                                        initializeObjectMapper();
+                                    }
+                                    if (before != null) {
+                                        collector.collect(objectMapper.writeValueAsString(before));
+                                    }
+                                    if (after != null) {
+                                        collector.collect(objectMapper.writeValueAsString(after));
+                                    }
+                                }
+                            });
+                        stringOperator.addSink(new FlinkKafkaProducer<String>(config.getSink().get("brokers"),
+                                topic,
+                                new SimpleStringSchema()));
+                    }
+                }
+            }
+        } catch (Exception ex) {
+            logger.error("kafka sink error:", ex);
+        }
+        return dataStreamSource;
+    }
+
+    private void initializeObjectMapper() {
+        this.objectMapper = new ObjectMapper();
+        JavaTimeModule javaTimeModule = new JavaTimeModule();
+        // Hack time module to allow 'Z' at the end of string (i.e. javascript json's)
+        javaTimeModule.addDeserializer(LocalDateTime.class, new LocalDateTimeDeserializer(DateTimeFormatter.ISO_DATE_TIME));
+        objectMapper.registerModule(javaTimeModule);
+        objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
+    }
+
+    @Override
+    public void addSink(
+            StreamExecutionEnvironment env,
+            DataStream<RowData> rowDataDataStream,
+            Table table,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList) {
+    }
+
+    @Override
+    protected Object convertValue(Object value, LogicalType logicalType) {
+        return ObjectConvertUtil.convertValue(value, logicalType);
+    }
+
+    private void convertAttr(List<String> columnNameList, List<LogicalType> columnTypeList, Map value, String op, int isDeleted,
+                             String schemaName, String tableName, String tsMs) {
+        for (int i = 0; i < columnNameList.size(); i++) {
+            String columnName = columnNameList.get(i);
+            Object columnNameValue = value.remove(columnName);
+            Object columnNameNewVal = convertValue(columnNameValue, columnTypeList.get(i));
+            value.put(columnName, columnNameNewVal);
+        }
+        value.put("__op", op);
+        value.put("is_deleted", Integer.valueOf(isDeleted));
+        value.put("db", schemaName);
+        value.put("table", tableName);
+        value.put("ts_ms", tsMs);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/mysql/MysqlCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/mysql/MysqlCDCBuilder.java
new file mode 100644
index 0000000..099fba0
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/mysql/MysqlCDCBuilder.java
@@ -0,0 +1,242 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.mysql;
+
+import com.ververica.cdc.connectors.mysql.source.MySqlSource;
+import com.ververica.cdc.connectors.mysql.source.MySqlSourceBuilder;
+import com.ververica.cdc.connectors.mysql.table.StartupOptions;
+import net.srt.flink.client.base.constant.ClientConstant;
+import net.srt.flink.client.base.constant.FlinkParamConstant;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.AbstractCDCBuilder;
+import net.srt.flink.client.cdc.CDCBuilder;
+import net.srt.flink.common.assertion.Asserts;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * MysqlCDCBuilder
+ *
+ * @author wenmo
+ * @since 2022/4/12 21:29
+ **/
+public class MysqlCDCBuilder extends AbstractCDCBuilder implements CDCBuilder {
+
+    public static final String KEY_WORD = "mysql-cdc";
+    private static final String METADATA_TYPE = "MySql";
+
+    public MysqlCDCBuilder() {
+    }
+
+    public MysqlCDCBuilder(FlinkCDCConfig config) {
+        super(config);
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public CDCBuilder create(FlinkCDCConfig config) {
+        return new MysqlCDCBuilder(config);
+    }
+
+    @Override
+    public DataStreamSource<String> build(StreamExecutionEnvironment env) {
+        String database = config.getDatabase();
+        String serverId = config.getSource().get("server-id");
+        String serverTimeZone = config.getSource().get("server-time-zone");
+        String fetchSize = config.getSource().get("scan.snapshot.fetch.size");
+        String connectTimeout = config.getSource().get("connect.timeout");
+        String connectMaxRetries = config.getSource().get("connect.max-retries");
+        String connectionPoolSize = config.getSource().get("connection.pool.size");
+        String heartbeatInterval = config.getSource().get("heartbeat.interval");
+        String chunkSize = config.getSource().get("scan.incremental.snapshot.chunk.size");
+        String distributionFactorLower = config.getSource().get("chunk-key.even-distribution.factor.lower-bound");
+        String distributionFactorUpper = config.getSource().get("chunk-key.even-distribution.factor.upper-bound");
+        String scanNewlyAddedTableEnabled = config.getSource().get("scan.newly-added-table.enabled");
+        String schemaChanges = config.getSource().get("schema.changes");
+
+        Properties debeziumProperties = new Properties();
+        // Set default values for some Debezium type conversions
+        debeziumProperties.setProperty("bigint.unsigned.handling.mode", "long");
+        debeziumProperties.setProperty("decimal.handling.mode", "string");
+
+        for (Map.Entry<String, String> entry : config.getDebezium().entrySet()) {
+            if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) {
+                debeziumProperties.setProperty(entry.getKey(), entry.getValue());
+            }
+        }
+
+        // Inject user-specified JDBC parameters
+        Properties jdbcProperties = new Properties();
+        for (Map.Entry<String, String> entry : config.getJdbc().entrySet()) {
+            if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) {
+                jdbcProperties.setProperty(entry.getKey(), entry.getValue());
+            }
+        }
+
+        MySqlSourceBuilder<String> sourceBuilder = MySqlSource.<String>builder()
+                .hostname(config.getHostname())
+                .port(config.getPort())
.username(config.getUsername()) + .password(config.getPassword()); + + if (Asserts.isNotNullString(database)) { + String[] databases = database.split(FlinkParamConstant.SPLIT); + sourceBuilder.databaseList(databases); + } else { + sourceBuilder.databaseList(new String[0]); + } + + List schemaTableNameList = config.getSchemaTableNameList(); + if (Asserts.isNotNullCollection(schemaTableNameList)) { + sourceBuilder.tableList(schemaTableNameList.toArray(new String[schemaTableNameList.size()])); + } else { + sourceBuilder.tableList(new String[0]); + } + + sourceBuilder.deserializer(new MysqlJsonDebeziumDeserializationSchema()); + sourceBuilder.debeziumProperties(debeziumProperties); + sourceBuilder.jdbcProperties(jdbcProperties); + + if (Asserts.isNotNullString(config.getStartupMode())) { + switch (config.getStartupMode().toLowerCase()) { + case "initial": + sourceBuilder.startupOptions(StartupOptions.initial()); + break; + case "latest-offset": + sourceBuilder.startupOptions(StartupOptions.latest()); + break; + default: + } + } else { + sourceBuilder.startupOptions(StartupOptions.latest()); + } + + if (Asserts.isNotNullString(serverId)) { + sourceBuilder.serverId(serverId); + } + + if (Asserts.isNotNullString(serverTimeZone)) { + sourceBuilder.serverTimeZone(serverTimeZone); + } + + if (Asserts.isNotNullString(fetchSize)) { + sourceBuilder.fetchSize(Integer.valueOf(fetchSize)); + } + + if (Asserts.isNotNullString(connectTimeout)) { + sourceBuilder.connectTimeout(Duration.ofMillis(Long.valueOf(connectTimeout))); + } + + if (Asserts.isNotNullString(connectMaxRetries)) { + sourceBuilder.connectMaxRetries(Integer.valueOf(connectMaxRetries)); + } + + if (Asserts.isNotNullString(connectionPoolSize)) { + sourceBuilder.connectionPoolSize(Integer.valueOf(connectionPoolSize)); + } + + if (Asserts.isNotNullString(heartbeatInterval)) { + sourceBuilder.heartbeatInterval(Duration.ofMillis(Long.valueOf(heartbeatInterval))); + } + + if (Asserts.isAllNotNullString(chunkSize)) { + sourceBuilder.splitSize(Integer.parseInt(chunkSize)); + } + + if (Asserts.isNotNullString(distributionFactorLower)) { + sourceBuilder.distributionFactorLower(Double.valueOf(distributionFactorLower)); + } + + if (Asserts.isNotNullString(distributionFactorUpper)) { + sourceBuilder.distributionFactorUpper(Double.valueOf(distributionFactorUpper)); + } + + if (Asserts.isEqualsIgnoreCase(scanNewlyAddedTableEnabled, "true")) { + sourceBuilder.scanNewlyAddedTableEnabled(true); + } + + if (Asserts.isEqualsIgnoreCase(schemaChanges, "true")) { + sourceBuilder.includeSchemaChanges(true); + } + + return env.fromSource(sourceBuilder.build(), WatermarkStrategy.noWatermarks(), "MySQL CDC Source"); + } + + @Override + public Map> parseMetaDataConfigs() { + Map> allConfigMap = new HashMap<>(); + List schemaList = getSchemaList(); + for (String schema : schemaList) { + Map configMap = new HashMap<>(); + configMap.put(ClientConstant.METADATA_TYPE, METADATA_TYPE); + StringBuilder sb = new StringBuilder("jdbc:mysql://"); + sb.append(config.getHostname()); + sb.append(":"); + sb.append(config.getPort()); + sb.append("/"); + sb.append(schema); + configMap.put(ClientConstant.METADATA_NAME, sb.toString()); + configMap.put(ClientConstant.METADATA_URL, sb.toString()); + configMap.put(ClientConstant.METADATA_USERNAME, config.getUsername()); + configMap.put(ClientConstant.METADATA_PASSWORD, config.getPassword()); + allConfigMap.put(schema, configMap); + } + return allConfigMap; + } + + @Override + public Map parseMetaDataConfig() { + Map configMap = new 
HashMap<>(); + + configMap.put(ClientConstant.METADATA_TYPE, METADATA_TYPE); + StringBuilder sb = new StringBuilder("jdbc:mysql://"); + sb.append(config.getHostname()); + sb.append(":"); + sb.append(config.getPort()); + sb.append("/"); + configMap.put(ClientConstant.METADATA_NAME, sb.toString()); + configMap.put(ClientConstant.METADATA_URL, sb.toString()); + configMap.put(ClientConstant.METADATA_USERNAME, config.getUsername()); + configMap.put(ClientConstant.METADATA_PASSWORD, config.getPassword()); + + return configMap; + } + + @Override + public String getSchemaFieldName() { + return "db"; + } + + @Override + public String getSchema() { + return config.getDatabase(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/mysql/MysqlJsonDebeziumDeserializationSchema.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/mysql/MysqlJsonDebeziumDeserializationSchema.java new file mode 100644 index 0000000..2843710 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/mysql/MysqlJsonDebeziumDeserializationSchema.java @@ -0,0 +1,84 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.client.cdc.mysql; + +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.json.JsonConverter; +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.source.SourceRecord; +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.storage.ConverterType; +import com.ververica.cdc.debezium.DebeziumDeserializationSchema; +import org.apache.flink.api.common.typeinfo.BasicTypeInfo; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.util.Collector; + +import java.nio.charset.StandardCharsets; +import java.util.HashMap; +import java.util.Map; + +/** + * @version 1.0 + * @className: com.dlink.cdc.mysql.MysqlJsonDebeziumDeserializationSchema + * @Description: + * @author: jack zhong + */ +public class MysqlJsonDebeziumDeserializationSchema implements DebeziumDeserializationSchema { + private static final long serialVersionUID = 1L; + private transient JsonConverter jsonConverter; + private final Boolean includeSchema; + private Map customConverterConfigs; + + public MysqlJsonDebeziumDeserializationSchema() { + this(false); + } + + public MysqlJsonDebeziumDeserializationSchema(Boolean includeSchema) { + this.includeSchema = includeSchema; + } + + public MysqlJsonDebeziumDeserializationSchema(Boolean includeSchema, Map customConverterConfigs) { + this.includeSchema = includeSchema; + this.customConverterConfigs = customConverterConfigs; + } + + @Override + public void deserialize(SourceRecord record, Collector out) throws Exception { + if (this.jsonConverter == null) { + this.initializeJsonConverter(); + } + byte[] bytes = this.jsonConverter.fromConnectData(record.topic(), record.valueSchema(), record.value()); + out.collect(new String(bytes, StandardCharsets.UTF_8)); + } + + private void initializeJsonConverter() { + this.jsonConverter = new JsonConverter(); + HashMap configs = new HashMap(2); + configs.put("converter.type", ConverterType.VALUE.getName()); + configs.put("schemas.enable", this.includeSchema); + if (this.customConverterConfigs != null) { + configs.putAll(this.customConverterConfigs); + } + + this.jsonConverter.configure(configs); + } + + @Override + public TypeInformation getProducedType() { + return BasicTypeInfo.STRING_TYPE_INFO; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/oracle/OracleCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/oracle/OracleCDCBuilder.java new file mode 100644 index 0000000..be5947a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/oracle/OracleCDCBuilder.java @@ -0,0 +1,138 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc.oracle; + +import com.ververica.cdc.connectors.base.options.StartupOptions; +import com.ververica.cdc.connectors.oracle.OracleSource; +import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema; +import net.srt.flink.client.base.constant.ClientConstant; +import net.srt.flink.client.base.constant.FlinkParamConstant; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.AbstractCDCBuilder; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +/** + * MysqlCDCBuilder + * + * @author wenmo + * @since 2022/4/12 21:29 + **/ +public class OracleCDCBuilder extends AbstractCDCBuilder implements CDCBuilder { + + public static final String KEY_WORD = "oracle-cdc"; + private static final String METADATA_TYPE = "Oracle"; + + public OracleCDCBuilder() { + } + + public OracleCDCBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public CDCBuilder create(FlinkCDCConfig config) { + return new OracleCDCBuilder(config); + } + + @Override + public DataStreamSource build(StreamExecutionEnvironment env) { + Properties properties = new Properties(); + for (Map.Entry entry : config.getDebezium().entrySet()) { + if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) { + properties.setProperty(entry.getKey(), entry.getValue()); + } + } + OracleSource.Builder sourceBuilder = OracleSource.builder() + .hostname(config.getHostname()) + .port(config.getPort()) + .username(config.getUsername()) + .password(config.getPassword()) + .database(config.getDatabase()); + String schema = config.getSchema(); + if (Asserts.isNotNullString(schema)) { + String[] schemas = schema.split(FlinkParamConstant.SPLIT); + sourceBuilder.schemaList(schemas); + } else { + sourceBuilder.schemaList(new String[0]); + } + List schemaTableNameList = config.getSchemaTableNameList(); + if (Asserts.isNotNullCollection(schemaTableNameList)) { + sourceBuilder.tableList(schemaTableNameList.toArray(new String[schemaTableNameList.size()])); + } else { + sourceBuilder.tableList(new String[0]); + } + sourceBuilder.deserializer(new JsonDebeziumDeserializationSchema()); + sourceBuilder.debeziumProperties(properties); + if (Asserts.isNotNullString(config.getStartupMode())) { + switch (config.getStartupMode().toLowerCase()) { + case "initial": + sourceBuilder.startupOptions(StartupOptions.initial()); + break; + case "latest-offset": + sourceBuilder.startupOptions(StartupOptions.latest()); + break; + default: + } + } else { + sourceBuilder.startupOptions(StartupOptions.latest()); + } + return env.addSource(sourceBuilder.build(), "Oracle CDC Source"); + } + + @Override + public Map> parseMetaDataConfigs() { + Map> allConfigList = new HashMap<>(); + List schemaList = getSchemaList(); + for (String schema : schemaList) { + Map configMap = new HashMap<>(); + configMap.put(ClientConstant.METADATA_TYPE, METADATA_TYPE); + StringBuilder sb = new StringBuilder("jdbc:oracle:thin:@"); + sb.append(config.getHostname()); + sb.append(":"); + sb.append(config.getPort()); + 
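+            // Builds a SID-style connection string (jdbc:oracle:thin:@host:port:database);
+            // one metadata entry is produced per schema, all sharing the same connection info.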
sb.append(":"); + sb.append(config.getDatabase()); + configMap.put(ClientConstant.METADATA_NAME, sb.toString()); + configMap.put(ClientConstant.METADATA_URL, sb.toString()); + configMap.put(ClientConstant.METADATA_USERNAME, config.getUsername()); + configMap.put(ClientConstant.METADATA_PASSWORD, config.getPassword()); + allConfigList.put(schema, configMap); + } + return allConfigList; + } + + @Override + public String getSchema() { + return config.getSchema(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/postgres/PostgresCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/postgres/PostgresCDCBuilder.java new file mode 100644 index 0000000..d064e62 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/postgres/PostgresCDCBuilder.java @@ -0,0 +1,139 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.client.cdc.postgres; + +import com.ververica.cdc.connectors.postgres.PostgreSQLSource; +import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema; +import net.srt.flink.client.base.constant.ClientConstant; +import net.srt.flink.client.base.constant.FlinkParamConstant; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.AbstractCDCBuilder; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +/** + * postgresCDCBuilder + * + * @author mengyejiang + * @since 2022/8/21 10:00 + **/ +public class PostgresCDCBuilder extends AbstractCDCBuilder implements CDCBuilder { + + public static final String KEY_WORD = "postgres-cdc"; + private static final String METADATA_TYPE = "PostgreSql"; + + public PostgresCDCBuilder() { + } + + public PostgresCDCBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public CDCBuilder create(FlinkCDCConfig config) { + return new PostgresCDCBuilder(config); + } + + @Override + public DataStreamSource build(StreamExecutionEnvironment env) { + + String decodingPluginName = config.getSource().get("decoding.plugin.name"); + String slotName = config.getSource().get("slot.name"); + + Properties debeziumProperties = new Properties(); + for (Map.Entry entry : config.getDebezium().entrySet()) { + if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) { + debeziumProperties.setProperty(entry.getKey(), entry.getValue()); + } + } + + PostgreSQLSource.Builder sourceBuilder = PostgreSQLSource.builder() + .hostname(config.getHostname()) + .port(config.getPort()) + .database(config.getDatabase()) + .username(config.getUsername()) + .password(config.getPassword()); + String schema = config.getSchema(); + if (Asserts.isNotNullString(schema)) { + String[] schemas = schema.split(FlinkParamConstant.SPLIT); + sourceBuilder.schemaList(schemas); + } else { + sourceBuilder.schemaList(new String[0]); + } + List schemaTableNameList = config.getSchemaTableNameList(); + if (Asserts.isNotNullCollection(schemaTableNameList)) { + sourceBuilder.tableList(schemaTableNameList.toArray(new String[schemaTableNameList.size()])); + } else { + sourceBuilder.tableList(new String[0]); + } + + sourceBuilder.deserializer(new JsonDebeziumDeserializationSchema()); + sourceBuilder.debeziumProperties(debeziumProperties); + + if (Asserts.isNotNullString(decodingPluginName)) { + sourceBuilder.decodingPluginName(decodingPluginName); + } + + if (Asserts.isNotNullString(slotName)) { + sourceBuilder.slotName(slotName); + } + + return env.addSource(sourceBuilder.build(), "Postgres CDC Source"); + } + + @Override + public Map> parseMetaDataConfigs() { + Map> allConfigMap = new HashMap<>(); + List schemaList = getSchemaList(); + for (String schema : schemaList) { + Map configMap = new HashMap<>(); + configMap.put(ClientConstant.METADATA_TYPE, METADATA_TYPE); + StringBuilder sb = new StringBuilder("jdbc:postgresql://"); + sb.append(config.getHostname()); + sb.append(":"); + sb.append(config.getPort()); + sb.append("/"); + sb.append(config.getDatabase()); + configMap.put(ClientConstant.METADATA_NAME, sb.toString()); + 
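+            // The JDBC URL (jdbc:postgresql://host:port/database) doubles as the metadata
+            // display name; credentials are copied from the CDC config so metadata collection
+            // can reuse the same account.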
configMap.put(ClientConstant.METADATA_URL, sb.toString()); + configMap.put(ClientConstant.METADATA_USERNAME, config.getUsername()); + configMap.put(ClientConstant.METADATA_PASSWORD, config.getPassword()); + allConfigMap.put(schema, configMap); + } + return allConfigMap; + } + + @Override + public String getSchema() { + return config.getSchema(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sql/SQLSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sql/SQLSinkBuilder.java new file mode 100644 index 0000000..714f47a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sql/SQLSinkBuilder.java @@ -0,0 +1,327 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc.sql; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.base.utils.FlinkBaseUtil; +import net.srt.flink.client.cdc.AbstractSinkBuilder; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.JSONUtil; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.common.utils.SplitUtil; +import org.apache.commons.lang3.StringUtils; +import org.apache.flink.api.common.functions.FlatMapFunction; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.api.dag.Transformation; +import org.apache.flink.api.java.typeutils.RowTypeInfo; +import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.functions.ProcessFunction; +import org.apache.flink.table.api.ValidationException; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.operations.ModifyOperation; +import org.apache.flink.table.operations.Operation; +import org.apache.flink.table.types.logical.BigIntType; +import org.apache.flink.table.types.logical.DateType; +import org.apache.flink.table.types.logical.DecimalType; +import org.apache.flink.table.types.logical.FloatType; +import org.apache.flink.table.types.logical.LogicalType; +import 
org.apache.flink.table.types.logical.TimestampType; +import org.apache.flink.table.types.logical.VarBinaryType; +import org.apache.flink.table.types.utils.TypeConversions; +import org.apache.flink.types.Row; +import org.apache.flink.types.RowKind; +import org.apache.flink.util.Collector; +import org.apache.flink.util.OutputTag; + +import javax.xml.bind.DatatypeConverter; +import java.io.Serializable; +import java.math.BigDecimal; +import java.time.Instant; +import java.time.LocalDate; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * SQLSinkBuilder + * + * @author wenmo + * @since 2022/4/25 23:02 + */ +public class SQLSinkBuilder extends AbstractSinkBuilder implements Serializable { + + public static final String KEY_WORD = "sql"; + private static final long serialVersionUID = -3699685106324048226L; + private static AtomicInteger atomicInteger = new AtomicInteger(0); + private ZoneId sinkTimeZone = ZoneId.of("UTC"); + + public SQLSinkBuilder() { + } + + private SQLSinkBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public void addSink(StreamExecutionEnvironment env, DataStream rowDataDataStream, Table table, List columnNameList, List columnTypeList) { + } + + private DataStream buildRow( + DataStream filterOperator, + List columnNameList, + List columnTypeList, + String schemaTableName) { + final String[] columnNames = columnNameList.toArray(new String[columnNameList.size()]); + final LogicalType[] columnTypes = columnTypeList.toArray(new LogicalType[columnTypeList.size()]); + TypeInformation[] typeInformations = TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.fromLogicalToDataType(columnTypes)); + RowTypeInfo rowTypeInfo = new RowTypeInfo(typeInformations, columnNames); + return filterOperator + .flatMap(new FlatMapFunction() { + @Override + public void flatMap(Map value, Collector out) throws Exception { + try { + switch (value.get("op").toString()) { + case "r": + case "c": + Row irow = Row.withPositions(RowKind.INSERT, columnNameList.size()); + Map idata = (Map) value.get("after"); + for (int i = 0; i < columnNameList.size(); i++) { + irow.setField(i, convertValue(idata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(irow); + break; + case "d": + Row drow = Row.withPositions(RowKind.DELETE, columnNameList.size()); + Map ddata = (Map) value.get("before"); + for (int i = 0; i < columnNameList.size(); i++) { + drow.setField(i, convertValue(ddata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(drow); + break; + case "u": + Row ubrow = Row.withPositions(RowKind.UPDATE_BEFORE, columnNameList.size()); + Map ubdata = (Map) value.get("before"); + for (int i = 0; i < columnNameList.size(); i++) { + ubrow.setField(i, convertValue(ubdata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(ubrow); + Row uarow = Row.withPositions(RowKind.UPDATE_AFTER, columnNameList.size()); + Map uadata = (Map) value.get("after"); + for (int i = 0; i < columnNameList.size(); i++) { + uarow.setField(i, convertValue(uadata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(uarow); + break; + default: + } + } catch (Exception e) { + logger.error("SchameTable: {} - Row: {} - Exception:", schemaTableName, JSONUtil.toJsonString(value),e); + throw e; + } + } + }, rowTypeInfo); + } + + private void 
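+    // addTableSink (below) registers a temporary view over the converted Row stream, executes
+    // the generated Flink DDL for the sink table, and parses the CDC INSERT statement into
+    // ModifyOperations that build() later translates and attaches to the job graph.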
addTableSink( + int indexSink, + CustomTableEnvironment customTableEnvironment, + DataStream rowDataDataStream, + Table table, + List columnNameList) { + String sinkSchemaName = getSinkSchemaName(table); + String tableName = getSinkTableName(table); + String sinkTableName = tableName + "_" + indexSink; + String pkList = StringUtils.join(getPKList(table), "."); + String viewName = "VIEW_" + table.getSchemaTableNameWithUnderline(); + try { + customTableEnvironment.createTemporaryView(viewName, rowDataDataStream, StringUtils.join(columnNameList, ",")); + logger.info("Create " + viewName + " temporaryView successful..."); + } catch (ValidationException exception) { + if (!exception.getMessage().contains("already exists")) { + logger.error(exception.getMessage(), exception); + } + } + String flinkDDL = FlinkBaseUtil.getFlinkDDL(table, "" + sinkTableName, config, sinkSchemaName, tableName, pkList); + logger.info(flinkDDL); + customTableEnvironment.executeSql(flinkDDL); + logger.info("Create " + sinkTableName + " FlinkSQL DDL successful..."); + String cdcSqlInsert = FlinkBaseUtil.getCDCSqlInsert(table, sinkTableName, viewName, config); + logger.info(cdcSqlInsert); + List operations = customTableEnvironment.getParser().parse(cdcSqlInsert); + logger.info("Create " + sinkTableName + " FlinkSQL insert into successful..."); + try { + if (operations.size() > 0) { + Operation operation = operations.get(0); + if (operation instanceof ModifyOperation) { + modifyOperations.add((ModifyOperation) operation); + } + } + } catch (Exception e) { + logger.error("Translate to plan occur exception: {}", e); + throw e; + } + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public SinkBuilder create(FlinkCDCConfig config) { + return new SQLSinkBuilder(config); + } + + @Override + public DataStreamSource build( + CDCBuilder cdcBuilder, + StreamExecutionEnvironment env, + CustomTableEnvironment customTableEnvironment, + DataStreamSource dataStreamSource) { + final String timeZone = config.getSink().get("timezone"); + config.getSink().remove("timezone"); + if (Asserts.isNotNullString(timeZone)) { + sinkTimeZone = ZoneId.of(timeZone); + } + final List schemaList = config.getSchemaList(); + if (Asserts.isNotNullCollection(schemaList)) { + logger.info("Build deserialize successful..."); + Map> tagMap = new HashMap<>(); + Map tableMap = new HashMap<>(); + Map splitConfMap = config.getSplit(); + + for (Schema schema : schemaList) { + for (Table table : schema.getTables()) { + String sinkTableName = getSinkTableName(table); + OutputTag outputTag = new OutputTag(sinkTableName) { + }; + tagMap.put(table, outputTag); + tableMap.put(table.getSchemaTableName(), table); + } + } + final String schemaFieldName = config.getSchemaFieldName(); + ObjectMapper objectMapper = new ObjectMapper(); + SingleOutputStreamOperator mapOperator = dataStreamSource.map(x -> objectMapper.readValue(x, Map.class)).returns(Map.class); + SingleOutputStreamOperator processOperator = mapOperator.process(new ProcessFunction() { + @Override + public void processElement(Map map, ProcessFunction.Context ctx, Collector out) throws Exception { + LinkedHashMap source = (LinkedHashMap) map.get("source"); + try { + String tableName = SplitUtil.getReValue(source.get(schemaFieldName).toString(), splitConfMap) + "." 
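+                            // SplitUtil.getReValue resolves a physical shard name back to its logical
+                            // schema/table name when table-split rules are configured via config.getSplit().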
+ SplitUtil.getReValue(source.get("table").toString(), splitConfMap); + Table table = tableMap.get(tableName); + OutputTag outputTag = tagMap.get(table); + Optional.ofNullable(outputTag).orElseThrow(() -> new RuntimeException("data outPutTag is not exists!table name is " + tableName)); + ctx.output(outputTag, map); + } catch (Exception e) { + logger.error(e.getMessage(), e); + out.collect(map); + } + } + }); + final int indexSink = atomicInteger.getAndAdd(1); + tagMap.forEach((table, tag) -> { + final String schemaTableName = table.getSchemaTableName(); + try { + DataStream filterOperator = shunt(processOperator, table, tag); + logger.info("Build " + schemaTableName + " shunt successful..."); + List columnNameList = new ArrayList<>(); + List columnTypeList = new ArrayList<>(); + buildColumn(columnNameList, columnTypeList, table.getColumns()); + DataStream rowDataDataStream = buildRow(filterOperator, columnNameList, columnTypeList, schemaTableName).rebalance(); + logger.info("Build " + schemaTableName + " flatMap successful..."); + logger.info("Start build " + schemaTableName + " sink..."); + addTableSink(indexSink, customTableEnvironment, rowDataDataStream, table, columnNameList); + } catch (Exception e) { + logger.error("Build " + schemaTableName + " cdc sync failed..."); + logger.error(LogUtil.getError(e)); + } + }); + List> trans = customTableEnvironment.getPlanner().translate(modifyOperations); + for (Transformation item : trans) { + env.addOperator(item); + } + logger.info("A total of " + trans.size() + " table cdc sync were build successfull..."); + } + return dataStreamSource; + } + + @Override + protected Object convertValue(Object value, LogicalType logicalType) { + if (value == null) { + return null; + } + if (logicalType instanceof DateType) { + if (value instanceof Integer) { + return LocalDate.ofEpochDay((Integer) value); + } else if (value instanceof Long) { + return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDate(); + } else { + return Instant.parse(value.toString()).atZone(sinkTimeZone).toLocalDate(); + } + } else if (logicalType instanceof TimestampType) { + if (value instanceof Integer) { + return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDateTime(); + } else if (value instanceof Long) { + return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDateTime(); + } else { + return Instant.parse(value.toString()).atZone(sinkTimeZone).toLocalDateTime(); + } + } else if (logicalType instanceof DecimalType) { + return new BigDecimal(value.toString()); + } else if (logicalType instanceof FloatType) { + if (value instanceof Float) { + return value; + } else if (value instanceof Double) { + return ((Double) value).floatValue(); + } else { + return Float.parseFloat(value.toString()); + } + } else if (logicalType instanceof BigIntType) { + if (value instanceof Integer) { + return ((Integer) value).longValue(); + } else { + return value; + } + } else if (logicalType instanceof VarBinaryType) { + // VARBINARY AND BINARY is converted to String with encoding base64 in FlinkCDC. 
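+            // so the base64 payload is decoded back to byte[] here; non-string values are
+            // assumed to already be binary and are passed through unchanged.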
+            if (value instanceof String) {
+                return DatatypeConverter.parseBase64Binary(value.toString());
+            } else {
+                return value;
+            }
+        } else {
+            return value;
+        }
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sqlserver/SqlServerCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sqlserver/SqlServerCDCBuilder.java
new file mode 100644
index 0000000..686eb28
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sqlserver/SqlServerCDCBuilder.java
@@ -0,0 +1,157 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.sqlserver;
+
+import com.ververica.cdc.connectors.sqlserver.SqlServerSource;
+import com.ververica.cdc.connectors.sqlserver.table.StartupOptions;
+import net.srt.flink.client.base.constant.ClientConstant;
+import net.srt.flink.client.base.constant.FlinkParamConstant;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.AbstractCDCBuilder;
+import net.srt.flink.client.cdc.CDCBuilder;
+import net.srt.flink.common.assertion.Asserts;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * SQL Server CDC
+ *
+ * @author 郑文豪
+ */
+public class SqlServerCDCBuilder extends AbstractCDCBuilder implements CDCBuilder {
+
+    protected static final Logger logger = LoggerFactory.getLogger(SqlServerCDCBuilder.class);
+
+    public static final String KEY_WORD = "sqlserver-cdc";
+    private static final String METADATA_TYPE = "SqlServer";
+
+    public SqlServerCDCBuilder() {
+    }
+
+    public SqlServerCDCBuilder(FlinkCDCConfig config) {
+        super(config);
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public CDCBuilder create(FlinkCDCConfig config) {
+        return new SqlServerCDCBuilder(config);
+    }
+
+    @Override
+    public DataStreamSource<String> build(StreamExecutionEnvironment env) {
+        String database = config.getDatabase();
+        Properties debeziumProperties = new Properties();
+        // Set default values for some Debezium type conversions
+        debeziumProperties.setProperty("bigint.unsigned.handling.mode", "long");
+        debeziumProperties.setProperty("decimal.handling.mode", "string");
+        for (Map.Entry<String, String> entry : config.getDebezium().entrySet()) {
+            if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) {
+                debeziumProperties.setProperty(entry.getKey(), entry.getValue());
+            }
+        }
+        // Inject user-specified JDBC parameters
Properties jdbcProperties = new Properties(); + for (Map.Entry entry : config.getJdbc().entrySet()) { + if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) { + jdbcProperties.setProperty(entry.getKey(), entry.getValue()); + } + } + final SqlServerSource.Builder sourceBuilder = SqlServerSource.builder() + .hostname(config.getHostname()) + .port(config.getPort()) + .username(config.getUsername()) + .password(config.getPassword()); + if (Asserts.isNotNullString(database)) { + String[] databases = database.split(FlinkParamConstant.SPLIT); + sourceBuilder.database(databases[0]); + } else { + sourceBuilder.database(new String()); + } + List schemaTableNameList = config.getSchemaTableNameList(); + if (Asserts.isNotNullCollection(schemaTableNameList)) { + sourceBuilder.tableList(schemaTableNameList.toArray(new String[schemaTableNameList.size()])); + } else { + sourceBuilder.tableList(new String[0]); + } + // sourceBuilder.deserializer(new JsonDebeziumDeserializationSchema()); + sourceBuilder.deserializer(new SqlServerJsonDebeziumDeserializationSchema()); + if (Asserts.isNotNullString(config.getStartupMode())) { + switch (config.getStartupMode().toLowerCase()) { + case "initial": + sourceBuilder.startupOptions(StartupOptions.initial()); + break; + case "latest-offset": + sourceBuilder.startupOptions(StartupOptions.latest()); + break; + default: + } + } else { + sourceBuilder.startupOptions(StartupOptions.latest()); + } + sourceBuilder.debeziumProperties(debeziumProperties); + final DataStreamSource sqlServer_cdc_source = env.addSource(sourceBuilder.build(), + "SqlServer CDC Source"); + return sqlServer_cdc_source; + } + + @Override + public Map> parseMetaDataConfigs() { + Map> allConfigMap = new HashMap<>(); + List schemaList = getSchemaList(); + for (String schema : schemaList) { + Map configMap = new HashMap<>(); + configMap.put(ClientConstant.METADATA_TYPE, METADATA_TYPE); + StringBuilder sb = new StringBuilder("jdbc:sqlserver://"); + sb.append(config.getHostname()); + sb.append(":"); + sb.append(config.getPort()); + sb.append(";database="); + sb.append(config.getDatabase()); + configMap.put(ClientConstant.METADATA_NAME, sb.toString()); + configMap.put(ClientConstant.METADATA_URL, sb.toString()); + configMap.put(ClientConstant.METADATA_USERNAME, config.getUsername()); + configMap.put(ClientConstant.METADATA_PASSWORD, config.getPassword()); + allConfigMap.put(schema, configMap); + } + return allConfigMap; + } + + @Override + public String getSchemaFieldName() { + return "schema"; + } + + @Override + public String getSchema() { + return config.getDatabase(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sqlserver/SqlServerJsonDebeziumDeserializationSchema.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sqlserver/SqlServerJsonDebeziumDeserializationSchema.java new file mode 100644 index 0000000..7ab4621 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/sqlserver/SqlServerJsonDebeziumDeserializationSchema.java @@ -0,0 +1,63 @@ +package net.srt.flink.client.cdc.sqlserver; + +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.json.JsonConverter; +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.source.SourceRecord; +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.storage.ConverterType; 
+import com.ververica.cdc.debezium.DebeziumDeserializationSchema; +import org.apache.flink.api.common.typeinfo.BasicTypeInfo; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.util.Collector; + +import java.nio.charset.StandardCharsets; +import java.util.HashMap; +import java.util.Map; + +/** + * @version 1.0 + * @className: com.dlink.cdc.mysql.MysqlJsonDebeziumDeserializationSchema + * @Description: + * @author: jack zhong + */ +public class SqlServerJsonDebeziumDeserializationSchema implements DebeziumDeserializationSchema { + private static final long serialVersionUID = 1L; + private transient JsonConverter jsonConverter; + private final Boolean includeSchema; + private Map customConverterConfigs; + + public SqlServerJsonDebeziumDeserializationSchema() { + this(false); + } + + public SqlServerJsonDebeziumDeserializationSchema(Boolean includeSchema) { + this.includeSchema = includeSchema; + } + + public SqlServerJsonDebeziumDeserializationSchema(Boolean includeSchema, Map customConverterConfigs) { + this.includeSchema = includeSchema; + this.customConverterConfigs = customConverterConfigs; + } + + public void deserialize(SourceRecord record, Collector out) throws Exception { + if (this.jsonConverter == null) { + this.initializeJsonConverter(); + } + byte[] bytes = this.jsonConverter.fromConnectData(record.topic(), record.valueSchema(), record.value()); + out.collect(new String(bytes, StandardCharsets.UTF_8)); + } + + private void initializeJsonConverter() { + this.jsonConverter = new JsonConverter(); + HashMap configs = new HashMap(2); + configs.put("converter.type", ConverterType.VALUE.getName()); + configs.put("schemas.enable", this.includeSchema); + if (this.customConverterConfigs != null) { + configs.putAll(this.customConverterConfigs); + } + + this.jsonConverter.configure(configs); + } + + public TypeInformation getProducedType() { + return BasicTypeInfo.STRING_TYPE_INFO; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/starrocks/StarrocksSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/starrocks/StarrocksSinkBuilder.java new file mode 100644 index 0000000..ecc60c0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/cdc/starrocks/StarrocksSinkBuilder.java @@ -0,0 +1,133 @@ +package net.srt.flink.client.cdc.starrocks; + +import com.starrocks.connector.flink.row.sink.StarRocksTableRowTransformer; +import com.starrocks.connector.flink.table.sink.StarRocksDynamicSinkFunction; +import com.starrocks.connector.flink.table.sink.StarRocksSinkOptions; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.AbstractSinkBuilder; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.client.utils.ObjectConvertUtil; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.Table; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.table.api.TableSchema; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.TimestampData; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.DateType; +import org.apache.flink.table.types.logical.LogicalType; 
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.io.Serializable;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.time.ZoneId;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * StarrocksSinkBuilder
+ *
+ **/
+public class StarrocksSinkBuilder extends AbstractSinkBuilder implements Serializable {
+
+    public static final String KEY_WORD = "datastream-starrocks";
+    private static final long serialVersionUID = 8330362249137431824L;
+    private final ZoneId sinkZoneIdUTC = ZoneId.of("UTC");
+
+    public StarrocksSinkBuilder() {
+    }
+
+    public StarrocksSinkBuilder(FlinkCDCConfig config) {
+        super(config);
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public SinkBuilder create(FlinkCDCConfig config) {
+        return new StarrocksSinkBuilder(config);
+    }
+
+    @Override
+    public void addSink(
+            StreamExecutionEnvironment env,
+            DataStream<RowData> rowDataDataStream,
+            Table table,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList) {
+        try {
+            List<Column> columns = table.getColumns();
+            List<String> primaryKeys = new LinkedList<>();
+            String[] columnNames = new String[columns.size()];
+            for (int i = 0; i < columns.size(); i++) {
+                Column column = columns.get(i);
+                if (column.isKeyFlag()) {
+                    primaryKeys.add(column.getName());
+                }
+                columnNames[i] = column.getName();
+            }
+            String[] primaryKeyArrays = primaryKeys.stream().toArray(String[]::new);
+            DataType[] dataTypes = new DataType[columnTypeList.size()];
+            for (int i = 0; i < columnTypeList.size(); i++) {
+                LogicalType logicalType = columnTypeList.get(i);
+                String columnName = columnNameList.get(i);
+                if (primaryKeys.contains(columnName)) {
+                    logicalType = logicalType.copy(false);
+                }
+                dataTypes[i] = TypeConversions.fromLogicalToDataType(logicalType);
+            }
+            TableSchema tableSchema = TableSchema.builder().primaryKey(primaryKeyArrays).fields(columnNames, dataTypes).build();
+            Map<String, String> sink = config.getSink();
+            StarRocksSinkOptions.Builder builder = StarRocksSinkOptions.builder()
+                    .withProperty("jdbc-url", sink.get("jdbc-url"))
+                    .withProperty("load-url", sink.get("load-url"))
+                    .withProperty("username", sink.get("username"))
+                    .withProperty("password", sink.get("password"))
+                    .withProperty("table-name", getSinkTableName(table))
+                    .withProperty("database-name", getSinkSchemaName(table))
+                    .withProperty("sink.properties.format", "json")
+                    .withProperty("sink.properties.strip_outer_array", "true")
+                    // Set the sink parallelism; with parallelism greater than 1, consider how to guarantee data ordering
+                    .withProperty("sink.parallelism", "1");
+            sink.forEach((key, value) -> {
+                if (key.startsWith("sink.")) {
+                    builder.withProperty(key, value);
+                }
+            });
+            StarRocksDynamicSinkFunction<RowData> starrocksSinkFunction = new StarRocksDynamicSinkFunction<>(
+                    builder.build(),
+                    tableSchema,
+                    new StarRocksTableRowTransformer(TypeInformation.of(RowData.class))
+            );
+            rowDataDataStream.addSink(starrocksSinkFunction);
+            logger.info("handler connector name:{} sink successful.....", getHandle());
+        } catch (Exception ex) {
+            logger.error("handler connector name:{} sink ex:", getHandle(), ex);
+        }
+    }
+
+    @Override
+    protected Object convertValue(Object value, LogicalType logicalType) {
+        Object object = ObjectConvertUtil.convertValue(value, logicalType, sinkZoneIdUTC);
+        if (object == null) {
+            return null;
+        }
+        if (logicalType instanceof TimestampType && object instanceof LocalDateTime) {
+            return TimestampData.fromLocalDateTime((LocalDateTime) object);
+        } else if (logicalType instanceof DateType) {
+            if
(value instanceof Integer) { + return Instant.ofEpochSecond((int) value).atZone(sinkZoneIdUTC).toEpochSecond(); + } + return Instant.ofEpochMilli((long) value).atZone(sinkZoneIdUTC).toEpochSecond(); + } + return object; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/CustomTableEnvironmentImpl.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/CustomTableEnvironmentImpl.java new file mode 100644 index 0000000..660f304 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/CustomTableEnvironmentImpl.java @@ -0,0 +1,406 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.executor; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ObjectNode; +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.LineageRel; +import net.srt.flink.client.utils.FlinkStreamProgramWithoutPhysical; +import net.srt.flink.client.utils.LineageContext; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.result.SqlExplainResult; +import org.apache.flink.api.common.RuntimeExecutionMode; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.api.dag.Transformation; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ExecutionOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.runtime.jobgraph.jsonplan.JsonPlanGenerator; +import org.apache.flink.runtime.rest.messages.JobPlanInfo; +import org.apache.flink.streaming.api.TimeCharacteristic; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.graph.JSONGenerator; +import org.apache.flink.streaming.api.graph.StreamGraph; +import org.apache.flink.table.api.EnvironmentSettings; +import org.apache.flink.table.api.ExplainDetail; +import org.apache.flink.table.api.Table; +import org.apache.flink.table.api.TableConfig; +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.api.ValidationException; +import org.apache.flink.table.api.internal.TableEnvironmentImpl; +import org.apache.flink.table.catalog.CatalogManager; +import org.apache.flink.table.catalog.FunctionCatalog; +import org.apache.flink.table.catalog.GenericInMemoryCatalog; +import 
org.apache.flink.table.delegation.Executor; +import org.apache.flink.table.delegation.ExecutorFactory; +import org.apache.flink.table.delegation.Planner; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.expressions.ExpressionParser; +import org.apache.flink.table.factories.FactoryUtil; +import org.apache.flink.table.factories.PlannerFactoryUtil; +import org.apache.flink.table.module.ModuleManager; +import org.apache.flink.table.operations.ExplainOperation; +import org.apache.flink.table.operations.JavaDataStreamQueryOperation; +import org.apache.flink.table.operations.ModifyOperation; +import org.apache.flink.table.operations.Operation; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.operations.command.ResetOperation; +import org.apache.flink.table.operations.command.SetOperation; +import org.apache.flink.table.planner.delegation.DefaultExecutor; +import org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram; +import org.apache.flink.table.typeutils.FieldInfoUtils; + +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +/** + * 定制TableEnvironmentImpl + * + * @author wenmo + * @since 2021/10/22 10:02 + **/ +public class CustomTableEnvironmentImpl extends TableEnvironmentImpl implements CustomTableEnvironment { + + private final StreamExecutionEnvironment executionEnvironment; + private final FlinkChainedProgram flinkChainedProgram; + + public CustomTableEnvironmentImpl( + CatalogManager catalogManager, + ModuleManager moduleManager, + FunctionCatalog functionCatalog, + TableConfig tableConfig, + StreamExecutionEnvironment executionEnvironment, + Planner planner, + Executor executor, + boolean isStreamingMode, + ClassLoader userClassLoader) { + super( + catalogManager, + moduleManager, + tableConfig, + executor, + functionCatalog, + planner, + isStreamingMode, + userClassLoader); + this.executionEnvironment = executionEnvironment; + this.flinkChainedProgram = FlinkStreamProgramWithoutPhysical.buildProgram((Configuration) executionEnvironment.getConfiguration()); + } + + public static CustomTableEnvironmentImpl create(StreamExecutionEnvironment executionEnvironment) { + return create(executionEnvironment, EnvironmentSettings.newInstance().build(), TableConfig.getDefault()); + } + + public static CustomTableEnvironmentImpl createBatch(StreamExecutionEnvironment executionEnvironment) { + Configuration configuration = new Configuration(); + configuration.set(ExecutionOptions.RUNTIME_MODE, RuntimeExecutionMode.BATCH); + TableConfig tableConfig = new TableConfig(); + tableConfig.addConfiguration(configuration); + return create(executionEnvironment, EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build(), tableConfig); + } + + public static CustomTableEnvironmentImpl create( + StreamExecutionEnvironment executionEnvironment, + EnvironmentSettings settings, + TableConfig tableConfig) { + + // temporary solution until FLINK-15635 is fixed + final ClassLoader classLoader = Thread.currentThread().getContextClassLoader(); + + final ModuleManager moduleManager = new ModuleManager(); + + final CatalogManager catalogManager = + CatalogManager.newBuilder() + .classLoader(classLoader) + .config(tableConfig.getConfiguration()) + .defaultCatalog( + settings.getBuiltInCatalogName(), + new GenericInMemoryCatalog( + settings.getBuiltInCatalogName(), + 
settings.getBuiltInDatabaseName())) + .executionConfig(executionEnvironment.getConfig()) + .build(); + + final FunctionCatalog functionCatalog = + new FunctionCatalog(tableConfig, catalogManager, moduleManager); + + final Executor executor = + lookupExecutor(classLoader, settings.getExecutor(), executionEnvironment); + + final Planner planner = + PlannerFactoryUtil.createPlanner( + settings.getPlanner(), + executor, + tableConfig, + catalogManager, + functionCatalog); + + return new CustomTableEnvironmentImpl( + catalogManager, + moduleManager, + functionCatalog, + tableConfig, + executionEnvironment, + planner, + executor, + settings.isStreamingMode(), + classLoader); + } + + private static Executor lookupExecutor( + ClassLoader classLoader, + String executorIdentifier, + StreamExecutionEnvironment executionEnvironment) { + try { + final ExecutorFactory executorFactory = + FactoryUtil.discoverFactory( + classLoader, ExecutorFactory.class, executorIdentifier); + final Method createMethod = + executorFactory + .getClass() + .getMethod("create", StreamExecutionEnvironment.class); + + return (Executor) createMethod.invoke(executorFactory, executionEnvironment); + } catch (Exception e) { + throw new TableException( + "Could not instantiate the executor. Make sure a planner module is on the classpath", + e); + } + } + + @Override + public ObjectNode getStreamGraph(String statement) { + List operations = super.getParser().parse(statement); + if (operations.size() != 1) { + throw new TableException("Unsupported SQL query! explainSql() only accepts a single SQL query."); + } else { + List modifyOperations = new ArrayList<>(); + for (int i = 0; i < operations.size(); i++) { + if (operations.get(i) instanceof ModifyOperation) { + modifyOperations.add((ModifyOperation) operations.get(i)); + } + } + List> trans = super.planner.translate(modifyOperations); + if (execEnv instanceof DefaultExecutor) { + StreamGraph streamGraph = ((DefaultExecutor) execEnv).getExecutionEnvironment().generateStreamGraph(trans); + JSONGenerator jsonGenerator = new JSONGenerator(streamGraph); + String json = jsonGenerator.getJSON(); + ObjectMapper mapper = new ObjectMapper(); + ObjectNode objectNode = mapper.createObjectNode(); + try { + objectNode = (ObjectNode) mapper.readTree(json); + } catch (JsonProcessingException e) { + e.printStackTrace(); + } finally { + return objectNode; + } + } else { + throw new TableException("Unsupported SQL query! 
explainSql() need a single SQL to query."); + } + } + } + + @Override + public JobPlanInfo getJobPlanInfo(List statements) { + return new JobPlanInfo(JsonPlanGenerator.generatePlan(getJobGraphFromInserts(statements))); + } + + @Override + public StreamGraph getStreamGraphFromInserts(List statements) { + List modifyOperations = new ArrayList(); + for (String statement : statements) { + List operations = getParser().parse(statement); + if (operations.size() != 1) { + throw new TableException("Only single statement is supported."); + } else { + Operation operation = operations.get(0); + if (operation instanceof ModifyOperation) { + modifyOperations.add((ModifyOperation) operation); + } else { + throw new TableException("Only insert statement is supported now."); + } + } + } + List> trans = getPlanner().translate(modifyOperations); + if (execEnv instanceof DefaultExecutor) { + StreamGraph streamGraph = ((DefaultExecutor) execEnv).getExecutionEnvironment().generateStreamGraph(trans); + if (tableConfig.getConfiguration().containsKey(PipelineOptions.NAME.key())) { + streamGraph.setJobName(tableConfig.getConfiguration().getString(PipelineOptions.NAME)); + } + return streamGraph; + } else { + throw new TableException("Unsupported SQL query! ExecEnv need a ExecutorBase."); + } + } + + @Override + public JobGraph getJobGraphFromInserts(List statements) { + return getStreamGraphFromInserts(statements).getJobGraph(); + } + + @Override + public SqlExplainResult explainSqlRecord(String statement, ExplainDetail... extraDetails) { + SqlExplainResult record = new SqlExplainResult(); + List operations = getParser().parse(statement); + record.setParseTrue(true); + if (operations.size() != 1) { + throw new TableException( + "Unsupported SQL query! explainSql() only accepts a single SQL query."); + } + + Operation operation = operations.get(0); + if (operation instanceof ModifyOperation) { + record.setType("Modify DML"); + } else if (operation instanceof ExplainOperation) { + record.setType("Explain DML"); + } else if (operation instanceof QueryOperation) { + record.setType("Query DML"); + } else { + record.setExplain(operation.asSummaryString()); + record.setType("DDL"); + } + record.setExplainTrue(true); + if ("DDL".equals(record.getType())) { + //record.setExplain("DDL语句不进行解释。"); + return record; + } + record.setExplain(planner.explain(operations, extraDetails)); + return record; + } + + @Override + public boolean parseAndLoadConfiguration(String statement, StreamExecutionEnvironment environment, Map setMap) { + List operations = getParser().parse(statement); + for (Operation operation : operations) { + if (operation instanceof SetOperation) { + callSet((SetOperation) operation, environment, setMap); + return true; + } else if (operation instanceof ResetOperation) { + callReset((ResetOperation) operation, environment, setMap); + return true; + } + } + return false; + } + + private void callSet(SetOperation setOperation, StreamExecutionEnvironment environment, Map setMap) { + if (setOperation.getKey().isPresent() && setOperation.getValue().isPresent()) { + String key = setOperation.getKey().get().trim(); + String value = setOperation.getValue().get().trim(); + if (Asserts.isNullString(key) || Asserts.isNullString(value)) { + return; + } + Map confMap = new HashMap<>(); + confMap.put(key, value); + setMap.put(key, value); + Configuration configuration = Configuration.fromMap(confMap); + environment.getConfig().configure(configuration, null); + getConfig().addConfiguration(configuration); + } + } + + private 
void callReset(ResetOperation resetOperation, StreamExecutionEnvironment environment, Map setMap) { + if (resetOperation.getKey().isPresent()) { + String key = resetOperation.getKey().get().trim(); + if (Asserts.isNullString(key)) { + return; + } + Map confMap = new HashMap<>(); + confMap.put(key, null); + setMap.remove(key); + Configuration configuration = Configuration.fromMap(confMap); + environment.getConfig().configure(configuration, null); + getConfig().addConfiguration(configuration); + } else { + setMap.clear(); + } + } + + public Table fromDataStream(DataStream dataStream, Expression... fields) { + JavaDataStreamQueryOperation queryOperation = + asQueryOperation(dataStream, Optional.of(Arrays.asList(fields))); + + return createTable(queryOperation); + } + + public Table fromDataStream(DataStream dataStream, String fields) { + List expressions = ExpressionParser.parseExpressionList(fields); + return fromDataStream(dataStream, expressions.toArray(new Expression[0])); + } + + @Override + public void createTemporaryView(String path, DataStream dataStream, String fields) { + createTemporaryView(path, fromDataStream(dataStream, fields)); + } + + @Override + public List getLineage(String statement) { + LineageContext lineageContext = new LineageContext(flinkChainedProgram, this); + return lineageContext.getLineage(statement); + } + + @Override + public void createTemporaryView( + String path, DataStream dataStream, Expression... fields) { + createTemporaryView(path, fromDataStream(dataStream, fields)); + } + + private JavaDataStreamQueryOperation asQueryOperation( + DataStream dataStream, Optional> fields) { + TypeInformation streamType = dataStream.getType(); + + // get field names and types for all non-replaced fields + FieldInfoUtils.TypeInfoSchema typeInfoSchema = + fields.map( + f -> { + FieldInfoUtils.TypeInfoSchema fieldsInfo = + FieldInfoUtils.getFieldsInfo( + streamType, f.toArray(new Expression[0])); + + // check if event-time is enabled + validateTimeCharacteristic(fieldsInfo.isRowtimeDefined()); + return fieldsInfo; + }) + .orElseGet(() -> FieldInfoUtils.getFieldsInfo(streamType)); + + return new JavaDataStreamQueryOperation<>( + dataStream, typeInfoSchema.getIndices(), typeInfoSchema.toResolvedSchema()); + } + + private void validateTimeCharacteristic(boolean isRowtimeDefined) { + if (isRowtimeDefined + && executionEnvironment.getStreamTimeCharacteristic() + != TimeCharacteristic.EventTime) { + throw new ValidationException( + String.format( + "A rowtime attribute requires an EventTime time characteristic in stream environment. But is: %s", + executionEnvironment.getStreamTimeCharacteristic())); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/CustomTableResultImpl.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/CustomTableResultImpl.java new file mode 100644 index 0000000..d72215a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/CustomTableResultImpl.java @@ -0,0 +1,426 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.executor; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.core.execution.JobClient; +import org.apache.flink.table.api.DataTypes; +import org.apache.flink.table.api.ResultKind; +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.table.catalog.Column; +import org.apache.flink.table.catalog.ResolvedSchema; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.utils.PrintUtils; +import org.apache.flink.types.Row; +import org.apache.flink.util.CloseableIterator; +import org.apache.flink.util.Preconditions; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.annotation.Nullable; +import java.io.PrintWriter; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Optional; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; + +/** + * 定制TableResultImpl + * + * @author wenmo + * @since 2021/10/22 10:02 + **/ +@Internal +public class CustomTableResultImpl implements TableResult { + protected static final Logger logger = LoggerFactory.getLogger(CustomTableResultImpl.class); + public static final TableResult TABLE_RESULT_OK = + CustomTableResultImpl.builder() + .resultKind(ResultKind.SUCCESS) + .schema(ResolvedSchema.of(Column.physical("result", DataTypes.STRING()))) + .data(Collections.singletonList(Row.of("OK"))) + .build(); + + private final JobClient jobClient; + private final ResolvedSchema resolvedSchema; + private final ResultKind resultKind; + private final CloseableRowIteratorWrapper data; + private final PrintStyle printStyle; + private final ZoneId sessionTimeZone; + + private CustomTableResultImpl( + @Nullable JobClient jobClient, + ResolvedSchema resolvedSchema, + ResultKind resultKind, + CloseableIterator data, + PrintStyle printStyle, + ZoneId sessionTimeZone) { + this.jobClient = jobClient; + this.resolvedSchema = + Preconditions.checkNotNull(resolvedSchema, "resolvedSchema should not be null"); + this.resultKind = Preconditions.checkNotNull(resultKind, "resultKind should not be null"); + Preconditions.checkNotNull(data, "data should not be null"); + this.data = new CloseableRowIteratorWrapper(data); + this.printStyle = Preconditions.checkNotNull(printStyle, "printStyle should not be null"); + this.sessionTimeZone = + Preconditions.checkNotNull(sessionTimeZone, "sessionTimeZone should not be null"); + } + + public static TableResult buildTableResult(List fields, List rows) { + Builder builder = builder().resultKind(ResultKind.SUCCESS); + if (fields.size() > 0) { + List columnNames = new ArrayList<>(); + List 
columnTypes = new ArrayList<>(); + for (int i = 0; i < fields.size(); i++) { + columnNames.add(fields.get(i).getName()); + columnTypes.add(fields.get(i).getType()); + } + builder.schema(ResolvedSchema.physical(columnNames, columnTypes)).data(rows); + } + return builder.build(); + } + + @Override + public Optional getJobClient() { + return Optional.ofNullable(jobClient); + } + + @Override + public void await() throws InterruptedException, ExecutionException { + try { + awaitInternal(-1, TimeUnit.MILLISECONDS); + } catch (TimeoutException e) { + // do nothing + } + } + + @Override + public void await(long timeout, TimeUnit unit) + throws InterruptedException, ExecutionException, TimeoutException { + awaitInternal(timeout, unit); + } + + private void awaitInternal(long timeout, TimeUnit unit) + throws InterruptedException, ExecutionException, TimeoutException { + if (jobClient == null) { + return; + } + + ExecutorService executor = + Executors.newFixedThreadPool(1, r -> new Thread(r, "TableResult-await-thread")); + try { + CompletableFuture future = + CompletableFuture.runAsync( + () -> { + while (!data.isFirstRowReady()) { + try { + Thread.sleep(100); + } catch (InterruptedException e) { + throw new TableException("Thread is interrupted"); + } + } + }, + executor); + + if (timeout >= 0) { + future.get(timeout, unit); + } else { + future.get(); + } + } finally { + executor.shutdown(); + } + } + + @Override + public ResolvedSchema getResolvedSchema() { + return resolvedSchema; + } + + @Override + public ResultKind getResultKind() { + return resultKind; + } + + @Override + public CloseableIterator collect() { + return data; + } + + @Override + public void print() { + Iterator it = collect(); + if (printStyle instanceof TableauStyle) { + int maxColumnWidth = ((TableauStyle) printStyle).getMaxColumnWidth(); + String nullColumn = ((TableauStyle) printStyle).getNullColumn(); + boolean deriveColumnWidthByType = + ((TableauStyle) printStyle).isDeriveColumnWidthByType(); + boolean printRowKind = ((TableauStyle) printStyle).isPrintRowKind(); + PrintUtils.printAsTableauForm( + getResolvedSchema(), + it, + new PrintWriter(System.out), + maxColumnWidth, + nullColumn, + deriveColumnWidthByType, + printRowKind, + sessionTimeZone); + } else if (printStyle instanceof RawContentStyle) { + while (it.hasNext()) { + logger.info( + String.join( + ",", + PrintUtils.rowToString( + it.next(), getResolvedSchema(), sessionTimeZone))); + } + } else { + throw new TableException("Unsupported print style: " + printStyle); + } + } + + public static Builder builder() { + return new Builder(); + } + + /** + * Builder for creating a {@link CustomTableResultImpl}. + */ + public static class Builder { + private JobClient jobClient = null; + private ResolvedSchema resolvedSchema = null; + private ResultKind resultKind = null; + private CloseableIterator data = null; + private PrintStyle printStyle = + PrintStyle.tableau(Integer.MAX_VALUE, PrintUtils.NULL_COLUMN, false, false); + private ZoneId sessionTimeZone = ZoneId.of("UTC"); + + private Builder() { + } + + /** + * Specifies job client which associates the submitted Flink job. + * + * @param jobClient a {@link JobClient} for the submitted Flink job. + */ + public Builder jobClient(JobClient jobClient) { + this.jobClient = jobClient; + return this; + } + + /** + * Specifies schema of the execution result. + * + * @param resolvedSchema a {@link ResolvedSchema} for the execution result. 
+ */ + public Builder schema(ResolvedSchema resolvedSchema) { + Preconditions.checkNotNull(resolvedSchema, "resolvedSchema should not be null"); + this.resolvedSchema = resolvedSchema; + return this; + } + + /** + * Specifies result kind of the execution result. + * + * @param resultKind a {@link ResultKind} for the execution result. + */ + public Builder resultKind(ResultKind resultKind) { + Preconditions.checkNotNull(resultKind, "resultKind should not be null"); + this.resultKind = resultKind; + return this; + } + + /** + * Specifies an row iterator as the execution result. + * + * @param rowIterator a row iterator as the execution result. + */ + public Builder data(CloseableIterator rowIterator) { + Preconditions.checkNotNull(rowIterator, "rowIterator should not be null"); + this.data = rowIterator; + return this; + } + + /** + * Specifies an row list as the execution result. + * + * @param rowList a row list as the execution result. + */ + public Builder data(List rowList) { + Preconditions.checkNotNull(rowList, "listRows should not be null"); + this.data = CloseableIterator.adapterForIterator(rowList.iterator()); + return this; + } + + /** + * Specifies print style. Default is {@link TableauStyle} with max integer column width. + */ + public Builder setPrintStyle(PrintStyle printStyle) { + Preconditions.checkNotNull(printStyle, "printStyle should not be null"); + this.printStyle = printStyle; + return this; + } + + /** + * Specifies session time zone. + */ + public Builder setSessionTimeZone(ZoneId sessionTimeZone) { + Preconditions.checkNotNull(sessionTimeZone, "sessionTimeZone should not be null"); + this.sessionTimeZone = sessionTimeZone; + return this; + } + + /** + * Returns a {@link TableResult} instance. + */ + public TableResult build() { + return new CustomTableResultImpl( + jobClient, resolvedSchema, resultKind, data, printStyle, sessionTimeZone); + } + } + + /** + * Root interface for all print styles. + */ + public interface PrintStyle { + /** + * Create a tableau print style with given max column width, null column, change mode + * indicator and a flag to indicate whether the column width is derived from type (true) or + * content (false), which prints the result schema and content as tableau form. + */ + static PrintStyle tableau( + int maxColumnWidth, + String nullColumn, + boolean deriveColumnWidthByType, + boolean printRowKind) { + Preconditions.checkArgument( + maxColumnWidth > 0, "maxColumnWidth should be greater than 0"); + Preconditions.checkNotNull(nullColumn, "nullColumn should not be null"); + return new TableauStyle( + maxColumnWidth, nullColumn, deriveColumnWidthByType, printRowKind); + } + + /** + * Create a raw content print style, which only print the result content as raw form. column + * delimiter is ",", row delimiter is "\n". + */ + static PrintStyle rawContent() { + return new RawContentStyle(); + } + } + + /** + * print the result schema and content as tableau form. + */ + private static final class TableauStyle implements PrintStyle { + /** + * A flag to indicate whether the column width is derived from type (true) or content + * (false). + */ + private final boolean deriveColumnWidthByType; + + private final int maxColumnWidth; + private final String nullColumn; + /** + * A flag to indicate whether print row kind info. 
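+         * When enabled, Flink's tableau printer emits an extra change-kind column
+         * (+I, -U, +U, -D) ahead of the data columns.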
+ */ + private final boolean printRowKind; + + private TableauStyle( + int maxColumnWidth, + String nullColumn, + boolean deriveColumnWidthByType, + boolean printRowKind) { + this.deriveColumnWidthByType = deriveColumnWidthByType; + this.maxColumnWidth = maxColumnWidth; + this.nullColumn = nullColumn; + this.printRowKind = printRowKind; + } + + public boolean isDeriveColumnWidthByType() { + return deriveColumnWidthByType; + } + + int getMaxColumnWidth() { + return maxColumnWidth; + } + + String getNullColumn() { + return nullColumn; + } + + public boolean isPrintRowKind() { + return printRowKind; + } + } + + /** + * only print the result content as raw form. column delimiter is ",", row delimiter is "\n". + */ + private static final class RawContentStyle implements PrintStyle { + } + + /** + * A {@link CloseableIterator} wrapper class that can return whether the first row is ready. + * + *
The first row is ready when {@link #hasNext} method returns true or {@link #next()} method + * returns a row. The execution order of {@link TableResult#collect} method and {@link + * TableResult#await()} may be arbitrary, this class will record whether the first row is ready + * (or accessed). + */ + private static final class CloseableRowIteratorWrapper implements CloseableIterator { + private final CloseableIterator iterator; + private boolean isFirstRowReady = false; + + private CloseableRowIteratorWrapper(CloseableIterator iterator) { + this.iterator = iterator; + } + + @Override + public void close() throws Exception { + iterator.close(); + } + + @Override + public boolean hasNext() { + boolean hasNext = iterator.hasNext(); + isFirstRowReady = isFirstRowReady || hasNext; + return hasNext; + } + + @Override + public Row next() { + Row next = iterator.next(); + isFirstRowReady = true; + return next; + } + + public boolean isFirstRowReady() { + return isFirstRowReady || hasNext(); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/TableSchemaField.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/TableSchemaField.java new file mode 100644 index 0000000..17e707d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/executor/TableSchemaField.java @@ -0,0 +1,52 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.executor; + +import org.apache.flink.table.types.DataType; + +/** + * @author wenmo + * @since 2021/10/22 10:02 + **/ +public class TableSchemaField { + private String name; + private DataType type; + + public TableSchemaField(String name, DataType type) { + this.name = name; + this.type = type; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public DataType getType() { + return type; + } + + public void setType(DataType type) { + this.type = type; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/FlinkStreamProgramWithoutPhysical.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/FlinkStreamProgramWithoutPhysical.java new file mode 100644 index 0000000..56da56c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/FlinkStreamProgramWithoutPhysical.java @@ -0,0 +1,205 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.utils; + +import org.apache.calcite.plan.Convention; +import org.apache.calcite.plan.hep.HepMatchOrder; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.table.api.config.OptimizerConfigOptions; +import org.apache.flink.table.planner.plan.nodes.FlinkConventions; +import org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram; +import org.apache.flink.table.planner.plan.optimize.program.FlinkDecorrelateProgram; +import org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgramBuilder; +import org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgramBuilder; +import org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgramBuilder; +import org.apache.flink.table.planner.plan.optimize.program.HEP_RULES_EXECUTION_TYPE; +import org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets; + +/** + * FlinkStreamProgramWithoutPhysical + * + * @author wenmo + * @since 2022/8/20 23:33 + */ +public class FlinkStreamProgramWithoutPhysical { + + private static final String SUBQUERY_REWRITE = "subquery_rewrite"; + private static final String TEMPORAL_JOIN_REWRITE = "temporal_join_rewrite"; + private static final String DECORRELATE = "decorrelate"; + private static final String DEFAULT_REWRITE = "default_rewrite"; + private static final String PREDICATE_PUSHDOWN = "predicate_pushdown"; + private static final String JOIN_REORDER = "join_reorder"; + private static final String PROJECT_REWRITE = "project_rewrite"; + private static final String LOGICAL = "logical"; + private static final String LOGICAL_REWRITE = "logical_rewrite"; + + public static FlinkChainedProgram buildProgram(Configuration config) { + FlinkChainedProgram chainedProgram = new FlinkChainedProgram(); + + // rewrite sub-queries to joins + chainedProgram.addLast( + SUBQUERY_REWRITE, + FlinkGroupProgramBuilder.newBuilder() + // rewrite QueryOperationCatalogViewTable before rewriting sub-queries + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.TABLE_REF_RULES()) + .build(), "convert table references before rewriting sub-queries to semi-join") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.SEMI_JOIN_RULES()) + .build(), "rewrite sub-queries to semi-join") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.TABLE_SUBQUERY_RULES()) + .build(), "sub-queries remove") + // convert 
RelOptTableImpl (which exists in SubQuery before) to FlinkRelOptTable + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.TABLE_REF_RULES()) + .build(), "convert table references after sub-queries removed") + .build()); + + // rewrite special temporal join plan + chainedProgram.addLast( + TEMPORAL_JOIN_REWRITE, + FlinkGroupProgramBuilder.newBuilder() + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.EXPAND_PLAN_RULES()) + .build(), + "convert correlate to temporal table join") + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.POST_EXPAND_CLEAN_UP_RULES()) + .build(), + "convert enumerable table scan") + .build()); + + // query decorrelation + chainedProgram.addLast(DECORRELATE, + FlinkGroupProgramBuilder.newBuilder() + // rewrite before decorrelation + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PRE_DECORRELATION_RULES()) + .build(), + "pre-rewrite before decorrelation") + .addProgram(new FlinkDecorrelateProgram(), "") + .build()); + + // default rewrite, includes: predicate simplification, expression reduction, window + // properties rewrite, etc. + chainedProgram.addLast( + DEFAULT_REWRITE, + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.DEFAULT_REWRITE_RULES()) + .build()); + + // rule based optimization: push down predicate(s) in where clause, so it only needs to read + // the required data + chainedProgram.addLast( + PREDICATE_PUSHDOWN, + FlinkGroupProgramBuilder.newBuilder() + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.FILTER_PREPARE_RULES()) + .build(), + "filter rules") + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.FILTER_TABLESCAN_PUSHDOWN_RULES()) + .build(), + "push predicate into table scan") + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PRUNE_EMPTY_RULES()) + .build(), + "prune empty after predicate push down") + .build()); + + // join reorder + if (config.getBoolean(OptimizerConfigOptions.TABLE_OPTIMIZER_JOIN_REORDER_ENABLED)) { + chainedProgram.addLast( + JOIN_REORDER, + FlinkGroupProgramBuilder.newBuilder() + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.JOIN_REORDER_PREPARE_RULES()) + .build(), "merge join into MultiJoin") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + 
.setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.JOIN_REORDER_RULES()) + .build(), "do join reorder") + .build()); + } + + // project rewrite + chainedProgram.addLast( + PROJECT_REWRITE, + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PROJECT_RULES()) + .build()); + + // optimize the logical plan + chainedProgram.addLast( + LOGICAL, + FlinkVolcanoProgramBuilder.newBuilder() + .add(FlinkStreamRuleSets.LOGICAL_OPT_RULES()) + .setRequiredOutputTraits(new Convention.Impl[]{ + FlinkConventions.LOGICAL() + }) + .build()); + + // logical rewrite + chainedProgram.addLast( + LOGICAL_REWRITE, + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.LOGICAL_REWRITE()) + .build()); + + return chainedProgram; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/FlinkUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/FlinkUtil.java new file mode 100644 index 0000000..6e39f4c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/FlinkUtil.java @@ -0,0 +1,67 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.client.utils; + +import org.apache.flink.api.common.JobID; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.table.catalog.CatalogManager; +import org.apache.flink.table.catalog.ObjectIdentifier; + +import java.util.ArrayList; +import java.util.List; +import java.util.Optional; +import java.util.concurrent.ExecutionException; + +/** + * FlinkUtil + * + * @author wenmo + * @since 2021/10/22 10:02 + */ +public class FlinkUtil { + + public static List getFieldNamesFromCatalogManager(CatalogManager catalogManager, String catalog, String database, String table) { + Optional tableOpt = catalogManager.getTable( + ObjectIdentifier.of(catalog, database, table) + ); + if (tableOpt.isPresent()) { + return tableOpt.get().getResolvedSchema().getColumnNames(); + } else { + return new ArrayList(); + } + } + + public static List catchColumn(TableResult tableResult) { + return tableResult.getResolvedSchema().getColumnNames(); + } + + public static String triggerSavepoint(ClusterClient clusterClient, String jobId, String savePoint) throws ExecutionException, InterruptedException { + return clusterClient.triggerSavepoint(JobID.fromHexString(jobId), savePoint).get().toString(); + } + + public static String stopWithSavepoint(ClusterClient clusterClient, String jobId, String savePoint) throws ExecutionException, InterruptedException { + return clusterClient.stopWithSavepoint(JobID.fromHexString(jobId), true, savePoint).get().toString(); + } + + public static String cancelWithSavepoint(ClusterClient clusterClient, String jobId, String savePoint) throws ExecutionException, InterruptedException { + return clusterClient.cancelWithSavepoint(JobID.fromHexString(jobId), savePoint).get().toString(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/LineageContext.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/LineageContext.java new file mode 100644 index 0000000..4e5206b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/LineageContext.java @@ -0,0 +1,214 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.client.utils; + +import net.srt.flink.client.base.model.LineageRel; +import org.apache.calcite.plan.RelOptTable; +import org.apache.calcite.rel.RelNode; +import org.apache.calcite.rel.metadata.RelColumnOrigin; +import org.apache.calcite.rel.metadata.RelMetadataQuery; +import org.apache.commons.collections.CollectionUtils; +import org.apache.flink.api.java.tuple.Tuple2; +import org.apache.flink.table.api.TableConfig; +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.api.ValidationException; +import org.apache.flink.table.api.internal.TableEnvironmentImpl; +import org.apache.flink.table.catalog.CatalogManager; +import org.apache.flink.table.catalog.FunctionCatalog; +import org.apache.flink.table.operations.CatalogSinkModifyOperation; +import org.apache.flink.table.operations.Operation; +import org.apache.flink.table.planner.calcite.FlinkRelBuilder; +import org.apache.flink.table.planner.calcite.SqlExprToRexConverterFactory; +import org.apache.flink.table.planner.delegation.PlannerBase; +import org.apache.flink.table.planner.operations.PlannerQueryOperation; +import org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram; +import org.apache.flink.table.planner.plan.optimize.program.StreamOptimizeContext; +import org.apache.flink.table.planner.plan.schema.TableSourceTable; +import org.apache.flink.table.planner.plan.trait.MiniBatchInterval; + +import java.util.ArrayList; +import java.util.List; +import java.util.Set; + +/** + * LineageContext + * + * @author baisong + * @since 2022/8/6 11:06 + */ +public class LineageContext { + + private final FlinkChainedProgram flinkChainedProgram; + private final TableEnvironmentImpl tableEnv; + + public LineageContext(FlinkChainedProgram flinkChainedProgram, TableEnvironmentImpl tableEnv) { + this.flinkChainedProgram = flinkChainedProgram; + this.tableEnv = tableEnv; + } + + public List getLineage(String statement) { + // 1. Generate original relNode tree + Tuple2 parsed = parseStatement(statement); + String sinkTable = parsed.getField(0); + RelNode oriRelNode = parsed.getField(1); + + // 2. Optimize original relNode to generate Optimized Logical Plan + RelNode optRelNode = optimize(oriRelNode); + + // 3. Build lineage based from RelMetadataQuery + return buildFiledLineageResult(sinkTable, optRelNode); + } + + private Tuple2 parseStatement(String sql) { + List operations = tableEnv.getParser().parse(sql); + + if (operations.size() != 1) { + throw new TableException( + "Unsupported SQL query! only accepts a single SQL statement."); + } + Operation operation = operations.get(0); + if (operation instanceof CatalogSinkModifyOperation) { + CatalogSinkModifyOperation sinkOperation = (CatalogSinkModifyOperation) operation; + + PlannerQueryOperation queryOperation = (PlannerQueryOperation) sinkOperation.getChild(); + RelNode relNode = queryOperation.getCalciteTree(); + return new Tuple2<>( + sinkOperation.getTableIdentifier().asSummaryString(), + relNode); + } else { + throw new TableException("Only insert is supported now."); + } + } + + /** + * Calling each program's optimize method in sequence. 
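+     * <p>The StreamOptimizeContext supplied below only drives the logical rewrite
+     * phases (the chained program is built without physical programs), which is
+     * all that column-origin analysis requires.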
+ */ + private RelNode optimize(RelNode relNode) { + return flinkChainedProgram.optimize(relNode, new StreamOptimizeContext() { + + @Override + public boolean isBatchMode() { + return false; + } + + @Override + public TableConfig getTableConfig() { + return tableEnv.getConfig(); + } + + @Override + public FunctionCatalog getFunctionCatalog() { + return getPlanner().getFlinkContext().getFunctionCatalog(); + } + + @Override + public CatalogManager getCatalogManager() { + return tableEnv.getCatalogManager(); + } + + @Override + public SqlExprToRexConverterFactory getSqlExprToRexConverterFactory() { + return getPlanner().getFlinkContext().getSqlExprToRexConverterFactory(); + } + + @Override + public C unwrap(Class clazz) { + return getPlanner().getFlinkContext().unwrap(clazz); + } + + @Override + public FlinkRelBuilder getFlinkRelBuilder() { + return getPlanner().getRelBuilder(); + } + + @Override + public boolean needFinalTimeIndicatorConversion() { + return true; + } + + @Override + public boolean isUpdateBeforeRequired() { + return false; + } + + @Override + public MiniBatchInterval getMiniBatchInterval() { + return MiniBatchInterval.NONE; + } + + private PlannerBase getPlanner() { + return (PlannerBase) tableEnv.getPlanner(); + } + + }); + } + + /** + * Check the size of query and sink fields match + */ + private void validateSchema(String sinkTable, RelNode relNode, List sinkFieldList) { + List queryFieldList = relNode.getRowType().getFieldNames(); + if (queryFieldList.size() != sinkFieldList.size()) { + throw new ValidationException( + String.format( + "Column types of query result and sink for %s do not match.\n" + + "Query schema: %s\n" + + "Sink schema: %s", + sinkTable, queryFieldList, sinkFieldList)); + } + } + + private List buildFiledLineageResult(String sinkTable, RelNode optRelNode) { + // target columns + List targetColumnList = tableEnv.from(sinkTable) + .getResolvedSchema() + .getColumnNames(); + + // check the size of query and sink fields match + validateSchema(sinkTable, optRelNode, targetColumnList); + + RelMetadataQuery metadataQuery = optRelNode.getCluster().getMetadataQuery(); + List resultList = new ArrayList<>(); + + for (int index = 0; index < targetColumnList.size(); index++) { + String targetColumn = targetColumnList.get(index); + + Set relColumnOriginSet = metadataQuery.getColumnOrigins(optRelNode, index); + + if (CollectionUtils.isNotEmpty(relColumnOriginSet)) { + for (RelColumnOrigin relColumnOrigin : relColumnOriginSet) { + // table + RelOptTable table = relColumnOrigin.getOriginTable(); + String sourceTable = String.join(".", table.getQualifiedName()); + + // filed + int ordinal = relColumnOrigin.getOriginColumnOrdinal(); + List fieldNames = ((TableSourceTable) table).catalogTable().getResolvedSchema() + .getColumnNames(); + String sourceColumn = fieldNames.get(ordinal); + + // add record + resultList.add(LineageRel.build(sourceTable, sourceColumn, sinkTable, targetColumn)); + } + } + } + return resultList; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/ObjectConvertUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/ObjectConvertUtil.java new file mode 100644 index 0000000..9605296 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/net/srt/flink/client/utils/ObjectConvertUtil.java @@ -0,0 +1,66 @@ +package net.srt.flink.client.utils; + +import 
org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.VarBinaryType;
+
+import javax.xml.bind.DatatypeConverter;
+import java.math.BigDecimal;
+import java.time.Instant;
+import java.time.ZoneId;
+
+/**
+ * @className: net.srt.flink.client.utils.ObjectConvertUtil
+ * @Description: Converts raw Flink CDC field values into Java objects that match the given LogicalType.
+ * @author: jack zhong
+ */
+public class ObjectConvertUtil {
+
+    public static Object convertValue(Object value, LogicalType logicalType) {
+        return ObjectConvertUtil.convertValue(value, logicalType, null);
+    }
+
+    public static Object convertValue(Object value, LogicalType logicalType, ZoneId sinkTimeZone) {
+        if (value == null) {
+            return null;
+        }
+        if (sinkTimeZone == null) {
+            sinkTimeZone = ZoneId.of("UTC");
+        }
+        if (logicalType instanceof DateType) {
+            if (value instanceof Integer) {
+                return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDate();
+            } else {
+                return Instant.ofEpochMilli((long) value).atZone(ZoneId.systemDefault()).toLocalDate();
+            }
+        } else if (logicalType instanceof TimestampType) {
+            if (value instanceof Integer) {
+                return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDateTime();
+            } else if (value instanceof String) {
+                return Instant.parse((String) value).atZone(ZoneId.systemDefault()).toLocalDateTime();
+            } else {
+                return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDateTime();
+            }
+        } else if (logicalType instanceof DecimalType) {
+            return new BigDecimal((String) value);
+        } else if (logicalType instanceof BigIntType) {
+            if (value instanceof Integer) {
+                return ((Integer) value).longValue();
+            } else {
+                return value;
+            }
+        } else if (logicalType instanceof VarBinaryType) {
+            // VARBINARY and BINARY values arrive from Flink CDC as base64-encoded Strings.
+            if (value instanceof String) {
+                return DatatypeConverter.parseBase64Binary((String) value);
+            } else {
+                return value;
+            }
+        } else {
+            return value;
+        }
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/org/apache/calcite/rel/metadata/RelMdColumnOrigins.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/org/apache/calcite/rel/metadata/RelMdColumnOrigins.java
new file mode 100644
index 0000000..27576d9
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/org/apache/calcite/rel/metadata/RelMdColumnOrigins.java
@@ -0,0 +1,380 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package org.apache.calcite.rel.metadata; + +import org.apache.calcite.plan.RelOptTable; +import org.apache.calcite.rel.RelNode; +import org.apache.calcite.rel.SingleRel; +import org.apache.calcite.rel.core.Aggregate; +import org.apache.calcite.rel.core.AggregateCall; +import org.apache.calcite.rel.core.Calc; +import org.apache.calcite.rel.core.Correlate; +import org.apache.calcite.rel.core.Exchange; +import org.apache.calcite.rel.core.Filter; +import org.apache.calcite.rel.core.Join; +import org.apache.calcite.rel.core.Project; +import org.apache.calcite.rel.core.SetOp; +import org.apache.calcite.rel.core.Snapshot; +import org.apache.calcite.rel.core.Sort; +import org.apache.calcite.rel.core.TableFunctionScan; +import org.apache.calcite.rel.core.TableModify; +import org.apache.calcite.rel.type.RelDataTypeField; +import org.apache.calcite.rex.RexCall; +import org.apache.calcite.rex.RexFieldAccess; +import org.apache.calcite.rex.RexInputRef; +import org.apache.calcite.rex.RexLocalRef; +import org.apache.calcite.rex.RexNode; +import org.apache.calcite.rex.RexShuttle; +import org.apache.calcite.rex.RexVisitor; +import org.apache.calcite.rex.RexVisitorImpl; +import org.apache.calcite.util.BuiltInMethod; +import org.apache.flink.table.planner.plan.schema.TableSourceTable; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * Modified based on calcite's source code org.apache.calcite.rel.metadata.RelMdColumnOrigins + * + * Modification point: + * 1. Support lookup join, add method getColumnOrigins(Snapshot rel, RelMetadataQuery mq, int iOutputColumn) + * 2. Support watermark, add method getColumnOrigins(SingleRel rel,RelMetadataQuery mq, int iOutputColumn) + * 3. Support table function, add method getColumnOrigins(Correlate rel, RelMetadataQuery mq, int iOutputColumn) + * + * + * @description: RelMdColumnOrigins supplies a default implementation of {@link + * RelMetadataQuery#getColumnOrigins} for the standard logical algebra. + * @author: baisong + * @version: 1.0.0 + * @date: 2022/11/24 7:47 PM + */ +public class RelMdColumnOrigins implements MetadataHandler { + + public static final RelMetadataProvider SOURCE = ReflectiveRelMetadataProvider.reflectiveSource( + BuiltInMethod.COLUMN_ORIGIN.method, new RelMdColumnOrigins()); + + // ~ Constructors ----------------------------------------------------------- + + private RelMdColumnOrigins() { + } + + // ~ Methods ---------------------------------------------------------------- + + @Override + public MetadataDef getDef() { + return BuiltInMetadata.ColumnOrigin.DEF; + } + + public Set getColumnOrigins(Aggregate rel, + RelMetadataQuery mq, int iOutputColumn) { + if (iOutputColumn < rel.getGroupCount()) { + // get actual index of Group columns. 
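+            // e.g. for "SELECT b, d, COUNT(*) FROM t GROUP BY b, d" over input (a, b, c, d),
+            // groupSet = {1, 3}: output column 0 traces to input column 1, output column 1 to input column 3.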
+ return mq.getColumnOrigins(rel.getInput(), rel.getGroupSet().asList().get(iOutputColumn)); + } + + // Aggregate columns are derived from input columns + AggregateCall call = rel.getAggCallList().get(iOutputColumn + - rel.getGroupCount()); + + final Set set = new HashSet<>(); + for (Integer iInput : call.getArgList()) { + Set inputSet = mq.getColumnOrigins(rel.getInput(), iInput); + inputSet = createDerivedColumnOrigins(inputSet); + if (inputSet != null) { + set.addAll(inputSet); + } + } + return set; + } + + public Set getColumnOrigins(Join rel, RelMetadataQuery mq, + int iOutputColumn) { + int nLeftColumns = rel.getLeft().getRowType().getFieldList().size(); + Set set; + boolean derived = false; + if (iOutputColumn < nLeftColumns) { + set = mq.getColumnOrigins(rel.getLeft(), iOutputColumn); + if (rel.getJoinType().generatesNullsOnLeft()) { + derived = true; + } + } else { + set = mq.getColumnOrigins(rel.getRight(), iOutputColumn - nLeftColumns); + if (rel.getJoinType().generatesNullsOnRight()) { + derived = true; + } + } + if (derived) { + // nulls are generated due to outer join; that counts + // as derivation + set = createDerivedColumnOrigins(set); + } + return set; + } + + /** + * Support the field blood relationship of table function + */ + public Set getColumnOrigins(Correlate rel, RelMetadataQuery mq, int iOutputColumn) { + + List leftFieldList = rel.getLeft().getRowType().getFieldList(); + + int nLeftColumns = leftFieldList.size(); + Set set; + if (iOutputColumn < nLeftColumns) { + set = mq.getColumnOrigins(rel.getLeft(), iOutputColumn); + } else { + // get the field name of the left table configured in the Table Function on the right + TableFunctionScan tableFunctionScan = (TableFunctionScan) rel.getRight(); + RexCall rexCall = (RexCall) tableFunctionScan.getCall(); + // support only one field in table function + RexFieldAccess rexFieldAccess = (RexFieldAccess) rexCall.operands.get(0); + String fieldName = rexFieldAccess.getField().getName(); + + int leftFieldIndex = 0; + for (int i = 0; i < nLeftColumns; i++) { + if (leftFieldList.get(i).getName().equalsIgnoreCase(fieldName)) { + leftFieldIndex = i; + break; + } + } + /** + * Get the fields from the left table, don't go to + * getColumnOrigins(TableFunctionScan rel,RelMetadataQuery mq, int iOutputColumn), + * otherwise the return is null, and the UDTF field origin cannot be parsed + */ + set = mq.getColumnOrigins(rel.getLeft(), leftFieldIndex); + } + return set; + } + + public Set getColumnOrigins(SetOp rel, + RelMetadataQuery mq, int iOutputColumn) { + final Set set = new HashSet<>(); + for (RelNode input : rel.getInputs()) { + Set inputSet = mq.getColumnOrigins(input, iOutputColumn); + if (inputSet == null) { + return null; + } + set.addAll(inputSet); + } + return set; + } + + /** + * Support the field blood relationship of lookup join + */ + public Set getColumnOrigins(Snapshot rel, + RelMetadataQuery mq, int iOutputColumn) { + return mq.getColumnOrigins(rel.getInput(), iOutputColumn); + } + + /** + * Support the field blood relationship of watermark + */ + public Set getColumnOrigins(SingleRel rel, + RelMetadataQuery mq, int iOutputColumn) { + return mq.getColumnOrigins(rel.getInput(), iOutputColumn); + } + + public Set getColumnOrigins(Project rel, + final RelMetadataQuery mq, int iOutputColumn) { + final RelNode input = rel.getInput(); + RexNode rexNode = rel.getProjects().get(iOutputColumn); + + if (rexNode instanceof RexInputRef) { + // Direct reference: no derivation added. 
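+            // e.g. "SELECT name FROM t": the projected column is the input column itself,
+            // so its origin set is passed through unchanged (isDerived stays false).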
+            RexInputRef inputRef = (RexInputRef) rexNode;
+            return mq.getColumnOrigins(input, inputRef.getIndex());
+        }
+        // Anything else is a derivation, possibly from multiple columns.
+        final Set<RelColumnOrigin> set = getMultipleColumns(rexNode, input, mq);
+        return createDerivedColumnOrigins(set);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Calc rel,
+            final RelMetadataQuery mq, int iOutputColumn) {
+        final RelNode input = rel.getInput();
+        final RexShuttle rexShuttle = new RexShuttle() {
+
+            @Override
+            public RexNode visitLocalRef(RexLocalRef localRef) {
+                return rel.getProgram().expandLocalRef(localRef);
+            }
+        };
+        final List<RexNode> projects = new ArrayList<>();
+        for (RexNode rex : rexShuttle.apply(rel.getProgram().getProjectList())) {
+            projects.add(rex);
+        }
+        final RexNode rexNode = projects.get(iOutputColumn);
+        if (rexNode instanceof RexInputRef) {
+            // Direct reference: no derivation added.
+            RexInputRef inputRef = (RexInputRef) rexNode;
+            return mq.getColumnOrigins(input, inputRef.getIndex());
+        } else if (rexNode instanceof RexCall && ((RexCall) rexNode).operands.isEmpty()) {
+            // support for new fields in the source table, similar to those created with the LOCALTIMESTAMP function
+            TableSourceTable table = ((TableSourceTable) rel.getInput().getTable());
+            if (table != null) {
+                String targetFieldName = rel.getProgram().getOutputRowType().getFieldList().get(iOutputColumn)
+                        .getName();
+                List<String> fieldList = table.catalogTable().getResolvedSchema().getColumnNames();
+
+                int index = -1;
+                for (int i = 0; i < fieldList.size(); i++) {
+                    if (fieldList.get(i).equalsIgnoreCase(targetFieldName)) {
+                        index = i;
+                        break;
+                    }
+                }
+                if (index != -1) {
+                    return Collections.singleton(new RelColumnOrigin(table, index, false));
+                }
+            }
+        }
+        // Anything else is a derivation, possibly from multiple columns.
+        final Set<RelColumnOrigin> set = getMultipleColumns(rexNode, input, mq);
+        return createDerivedColumnOrigins(set);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Filter rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Sort rel, RelMetadataQuery mq,
+            int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(TableModify rel, RelMetadataQuery mq,
+            int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Exchange rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(TableFunctionScan rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        Set<RelColumnMapping> mappings = rel.getColumnMappings();
+        if (mappings == null) {
+            if (rel.getInputs().size() > 0) {
+                // This is a non-leaf transformation: say we don't
+                // know about origins, because there are probably
+                // columns below.
+                return null;
+            } else {
+                // This is a leaf transformation: say there are for sure no
+                // column origins.
+                return set;
+            }
+        }
+        for (RelColumnMapping mapping : mappings) {
+            if (mapping.iOutputColumn != iOutputColumn) {
+                continue;
+            }
+            final RelNode input = rel.getInputs().get(mapping.iInputRel);
+            final int column = mapping.iInputColumn;
+            Set<RelColumnOrigin> origins = mq.getColumnOrigins(input, column);
+            if (origins == null) {
+                return null;
+            }
+            if (mapping.derived) {
+                origins = createDerivedColumnOrigins(origins);
+            }
+            set.addAll(origins);
+        }
+        return set;
+    }
+
+    // Catch-all rule when none of the others apply.
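+    // The reflective provider dispatches on the most specific RelNode parameter type,
+    // so this overload is only reached for rels with no dedicated handler above;
+    // non-leaf rels fall through to the "unknown" (null) result.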
+    public Set<RelColumnOrigin> getColumnOrigins(RelNode rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        // NOTE jvs 28-Mar-2006: We may get this wrong for a physical table
+        // expression which supports projections. In that case,
+        // it's up to the plugin writer to override with the
+        // correct information.
+
+        if (rel.getInputs().size() > 0) {
+            // No generic logic available for non-leaf rels.
+            return null;
+        }
+
+        final Set<RelColumnOrigin> set = new HashSet<>();
+
+        RelOptTable table = rel.getTable();
+        if (table == null) {
+            // Somebody is making column values up out of thin air, like a
+            // VALUES clause, so we return an empty set.
+            return set;
+        }
+
+        // Detect the case where a physical table expression is performing
+        // projection, and say we don't know instead of making any assumptions.
+        // (Theoretically we could try to map the projection using column
+        // names.) This detection assumes the table expression doesn't handle
+        // rename as well.
+        if (table.getRowType() != rel.getRowType()) {
+            return null;
+        }
+
+        set.add(new RelColumnOrigin(table, iOutputColumn, false));
+        return set;
+    }
+
+    private Set<RelColumnOrigin> createDerivedColumnOrigins(
+            Set<RelColumnOrigin> inputSet) {
+        if (inputSet == null) {
+            return null;
+        }
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        for (RelColumnOrigin rco : inputSet) {
+            RelColumnOrigin derived = new RelColumnOrigin(
+                    rco.getOriginTable(),
+                    rco.getOriginColumnOrdinal(),
+                    true);
+            set.add(derived);
+        }
+        return set;
+    }
+
+    private Set<RelColumnOrigin> getMultipleColumns(RexNode rexNode, RelNode input,
+            final RelMetadataQuery mq) {
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        final RexVisitor<Void> visitor = new RexVisitorImpl<Void>(true) {
+
+            @Override
+            public Void visitInputRef(RexInputRef inputRef) {
+                Set<RelColumnOrigin> inputSet = mq.getColumnOrigins(input, inputRef.getIndex());
+                if (inputSet != null) {
+                    set.addAll(inputSet);
+                }
+                return null;
+            }
+        };
+        rexNode.accept(visitor);
+        return set;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/org/apache/flink/table/types/extraction/ExtractionUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/org/apache/flink/table/types/extraction/ExtractionUtils.java
new file mode 100644
index 0000000..6bf0cff
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.14/src/main/java/org/apache/flink/table/types/extraction/ExtractionUtils.java
@@ -0,0 +1,1013 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package org.apache.flink.table.types.extraction; + +import static org.apache.flink.shaded.asm7.org.objectweb.asm.Type.getConstructorDescriptor; +import static org.apache.flink.shaded.asm7.org.objectweb.asm.Type.getMethodDescriptor; + + +import net.srt.flink.common.pool.ClassPool; +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.typeutils.TypeSerializer; +import org.apache.flink.shaded.asm7.org.objectweb.asm.ClassReader; +import org.apache.flink.shaded.asm7.org.objectweb.asm.ClassVisitor; +import org.apache.flink.shaded.asm7.org.objectweb.asm.Label; +import org.apache.flink.shaded.asm7.org.objectweb.asm.MethodVisitor; +import org.apache.flink.shaded.asm7.org.objectweb.asm.Opcodes; +import org.apache.flink.table.api.DataTypes; +import org.apache.flink.table.api.ValidationException; +import org.apache.flink.table.catalog.DataTypeFactory; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.StructuredType; + +import java.io.IOException; +import java.io.InputStream; +import java.lang.annotation.Annotation; +import java.lang.reflect.Constructor; +import java.lang.reflect.Executable; +import java.lang.reflect.Field; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.lang.reflect.Parameter; +import java.lang.reflect.ParameterizedType; +import java.lang.reflect.Type; +import java.lang.reflect.TypeVariable; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.function.Function; +import java.util.regex.Pattern; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import javax.annotation.Nullable; + +/** + * Utilities for performing reflection tasks. + */ +@Internal +public final class ExtractionUtils { + + // -------------------------------------------------------------------------------------------- + // Methods shared across packages + // -------------------------------------------------------------------------------------------- + + /** + * Collects methods of the given name. + */ + public static List collectMethods(Class function, String methodName) { + return Arrays.stream(function.getMethods()) + .filter(method -> method.getName().equals(methodName)) + .sorted(Comparator.comparing(Method::toString)) // for deterministic order + .collect(Collectors.toList()); + } + + /** + * Checks whether a method/constructor can be called with the given argument classes. This + * includes type widening and vararg. {@code null} is a wildcard. + * + *
+	 * <p>
E.g., {@code (int.class, int.class)} matches {@code f(Object...), f(int, int), f(Integer, + * Object)} and so forth. + */ + public static boolean isInvokable(Executable executable, Class... classes) { + final int m = executable.getModifiers(); + if (!Modifier.isPublic(m)) { + return false; + } + final int paramCount = executable.getParameterCount(); + final int classCount = classes.length; + // check for enough classes for each parameter + if ((!executable.isVarArgs() && classCount != paramCount) + || (executable.isVarArgs() && classCount < paramCount - 1)) { + return false; + } + int currentClass = 0; + for (int currentParam = 0; currentParam < paramCount; currentParam++) { + final Class param = executable.getParameterTypes()[currentParam]; + // last parameter is a vararg that needs to consume remaining classes + if (currentParam == paramCount - 1 && executable.isVarArgs()) { + final Class paramComponent = executable.getParameterTypes()[currentParam].getComponentType(); + // we have more than 1 classes left so the vararg needs to consume them all + if (classCount - currentClass > 1) { + while (currentClass < classCount + && ExtractionUtils.isAssignable( + classes[currentClass], paramComponent, true)) { + currentClass++; + } + } else if (currentClass < classCount + && (parameterMatches(classes[currentClass], param) + || parameterMatches(classes[currentClass], paramComponent))) { + currentClass++; + } + } + // entire parameter matches + else if (parameterMatches(classes[currentClass], param)) { + currentClass++; + } + } + // check if all classes have been consumed + return currentClass == classCount; + } + + private static boolean parameterMatches(Class clz, Class param) { + return clz == null || ExtractionUtils.isAssignable(clz, param, true); + } + + /** + * Creates a method signature string like {@code int eval(Integer, String)}. + */ + public static String createMethodSignatureString( + String methodName, Class[] parameters, @Nullable Class returnType) { + final StringBuilder builder = new StringBuilder(); + if (returnType != null) { + builder.append(returnType.getCanonicalName()).append(" "); + } + builder.append(methodName) + .append( + Stream.of(parameters) + .map( + parameter -> { + // in case we don't know the parameter at this location + // (i.e. for accumulators) + if (parameter == null) { + return "_"; + } else { + return parameter.getCanonicalName(); + } + }) + .collect(Collectors.joining(", ", "(", ")"))); + return builder.toString(); + } + + /** + * Validates the characteristics of a class for a {@link StructuredType} such as accessibility. + */ + public static void validateStructuredClass(Class clazz) { + final int m = clazz.getModifiers(); + if (Modifier.isAbstract(m)) { + throw extractionError("Class '%s' must not be abstract.", clazz.getName()); + } + if (!Modifier.isPublic(m)) { + throw extractionError("Class '%s' is not public.", clazz.getName()); + } + if (clazz.getEnclosingClass() != null + && (clazz.getDeclaringClass() == null || !Modifier.isStatic(m))) { + throw extractionError( + "Class '%s' is a not a static, globally accessible class.", clazz.getName()); + } + } + + /** + * Returns the field of a structured type. The logic is as broad as possible to support both + * Java and Scala in different flavors. 
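+	 * <p>Matching is case-insensitive: a field declared as {@code myField} is also found
+	 * for the name {@code myfield}.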
+ */ + public static Field getStructuredField(Class clazz, String fieldName) { + final String normalizedFieldName = fieldName.toUpperCase(); + + final List fields = collectStructuredFields(clazz); + for (Field field : fields) { + if (field.getName().toUpperCase().equals(normalizedFieldName)) { + return field; + } + } + throw extractionError( + "Could not find a field named '%s' in class '%s' for structured type.", + fieldName, clazz.getName()); + } + + /** + * Checks for a field getter of a structured type. The logic is as broad as possible to support + * both Java and Scala in different flavors. + */ + public static Optional getStructuredFieldGetter(Class clazz, Field field) { + final String normalizedFieldName = normalizeAccessorName(field.getName()); + + final List methods = collectStructuredMethods(clazz); + for (Method method : methods) { + // check name: + // get() + // is() + // () for Scala + final String normalizedMethodName = normalizeAccessorName(method.getName()); + final boolean hasName = normalizedMethodName.equals("GET" + normalizedFieldName) + || normalizedMethodName.equals("IS" + normalizedFieldName) + || normalizedMethodName.equals(normalizedFieldName); + if (!hasName) { + continue; + } + + // check return type: + // equal to field type + final Type returnType = method.getGenericReturnType(); + final boolean hasReturnType = returnType.equals(field.getGenericType()); + if (!hasReturnType) { + continue; + } + + // check parameters: + // no parameters + final boolean hasNoParameters = method.getParameterCount() == 0; + if (!hasNoParameters) { + continue; + } + + // matching getter found + return Optional.of(method); + } + + // no getter found + return Optional.empty(); + } + + /** + * Checks for a field setters of a structured type. The logic is as broad as possible to support + * both Java and Scala in different flavors. + */ + public static Optional getStructuredFieldSetter(Class clazz, Field field) { + final String normalizedFieldName = normalizeAccessorName(field.getName()); + + final List methods = collectStructuredMethods(clazz); + for (Method method : methods) { + + // check name: + // set(type) + // (type) + // _$eq(type) for Scala + final String normalizedMethodName = normalizeAccessorName(method.getName()); + final boolean hasName = normalizedMethodName.equals("SET" + normalizedFieldName) + || normalizedMethodName.equals(normalizedFieldName) + || normalizedMethodName.equals(normalizedFieldName + "$EQ"); + if (!hasName) { + continue; + } + + // check return type: + // void or the declaring class + final Class returnType = method.getReturnType(); + final boolean hasReturnType = returnType == Void.TYPE || returnType == clazz; + if (!hasReturnType) { + continue; + } + + // check parameters: + // one parameter that has the same (or primitive) type of the field + final boolean hasParameter = method.getParameterCount() == 1 + && (method.getGenericParameterTypes()[0].equals(field.getGenericType()) + || primitiveToWrapper(method.getGenericParameterTypes()[0]) + .equals(field.getGenericType())); + if (!hasParameter) { + continue; + } + + // matching setter found + return Optional.of(method); + } + + // no setter found + return Optional.empty(); + } + + private static String normalizeAccessorName(String name) { + return name.toUpperCase().replaceAll(Pattern.quote("_"), ""); + } + + /** + * Checks for an invokable constructor matching the given arguments. + * + * @see #isInvokable(Executable, Class[]) + */ + public static boolean hasInvokableConstructor(Class clazz, Class... 
classes) { + for (Constructor constructor : clazz.getDeclaredConstructors()) { + if (isInvokable(constructor, classes)) { + return true; + } + } + return false; + } + + /** + * Checks whether a field is directly readable without a getter. + */ + public static boolean isStructuredFieldDirectlyReadable(Field field) { + final int m = field.getModifiers(); + + // field is directly readable + return Modifier.isPublic(m); + } + + /** + * Checks whether a field is directly writable without a setter or constructor. + */ + public static boolean isStructuredFieldDirectlyWritable(Field field) { + final int m = field.getModifiers(); + + // field is immutable + if (Modifier.isFinal(m)) { + return false; + } + + // field is directly writable + return Modifier.isPublic(m); + } + + /** + * A minimal version to extract a generic parameter from a given class. + * + *
+	 * <p>
This method should only be used for very specific use cases, in most cases {@link + * DataTypeExtractor#extractFromGeneric(DataTypeFactory, Class, int, Type)} should be more + * appropriate. + */ + public static Optional> extractSimpleGeneric( + Class baseClass, Class clazz, int pos) { + try { + if (clazz.getSuperclass() != baseClass) { + return Optional.empty(); + } + final Type t = ((ParameterizedType) clazz.getGenericSuperclass()) + .getActualTypeArguments()[pos]; + return Optional.ofNullable(toClass(t)); + } catch (Exception unused) { + return Optional.empty(); + } + } + + // -------------------------------------------------------------------------------------------- + // Methods intended for this package + // -------------------------------------------------------------------------------------------- + + /** + * Helper method for creating consistent exceptions during extraction. + */ + static ValidationException extractionError(String message, Object... args) { + return extractionError(null, message, args); + } + + /** + * Helper method for creating consistent exceptions during extraction. + */ + static ValidationException extractionError(Throwable cause, String message, Object... args) { + return new ValidationException(String.format(message, args), cause); + } + + /** + * Collects the partially ordered type hierarchy (i.e. all involved super classes and super + * interfaces) of the given type. + */ + static List collectTypeHierarchy(Type type) { + Type currentType = type; + Class currentClass = toClass(type); + final List typeHierarchy = new ArrayList<>(); + while (currentClass != null) { + // collect type + typeHierarchy.add(currentType); + // collect super interfaces + for (Type genericInterface : currentClass.getGenericInterfaces()) { + final Class interfaceClass = toClass(genericInterface); + if (interfaceClass != null) { + typeHierarchy.addAll(collectTypeHierarchy(genericInterface)); + } + } + currentType = currentClass.getGenericSuperclass(); + currentClass = toClass(currentType); + } + return typeHierarchy; + } + + /** + * Converts a {@link Type} to {@link Class} if possible, {@code null} otherwise. + */ + static @Nullable Class toClass(Type type) { + if (type instanceof Class) { + return (Class) type; + } else if (type instanceof ParameterizedType) { + // this is always a class + return (Class) ((ParameterizedType) type).getRawType(); + } + // unsupported: generic arrays, type variables, wildcard types + return null; + } + + /** + * Creates a raw data type. + */ + @SuppressWarnings({"unchecked", "rawtypes"}) + static DataType createRawType( + DataTypeFactory typeFactory, + @Nullable Class> rawSerializer, + @Nullable Class conversionClass) { + if (rawSerializer != null) { + return DataTypes.RAW( + (Class) createConversionClass(conversionClass), + instantiateRawSerializer(rawSerializer)); + } + return typeFactory.createRawDataType(createConversionClass(conversionClass)); + } + + static Class createConversionClass(@Nullable Class conversionClass) { + if (conversionClass != null) { + return conversionClass; + } + return Object.class; + } + + private static TypeSerializer instantiateRawSerializer( + Class> rawSerializer) { + try { + return rawSerializer.newInstance(); + } catch (Exception e) { + throw extractionError( + e, + "Cannot instantiate type serializer '%s' for RAW type. 
" + + "Make sure the class is publicly accessible and has a default constructor.", + rawSerializer.getName()); + } + } + + /** + * Resolves a {@link TypeVariable} using the given type hierarchy if possible. + */ + static Type resolveVariable(List typeHierarchy, TypeVariable variable) { + // iterate through hierarchy from top to bottom until type variable gets a non-variable + // assigned + for (int i = typeHierarchy.size() - 1; i >= 0; i--) { + final Type currentType = typeHierarchy.get(i); + + if (currentType instanceof ParameterizedType) { + final Type resolvedType = resolveVariableInParameterizedType( + variable, (ParameterizedType) currentType); + if (resolvedType instanceof TypeVariable) { + // follow type variables transitively + variable = (TypeVariable) resolvedType; + } else if (resolvedType != null) { + return resolvedType; + } + } + } + // unresolved variable + return variable; + } + + private static @Nullable Type resolveVariableInParameterizedType( + TypeVariable variable, ParameterizedType currentType) { + final Class currentRaw = (Class) currentType.getRawType(); + final TypeVariable[] currentVariables = currentRaw.getTypeParameters(); + // search for matching type variable + for (int paramPos = 0; paramPos < currentVariables.length; paramPos++) { + if (typeVariableEquals(variable, currentVariables[paramPos])) { + return currentType.getActualTypeArguments()[paramPos]; + } + } + return null; + } + + private static boolean typeVariableEquals( + TypeVariable variable, TypeVariable currentVariable) { + return currentVariable.getGenericDeclaration().equals(variable.getGenericDeclaration()) + && currentVariable.getName().equals(variable.getName()); + } + + /** + * Validates if a given type is not already contained in the type hierarchy of a structured + * type. + * + *
+	 * <p>
Otherwise this would lead to infinite data type extraction cycles. + */ + static void validateStructuredSelfReference(Type t, List typeHierarchy) { + final Class clazz = toClass(t); + if (clazz != null + && !clazz.isInterface() + && clazz != Object.class + && typeHierarchy.contains(t)) { + throw extractionError( + "Cyclic reference detected for class '%s'. Attributes of structured types must not " + + "(transitively) reference the structured type itself.", + clazz.getName()); + } + } + + /** + * Returns the fields of a class for a {@link StructuredType}. + */ + static List collectStructuredFields(Class clazz) { + final List fields = new ArrayList<>(); + while (clazz != Object.class) { + final Field[] declaredFields = clazz.getDeclaredFields(); + Stream.of(declaredFields) + .filter( + field -> { + final int m = field.getModifiers(); + return !Modifier.isStatic(m) && !Modifier.isTransient(m); + }) + .forEach(fields::add); + clazz = clazz.getSuperclass(); + } + return fields; + } + + /** + * Validates if a field is properly readable either directly or through a getter. + */ + static void validateStructuredFieldReadability(Class clazz, Field field) { + // field is accessible + if (isStructuredFieldDirectlyReadable(field)) { + return; + } + + // field needs a getter + if (!getStructuredFieldGetter(clazz, field).isPresent()) { + throw extractionError( + "Field '%s' of class '%s' is neither publicly accessible nor does it have " + + "a corresponding getter method.", + field.getName(), clazz.getName()); + } + } + + /** + * Checks if a field is mutable or immutable. Returns {@code true} if the field is properly + * mutable. Returns {@code false} if it is properly immutable. + */ + static boolean isStructuredFieldMutable(Class clazz, Field field) { + final int m = field.getModifiers(); + + // field is immutable + if (Modifier.isFinal(m)) { + return false; + } + // field is directly mutable + if (Modifier.isPublic(m)) { + return true; + } + + // field has setters by which it is mutable + if (getStructuredFieldSetter(clazz, field).isPresent()) { + return true; + } + + throw extractionError( + "Field '%s' of class '%s' is mutable but is neither publicly accessible nor does it have " + + "a corresponding setter method.", + field.getName(), clazz.getName()); + } + + /** + * Returns the boxed type of a primitive type. + */ + static Type primitiveToWrapper(Type type) { + if (type instanceof Class) { + return primitiveToWrapper((Class) type); + } + return type; + } + + /** + * Collects all methods that qualify as methods of a {@link StructuredType}. + */ + static List collectStructuredMethods(Class clazz) { + final List methods = new ArrayList<>(); + while (clazz != Object.class) { + final Method[] declaredMethods = clazz.getDeclaredMethods(); + Stream.of(declaredMethods) + .filter( + field -> { + final int m = field.getModifiers(); + return Modifier.isPublic(m) + && !Modifier.isNative(m) + && !Modifier.isAbstract(m); + }) + .forEach(methods::add); + clazz = clazz.getSuperclass(); + } + return methods; + } + + /** + * Collects all annotations of the given type defined in the current class or superclasses. + * Duplicates are ignored. 
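+	 * <p>The class hierarchy is walked bottom-up and then reversed, so annotations declared
+	 * on superclasses appear first in the returned set.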
+ */ + static Set collectAnnotationsOfClass( + Class annotation, Class annotatedClass) { + final List> classHierarchy = new ArrayList<>(); + Class currentClass = annotatedClass; + while (currentClass != null) { + classHierarchy.add(currentClass); + currentClass = currentClass.getSuperclass(); + } + // convert to top down + Collections.reverse(classHierarchy); + return classHierarchy.stream() + .flatMap(c -> Stream.of(c.getAnnotationsByType(annotation))) + .collect(Collectors.toCollection(LinkedHashSet::new)); + } + + /** + * Collects all annotations of the given type defined in the given method. Duplicates are + * ignored. + */ + static Set collectAnnotationsOfMethod( + Class annotation, Method annotatedMethod) { + return new LinkedHashSet<>(Arrays.asList(annotatedMethod.getAnnotationsByType(annotation))); + } + + // -------------------------------------------------------------------------------------------- + // Parameter Extraction Utilities + // -------------------------------------------------------------------------------------------- + + /** + * Result of the extraction in {@link #extractAssigningConstructor(Class, List)}. + */ + public static class AssigningConstructor { + + public final Constructor constructor; + public final List parameterNames; + + private AssigningConstructor(Constructor constructor, List parameterNames) { + this.constructor = constructor; + this.parameterNames = parameterNames; + } + } + + /** + * Checks whether the given constructor takes all of the given fields with matching (possibly + * primitive) type and name. An assigning constructor can define the order of fields. + */ + public static @Nullable AssigningConstructor extractAssigningConstructor( + Class clazz, List fields) { + AssigningConstructor foundConstructor = null; + for (Constructor constructor : clazz.getDeclaredConstructors()) { + final boolean qualifyingConstructor = Modifier.isPublic(constructor.getModifiers()) + && constructor.getParameterTypes().length == fields.size(); + if (!qualifyingConstructor) { + continue; + } + final List parameterNames = extractConstructorParameterNames(constructor, fields); + if (parameterNames != null) { + if (foundConstructor != null) { + throw extractionError( + "Multiple constructors found that assign all fields for class '%s'.", + clazz.getName()); + } + foundConstructor = new AssigningConstructor(constructor, parameterNames); + } + } + return foundConstructor; + } + + /** + * Extracts the parameter names of a method if possible. + */ + static @Nullable List extractMethodParameterNames(Method method) { + return extractExecutableNames(method); + } + + /** + * Extracts ordered parameter names from a constructor that takes all of the given fields with + * matching (possibly primitive and lenient) type and name. 
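+	 * <p>Returns {@code null} as soon as a parameter cannot be matched one-to-one against a
+	 * field, which signals that the constructor is not an assigning constructor.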
+ */ + private static @Nullable List extractConstructorParameterNames( + Constructor constructor, List fields) { + final Type[] parameterTypes = constructor.getGenericParameterTypes(); + + List parameterNames = extractExecutableNames(constructor); + if (parameterNames == null) { + return null; + } + + final Map fieldMap = fields.stream() + .collect( + Collectors.toMap( + f -> normalizeAccessorName(f.getName()), + Function.identity())); + + // check that all fields are represented in the parameters of the constructor + final List fieldNames = new ArrayList<>(); + for (int i = 0; i < parameterNames.size(); i++) { + final String parameterName = normalizeAccessorName(parameterNames.get(i)); + final Field field = fieldMap.get(parameterName); + if (field == null) { + return null; + } + final Type fieldType = field.getGenericType(); + final Type parameterType = parameterTypes[i]; + // we are tolerant here because frameworks such as Avro accept a boxed type even though + // the field is primitive + if (!primitiveToWrapper(parameterType).equals(primitiveToWrapper(fieldType))) { + return null; + } + fieldNames.add(field.getName()); + } + + return fieldNames; + } + + private static @Nullable List extractExecutableNames(Executable executable) { + final int offset; + if (!Modifier.isStatic(executable.getModifiers())) { + // remove "this" as first parameter + offset = 1; + } else { + offset = 0; + } + // by default parameter names are "arg0, arg1, arg2, ..." if compiler flag is not set + // so we need to extract them manually if possible + List parameterNames = Stream.of(executable.getParameters()) + .map(Parameter::getName) + .collect(Collectors.toList()); + if (parameterNames.stream().allMatch(n -> n.startsWith("arg"))) { + final ParameterExtractor extractor; + if (executable instanceof Constructor) { + extractor = new ParameterExtractor((Constructor) executable); + } else { + extractor = new ParameterExtractor((Method) executable); + } + + final List extractedNames = extractor.getParameterNames(); + if (extractedNames.size() == 0) { + return null; + } + // remove "this" and additional local variables + // select less names if class file has not the required information + parameterNames = extractedNames.subList( + offset, + Math.min( + executable.getParameterCount() + offset, + extractedNames.size())); + } + + if (parameterNames.size() != executable.getParameterCount()) { + return null; + } + + return parameterNames; + } + + private static ClassReader getClassReader(Class cls) { + final String className = cls.getName().replaceFirst("^.*\\.", "") + ".class"; + if (ClassPool.exist(cls.getName())) { + return new ClassReader(ClassPool.get(cls.getName()).getClassByte()); + } + try (InputStream i = cls.getResourceAsStream(className)) { + return new ClassReader(i); + } catch (IOException e) { + throw new IllegalStateException("Could not instantiate ClassReader.", e); + } + } + + /** + * Extracts the parameter names and descriptors from a constructor or method. Assuming the + * existence of a local variable table. + * + *
+	 * <p>For example:
+	 *
+	 * <pre>{@code
+	 * public WC(java.lang.String arg0, long arg1) { // <init> //(Ljava/lang/String;J)V
+	 *   <localVar:index=0 , name=this , desc=Lorg/apache/flink/WC;, sig=null, start=L1, end=L2>
+	 *   <localVar:index=1 , name=word , desc=Ljava/lang/String;, sig=null, start=L1, end=L2>
+	 *   <localVar:index=2 , name=frequency , desc=J, sig=null, start=L1, end=L2>
+	 * }
+	 * }</pre>
+ */ + private static class ParameterExtractor extends ClassVisitor { + + private static final int OPCODE = Opcodes.ASM7; + + private final String methodDescriptor; + + private final List parameterNames = new ArrayList<>(); + + ParameterExtractor(Constructor constructor) { + super(OPCODE); + methodDescriptor = getConstructorDescriptor(constructor); + } + + ParameterExtractor(Method method) { + super(OPCODE); + methodDescriptor = getMethodDescriptor(method); + } + + List getParameterNames() { + return parameterNames; + } + + @Override + public MethodVisitor visitMethod( + int access, String name, String descriptor, String signature, String[] exceptions) { + if (descriptor.equals(methodDescriptor)) { + return new MethodVisitor(OPCODE) { + + @Override + public void visitLocalVariable( + String name, + String descriptor, + String signature, + Label start, + Label end, + int index) { + parameterNames.add(name); + } + }; + } + return super.visitMethod(access, name, descriptor, signature, exceptions); + } + } + + // -------------------------------------------------------------------------------------------- + // Class Assignment and Boxing + // + // copied from o.a.commons.lang3.ClassUtils (commons-lang3:3.3.2) + // -------------------------------------------------------------------------------------------- + + /** + * Checks if one {@code Class} can be assigned to a variable of another {@code Class}. + * + *
+	 * <p>Unlike the {@link Class#isAssignableFrom(java.lang.Class)} method, this method takes into
+	 * account widenings of primitive classes and {@code null}s.
+	 *
+	 * <p>Primitive widenings allow an int to be assigned to a long, float or double. This method
+	 * returns the correct result for these cases.
+	 *
+	 * <p>{@code Null} may be assigned to any reference type. This method will return {@code true}
+	 * if {@code null} is passed in and the toClass is non-primitive.
+	 *
+	 * <p>
Specifically, this method tests whether the type represented by the specified {@code + * Class} parameter can be converted to the type represented by this {@code Class} object via an + * identity conversion widening primitive or widening reference conversion. See The Java Language Specification, + * sections 5.1.1, 5.1.2 and 5.1.4 for details. + * + * @param cls the Class to check, may be null + * @param toClass the Class to try to assign into, returns false if null + * @param autoboxing whether to use implicit autoboxing/unboxing between primitives and wrappers + * @return {@code true} if assignment possible + */ + public static boolean isAssignable( + Class cls, final Class toClass, final boolean autoboxing) { + if (toClass == null) { + return false; + } + // have to check for null, as isAssignableFrom doesn't + if (cls == null) { + return !toClass.isPrimitive(); + } + // autoboxing: + if (autoboxing) { + if (cls.isPrimitive() && !toClass.isPrimitive()) { + cls = primitiveToWrapper(cls); + if (cls == null) { + return false; + } + } + if (toClass.isPrimitive() && !cls.isPrimitive()) { + cls = wrapperToPrimitive(cls); + if (cls == null) { + return false; + } + } + } + if (cls.equals(toClass)) { + return true; + } + if (cls.isPrimitive()) { + if (!toClass.isPrimitive()) { + return false; + } + if (Integer.TYPE.equals(cls)) { + return Long.TYPE.equals(toClass) + || Float.TYPE.equals(toClass) + || Double.TYPE.equals(toClass); + } + if (Long.TYPE.equals(cls)) { + return Float.TYPE.equals(toClass) || Double.TYPE.equals(toClass); + } + if (Boolean.TYPE.equals(cls)) { + return false; + } + if (Double.TYPE.equals(cls)) { + return false; + } + if (Float.TYPE.equals(cls)) { + return Double.TYPE.equals(toClass); + } + if (Character.TYPE.equals(cls)) { + return Integer.TYPE.equals(toClass) + || Long.TYPE.equals(toClass) + || Float.TYPE.equals(toClass) + || Double.TYPE.equals(toClass); + } + if (Short.TYPE.equals(cls)) { + return Integer.TYPE.equals(toClass) + || Long.TYPE.equals(toClass) + || Float.TYPE.equals(toClass) + || Double.TYPE.equals(toClass); + } + if (Byte.TYPE.equals(cls)) { + return Short.TYPE.equals(toClass) + || Integer.TYPE.equals(toClass) + || Long.TYPE.equals(toClass) + || Float.TYPE.equals(toClass) + || Double.TYPE.equals(toClass); + } + // should never get here + return false; + } + return toClass.isAssignableFrom(cls); + } + + /** + * Maps primitive {@code Class}es to their corresponding wrapper {@code Class}. + */ + private static final Map, Class> primitiveWrapperMap = new HashMap<>(); + + static { + primitiveWrapperMap.put(Boolean.TYPE, Boolean.class); + primitiveWrapperMap.put(Byte.TYPE, Byte.class); + primitiveWrapperMap.put(Character.TYPE, Character.class); + primitiveWrapperMap.put(Short.TYPE, Short.class); + primitiveWrapperMap.put(Integer.TYPE, Integer.class); + primitiveWrapperMap.put(Long.TYPE, Long.class); + primitiveWrapperMap.put(Double.TYPE, Double.class); + primitiveWrapperMap.put(Float.TYPE, Float.class); + primitiveWrapperMap.put(Void.TYPE, Void.TYPE); + } + + /** + * Maps wrapper {@code Class}es to their corresponding primitive types. 
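+	 * <p>Derived from {@code primitiveWrapperMap}; the self-mapping of {@code Void.TYPE}
+	 * is skipped, so no wrapper-to-primitive entry exists for {@code Void}.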
+ */ + private static final Map, Class> wrapperPrimitiveMap = new HashMap<>(); + + static { + for (final Class primitiveClass : primitiveWrapperMap.keySet()) { + final Class wrapperClass = primitiveWrapperMap.get(primitiveClass); + if (!primitiveClass.equals(wrapperClass)) { + wrapperPrimitiveMap.put(wrapperClass, primitiveClass); + } + } + } + + /** + * Converts the specified primitive Class object to its corresponding wrapper Class object. + * + *
+	 * <p>
NOTE: From v2.2, this method handles {@code Void.TYPE}, returning {@code Void.TYPE}. + * + * @param cls the class to convert, may be null + * @return the wrapper class for {@code cls} or {@code cls} if {@code cls} is not a primitive. + * {@code null} if null input. + * @since 2.1 + */ + public static Class primitiveToWrapper(final Class cls) { + Class convertedClass = cls; + if (cls != null && cls.isPrimitive()) { + convertedClass = primitiveWrapperMap.get(cls); + } + return convertedClass; + } + + /** + * Converts the specified wrapper class to its corresponding primitive class. + * + *
+	 * <p>This method is the counter part of {@code primitiveToWrapper()}. If the passed in class is
+	 * a wrapper class for a primitive type, this primitive type will be returned (e.g. {@code
+	 * Integer.TYPE} for {@code Integer.class}). For other classes, or if the parameter is
+	 * null, the return value is null.
+	 *
+	 * @param cls the class to convert, may be null
+	 * @return the corresponding primitive type if {@code cls} is a wrapper class, null
+	 *         otherwise
+	 * @see #primitiveToWrapper(Class)
+	 * @since 2.4
+	 */
+	public static Class<?> wrapperToPrimitive(final Class<?> cls) {
+		return wrapperPrimitiveMap.get(cls);
+	}
+
+	// --------------------------------------------------------------------------------------------
+
+	private ExtractionUtils() {
+		// no instantiation
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/pom.xml
new file mode 100644
index 0000000..e58601d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/pom.xml
@@ -0,0 +1,70 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>flink-client</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-client-1.16</artifactId>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-client-base</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-1.16</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.datatype</groupId>
+            <artifactId>jackson-datatype-jsr310</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>javax.xml.bind</groupId>
+            <artifactId>jaxb-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.sun.xml.bind</groupId>
+            <artifactId>jaxb-impl</artifactId>
+            <version>2.3.0</version>
+        </dependency>
+        <dependency>
+            <groupId>com.sun.xml.bind</groupId>
+            <artifactId>jaxb-core</artifactId>
+            <version>2.3.0</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-jar-plugin</artifactId>
+                <version>${maven-jar-plugin.version}</version>
+                <configuration>
+                    <outputDirectory>${project.parent.parent.basedir}/build/extends</outputDirectory>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/AbstractCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/AbstractCDCBuilder.java
new file mode 100644
index 0000000..21d633f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/AbstractCDCBuilder.java
@@ -0,0 +1,94 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.constant.FlinkParamConstant;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * AbstractCDCBuilder
+ *
+ * @author zrx
+ * @since 2022/11/04
+ **/
+public abstract class AbstractCDCBuilder implements CDCBuilder {
+
+    protected FlinkCDCConfig config;
+
+    public AbstractCDCBuilder() {
+    }
+
+    public AbstractCDCBuilder(FlinkCDCConfig config) {
+        this.config = config;
+    }
+
+    public FlinkCDCConfig getConfig() {
+        return config;
+    }
+
+    public void setConfig(FlinkCDCConfig config) {
+        this.config = config;
+    }
+
+    @Override
+    public List<String> getSchemaList() {
+        List<String> schemaList = new ArrayList<>();
+        String schema = getSchema();
+        if (Asserts.isNotNullString(schema)) {
+            String[] schemas = schema.split(FlinkParamConstant.SPLIT);
+            Collections.addAll(schemaList, schemas);
+        }
+        List<String> tableList = getTableList();
+        for (String tableName : tableList) {
+            tableName = tableName.trim();
+            if (Asserts.isNotNullString(tableName) && tableName.contains(".")) {
+                // table names arrive as regex patterns with escaped dots (e.g. db\.table),
+                // so the split here is on the literal backslash-dot sequence
+                String[] names = tableName.split("\\\\.");
+                if (!schemaList.contains(names[0])) {
+                    schemaList.add(names[0]);
+                }
+            }
+        }
+        return schemaList;
+    }
+
+    @Override
+    public List<String> getTableList() {
+        List<String> tableList = new ArrayList<>();
+        String table = config.getTable();
+        if (Asserts.isNullString(table)) {
+            return tableList;
+        }
+        String[] tables = table.split(FlinkParamConstant.SPLIT);
+        Collections.addAll(tableList, tables);
+        return tableList;
+    }
+
+    @Override
+    public String getSchemaFieldName() {
+        return "schema";
+    }
+
+    public abstract String getSchema();
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/AbstractSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/AbstractSinkBuilder.java
new file mode 100644
index 0000000..684a675
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/AbstractSinkBuilder.java
@@ -0,0 +1,369 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.client.cdc; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.JSONUtil; +import org.apache.flink.api.common.functions.FilterFunction; +import org.apache.flink.api.common.functions.FlatMapFunction; +import org.apache.flink.api.common.functions.MapFunction; +import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.table.data.DecimalData; +import org.apache.flink.table.data.GenericRowData; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.StringData; +import org.apache.flink.table.data.TimestampData; +import org.apache.flink.table.operations.ModifyOperation; +import org.apache.flink.table.types.logical.BigIntType; +import org.apache.flink.table.types.logical.BooleanType; +import org.apache.flink.table.types.logical.DateType; +import org.apache.flink.table.types.logical.DecimalType; +import org.apache.flink.table.types.logical.DoubleType; +import org.apache.flink.table.types.logical.FloatType; +import org.apache.flink.table.types.logical.IntType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.SmallIntType; +import org.apache.flink.table.types.logical.TimestampType; +import org.apache.flink.table.types.logical.TinyIntType; +import org.apache.flink.table.types.logical.VarBinaryType; +import org.apache.flink.table.types.logical.VarCharType; +import org.apache.flink.types.RowKind; +import org.apache.flink.util.Collector; +import org.apache.flink.util.OutputTag; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.math.BigDecimal; +import java.sql.Timestamp; +import java.time.Instant; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +/** + * AbstractCDCBuilder + * + * @author wenmo + * @since 2022/11/04 + **/ +public abstract class AbstractSinkBuilder implements SinkBuilder { + + protected static final Logger logger = LoggerFactory.getLogger(AbstractSinkBuilder.class); + + protected FlinkCDCConfig config; + protected List modifyOperations = new ArrayList(); + private ZoneId sinkTimeZone = ZoneId.of("UTC"); + + public AbstractSinkBuilder() { + } + + public AbstractSinkBuilder(FlinkCDCConfig config) { + this.config = config; + } + + public FlinkCDCConfig getConfig() { + return config; + } + + public void setConfig(FlinkCDCConfig config) { + this.config = config; + } + + protected Properties getProperties() { + Properties properties = new Properties(); + Map sink = config.getSink(); + for (Map.Entry entry : sink.entrySet()) { + if (Asserts.isNotNullString(entry.getKey()) && entry.getKey().startsWith("properties") + && Asserts.isNotNullString(entry.getValue())) { + properties.setProperty(entry.getKey().replace("properties.", ""), entry.getValue()); + } + } + return properties; + } + + protected SingleOutputStreamOperator 
deserialize(DataStreamSource dataStreamSource) { + return dataStreamSource.map(new MapFunction() { + + @Override + public Map map(String value) throws Exception { + ObjectMapper objectMapper = new ObjectMapper(); + return objectMapper.readValue(value, Map.class); + } + }); + } + + protected SingleOutputStreamOperator shunt( + SingleOutputStreamOperator mapOperator, + Table table, + String schemaFieldName) { + final String tableName = table.getName(); + final String schemaName = table.getSchema(); + return mapOperator.filter(new FilterFunction() { + + @Override + public boolean filter(Map value) throws Exception { + LinkedHashMap source = (LinkedHashMap) value.get("source"); + return tableName.equals(source.get("table").toString()) + && schemaName.equals(source.get(schemaFieldName).toString()); + } + }); + } + + protected DataStream shunt( + SingleOutputStreamOperator processOperator, + Table table, + OutputTag tag) { + + return processOperator.getSideOutput(tag); + } + + protected DataStream buildRowData( + SingleOutputStreamOperator filterOperator, + List columnNameList, + List columnTypeList, + String schemaTableName) { + return filterOperator + .flatMap(new FlatMapFunction() { + + @Override + public void flatMap(Map value, Collector out) throws Exception { + try { + switch (value.get("op").toString()) { + case "r": + case "c": + GenericRowData igenericRowData = new GenericRowData(columnNameList.size()); + igenericRowData.setRowKind(RowKind.INSERT); + Map idata = (Map) value.get("after"); + for (int i = 0; i < columnNameList.size(); i++) { + igenericRowData.setField(i, + convertValue(idata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(igenericRowData); + break; + case "d": + GenericRowData dgenericRowData = new GenericRowData(columnNameList.size()); + dgenericRowData.setRowKind(RowKind.DELETE); + Map ddata = (Map) value.get("before"); + for (int i = 0; i < columnNameList.size(); i++) { + dgenericRowData.setField(i, + convertValue(ddata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(dgenericRowData); + break; + case "u": + GenericRowData ubgenericRowData = new GenericRowData(columnNameList.size()); + ubgenericRowData.setRowKind(RowKind.UPDATE_BEFORE); + Map ubdata = (Map) value.get("before"); + for (int i = 0; i < columnNameList.size(); i++) { + ubgenericRowData.setField(i, + convertValue(ubdata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(ubgenericRowData); + GenericRowData uagenericRowData = new GenericRowData(columnNameList.size()); + uagenericRowData.setRowKind(RowKind.UPDATE_AFTER); + Map uadata = (Map) value.get("after"); + for (int i = 0; i < columnNameList.size(); i++) { + uagenericRowData.setField(i, + convertValue(uadata.get(columnNameList.get(i)), columnTypeList.get(i))); + } + out.collect(uagenericRowData); + break; + default: + } + } catch (Exception e) { + logger.error("SchameTable: {} - Row: {} - Exception:", schemaTableName, + JSONUtil.toJsonString(value), e); + throw e; + } + } + }); + } + + public abstract void addSink( + StreamExecutionEnvironment env, + DataStream rowDataDataStream, + Table table, + List columnNameList, + List columnTypeList); + + @Override + public DataStreamSource build( + CDCBuilder cdcBuilder, + StreamExecutionEnvironment env, + CustomTableEnvironment customTableEnvironment, + DataStreamSource dataStreamSource) { + + final List schemaList = config.getSchemaList(); + final String schemaFieldName = config.getSchemaFieldName(); + + if (Asserts.isNotNullCollection(schemaList)) { 
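+            // Deserialize the raw CDC JSON once, then fan out per (schema, table):
+            // each table gets its own filtered stream, RowData conversion and sink.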
+ SingleOutputStreamOperator mapOperator = deserialize(dataStreamSource); + for (Schema schema : schemaList) { + for (Table table : schema.getTables()) { + SingleOutputStreamOperator filterOperator = shunt(mapOperator, table, schemaFieldName); + + List columnNameList = new ArrayList<>(); + List columnTypeList = new ArrayList<>(); + + buildColumn(columnNameList, columnTypeList, table.getColumns()); + + DataStream rowDataDataStream = buildRowData(filterOperator, columnNameList, columnTypeList, + table.getSchemaTableName()); + + addSink(env, rowDataDataStream, table, columnNameList, columnTypeList); + } + } + } + return dataStreamSource; + } + + protected void buildColumn(List columnNameList, List columnTypeList, List columns) { + for (Column column : columns) { + columnNameList.add(column.getName()); + columnTypeList.add(getLogicalType(column)); + } + } + + public LogicalType getLogicalType(Column column) { + switch (column.getJavaType()) { + case STRING: + return new VarCharType(); + case BOOLEAN: + case JAVA_LANG_BOOLEAN: + return new BooleanType(); + case BYTE: + case JAVA_LANG_BYTE: + return new TinyIntType(); + case SHORT: + case JAVA_LANG_SHORT: + return new SmallIntType(); + case LONG: + case JAVA_LANG_LONG: + return new BigIntType(); + case FLOAT: + case JAVA_LANG_FLOAT: + return new FloatType(); + case DOUBLE: + case JAVA_LANG_DOUBLE: + return new DoubleType(); + case DECIMAL: + if (column.getPrecision() == null || column.getPrecision() == 0) { + return new DecimalType(38, column.getScale()); + } else { + return new DecimalType(column.getPrecision(), column.getScale()); + } + case INT: + case INTEGER: + return new IntType(); + case DATE: + case LOCALDATE: + return new DateType(); + case LOCALDATETIME: + case TIMESTAMP: + return new TimestampType(); + case BYTES: + return new VarBinaryType(Integer.MAX_VALUE); + default: + return new VarCharType(); + } + } + + protected Object convertValue(Object value, LogicalType logicalType) { + if (value == null) { + return null; + } + if (logicalType instanceof VarCharType) { + return StringData.fromString((String) value); + } else if (logicalType instanceof DateType) { + return StringData.fromString( + Instant.ofEpochMilli((long) value).atZone(ZoneId.systemDefault()).toLocalDate().toString()); + } else if (logicalType instanceof TimestampType) { + return TimestampData.fromTimestamp(Timestamp.from(Instant.ofEpochMilli((long) value))); + } else if (logicalType instanceof DecimalType) { + final DecimalType decimalType = ((DecimalType) logicalType); + final int precision = decimalType.getPrecision(); + final int scale = decimalType.getScale(); + return DecimalData.fromBigDecimal(new BigDecimal((String) value), precision, scale); + } else { + return value; + } + } + + @Override + public String getSinkSchemaName(Table table) { + String schemaName = table.getSchema(); + if (config.getSink().containsKey("sink.db")) { + schemaName = config.getSink().get("sink.db"); + } + return schemaName; + } + + @Override + public String getSinkTableName(Table table) { + String tableName = table.getName(); + if (config.getSink().containsKey("table.prefix.schema")) { + if (Boolean.valueOf(config.getSink().get("table.prefix.schema"))) { + tableName = table.getSchema() + "_" + tableName; + } + } + if (config.getSink().containsKey("table.prefix")) { + tableName = config.getSink().get("table.prefix") + tableName; + } + if (config.getSink().containsKey("table.suffix")) { + tableName = tableName + config.getSink().get("table.suffix"); + } + if 
(config.getSink().containsKey("table.lower")) {
+            if (Boolean.valueOf(config.getSink().get("table.lower"))) {
+                tableName = tableName.toLowerCase();
+            }
+        }
+        if (config.getSink().containsKey("table.upper")) {
+            if (Boolean.valueOf(config.getSink().get("table.upper"))) {
+                tableName = tableName.toUpperCase();
+            }
+        }
+        return tableName;
+    }
+
+    protected List<String> getPKList(Table table) {
+        List<String> pks = new ArrayList<>();
+        if (Asserts.isNullCollection(table.getColumns())) {
+            return pks;
+        }
+        for (Column column : table.getColumns()) {
+            if (column.isKeyFlag()) {
+                pks.add(column.getName());
+            }
+        }
+        return pks;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/CDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/CDCBuilder.java
new file mode 100644
index 0000000..bffdcae
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/CDCBuilder.java
@@ -0,0 +1,55 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.common.exception.SplitTableException;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * CDCBuilder
+ *
+ * @author zrx
+ * @since 2022/11/04
+ **/
+public interface CDCBuilder {
+
+    String getHandle();
+
+    CDCBuilder create(FlinkCDCConfig config);
+
+    DataStreamSource<String> build(StreamExecutionEnvironment env);
+
+    List<String> getSchemaList();
+
+    List<String> getTableList();
+
+    Map<String, Map<String, String>> parseMetaDataConfigs();
+
+    String getSchemaFieldName();
+
+    default Map<String, String> parseMetaDataConfig() {
+        throw new SplitTableException("Split database/table is not implemented for this data source");
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/CDCBuilderFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/CDCBuilderFactory.java
new file mode 100644
index 0000000..409c584
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/CDCBuilderFactory.java
@@ -0,0 +1,53 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.exception.FlinkClientException;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.mysql.MysqlCDCBuilder;
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+
+/**
+ * CDCBuilderFactory
+ *
+ * @author zrx
+ * @since 2022/11/04
+ **/
+public class CDCBuilderFactory {
+
+    private static final Map<String, Supplier<CDCBuilder>> CDC_BUILDER_MAP = new HashMap<String, Supplier<CDCBuilder>>() {
+        {
+            put(MysqlCDCBuilder.KEY_WORD, MysqlCDCBuilder::new);
+        }
+    };
+
+    public static CDCBuilder buildCDCBuilder(FlinkCDCConfig config) {
+        if (Asserts.isNull(config) || Asserts.isNullString(config.getType())) {
+            throw new FlinkClientException("Please specify the CDC Source type.");
+        }
+        return CDC_BUILDER_MAP.getOrDefault(config.getType(), () -> {
+            throw new FlinkClientException("No matching CDC Source type for [" + config.getType() + "].");
+        }).get().create(config);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/SinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/SinkBuilder.java
new file mode 100644
index 0000000..4143f00
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/SinkBuilder.java
@@ -0,0 +1,46 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
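CDCBuilderFactory above dispatches on the connector keyword through a map of suppliers; only `mysql-cdc` is registered in this commit. A minimal standalone sketch of the same supplier-map dispatch, with stand-in types that are illustrative only and not part of the project:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical stand-in for the CDCBuilder interface.
interface Builder {
    String handle();
}

public class SupplierMapDispatch {

    private static final Map<String, Supplier<Builder>> REGISTRY = new HashMap<>();

    static {
        // Mirrors put(MysqlCDCBuilder.KEY_WORD, MysqlCDCBuilder::new).
        REGISTRY.put("mysql-cdc", () -> () -> "mysql-cdc");
    }

    static Builder build(String type) {
        // getOrDefault keeps lookup and the "unknown type" error in one
        // expression, like CDCBuilderFactory.buildCDCBuilder does.
        return REGISTRY.getOrDefault(type, () -> {
            throw new IllegalArgumentException("No matching CDC Source type: " + type);
        }).get();
    }

    public static void main(String[] args) {
        System.out.println(build("mysql-cdc").handle()); // prints mysql-cdc
    }
}
```

Registering a new source type is then a one-line `put` of its keyword and constructor reference.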
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.executor.CustomTableEnvironment;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.common.model.Table;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+/**
+ * SinkBuilder
+ *
+ * @author zrx
+ * @since 2022/11/04
+ **/
+public interface SinkBuilder {
+
+    String getHandle();
+
+    SinkBuilder create(FlinkCDCConfig config);
+
+    DataStreamSource<String> build(CDCBuilder cdcBuilder, StreamExecutionEnvironment env,
+                                   CustomTableEnvironment customTableEnvironment, DataStreamSource<String> dataStreamSource);
+
+    String getSinkSchemaName(Table table);
+
+    String getSinkTableName(Table table);
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/SinkBuilderFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/SinkBuilderFactory.java
new file mode 100644
index 0000000..dcd886a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/SinkBuilderFactory.java
@@ -0,0 +1,52 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
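The SinkBuilder contract above pairs a handle keyword with a create/build lifecycle. A skeleton of a custom implementation, assuming the project's interfaces are on the classpath; the class name, keyword, and body are illustrative only, not part of the commit:

```java
package net.srt.flink.client.cdc;

import net.srt.flink.client.base.executor.CustomTableEnvironment;
import net.srt.flink.client.base.model.FlinkCDCConfig;
import net.srt.flink.common.model.Table;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Hypothetical sketch: a pass-through sink builder registered under "noop".
public class NoOpSinkBuilder implements SinkBuilder {

    public static final String KEY_WORD = "noop";

    private FlinkCDCConfig config;

    @Override
    public String getHandle() {
        return KEY_WORD;
    }

    @Override
    public SinkBuilder create(FlinkCDCConfig config) {
        NoOpSinkBuilder builder = new NoOpSinkBuilder();
        builder.config = config;
        return builder;
    }

    @Override
    public DataStreamSource<String> build(CDCBuilder cdcBuilder, StreamExecutionEnvironment env,
                                          CustomTableEnvironment customTableEnvironment,
                                          DataStreamSource<String> dataStreamSource) {
        // Forward the raw CDC stream unchanged; a real builder fans out per table.
        return dataStreamSource;
    }

    @Override
    public String getSinkSchemaName(Table table) {
        return table.getSchema();
    }

    @Override
    public String getSinkTableName(Table table) {
        return table.getName();
    }
}
```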
+ *
+ */
+
+package net.srt.flink.client.cdc;
+
+import net.srt.flink.client.base.exception.FlinkClientException;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.sql.SQLSinkBuilder;
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+
+/**
+ * SinkBuilderFactory
+ *
+ * @author zrx
+ * @since 2022/11/04
+ **/
+public class SinkBuilderFactory {
+
+    private static final Map<String, Supplier<SinkBuilder>> SINK_BUILDER_MAP = new HashMap<String, Supplier<SinkBuilder>>() {
+        {
+            put(SQLSinkBuilder.KEY_WORD, SQLSinkBuilder::new);
+        }
+    };
+
+    public static SinkBuilder buildSinkBuilder(FlinkCDCConfig config) {
+        if (Asserts.isNull(config) || Asserts.isNullString(config.getSink().get("connector"))) {
+            throw new FlinkClientException("Please specify the Sink connector.");
+        }
+        return SINK_BUILDER_MAP.getOrDefault(config.getSink().get("connector"), SQLSinkBuilder::new).get()
+                .create(config);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/mysql/MysqlCDCBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/mysql/MysqlCDCBuilder.java
new file mode 100644
index 0000000..ad710cc
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/mysql/MysqlCDCBuilder.java
@@ -0,0 +1,199 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.cdc.mysql;
+
+import com.ververica.cdc.connectors.mysql.source.MySqlSource;
+import com.ververica.cdc.connectors.mysql.source.MySqlSourceBuilder;
+import com.ververica.cdc.connectors.mysql.table.StartupOptions;
+import net.srt.flink.client.base.constant.ClientConstant;
+import net.srt.flink.client.base.constant.FlinkParamConstant;
+import net.srt.flink.client.base.model.FlinkCDCConfig;
+import net.srt.flink.client.cdc.AbstractCDCBuilder;
+import net.srt.flink.client.cdc.CDCBuilder;
+import net.srt.flink.common.assertion.Asserts;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * MysqlCDCBuilder
+ *
+ * @author zrx
+ * @since 2022/4/12 21:29
+ **/
+public class MysqlCDCBuilder extends AbstractCDCBuilder {
+
+    public static final String KEY_WORD = "mysql-cdc";
+    private static final String METADATA_TYPE = "MySql";
+
+    public MysqlCDCBuilder() {
+    }
+
+    public MysqlCDCBuilder(FlinkCDCConfig config) {
+        super(config);
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public CDCBuilder create(FlinkCDCConfig config) {
+        return new MysqlCDCBuilder(config);
+    }
+
+    @Override
+    public DataStreamSource<String> build(StreamExecutionEnvironment env) {
+        String database = config.getDatabase();
+        String serverId = config.getSource().get("server-id");
+        String serverTimeZone = config.getSource().get("server-time-zone");
+        String fetchSize = config.getSource().get("scan.snapshot.fetch.size");
+        String connectTimeout = config.getSource().get("connect.timeout");
+        String connectMaxRetries = config.getSource().get("connect.max-retries");
+        String connectionPoolSize = config.getSource().get("connection.pool.size");
+        String heartbeatInterval = config.getSource().get("heartbeat.interval");
+
+        Properties debeziumProperties = new Properties();
+        // Set default values for some type conversions
+        debeziumProperties.setProperty("bigint.unsigned.handling.mode", "long");
+        debeziumProperties.setProperty("decimal.handling.mode", "string");
+
+        for (Map.Entry<String, String> entry : config.getDebezium().entrySet()) {
+            if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) {
+                debeziumProperties.setProperty(entry.getKey(), entry.getValue());
+            }
+        }
+
+        // Inject additional JDBC connection properties
+        Properties jdbcProperties = new Properties();
+        for (Map.Entry<String, String> entry : config.getJdbc().entrySet()) {
+            if (Asserts.isNotNullString(entry.getKey()) && Asserts.isNotNullString(entry.getValue())) {
+                jdbcProperties.setProperty(entry.getKey(), entry.getValue());
+            }
+        }
+
+        MySqlSourceBuilder<String> sourceBuilder = MySqlSource.<String>builder()
+                .hostname(config.getHostname())
+                .port(config.getPort())
+                .username(config.getUsername())
+                .password(config.getPassword());
+
+        if (Asserts.isNotNullString(database)) {
+            String[] databases = database.split(FlinkParamConstant.SPLIT);
+            sourceBuilder.databaseList(databases);
+        } else {
+            sourceBuilder.databaseList(new String[0]);
+        }
+
+        List<String> schemaTableNameList = config.getSchemaTableNameList();
+        if (Asserts.isNotNullCollection(schemaTableNameList)) {
+            sourceBuilder.tableList(schemaTableNameList.toArray(new String[schemaTableNameList.size()]));
+        } else {
+            sourceBuilder.tableList(new String[0]);
+        }
+
+        sourceBuilder.deserializer(new MysqlJsonDebeziumDeserializationSchema());
+        sourceBuilder.debeziumProperties(debeziumProperties);
+        sourceBuilder.jdbcProperties(jdbcProperties);
+
+        if (Asserts.isNotNullString(config.getStartupMode())) {
+            switch (config.getStartupMode().toLowerCase()) {
+                case "initial":
+                    sourceBuilder.startupOptions(StartupOptions.initial());
+                    break;
+                case "latest-offset":
+                    sourceBuilder.startupOptions(StartupOptions.latest());
+                    break;
+                default:
+            }
+        } else {
+            sourceBuilder.startupOptions(StartupOptions.latest());
+        }
+
+        if (Asserts.isNotNullString(serverId)) {
+            sourceBuilder.serverId(serverId);
+        }
+
+        if (Asserts.isNotNullString(serverTimeZone)) {
+            sourceBuilder.serverTimeZone(serverTimeZone);
+        }
+
+        if (Asserts.isNotNullString(fetchSize)) {
+            sourceBuilder.fetchSize(Integer.valueOf(fetchSize));
+        }
+
+        if (Asserts.isNotNullString(connectTimeout)) {
+            sourceBuilder.connectTimeout(Duration.ofMillis(Long.valueOf(connectTimeout)));
+        }
+
+        if (Asserts.isNotNullString(connectMaxRetries)) {
+            sourceBuilder.connectMaxRetries(Integer.valueOf(connectMaxRetries));
+        }
+
+        if (Asserts.isNotNullString(connectionPoolSize)) {
+            sourceBuilder.connectionPoolSize(Integer.valueOf(connectionPoolSize));
+        }
+
+        if (Asserts.isNotNullString(heartbeatInterval)) {
+            sourceBuilder.heartbeatInterval(Duration.ofMillis(Long.valueOf(heartbeatInterval)));
+        }
+
+        return env.fromSource(sourceBuilder.build(), WatermarkStrategy.noWatermarks(), "MySQL CDC Source");
+    }
+
+    @Override
+    public Map<String, Map<String, String>> parseMetaDataConfigs() {
+        Map<String, Map<String, String>> allConfigMap = new HashMap<>();
+        List<String> schemaList = getSchemaList();
+        for (String schema : schemaList) {
+            Map<String, String> configMap = new HashMap<>();
+            configMap.put(ClientConstant.METADATA_TYPE, METADATA_TYPE);
+            StringBuilder sb = new StringBuilder("jdbc:mysql://");
+            sb.append(config.getHostname());
+            sb.append(":");
+            sb.append(config.getPort());
+            sb.append("/");
+            sb.append(schema);
+            configMap.put(ClientConstant.METADATA_NAME, sb.toString());
+            configMap.put(ClientConstant.METADATA_URL, sb.toString());
+            configMap.put(ClientConstant.METADATA_USERNAME, config.getUsername());
+            configMap.put(ClientConstant.METADATA_PASSWORD, config.getPassword());
+            allConfigMap.put(schema, configMap);
+        }
+        return allConfigMap;
+    }
+
+    @Override
+    public String getSchemaFieldName() {
+        return "db";
+    }
+
+    @Override
+    public String getSchema() {
+        return config.getDatabase();
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/mysql/MysqlJsonDebeziumDeserializationSchema.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/mysql/MysqlJsonDebeziumDeserializationSchema.java
new file mode 100644
index 0000000..2843710
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/mysql/MysqlJsonDebeziumDeserializationSchema.java
@@ -0,0 +1,84 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
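MysqlCDCBuilder.build above applies each optional source setting only when the key is present, and falls back to latest-offset when no startup mode is given. A standalone sketch of that option-plumbing pattern (the config map and key names here are illustrative):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Consumer;

public class OptionalSettings {

    // Apply a setting only when the key is present and non-blank,
    // mirroring the Asserts.isNotNullString guards in MysqlCDCBuilder.build.
    static void ifPresent(Map<String, String> source, String key, Consumer<String> apply) {
        Optional.ofNullable(source.get(key))
                .filter(v -> !v.trim().isEmpty())
                .ifPresent(apply);
    }

    public static void main(String[] args) {
        Map<String, String> source = new HashMap<>();
        source.put("connect.timeout", "30000");

        ifPresent(source, "connect.timeout",
                v -> System.out.println("connectTimeout=" + Duration.ofMillis(Long.parseLong(v))));
        ifPresent(source, "server-id", v -> System.out.println("never printed"));

        // Startup mode defaults to latest-offset when unset, as in the builder.
        String mode = source.getOrDefault("startup.mode", "latest-offset");
        System.out.println("startup=" + mode);
    }
}
```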
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc.mysql; + +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.json.JsonConverter; +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.source.SourceRecord; +import com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.storage.ConverterType; +import com.ververica.cdc.debezium.DebeziumDeserializationSchema; +import org.apache.flink.api.common.typeinfo.BasicTypeInfo; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.util.Collector; + +import java.nio.charset.StandardCharsets; +import java.util.HashMap; +import java.util.Map; + +/** + * @version 1.0 + * @className: com.dlink.cdc.mysql.MysqlJsonDebeziumDeserializationSchema + * @Description: + * @author: jack zhong + */ +public class MysqlJsonDebeziumDeserializationSchema implements DebeziumDeserializationSchema { + private static final long serialVersionUID = 1L; + private transient JsonConverter jsonConverter; + private final Boolean includeSchema; + private Map customConverterConfigs; + + public MysqlJsonDebeziumDeserializationSchema() { + this(false); + } + + public MysqlJsonDebeziumDeserializationSchema(Boolean includeSchema) { + this.includeSchema = includeSchema; + } + + public MysqlJsonDebeziumDeserializationSchema(Boolean includeSchema, Map customConverterConfigs) { + this.includeSchema = includeSchema; + this.customConverterConfigs = customConverterConfigs; + } + + @Override + public void deserialize(SourceRecord record, Collector out) throws Exception { + if (this.jsonConverter == null) { + this.initializeJsonConverter(); + } + byte[] bytes = this.jsonConverter.fromConnectData(record.topic(), record.valueSchema(), record.value()); + out.collect(new String(bytes, StandardCharsets.UTF_8)); + } + + private void initializeJsonConverter() { + this.jsonConverter = new JsonConverter(); + HashMap configs = new HashMap(2); + configs.put("converter.type", ConverterType.VALUE.getName()); + configs.put("schemas.enable", this.includeSchema); + if (this.customConverterConfigs != null) { + configs.putAll(this.customConverterConfigs); + } + + this.jsonConverter.configure(configs); + } + + @Override + public TypeInformation getProducedType() { + return BasicTypeInfo.STRING_TYPE_INFO; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/sql/SQLSinkBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/sql/SQLSinkBuilder.java new file mode 100644 index 0000000..b4d2702 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/cdc/sql/SQLSinkBuilder.java @@ -0,0 +1,304 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
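MysqlJsonDebeziumDeserializationSchema above emits each change record as a JSON string; downstream code (SQLSinkBuilder.buildRow, further below) routes on the envelope's `op` field and reads `before`/`after` images. A sketch of that envelope parsed with Jackson, assuming jackson-databind on the classpath; the sample payload is illustrative:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class EnvelopeDemo {
    public static void main(String[] args) throws Exception {
        // Minimal Debezium-style envelope: op is c/r (insert), u (update), d (delete).
        String json = "{\"op\":\"u\","
                + "\"before\":{\"id\":1,\"name\":\"old\"},"
                + "\"after\":{\"id\":1,\"name\":\"new\"},"
                + "\"source\":{\"db\":\"demo\",\"table\":\"t_user\"}}";

        Map<?, ?> envelope = new ObjectMapper().readValue(json, Map.class);
        Map<?, ?> source = (Map<?, ?>) envelope.get("source");

        // The schema field name for MySQL is "db" (see getSchemaFieldName above).
        System.out.println(source.get("db") + "." + source.get("table")
                + " op=" + envelope.get("op"));
    }
}
```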
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.cdc.sql; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.base.utils.FlinkBaseUtil; +import net.srt.flink.client.cdc.AbstractSinkBuilder; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.JSONUtil; +import net.srt.flink.common.utils.LogUtil; +import org.apache.commons.lang3.StringUtils; +import org.apache.flink.api.common.functions.FlatMapFunction; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.api.dag.Transformation; +import org.apache.flink.api.java.typeutils.RowTypeInfo; +import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.functions.ProcessFunction; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.operations.ModifyOperation; +import org.apache.flink.table.operations.Operation; +import org.apache.flink.table.types.logical.BigIntType; +import org.apache.flink.table.types.logical.DateType; +import org.apache.flink.table.types.logical.DecimalType; +import org.apache.flink.table.types.logical.LogicalType; +import org.apache.flink.table.types.logical.TimestampType; +import org.apache.flink.table.types.logical.VarBinaryType; +import org.apache.flink.table.types.utils.TypeConversions; +import org.apache.flink.types.Row; +import org.apache.flink.types.RowKind; +import org.apache.flink.util.Collector; +import org.apache.flink.util.OutputTag; + +import javax.xml.bind.DatatypeConverter; +import java.io.Serializable; +import java.math.BigDecimal; +import java.time.Instant; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +/** + * SQLSinkBuilder + * + * @author zrx + * @since 2022/4/25 23:02 + */ +public class SQLSinkBuilder extends AbstractSinkBuilder implements Serializable { + + public static final String KEY_WORD = "sql"; + private static final long serialVersionUID = -3699685106324048226L; + private ZoneId sinkTimeZone = ZoneId.of("UTC"); + + public SQLSinkBuilder() { + } + + private SQLSinkBuilder(FlinkCDCConfig config) { + super(config); + } + + @Override + public void addSink(StreamExecutionEnvironment env, DataStream rowDataDataStream, Table table, List columnNameList, List columnTypeList) { + + } + + private DataStream buildRow( + 
+            DataStream<Map> filterOperator,
+            List<String> columnNameList,
+            List<LogicalType> columnTypeList,
+            String schemaTableName) {
+        final String[] columnNames = columnNameList.toArray(new String[columnNameList.size()]);
+        final LogicalType[] columnTypes = columnTypeList.toArray(new LogicalType[columnTypeList.size()]);
+
+        TypeInformation<?>[] typeInformations = TypeConversions.fromDataTypeToLegacyInfo(TypeConversions.fromLogicalToDataType(columnTypes));
+        RowTypeInfo rowTypeInfo = new RowTypeInfo(typeInformations, columnNames);
+
+        return filterOperator
+                .flatMap(new FlatMapFunction<Map, Row>() {
+                    @Override
+                    public void flatMap(Map value, Collector<Row> out) throws Exception {
+                        try {
+                            switch (value.get("op").toString()) {
+                                case "r":
+                                case "c":
+                                    Row irow = Row.withPositions(RowKind.INSERT, columnNameList.size());
+                                    Map idata = (Map) value.get("after");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        irow.setField(i, convertValue(idata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(irow);
+                                    break;
+                                case "d":
+                                    Row drow = Row.withPositions(RowKind.DELETE, columnNameList.size());
+                                    Map ddata = (Map) value.get("before");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        drow.setField(i, convertValue(ddata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(drow);
+                                    break;
+                                case "u":
+                                    Row ubrow = Row.withPositions(RowKind.UPDATE_BEFORE, columnNameList.size());
+                                    Map ubdata = (Map) value.get("before");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        ubrow.setField(i, convertValue(ubdata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(ubrow);
+                                    Row uarow = Row.withPositions(RowKind.UPDATE_AFTER, columnNameList.size());
+                                    Map uadata = (Map) value.get("after");
+                                    for (int i = 0; i < columnNameList.size(); i++) {
+                                        uarow.setField(i, convertValue(uadata.get(columnNameList.get(i)), columnTypeList.get(i)));
+                                    }
+                                    out.collect(uarow);
+                                    break;
+                                default:
+                            }
+                        } catch (Exception e) {
+                            logger.error("SchemaTable: {} - Row: {} - Exception:", schemaTableName, JSONUtil.toJsonString(value), e);
+                            throw e;
+                        }
+                    }
+                }, rowTypeInfo);
+    }
+
+    private void addTableSink(
+            CustomTableEnvironment customTableEnvironment,
+            DataStream<Row> rowDataDataStream,
+            Table table,
+            List<String> columnNameList) {
+
+        String sinkSchemaName = getSinkSchemaName(table);
+        String sinkTableName = getSinkTableName(table);
+        String pkList = StringUtils.join(getPKList(table), ".");
+        String viewName = "VIEW_" + table.getSchemaTableNameWithUnderline();
+        customTableEnvironment.createTemporaryView(viewName, rowDataDataStream, StringUtils.join(columnNameList, ","));
+        logger.info("Create " + viewName + " temporaryView successful...");
+        String flinkDDL = FlinkBaseUtil.getFlinkDDL(table, sinkTableName, config, sinkSchemaName, sinkTableName, pkList);
+        logger.info(flinkDDL);
+        customTableEnvironment.executeSql(flinkDDL);
+        logger.info("Create " + sinkTableName + " FlinkSQL DDL successful...");
+        String cdcSqlInsert = FlinkBaseUtil.getCDCSqlInsert(table, sinkTableName, viewName, config);
+        logger.info(cdcSqlInsert);
+        List<Operation> operations = customTableEnvironment.getParser().parse(cdcSqlInsert);
+        logger.info("Create " + sinkTableName + " FlinkSQL insert into successful...");
+        try {
+            if (operations.size() > 0) {
+                Operation operation = operations.get(0);
+                if (operation instanceof ModifyOperation) {
+                    modifyOperations.add((ModifyOperation) operation);
+                }
+            }
+        } catch (Exception e) {
+            logger.error("Translate to plan occurred exception: {}", e);
+            throw e;
+        }
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public SinkBuilder create(FlinkCDCConfig config) {
+        return new SQLSinkBuilder(config);
+    }
+
+    @Override
+    public DataStreamSource<String> build(
+            CDCBuilder cdcBuilder,
+            StreamExecutionEnvironment env,
+            CustomTableEnvironment customTableEnvironment,
+            DataStreamSource<String> dataStreamSource) {
+        final String timeZone = config.getSink().get("timezone");
+        config.getSink().remove("timezone");
+        if (Asserts.isNotNullString(timeZone)) {
+            sinkTimeZone = ZoneId.of(timeZone);
+        }
+        final List<Schema> schemaList = config.getSchemaList();
+        if (Asserts.isNotNullCollection(schemaList)) {
+
+            logger.info("Build deserialize successful...");
+            Map<Table, OutputTag<Map>> tagMap = new HashMap<>();
+            Map<String, Table> tableMap = new HashMap<>();
+            for (Schema schema : schemaList) {
+                for (Table table : schema.getTables()) {
+                    String sinkTableName = getSinkTableName(table);
+                    OutputTag<Map> outputTag = new OutputTag<Map>(sinkTableName) {
+                    };
+                    tagMap.put(table, outputTag);
+                    tableMap.put(table.getSchemaTableName(), table);
+
+                }
+            }
+            final String schemaFieldName = config.getSchemaFieldName();
+            ObjectMapper objectMapper = new ObjectMapper();
+            SingleOutputStreamOperator<Map> mapOperator = dataStreamSource.map(x -> objectMapper.readValue(x, Map.class)).returns(Map.class);
+
+            SingleOutputStreamOperator<Map> processOperator = mapOperator.process(new ProcessFunction<Map, Map>() {
+                @Override
+                public void processElement(Map map, Context ctx, Collector<Map> out) throws Exception {
+                    LinkedHashMap source = (LinkedHashMap) map.get("source");
+                    try {
+                        Table table = tableMap.get(source.get(schemaFieldName).toString() + "." + source.get("table").toString());
+                        OutputTag<Map> outputTag = tagMap.get(table);
+                        ctx.output(outputTag, map);
+                    } catch (Exception e) {
+                        out.collect(map);
+                    }
+                }
+            });
+            tagMap.forEach((table, tag) -> {
+                final String schemaTableName = table.getSchemaTableName();
+                try {
+                    DataStream<Map> filterOperator = shunt(processOperator, table, tag);
+                    logger.info("Build " + schemaTableName + " shunt successful...");
+                    List<String> columnNameList = new ArrayList<>();
+                    List<LogicalType> columnTypeList = new ArrayList<>();
+                    buildColumn(columnNameList, columnTypeList, table.getColumns());
+                    DataStream<Row> rowDataDataStream = buildRow(filterOperator, columnNameList, columnTypeList, schemaTableName).rebalance();
+                    logger.info("Build " + schemaTableName + " flatMap successful...");
+                    logger.info("Start build " + schemaTableName + " sink...");
+                    addTableSink(customTableEnvironment, rowDataDataStream, table, columnNameList);
+                } catch (Exception e) {
+                    logger.error("Build " + schemaTableName + " cdc sync failed...");
+                    logger.error(LogUtil.getError(e));
+                }
+            });
+
+            List<Transformation<?>> trans = customTableEnvironment.getPlanner().translate(modifyOperations);
+            for (Transformation<?> item : trans) {
+                env.addOperator(item);
+            }
+            logger.info("A total of " + trans.size() + " table cdc syncs were built successfully...");
+        }
+        return dataStreamSource;
+    }
+
+    @Override
+    protected Object convertValue(Object value, LogicalType logicalType) {
+        if (value == null) {
+            return null;
+        }
+        if (logicalType instanceof DateType) {
+            if (value instanceof Integer) {
+                return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDate();
+            } else {
+                return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDate();
+            }
+        } else if (logicalType instanceof TimestampType) {
+            if (value instanceof Integer) {
+                return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDateTime();
+            } else if (value instanceof String) {
+                return Instant.parse((String)
value).atZone(sinkTimeZone).toLocalDateTime(); + } else { + return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDateTime(); + } + } else if (logicalType instanceof DecimalType) { + return new BigDecimal((String) value); + } else if (logicalType instanceof BigIntType) { + if (value instanceof Integer) { + return ((Integer) value).longValue(); + } else { + return value; + } + } else if (logicalType instanceof VarBinaryType) { + // VARBINARY AND BINARY is converted to String with encoding base64 in FlinkCDC. + if (value instanceof String) { + return DatatypeConverter.parseBase64Binary((String) value); + } else { + return value; + } + } else { + return value; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/CustomTableEnvironmentImpl.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/CustomTableEnvironmentImpl.java new file mode 100644 index 0000000..0f7b41f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/CustomTableEnvironmentImpl.java @@ -0,0 +1,379 @@ + + + +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
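SQLSinkBuilder.convertValue above interprets epoch-millisecond values in the configured sink time zone, so the same instant can land on different calendar dates depending on the `timezone` sink option. A standalone illustration with pure java.time (the epoch value is illustrative):

```java
import java.time.Instant;
import java.time.ZoneId;

public class SinkZoneDemo {
    public static void main(String[] args) {
        long epochMilli = 1670443200000L; // 2022-12-07T20:00:00Z

        // Same instant, different local dates depending on the sink time zone.
        System.out.println(Instant.ofEpochMilli(epochMilli)
                .atZone(ZoneId.of("UTC")).toLocalDate());           // 2022-12-07
        System.out.println(Instant.ofEpochMilli(epochMilli)
                .atZone(ZoneId.of("Asia/Shanghai")).toLocalDate()); // 2022-12-08
    }
}
```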
+ * + */ + +package net.srt.flink.client.executor; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ObjectNode; +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.LineageRel; +import net.srt.flink.client.utils.FlinkStreamProgramWithoutPhysical; +import net.srt.flink.client.utils.LineageContext; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.result.SqlExplainResult; +import org.apache.flink.api.common.RuntimeExecutionMode; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.api.dag.Transformation; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ExecutionOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.runtime.jobgraph.jsonplan.JsonPlanGenerator; +import org.apache.flink.runtime.rest.messages.JobPlanInfo; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.graph.JSONGenerator; +import org.apache.flink.streaming.api.graph.StreamGraph; +import org.apache.flink.table.api.EnvironmentSettings; +import org.apache.flink.table.api.ExplainDetail; +import org.apache.flink.table.api.TableConfig; +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl; +import org.apache.flink.table.catalog.CatalogManager; +import org.apache.flink.table.catalog.FunctionCatalog; +import org.apache.flink.table.catalog.GenericInMemoryCatalog; +import org.apache.flink.table.connector.ChangelogMode; +import org.apache.flink.table.delegation.Executor; +import org.apache.flink.table.delegation.Planner; +import org.apache.flink.table.expressions.Expression; +import org.apache.flink.table.factories.PlannerFactoryUtil; +import org.apache.flink.table.module.ModuleManager; +import org.apache.flink.table.operations.DataStreamQueryOperation; +import org.apache.flink.table.operations.ExplainOperation; +import org.apache.flink.table.operations.ModifyOperation; +import org.apache.flink.table.operations.Operation; +import org.apache.flink.table.operations.QueryOperation; +import org.apache.flink.table.operations.command.ResetOperation; +import org.apache.flink.table.operations.command.SetOperation; +import org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram; +import org.apache.flink.table.resource.ResourceManager; +import org.apache.flink.table.typeutils.FieldInfoUtils; +import org.apache.flink.util.FlinkUserCodeClassLoaders; +import org.apache.flink.util.MutableURLClassLoader; + +import java.net.URL; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +/** + * CustomTableEnvironmentImpl + * + * @author zrx + * @since 2022/05/08 + **/ +public class CustomTableEnvironmentImpl extends AbstractStreamTableEnvironmentImpl + implements + CustomTableEnvironment { + + private final FlinkChainedProgram flinkChainedProgram; + + public CustomTableEnvironmentImpl( + CatalogManager catalogManager, + ModuleManager moduleManager, + ResourceManager resourceManager, + FunctionCatalog functionCatalog, + TableConfig tableConfig, + StreamExecutionEnvironment executionEnvironment, + Planner 
planner, + Executor executor, + boolean isStreamingMode) { + super( + catalogManager, + moduleManager, + resourceManager, + tableConfig, + executor, + functionCatalog, + planner, + isStreamingMode, + executionEnvironment); + this.flinkChainedProgram = + FlinkStreamProgramWithoutPhysical.buildProgram((Configuration) executionEnvironment.getConfiguration()); + } + + public static CustomTableEnvironmentImpl create(StreamExecutionEnvironment executionEnvironment) { + return create(executionEnvironment, EnvironmentSettings.newInstance().build()); + } + + public static CustomTableEnvironmentImpl createBatch(StreamExecutionEnvironment executionEnvironment) { + Configuration configuration = new Configuration(); + configuration.set(ExecutionOptions.RUNTIME_MODE, RuntimeExecutionMode.BATCH); + TableConfig tableConfig = new TableConfig(); + tableConfig.addConfiguration(configuration); + return create(executionEnvironment, EnvironmentSettings.newInstance().inBatchMode().build()); + } + + public static CustomTableEnvironmentImpl create( + StreamExecutionEnvironment executionEnvironment, + EnvironmentSettings settings) { + final MutableURLClassLoader userClassLoader = + FlinkUserCodeClassLoaders.create( + new URL[0], settings.getUserClassLoader(), settings.getConfiguration()); + final Executor executor = lookupExecutor(userClassLoader, executionEnvironment); + + final TableConfig tableConfig = TableConfig.getDefault(); + tableConfig.setRootConfiguration(executor.getConfiguration()); + tableConfig.addConfiguration(settings.getConfiguration()); + + final ResourceManager resourceManager = + new ResourceManager(settings.getConfiguration(), userClassLoader); + final ModuleManager moduleManager = new ModuleManager(); + + final CatalogManager catalogManager = + CatalogManager.newBuilder() + .classLoader(userClassLoader) + .config(tableConfig) + .defaultCatalog( + settings.getBuiltInCatalogName(), + new GenericInMemoryCatalog( + settings.getBuiltInCatalogName(), + settings.getBuiltInDatabaseName())) + .executionConfig(executionEnvironment.getConfig()) + .build(); + + final FunctionCatalog functionCatalog = + new FunctionCatalog(tableConfig, resourceManager, catalogManager, moduleManager); + + final Planner planner = + PlannerFactoryUtil.createPlanner( + executor, + tableConfig, + userClassLoader, + moduleManager, + catalogManager, + functionCatalog); + + return new CustomTableEnvironmentImpl( + catalogManager, + moduleManager, + resourceManager, + functionCatalog, + tableConfig, + executionEnvironment, + planner, + executor, + settings.isStreamingMode()); + } + + @Override + public ObjectNode getStreamGraph(String statement) { + List operations = super.getParser().parse(statement); + if (operations.size() != 1) { + throw new TableException("Unsupported SQL query! 
explainSql() only accepts a single SQL query."); + } else { + List modifyOperations = new ArrayList<>(); + for (int i = 0; i < operations.size(); i++) { + if (operations.get(i) instanceof ModifyOperation) { + modifyOperations.add((ModifyOperation) operations.get(i)); + } + } + List> trans = super.planner.translate(modifyOperations); + for (Transformation transformation : trans) { + executionEnvironment.addOperator(transformation); + } + StreamGraph streamGraph = executionEnvironment.getStreamGraph(); + if (tableConfig.getConfiguration().containsKey(PipelineOptions.NAME.key())) { + streamGraph.setJobName(tableConfig.getConfiguration().getString(PipelineOptions.NAME)); + } + JSONGenerator jsonGenerator = new JSONGenerator(streamGraph); + String json = jsonGenerator.getJSON(); + ObjectMapper mapper = new ObjectMapper(); + ObjectNode objectNode = mapper.createObjectNode(); + try { + objectNode = (ObjectNode) mapper.readTree(json); + } catch (JsonProcessingException e) { + e.printStackTrace(); + } finally { + return objectNode; + } + } + } + + @Override + public JobPlanInfo getJobPlanInfo(List statements) { + return new JobPlanInfo(JsonPlanGenerator.generatePlan(getJobGraphFromInserts(statements))); + } + + public StreamGraph getStreamGraphFromInserts(List statements) { + List modifyOperations = new ArrayList(); + for (String statement : statements) { + List operations = getParser().parse(statement); + if (operations.size() != 1) { + throw new TableException("Only single statement is supported."); + } else { + Operation operation = operations.get(0); + if (operation instanceof ModifyOperation) { + modifyOperations.add((ModifyOperation) operation); + } else { + throw new TableException("Only insert statement is supported now."); + } + } + } + List> trans = getPlanner().translate(modifyOperations); + for (Transformation transformation : trans) { + executionEnvironment.addOperator(transformation); + } + StreamGraph streamGraph = executionEnvironment.getStreamGraph(); + if (tableConfig.getConfiguration().containsKey(PipelineOptions.NAME.key())) { + streamGraph.setJobName(tableConfig.getConfiguration().getString(PipelineOptions.NAME)); + } + return streamGraph; + } + + @Override + public JobGraph getJobGraphFromInserts(List statements) { + return getStreamGraphFromInserts(statements).getJobGraph(); + } + + @Override + public SqlExplainResult explainSqlRecord(String statement, ExplainDetail... extraDetails) { + SqlExplainResult record = new SqlExplainResult(); + List operations = getParser().parse(statement); + record.setParseTrue(true); + if (operations.size() != 1) { + throw new TableException( + "Unsupported SQL query! 
explainSql() only accepts a single SQL query.");
+        }
+        Operation operation = operations.get(0);
+        if (operation instanceof ModifyOperation) {
+            record.setType("Modify DML");
+        } else if (operation instanceof ExplainOperation) {
+            record.setType("Explain DML");
+        } else if (operation instanceof QueryOperation) {
+            record.setType("Query DML");
+        } else {
+            record.setExplain(operation.asSummaryString());
+            record.setType("DDL");
+        }
+        record.setExplainTrue(true);
+        if ("DDL".equals(record.getType())) {
+            // record.setExplain("DDL statements are not explained.");
+            return record;
+        }
+        record.setExplain(planner.explain(operations, extraDetails));
+        return record;
+    }
+
+    @Override
+    public boolean parseAndLoadConfiguration(String statement, StreamExecutionEnvironment environment,
+                                             Map<String, String> setMap) {
+        List<Operation> operations = getParser().parse(statement);
+        for (Operation operation : operations) {
+            if (operation instanceof SetOperation) {
+                callSet((SetOperation) operation, environment, setMap);
+                return true;
+            } else if (operation instanceof ResetOperation) {
+                callReset((ResetOperation) operation, environment, setMap);
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private void callSet(SetOperation setOperation, StreamExecutionEnvironment environment,
+                         Map<String, String> setMap) {
+        if (setOperation.getKey().isPresent() && setOperation.getValue().isPresent()) {
+            String key = setOperation.getKey().get().trim();
+            String value = setOperation.getValue().get().trim();
+            if (Asserts.isNullString(key) || Asserts.isNullString(value)) {
+                return;
+            }
+            Map<String, String> confMap = new HashMap<>();
+            confMap.put(key, value);
+            setMap.put(key, value);
+            Configuration configuration = Configuration.fromMap(confMap);
+            environment.getConfig().configure(configuration, null);
+            getConfig().addConfiguration(configuration);
+        }
+    }
+
+    private void callReset(ResetOperation resetOperation, StreamExecutionEnvironment environment,
+                           Map<String, String> setMap) {
+        if (resetOperation.getKey().isPresent()) {
+            String key = resetOperation.getKey().get().trim();
+            if (Asserts.isNullString(key)) {
+                return;
+            }
+            Map<String, String> confMap = new HashMap<>();
+            confMap.put(key, null);
+            setMap.remove(key);
+            Configuration configuration = Configuration.fromMap(confMap);
+            environment.getConfig().configure(configuration, null);
+            getConfig().addConfiguration(configuration);
+        } else {
+            setMap.clear();
+        }
+    }
+
+    /*
+     * public Table fromDataStream(DataStream dataStream, Expression... fields) { return
+     * createTable(asQueryOperation(dataStream, Optional.of(Arrays.asList(fields)))); }
+     *
+     * public Table fromDataStream(DataStream dataStream, String fields) { List expressions =
+     * ExpressionParser.INSTANCE.parseExpressionList(fields); return fromDataStream(dataStream, expressions.toArray(new
+     * Expression[0])); }
+     */
+
+    @Override
+    public void createTemporaryView(String path, DataStream dataStream, String fields) {
+        createTemporaryView(path, fromStreamInternal(dataStream, null, null, ChangelogMode.all()));
+    }
+
+    @Override
+    public List<LineageRel> getLineage(String statement) {
+        LineageContext lineageContext = new LineageContext(flinkChainedProgram, this);
+        return lineageContext.getLineage(statement);
+    }
+
+    @Override
+    public void createTemporaryView(
+            String path, DataStream dataStream, Expression...
fields) { + createTemporaryView(path, fromStreamInternal(dataStream, null, null, ChangelogMode.all())); + } + + protected DataStreamQueryOperation asQueryOperation( + DataStream dataStream, + Optional> fields) { + TypeInformation streamType = dataStream.getType(); + + // get field names and types for all non-replaced fields + FieldInfoUtils.TypeInfoSchema typeInfoSchema = + fields.map( + f -> { + FieldInfoUtils.TypeInfoSchema fieldsInfo = + FieldInfoUtils.getFieldsInfo( + streamType, f.toArray(new Expression[0])); + + // check if event-time is enabled + validateTimeCharacteristic(fieldsInfo.isRowtimeDefined()); + return fieldsInfo; + }) + .orElseGet(() -> FieldInfoUtils.getFieldsInfo(streamType)); + + return new DataStreamQueryOperation<>( + dataStream, typeInfoSchema.getIndices(), typeInfoSchema.toResolvedSchema()); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/CustomTableResultImpl.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/CustomTableResultImpl.java new file mode 100644 index 0000000..ba0a3e5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/CustomTableResultImpl.java @@ -0,0 +1,272 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
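parseAndLoadConfiguration above folds SET/RESET statements into both the execution environment and a shared setMap. A standalone sketch of that key/value bookkeeping (the parsing is simplified and illustrative; the real code works on Flink's SetOperation/ResetOperation):

```java
import java.util.HashMap;
import java.util.Map;

public class SetResetDemo {

    static final Map<String, String> SET_MAP = new HashMap<>();

    // Simplified stand-in for callSet: trim, validate, record.
    static void callSet(String key, String value) {
        if (key == null || key.trim().isEmpty() || value == null || value.trim().isEmpty()) {
            return;
        }
        SET_MAP.put(key.trim(), value.trim());
    }

    // Simplified stand-in for callReset: a null key clears everything.
    static void callReset(String key) {
        if (key == null) {
            SET_MAP.clear();
        } else {
            SET_MAP.remove(key.trim());
        }
    }

    public static void main(String[] args) {
        callSet("pipeline.name", " demo-job ");
        System.out.println(SET_MAP); // {pipeline.name=demo-job}
        callReset("pipeline.name");
        System.out.println(SET_MAP); // {}
    }
}
```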
+ * + */ + +package net.srt.flink.client.executor; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.core.execution.JobClient; +import org.apache.flink.table.api.DataTypes; +import org.apache.flink.table.api.ResultKind; +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.table.api.internal.ResultProvider; +import org.apache.flink.table.api.internal.TableResultInternal; +import org.apache.flink.table.catalog.Column; +import org.apache.flink.table.catalog.ResolvedSchema; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.utils.print.PrintStyle; +import org.apache.flink.table.utils.print.RowDataToStringConverter; +import org.apache.flink.table.utils.print.TableauStyle; +import org.apache.flink.types.Row; +import org.apache.flink.util.CloseableIterator; +import org.apache.flink.util.Preconditions; + +import javax.annotation.Nullable; +import java.io.PrintWriter; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Optional; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; + +/** + * Implementation for {@link TableResult}. + */ +@Internal +public class CustomTableResultImpl implements TableResultInternal { + + public static final TableResult TABLE_RESULT_OK = + CustomTableResultImpl.builder() + .resultKind(ResultKind.SUCCESS) + .schema(ResolvedSchema.of(Column.physical("result", DataTypes.STRING()))) + .data(Collections.singletonList(Row.of("OK"))) + .build(); + + private final JobClient jobClient; + private final ResolvedSchema resolvedSchema; + private final ResultKind resultKind; + private final ResultProvider resultProvider; + private final PrintStyle printStyle; + + private CustomTableResultImpl( + @Nullable JobClient jobClient, + ResolvedSchema resolvedSchema, + ResultKind resultKind, + ResultProvider resultProvider, + PrintStyle printStyle) { + this.jobClient = jobClient; + this.resolvedSchema = + Preconditions.checkNotNull(resolvedSchema, "resolvedSchema should not be null"); + this.resultKind = Preconditions.checkNotNull(resultKind, "resultKind should not be null"); + Preconditions.checkNotNull(resultProvider, "result provider should not be null"); + this.resultProvider = resultProvider; + this.printStyle = Preconditions.checkNotNull(printStyle, "printStyle should not be null"); + } + + public static TableResult buildTableResult(List fields, List rows) { + Builder builder = builder().resultKind(ResultKind.SUCCESS); + if (fields.size() > 0) { + List columnNames = new ArrayList<>(); + List columnTypes = new ArrayList<>(); + for (int i = 0; i < fields.size(); i++) { + columnNames.add(fields.get(i).getName()); + columnTypes.add(fields.get(i).getType()); + } + builder.schema(ResolvedSchema.physical(columnNames, columnTypes)).data(rows); + } + return builder.build(); + } + + @Override + public Optional getJobClient() { + return Optional.ofNullable(jobClient); + } + + @Override + public void await() throws InterruptedException, ExecutionException { + try { + awaitInternal(-1, TimeUnit.MILLISECONDS); + } catch (TimeoutException e) { + // do nothing + } + } + + @Override + public void await(long timeout, TimeUnit unit) throws 
InterruptedException, ExecutionException, TimeoutException {
+        awaitInternal(timeout, unit);
+    }
+
+    private void awaitInternal(long timeout,
+                               TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
+        if (jobClient == null) {
+            return;
+        }
+
+        ExecutorService executor =
+                Executors.newFixedThreadPool(1, r -> new Thread(r, "TableResult-await-thread"));
+        try {
+            CompletableFuture<Void> future =
+                    CompletableFuture.runAsync(
+                            () -> {
+                                while (!resultProvider.isFirstRowReady()) {
+                                    try {
+                                        Thread.sleep(100);
+                                    } catch (InterruptedException e) {
+                                        throw new TableException("Thread is interrupted");
+                                    }
+                                }
+                            },
+                            executor);
+
+            if (timeout >= 0) {
+                future.get(timeout, unit);
+            } else {
+                future.get();
+            }
+        } finally {
+            executor.shutdown();
+        }
+    }
+
+    @Override
+    public ResolvedSchema getResolvedSchema() {
+        return resolvedSchema;
+    }
+
+    @Override
+    public ResultKind getResultKind() {
+        return resultKind;
+    }
+
+    @Override
+    public CloseableIterator<Row> collect() {
+        return resultProvider.toExternalIterator();
+    }
+
+    @Override
+    public CloseableIterator<RowData> collectInternal() {
+        return resultProvider.toInternalIterator();
+    }
+
+    @Override
+    public RowDataToStringConverter getRowDataToStringConverter() {
+        return resultProvider.getRowDataStringConverter();
+    }
+
+    @Override
+    public void print() {
+        Iterator<RowData> it = resultProvider.toInternalIterator();
+        printStyle.print(it, new PrintWriter(System.out));
+    }
+
+    public static Builder builder() {
+        return new Builder();
+    }
+
+    /**
+     * Builder for creating a {@link CustomTableResultImpl}.
+     */
+    public static class Builder {
+
+        private JobClient jobClient = null;
+        private ResolvedSchema resolvedSchema = null;
+        private ResultKind resultKind = null;
+        private ResultProvider resultProvider = null;
+        private PrintStyle printStyle = null;
+
+        private Builder() {
+        }
+
+        /**
+         * Specifies job client which associates the submitted Flink job.
+         *
+         * @param jobClient a {@link JobClient} for the submitted Flink job.
+         */
+        public Builder jobClient(JobClient jobClient) {
+            this.jobClient = jobClient;
+            return this;
+        }
+
+        /**
+         * Specifies schema of the execution result.
+         *
+         * @param resolvedSchema a {@link ResolvedSchema} for the execution result.
+         */
+        public Builder schema(ResolvedSchema resolvedSchema) {
+            Preconditions.checkNotNull(resolvedSchema, "resolvedSchema should not be null");
+            this.resolvedSchema = resolvedSchema;
+            return this;
+        }
+
+        /**
+         * Specifies result kind of the execution result.
+         *
+         * @param resultKind a {@link ResultKind} for the execution result.
+         */
+        public Builder resultKind(ResultKind resultKind) {
+            Preconditions.checkNotNull(resultKind, "resultKind should not be null");
+            this.resultKind = resultKind;
+            return this;
+        }
+
+        public Builder resultProvider(ResultProvider resultProvider) {
+            Preconditions.checkNotNull(resultProvider, "resultProvider should not be null");
+            this.resultProvider = resultProvider;
+            return this;
+        }
+
+        /**
+         * Specifies a row list as the execution result.
+         *
+         * @param rowList a row list as the execution result.
+         */
+        public Builder data(List<Row> rowList) {
+            Preconditions.checkNotNull(rowList, "listRows should not be null");
+            this.resultProvider = new StaticResultProvider(rowList);
+            return this;
+        }
+
+        /**
+         * Specifies print style. Default is {@link TableauStyle} with max integer column width.
+ */ + public Builder setPrintStyle(PrintStyle printStyle) { + Preconditions.checkNotNull(printStyle, "printStyle should not be null"); + this.printStyle = printStyle; + return this; + } + + /** + * Returns a {@link TableResult} instance. + */ + public TableResultInternal build() { + if (printStyle == null) { + printStyle = PrintStyle.rawContent(resultProvider.getRowDataStringConverter()); + } + return new CustomTableResultImpl( + jobClient, resolvedSchema, resultKind, resultProvider, printStyle); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/StaticResultProvider.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/StaticResultProvider.java new file mode 100644 index 0000000..56d0a6b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/StaticResultProvider.java @@ -0,0 +1,123 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.executor; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.core.execution.JobClient; +import org.apache.flink.table.api.TableException; +import org.apache.flink.table.api.internal.ResultProvider; +import org.apache.flink.table.data.GenericRowData; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.StringData; +import org.apache.flink.table.utils.print.PrintStyle; +import org.apache.flink.table.utils.print.RowDataToStringConverter; +import org.apache.flink.types.Row; +import org.apache.flink.util.CloseableIterator; + +import java.util.List; +import java.util.function.Function; + +/** Create result provider from a static set of data using external types. */ +@Internal +public class StaticResultProvider implements ResultProvider { + + /** + * This converter supports only String, long, int and boolean fields. Moreover, this converter + * works only with {@link GenericRowData}. + */ + public static final RowDataToStringConverter SIMPLE_ROW_DATA_TO_STRING_CONVERTER = + rowData -> { + GenericRowData genericRowData = (GenericRowData) rowData; + String[] results = new String[rowData.getArity()]; + for (int i = 0; i < results.length; i++) { + Object value = genericRowData.getField(i); + if (Boolean.TRUE.equals(value)) { + results[i] = "TRUE"; + } else if (Boolean.FALSE.equals(value)) { + results[i] = "FALSE"; + } else { + results[i] = value == null ? 
PrintStyle.NULL_VALUE : "" + value; + } + } + return results; + }; + + private final List rows; + private final Function externalToInternalConverter; + + public StaticResultProvider(List rows) { + this(rows, StaticResultProvider::rowToInternalRow); + } + + public StaticResultProvider( + List rows, Function externalToInternalConverter) { + this.rows = rows; + this.externalToInternalConverter = externalToInternalConverter; + } + + @Override + public StaticResultProvider setJobClient(JobClient jobClient) { + return this; + } + + @Override + public CloseableIterator toInternalIterator() { + return CloseableIterator.adapterForIterator( + this.rows.stream().map(this.externalToInternalConverter).iterator()); + } + + @Override + public CloseableIterator toExternalIterator() { + return CloseableIterator.adapterForIterator(this.rows.iterator()); + } + + @Override + public RowDataToStringConverter getRowDataStringConverter() { + return SIMPLE_ROW_DATA_TO_STRING_CONVERTER; + } + + @Override + public boolean isFirstRowReady() { + return true; + } + + /** This function supports only String, long, int and boolean fields. */ + @VisibleForTesting + static RowData rowToInternalRow(Row row) { + Object[] values = new Object[row.getArity()]; + for (int i = 0; i < row.getArity(); i++) { + Object value = row.getField(i); + if (value == null) { + values[i] = null; + } else if (value instanceof String) { + values[i] = StringData.fromString((String) value); + } else if (value instanceof Boolean + || value instanceof Long + || value instanceof Integer) { + values[i] = value; + } else { + throw new TableException("Cannot convert row type"); + } + } + + return GenericRowData.of(values); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/TableSchemaField.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/TableSchemaField.java new file mode 100644 index 0000000..4f6ea17 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/executor/TableSchemaField.java @@ -0,0 +1,54 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
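StaticResultProvider above renders booleans as TRUE/FALSE and nulls via a placeholder. A standalone mimic of the per-field rules in SIMPLE_ROW_DATA_TO_STRING_CONVERTER (the "(NULL)" placeholder is illustrative; the real code uses PrintStyle.NULL_VALUE):

```java
import java.util.Arrays;

public class RowToStringDemo {

    static String[] convert(Object[] fields) {
        String[] results = new String[fields.length];
        for (int i = 0; i < fields.length; i++) {
            Object value = fields[i];
            if (Boolean.TRUE.equals(value)) {
                results[i] = "TRUE";
            } else if (Boolean.FALSE.equals(value)) {
                results[i] = "FALSE";
            } else {
                results[i] = value == null ? "(NULL)" : String.valueOf(value);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(convert(new Object[]{true, null, 42, "ok"})));
        // [TRUE, (NULL), 42, ok]
    }
}
```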
+ * + */ + +package net.srt.flink.client.executor; + +import org.apache.flink.table.types.DataType; + +/** + * @author zrx + * @since 2022/11/04 + **/ + +public class TableSchemaField { + private String name; + private DataType type; + + public TableSchemaField(String name, DataType type) { + this.name = name; + this.type = type; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public DataType getType() { + return type; + } + + public void setType(DataType type) { + this.type = type; + } +} + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/FlinkStreamProgramWithoutPhysical.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/FlinkStreamProgramWithoutPhysical.java new file mode 100644 index 0000000..9750075 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/FlinkStreamProgramWithoutPhysical.java @@ -0,0 +1,227 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.client.utils; + +import org.apache.calcite.plan.Convention; +import org.apache.calcite.plan.hep.HepMatchOrder; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.table.api.config.OptimizerConfigOptions; +import org.apache.flink.table.planner.plan.nodes.FlinkConventions; +import org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram; +import org.apache.flink.table.planner.plan.optimize.program.FlinkDecorrelateProgram; +import org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgramBuilder; +import org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgramBuilder; +import org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgramBuilder; +import org.apache.flink.table.planner.plan.optimize.program.HEP_RULES_EXECUTION_TYPE; +import org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets; + +/** + * FlinkStreamProgramWithoutPhysical + * + * @author zrx + * @since 2022/11/22 + */ +public class FlinkStreamProgramWithoutPhysical { + + private static final String SUBQUERY_REWRITE = "subquery_rewrite"; + private static final String TEMPORAL_JOIN_REWRITE = "temporal_join_rewrite"; + private static final String DECORRELATE = "decorrelate"; + private static final String DEFAULT_REWRITE = "default_rewrite"; + private static final String PREDICATE_PUSHDOWN = "predicate_pushdown"; + private static final String JOIN_REORDER = "join_reorder"; + private static final String PROJECT_REWRITE = "project_rewrite"; + private static final String LOGICAL = "logical"; + private static final String LOGICAL_REWRITE = "logical_rewrite"; + + public static FlinkChainedProgram buildProgram(Configuration config) { + FlinkChainedProgram chainedProgram = new FlinkChainedProgram(); + + // rewrite sub-queries to joins + chainedProgram.addLast( + SUBQUERY_REWRITE, + FlinkGroupProgramBuilder.newBuilder() + // rewrite QueryOperationCatalogViewTable before rewriting sub-queries + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.TABLE_REF_RULES()) + .build(), "convert table references before rewriting sub-queries to semi-join") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.SEMI_JOIN_RULES()) + .build(), "rewrite sub-queries to semi-join") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.TABLE_SUBQUERY_RULES()) + .build(), "sub-queries remove") + // convert RelOptTableImpl (which exists in SubQuery before) to FlinkRelOptTable + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.TABLE_REF_RULES()) + .build(), "convert table references after sub-queries removed") + .build()); + + // rewrite special temporal join plan + chainedProgram.addLast( + TEMPORAL_JOIN_REWRITE, + FlinkGroupProgramBuilder.newBuilder() + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + 
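+                                // Roughly: RULE_SEQUENCE applies each rule of the set one after another,
+                                // and BOTTOM_UP starts matching at the leaf RelNodes; the EXPAND_PLAN_RULES
+                                // added next rewrite correlates into temporal table joins.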
.add(FlinkStreamRuleSets.EXPAND_PLAN_RULES()) + .build(), + "convert correlate to temporal table join") + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.POST_EXPAND_CLEAN_UP_RULES()) + .build(), + "convert enumerable table scan") + .build()); + + // query decorrelation + chainedProgram.addLast(DECORRELATE, + FlinkGroupProgramBuilder.newBuilder() + // rewrite before decorrelation + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PRE_DECORRELATION_RULES()) + .build(), + "pre-rewrite before decorrelation") + .addProgram(new FlinkDecorrelateProgram(), "") + .build()); + + // default rewrite, includes: predicate simplification, expression reduction, window + // properties rewrite, etc. + chainedProgram.addLast( + DEFAULT_REWRITE, + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.DEFAULT_REWRITE_RULES()) + .build()); + + // rule based optimization: push down predicate(s) in where clause, so it only needs to read + // the required data + chainedProgram.addLast( + PREDICATE_PUSHDOWN, + FlinkGroupProgramBuilder.newBuilder() + .addProgram( + FlinkGroupProgramBuilder.newBuilder() + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType( + HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.JOIN_PREDICATE_REWRITE_RULES()) + .build(), + "join predicate rewrite") + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType( + HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.FILTER_PREPARE_RULES()) + .build(), + "filter rules") + .setIterations(5) + .build(), + "predicate rewrite") + .addProgram( + // PUSH_PARTITION_DOWN_RULES should always be in front of PUSH_FILTER_DOWN_RULES + // to prevent PUSH_FILTER_DOWN_RULES from consuming the predicates in partitions + FlinkGroupProgramBuilder.newBuilder() + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PUSH_PARTITION_DOWN_RULES()) + .build(), "push down partitions into table scan") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PUSH_FILTER_DOWN_RULES()) + .build(), "push down filters into table scan") + .build(), + "push predicate into table scan") + .addProgram( + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PRUNE_EMPTY_RULES()) + .build(), + "prune empty after predicate push down") + .build()); + + // join reorder + if (config.getBoolean(OptimizerConfigOptions.TABLE_OPTIMIZER_JOIN_REORDER_ENABLED)) { + chainedProgram.addLast( + JOIN_REORDER, + FlinkGroupProgramBuilder.newBuilder() + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) 
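+                        // Roughly: RULE_COLLECTION re-applies the whole rule set up to a fixpoint,
+                        // letting JOIN_REORDER_PREPARE_RULES first merge nested joins into a single
+                        // MultiJoin that JOIN_REORDER_RULES can then reorder.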
+ .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.JOIN_REORDER_PREPARE_RULES()) + .build(), "merge join into MultiJoin") + .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.JOIN_REORDER_RULES()) + .build(), "do join reorder") + .build()); + } + + // project rewrite + chainedProgram.addLast( + PROJECT_REWRITE, + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.PROJECT_RULES()) + .build()); + + // optimize the logical plan + chainedProgram.addLast( + LOGICAL, + FlinkVolcanoProgramBuilder.newBuilder() + .add(FlinkStreamRuleSets.LOGICAL_OPT_RULES()) + .setRequiredOutputTraits(new Convention.Impl[]{ + FlinkConventions.LOGICAL() + }) + .build()); + + // logical rewrite + chainedProgram.addLast( + LOGICAL_REWRITE, + FlinkHepRuleSetProgramBuilder.newBuilder() + .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE()) + .setHepMatchOrder(HepMatchOrder.BOTTOM_UP) + .add(FlinkStreamRuleSets.LOGICAL_REWRITE()) + .build()); + + return chainedProgram; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/FlinkUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/FlinkUtil.java new file mode 100644 index 0000000..ca97b43 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/FlinkUtil.java @@ -0,0 +1,69 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.client.utils;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.client.program.ClusterClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.table.api.TableResult;
+import org.apache.flink.table.catalog.CatalogManager;
+import org.apache.flink.table.catalog.ContextResolvedTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * FlinkUtil
+ *
+ * @author zrx
+ * @since 2022/05/08
+ */
+public class FlinkUtil {
+
+    public static List<String> getFieldNamesFromCatalogManager(CatalogManager catalogManager, String catalog, String database, String table) {
+        Optional<ContextResolvedTable> tableOpt = catalogManager.getTable(
+                ObjectIdentifier.of(catalog, database, table)
+        );
+        if (tableOpt.isPresent()) {
+            return tableOpt.get().getResolvedSchema().getColumnNames();
+        } else {
+            return new ArrayList<>();
+        }
+    }
+
+    public static List<String> catchColumn(TableResult tableResult) {
+        return tableResult.getResolvedSchema().getColumnNames();
+    }
+
+    public static String triggerSavepoint(ClusterClient<?> clusterClient, String jobId, String savePoint) throws ExecutionException, InterruptedException {
+        return clusterClient.triggerSavepoint(JobID.fromHexString(jobId), savePoint, SavepointFormatType.DEFAULT).get().toString();
+    }
+
+    public static String stopWithSavepoint(ClusterClient<?> clusterClient, String jobId, String savePoint) throws ExecutionException, InterruptedException {
+        return clusterClient.stopWithSavepoint(JobID.fromHexString(jobId), true, savePoint, SavepointFormatType.DEFAULT).get().toString();
+    }
+
+    public static String cancelWithSavepoint(ClusterClient<?> clusterClient, String jobId, String savePoint) throws ExecutionException, InterruptedException {
+        return clusterClient.cancelWithSavepoint(JobID.fromHexString(jobId), savePoint, SavepointFormatType.DEFAULT).get().toString();
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/LineageContext.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/LineageContext.java
new file mode 100644
index 0000000..55840ea
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/LineageContext.java
@@ -0,0 +1,220 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.client.utils;
+
+import net.srt.flink.client.base.model.LineageRel;
+import org.apache.calcite.plan.RelOptTable;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.metadata.RelColumnOrigin;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.api.internal.TableEnvironmentImpl;
+import org.apache.flink.table.catalog.CatalogManager;
+import org.apache.flink.table.catalog.FunctionCatalog;
+import org.apache.flink.table.module.ModuleManager;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.SinkModifyOperation;
+import org.apache.flink.table.planner.calcite.FlinkRelBuilder;
+import org.apache.flink.table.planner.calcite.RexFactory;
+import org.apache.flink.table.planner.delegation.PlannerBase;
+import org.apache.flink.table.planner.operations.PlannerQueryOperation;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram;
+import org.apache.flink.table.planner.plan.optimize.program.StreamOptimizeContext;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.trait.MiniBatchInterval;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * LineageContext
+ *
+ * @author zrx
+ * @since 2022/11/22
+ */
+public class LineageContext {
+
+    private final FlinkChainedProgram flinkChainedProgram;
+    private final TableEnvironmentImpl tableEnv;
+
+    public LineageContext(FlinkChainedProgram flinkChainedProgram, TableEnvironmentImpl tableEnv) {
+        this.flinkChainedProgram = flinkChainedProgram;
+        this.tableEnv = tableEnv;
+    }
+
+    public List<LineageRel> getLineage(String statement) {
+        // 1. Generate the original RelNode tree
+        Tuple2<String, RelNode> parsed = parseStatement(statement);
+        String sinkTable = parsed.getField(0);
+        RelNode oriRelNode = parsed.getField(1);
+
+        // 2. Optimize the original RelNode to generate the optimized logical plan
+        RelNode optRelNode = optimize(oriRelNode);
+
+        // 3. Build field lineage based on RelMetadataQuery
+        return buildFieldLineageResult(sinkTable, optRelNode);
+    }
+
+    private Tuple2<String, RelNode> parseStatement(String sql) {
+        List<Operation> operations = tableEnv.getParser().parse(sql);
+
+        if (operations.size() != 1) {
+            throw new TableException(
+                    "Unsupported SQL query! Only a single SQL statement is accepted.");
+        }
+        Operation operation = operations.get(0);
+        if (operation instanceof SinkModifyOperation) {
+            SinkModifyOperation sinkOperation = (SinkModifyOperation) operation;
+
+            PlannerQueryOperation queryOperation = (PlannerQueryOperation) sinkOperation.getChild();
+            RelNode relNode = queryOperation.getCalciteTree();
+            return new Tuple2<>(
+                    sinkOperation.getContextResolvedTable().getIdentifier().asSummaryString(),
+                    relNode);
+        } else {
+            throw new TableException("Only INSERT statements are supported now.");
+        }
+    }
+
+    /**
+     * Calling each program's optimize method in sequence.
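+     * The anonymous StreamOptimizeContext below hands the planner's catalog
+     * manager, function catalog and module manager to each chained program;
+     * physical optimization is deliberately absent, since only logical
+     * column-level lineage is extracted here.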
+     */
+    private RelNode optimize(RelNode relNode) {
+        return flinkChainedProgram.optimize(relNode, new StreamOptimizeContext() {
+
+            @Override
+            public boolean isBatchMode() {
+                return false;
+            }
+
+            @Override
+            public TableConfig getTableConfig() {
+                return tableEnv.getConfig();
+            }
+
+            @Override
+            public FunctionCatalog getFunctionCatalog() {
+                return getPlanner().getFlinkContext().getFunctionCatalog();
+            }
+
+            @Override
+            public CatalogManager getCatalogManager() {
+                return tableEnv.getCatalogManager();
+            }
+
+            @Override
+            public ModuleManager getModuleManager() {
+                return getPlanner().getFlinkContext().getModuleManager();
+            }
+
+            @Override
+            public RexFactory getRexFactory() {
+                return getPlanner().getFlinkContext().getRexFactory();
+            }
+
+            @Override
+            public FlinkRelBuilder getFlinkRelBuilder() {
+                return getPlanner().createRelBuilder();
+            }
+
+            @Override
+            public boolean isUpdateBeforeRequired() {
+                return false;
+            }
+
+            @Override
+            public MiniBatchInterval getMiniBatchInterval() {
+                return MiniBatchInterval.NONE;
+            }
+
+            @Override
+            public boolean needFinalTimeIndicatorConversion() {
+                return true;
+            }
+
+            @Override
+            public ClassLoader getClassLoader() {
+                return getPlanner().getFlinkContext().getClassLoader();
+            }
+
+            private PlannerBase getPlanner() {
+                return (PlannerBase) tableEnv.getPlanner();
+            }
+
+        });
+    }
+
+    /**
+     * Check that the field counts of the query and the sink match.
+     */
+    private void validateSchema(String sinkTable, RelNode relNode, List<String> sinkFieldList) {
+        List<String> queryFieldList = relNode.getRowType().getFieldNames();
+        if (queryFieldList.size() != sinkFieldList.size()) {
+            throw new ValidationException(
+                    String.format(
+                            "Column types of query result and sink for %s do not match.\n"
+                                    + "Query schema: %s\n"
+                                    + "Sink schema:  %s",
+                            sinkTable, queryFieldList, sinkFieldList));
+        }
+    }
+
+    private List<LineageRel> buildFieldLineageResult(String sinkTable, RelNode optRelNode) {
+        // target columns
+        List<String> targetColumnList = tableEnv.from(sinkTable)
+                .getResolvedSchema()
+                .getColumnNames();
+
+        // check that the field counts of the query and the sink match
+        validateSchema(sinkTable, optRelNode, targetColumnList);
+
+        RelMetadataQuery metadataQuery = optRelNode.getCluster().getMetadataQuery();
+        List<LineageRel> resultList = new ArrayList<>();
+
+        for (int index = 0; index < targetColumnList.size(); index++) {
+            String targetColumn = targetColumnList.get(index);
+
+            Set<RelColumnOrigin> relColumnOriginSet = metadataQuery.getColumnOrigins(optRelNode, index);
+
+            if (CollectionUtils.isNotEmpty(relColumnOriginSet)) {
+                for (RelColumnOrigin relColumnOrigin : relColumnOriginSet) {
+                    // table
+                    RelOptTable table = relColumnOrigin.getOriginTable();
+                    String sourceTable = String.join(".", table.getQualifiedName());
+
+                    // field
+                    int ordinal = relColumnOrigin.getOriginColumnOrdinal();
+                    List<String> fieldNames = ((TableSourceTable) table).contextResolvedTable().getResolvedSchema()
+                            .getColumnNames();
+                    String sourceColumn = fieldNames.get(ordinal);
+
+                    // add record
+                    resultList.add(LineageRel.build(sourceTable, sourceColumn, sinkTable, targetColumn));
+                }
+            }
+        }
+        return resultList;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/ObjectConvertUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/ObjectConvertUtil.java
new file mode 100644
index 0000000..9605296
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/net/srt/flink/client/utils/ObjectConvertUtil.java
@@ -0,0 +1,66 @@
+package net.srt.flink.client.utils;
+
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.VarBinaryType;
+
+import javax.xml.bind.DatatypeConverter;
+import java.math.BigDecimal;
+import java.time.Instant;
+import java.time.ZoneId;
+
+/**
+ * ObjectConvertUtil
+ *
+ * @author jack zhong
+ */
+public class ObjectConvertUtil {
+
+    public static Object convertValue(Object value, LogicalType logicalType) {
+        return ObjectConvertUtil.convertValue(value, logicalType, null);
+    }
+
+    public static Object convertValue(Object value, LogicalType logicalType, ZoneId sinkTimeZone) {
+        if (value == null) {
+            return null;
+        }
+        if (sinkTimeZone == null) {
+            sinkTimeZone = ZoneId.of("UTC");
+        }
+        if (logicalType instanceof DateType) {
+            if (value instanceof Integer) {
+                return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDate();
+            } else {
+                // use the configured sink time zone here as well, not the system default
+                return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDate();
+            }
+        } else if (logicalType instanceof TimestampType) {
+            if (value instanceof Integer) {
+                return Instant.ofEpochMilli(((Integer) value).longValue()).atZone(sinkTimeZone).toLocalDateTime();
+            } else if (value instanceof String) {
+                return Instant.parse((String) value).atZone(sinkTimeZone).toLocalDateTime();
+            } else {
+                return Instant.ofEpochMilli((long) value).atZone(sinkTimeZone).toLocalDateTime();
+            }
+        } else if (logicalType instanceof DecimalType) {
+            // String.valueOf also covers numeric inputs, not only Strings
+            return new BigDecimal(String.valueOf(value));
+        } else if (logicalType instanceof BigIntType) {
+            if (value instanceof Integer) {
+                return ((Integer) value).longValue();
+            } else {
+                return value;
+            }
+        } else if (logicalType instanceof VarBinaryType) {
+            // VARBINARY and BINARY are converted to base64-encoded Strings in FlinkCDC.
+            if (value instanceof String) {
+                return DatatypeConverter.parseBase64Binary((String) value);
+            } else {
+                return value;
+            }
+        } else {
+            return value;
+        }
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/org/apache/calcite/rel/metadata/RelMdColumnOrigins.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/org/apache/calcite/rel/metadata/RelMdColumnOrigins.java
new file mode 100644
index 0000000..210a286
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-1.16/src/main/java/org/apache/calcite/rel/metadata/RelMdColumnOrigins.java
@@ -0,0 +1,380 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.calcite.rel.metadata;
+
+import org.apache.calcite.plan.RelOptTable;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.SingleRel;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.AggregateCall;
+import org.apache.calcite.rel.core.Calc;
+import org.apache.calcite.rel.core.Correlate;
+import org.apache.calcite.rel.core.Exchange;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.core.Join;
+import org.apache.calcite.rel.core.Project;
+import org.apache.calcite.rel.core.SetOp;
+import org.apache.calcite.rel.core.Snapshot;
+import org.apache.calcite.rel.core.Sort;
+import org.apache.calcite.rel.core.TableFunctionScan;
+import org.apache.calcite.rel.core.TableModify;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexCall;
+import org.apache.calcite.rex.RexFieldAccess;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexLocalRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexVisitor;
+import org.apache.calcite.rex.RexVisitorImpl;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * Modified based on Calcite's source code org.apache.calcite.rel.metadata.RelMdColumnOrigins.
+ *
+ * Modification points:
+ * 1. Support lookup join: add method getColumnOrigins(Snapshot rel, RelMetadataQuery mq, int iOutputColumn)
+ * 2. Support watermark: add method getColumnOrigins(SingleRel rel, RelMetadataQuery mq, int iOutputColumn)
+ * 3. Support table function: add method getColumnOrigins(Correlate rel, RelMetadataQuery mq, int iOutputColumn)
+ *
+ * @description: RelMdColumnOrigins supplies a default implementation of {@link
+ *               RelMetadataQuery#getColumnOrigins} for the standard logical algebra.
+ * @author: baisong
+ * @version: 1.0.0
+ * @date: 2022/11/24 7:47 PM
+ */
+public class RelMdColumnOrigins implements MetadataHandler<BuiltInMetadata.ColumnOrigin> {
+
+    public static final RelMetadataProvider SOURCE = ReflectiveRelMetadataProvider.reflectiveSource(
+            BuiltInMethod.COLUMN_ORIGIN.method, new RelMdColumnOrigins());
+
+    // ~ Constructors -----------------------------------------------------------
+
+    private RelMdColumnOrigins() {
+    }
+
+    // ~ Methods ----------------------------------------------------------------
+
+    @Override
+    public MetadataDef<BuiltInMetadata.ColumnOrigin> getDef() {
+        return BuiltInMetadata.ColumnOrigin.DEF;
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Aggregate rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        if (iOutputColumn < rel.getGroupCount()) {
+            // get actual index of Group columns.
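+            // For example, given SELECT a, COUNT(b) FROM t GROUP BY a, output
+            // column 0 is the group key, so its origin is resolved at the input
+            // index recorded in the group set rather than at position 0.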
+            return mq.getColumnOrigins(rel.getInput(), rel.getGroupSet().asList().get(iOutputColumn));
+        }
+
+        // Aggregate columns are derived from input columns
+        AggregateCall call = rel.getAggCallList().get(iOutputColumn
+                - rel.getGroupCount());
+
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        for (Integer iInput : call.getArgList()) {
+            Set<RelColumnOrigin> inputSet = mq.getColumnOrigins(rel.getInput(), iInput);
+            inputSet = createDerivedColumnOrigins(inputSet);
+            if (inputSet != null) {
+                set.addAll(inputSet);
+            }
+        }
+        return set;
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Join rel, RelMetadataQuery mq,
+            int iOutputColumn) {
+        int nLeftColumns = rel.getLeft().getRowType().getFieldList().size();
+        Set<RelColumnOrigin> set;
+        boolean derived = false;
+        if (iOutputColumn < nLeftColumns) {
+            set = mq.getColumnOrigins(rel.getLeft(), iOutputColumn);
+            if (rel.getJoinType().generatesNullsOnLeft()) {
+                derived = true;
+            }
+        } else {
+            set = mq.getColumnOrigins(rel.getRight(), iOutputColumn - nLeftColumns);
+            if (rel.getJoinType().generatesNullsOnRight()) {
+                derived = true;
+            }
+        }
+        if (derived) {
+            // nulls are generated due to outer join; that counts
+            // as derivation
+            set = createDerivedColumnOrigins(set);
+        }
+        return set;
+    }
+
+    /**
+     * Support field lineage for table functions (UDTF).
+     */
+    public Set<RelColumnOrigin> getColumnOrigins(Correlate rel, RelMetadataQuery mq, int iOutputColumn) {
+
+        List<RelDataTypeField> leftFieldList = rel.getLeft().getRowType().getFieldList();
+
+        int nLeftColumns = leftFieldList.size();
+        Set<RelColumnOrigin> set;
+        if (iOutputColumn < nLeftColumns) {
+            set = mq.getColumnOrigins(rel.getLeft(), iOutputColumn);
+        } else {
+            // get the field name of the left table configured in the Table Function on the right
+            TableFunctionScan tableFunctionScan = (TableFunctionScan) rel.getRight();
+            RexCall rexCall = (RexCall) tableFunctionScan.getCall();
+            // support only one field in table function
+            RexFieldAccess rexFieldAccess = (RexFieldAccess) rexCall.operands.get(0);
+            String fieldName = rexFieldAccess.getField().getName();
+
+            int leftFieldIndex = 0;
+            for (int i = 0; i < nLeftColumns; i++) {
+                if (leftFieldList.get(i).getName().equalsIgnoreCase(fieldName)) {
+                    leftFieldIndex = i;
+                    break;
+                }
+            }
+            /*
+             * Get the fields from the left table; do not fall through to
+             * getColumnOrigins(TableFunctionScan rel, RelMetadataQuery mq, int iOutputColumn),
+             * otherwise the result is null and the UDTF field origin cannot be resolved.
+             */
+            set = mq.getColumnOrigins(rel.getLeft(), leftFieldIndex);
+        }
+        return set;
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(SetOp rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        for (RelNode input : rel.getInputs()) {
+            Set<RelColumnOrigin> inputSet = mq.getColumnOrigins(input, iOutputColumn);
+            if (inputSet == null) {
+                return null;
+            }
+            set.addAll(inputSet);
+        }
+        return set;
+    }
+
+    /**
+     * Support field lineage for lookup joins.
+     */
+    public Set<RelColumnOrigin> getColumnOrigins(Snapshot rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    /**
+     * Support field lineage for watermark expressions.
+     */
+    public Set<RelColumnOrigin> getColumnOrigins(SingleRel rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Project rel,
+            final RelMetadataQuery mq, int iOutputColumn) {
+        final RelNode input = rel.getInput();
+        RexNode rexNode = rel.getProjects().get(iOutputColumn);
+
+        if (rexNode instanceof RexInputRef) {
+            // Direct reference: no derivation added.
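+            // For example, SELECT name FROM users keeps the exact origin, while an
+            // expression such as UPPER(name) falls through below and is reported
+            // with the derived flag set.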
+            RexInputRef inputRef = (RexInputRef) rexNode;
+            return mq.getColumnOrigins(input, inputRef.getIndex());
+        }
+        // Anything else is a derivation, possibly from multiple columns.
+        final Set<RelColumnOrigin> set = getMultipleColumns(rexNode, input, mq);
+        return createDerivedColumnOrigins(set);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Calc rel,
+            final RelMetadataQuery mq, int iOutputColumn) {
+        final RelNode input = rel.getInput();
+        final RexShuttle rexShuttle = new RexShuttle() {
+
+            @Override
+            public RexNode visitLocalRef(RexLocalRef localRef) {
+                return rel.getProgram().expandLocalRef(localRef);
+            }
+        };
+        final List<RexNode> projects = new ArrayList<>();
+        for (RexNode rex : rexShuttle.apply(rel.getProgram().getProjectList())) {
+            projects.add(rex);
+        }
+        final RexNode rexNode = projects.get(iOutputColumn);
+        if (rexNode instanceof RexInputRef) {
+            // Direct reference: no derivation added.
+            RexInputRef inputRef = (RexInputRef) rexNode;
+            return mq.getColumnOrigins(input, inputRef.getIndex());
+        } else if (rexNode instanceof RexCall && ((RexCall) rexNode).operands.isEmpty()) {
+            // support for new fields in the source table, similar to those created with the LOCALTIMESTAMP function
+            TableSourceTable table = ((TableSourceTable) rel.getInput().getTable());
+            if (table != null) {
+                String targetFieldName = rel.getProgram().getOutputRowType().getFieldList().get(iOutputColumn)
+                        .getName();
+                List<String> fieldList = table.contextResolvedTable().getResolvedSchema().getColumnNames();
+
+                int index = -1;
+                for (int i = 0; i < fieldList.size(); i++) {
+                    if (fieldList.get(i).equalsIgnoreCase(targetFieldName)) {
+                        index = i;
+                        break;
+                    }
+                }
+                if (index != -1) {
+                    return Collections.singleton(new RelColumnOrigin(table, index, false));
+                }
+            }
+        }
+        // Anything else is a derivation, possibly from multiple columns.
+        final Set<RelColumnOrigin> set = getMultipleColumns(rexNode, input, mq);
+        return createDerivedColumnOrigins(set);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Filter rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Sort rel, RelMetadataQuery mq,
+            int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(TableModify rel, RelMetadataQuery mq,
+            int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(Exchange rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
+    }
+
+    public Set<RelColumnOrigin> getColumnOrigins(TableFunctionScan rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        Set<RelColumnMapping> mappings = rel.getColumnMappings();
+        if (mappings == null) {
+            if (rel.getInputs().size() > 0) {
+                // This is a non-leaf transformation: say we don't
+                // know about origins, because there are probably
+                // columns below.
+                return null;
+            } else {
+                // This is a leaf transformation: say there are for sure no
+                // column origins.
+                return set;
+            }
+        }
+        for (RelColumnMapping mapping : mappings) {
+            if (mapping.iOutputColumn != iOutputColumn) {
+                continue;
+            }
+            final RelNode input = rel.getInputs().get(mapping.iInputRel);
+            final int column = mapping.iInputColumn;
+            Set<RelColumnOrigin> origins = mq.getColumnOrigins(input, column);
+            if (origins == null) {
+                return null;
+            }
+            if (mapping.derived) {
+                origins = createDerivedColumnOrigins(origins);
+            }
+            set.addAll(origins);
+        }
+        return set;
+    }
+
+    // Catch-all rule when none of the others apply.
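+    // Only leaf table scans can answer here: any rel with inputs but no dedicated
+    // handler above yields null, meaning the origins are unknown.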
+    public Set<RelColumnOrigin> getColumnOrigins(RelNode rel,
+            RelMetadataQuery mq, int iOutputColumn) {
+        // NOTE jvs 28-Mar-2006: We may get this wrong for a physical table
+        // expression which supports projections. In that case,
+        // it's up to the plugin writer to override with the
+        // correct information.
+
+        if (rel.getInputs().size() > 0) {
+            // No generic logic available for non-leaf rels.
+            return null;
+        }
+
+        final Set<RelColumnOrigin> set = new HashSet<>();
+
+        RelOptTable table = rel.getTable();
+        if (table == null) {
+            // Somebody is making column values up out of thin air, like a
+            // VALUES clause, so we return an empty set.
+            return set;
+        }
+
+        // Detect the case where a physical table expression is performing
+        // projection, and say we don't know instead of making any assumptions.
+        // (Theoretically we could try to map the projection using column
+        // names.) This detection assumes the table expression doesn't handle
+        // rename as well.
+        if (table.getRowType() != rel.getRowType()) {
+            return null;
+        }
+
+        set.add(new RelColumnOrigin(table, iOutputColumn, false));
+        return set;
+    }
+
+    private Set<RelColumnOrigin> createDerivedColumnOrigins(
+            Set<RelColumnOrigin> inputSet) {
+        if (inputSet == null) {
+            return null;
+        }
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        for (RelColumnOrigin rco : inputSet) {
+            RelColumnOrigin derived = new RelColumnOrigin(
+                    rco.getOriginTable(),
+                    rco.getOriginColumnOrdinal(),
+                    true);
+            set.add(derived);
+        }
+        return set;
+    }
+
+    private Set<RelColumnOrigin> getMultipleColumns(RexNode rexNode, RelNode input,
+            final RelMetadataQuery mq) {
+        final Set<RelColumnOrigin> set = new HashSet<>();
+        final RexVisitor<Void> visitor = new RexVisitorImpl<Void>(true) {
+
+            @Override
+            public Void visitInputRef(RexInputRef inputRef) {
+                Set<RelColumnOrigin> inputSet = mq.getColumnOrigins(input, inputRef.getIndex());
+                if (inputSet != null) {
+                    set.addAll(inputSet);
+                }
+                return null;
+            }
+        };
+        rexNode.accept(visitor);
+        return set;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/pom.xml
new file mode 100644
index 0000000..4829d14
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/pom.xml
@@ -0,0 +1,75 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>flink-client</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-client-base</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+            <version>2.0.0</version>
+        </dependency>
+    </dependencies>
+
+    <profiles>
+        <profile>
+            <id>flink-1.16</id>
+            <dependencies>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-1.16</artifactId>
+                    <scope>provided</scope>
+                    <version>${project.version}</version>
+                </dependency>
+            </dependencies>
+        </profile>
+        <profile>
+            <id>flink-1.14</id>
+            <dependencies>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-1.14</artifactId>
+                    <scope>provided</scope>
+                    <version>${project.version}</version>
+                </dependency>
+            </dependencies>
+        </profile>
+    </profiles>
+
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/constant/ClientConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/constant/ClientConstant.java
new file mode 100644
index 0000000..0a629dd
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/constant/ClientConstant.java
@@ -0,0 +1,36 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.base.constant; + +/** + * ClientConstant + * + * @author zrx + * @since 2022/4/14 23:23 + **/ +public final class ClientConstant { + + public static final String METADATA_NAME = "name"; + public static final String METADATA_TYPE = "type"; + public static final String METADATA_URL = "url"; + public static final String METADATA_USERNAME = "username"; + public static final String METADATA_PASSWORD = "password"; + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/constant/FlinkParamConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/constant/FlinkParamConstant.java new file mode 100644 index 0000000..6d67d5c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/constant/FlinkParamConstant.java @@ -0,0 +1,37 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.base.constant; + +/** + * FlinkParam + * + * @author zrx + * @since 2022/3/9 19:18 + */ +public final class FlinkParamConstant { + public static final String ID = "id"; + public static final String DRIVER = "driver"; + public static final String URL = "url"; + public static final String USERNAME = "username"; + public static final String PASSWORD = "password"; + public static final String FLINKY_ADDR = "flinkyAddr"; + + public static final String SPLIT = ","; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/exception/FlinkClientException.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/exception/FlinkClientException.java new file mode 100644 index 0000000..5de58a9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/exception/FlinkClientException.java @@ -0,0 +1,38 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.base.exception; + +/** + * FlinkClientException + * + * @author zrx + * @since 2022/4/12 21:21 + **/ +public class FlinkClientException extends RuntimeException { + + public FlinkClientException(String message, Throwable cause) { + super(message, cause); + } + + public FlinkClientException(String message) { + super(message); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/executor/CustomTableEnvironment.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/executor/CustomTableEnvironment.java new file mode 100644 index 0000000..1515ae6 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/executor/CustomTableEnvironment.java @@ -0,0 +1,96 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.client.base.executor;
+
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import net.srt.flink.client.base.model.LineageRel;
+import net.srt.flink.common.result.SqlExplainResult;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.rest.messages.JobPlanInfo;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.graph.StreamGraph;
+import org.apache.flink.table.api.ExplainDetail;
+import org.apache.flink.table.api.StatementSet;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableResult;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogManager;
+import org.apache.flink.table.delegation.Parser;
+import org.apache.flink.table.delegation.Planner;
+import org.apache.flink.table.expressions.Expression;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+/**
+ * CustomTableEnvironment
+ *
+ * @author zrx
+ * @since 2022/2/5 10:35
+ */
+public interface CustomTableEnvironment {
+
+    TableConfig getConfig();
+
+    CatalogManager getCatalogManager();
+
+    void registerCatalog(String catalogName, Catalog catalog);
+
+    String[] listCatalogs();
+
+    Optional<Catalog> getCatalog(String catalogName);
+
+    TableResult executeSql(String statement);
+
+    Table sqlQuery(String statement);
+
+    void registerTable(String name, Table table);
+
+    String explainSql(String statement, ExplainDetail... extraDetails);
+
+    ObjectNode getStreamGraph(String statement);
+
+    JobPlanInfo getJobPlanInfo(List<String> statements);
+
+    StreamGraph getStreamGraphFromInserts(List<String> statements);
+
+    JobGraph getJobGraphFromInserts(List<String> statements);
+
+    SqlExplainResult explainSqlRecord(String statement, ExplainDetail... extraDetails);
+
+    boolean parseAndLoadConfiguration(String statement, StreamExecutionEnvironment config, Map<String, Object> setMap);
+
+    StatementSet createStatementSet();
+
+    <T> void createTemporaryView(String path, DataStream<T> dataStream, Expression... fields);
+
+    <T> void createTemporaryView(String path, DataStream<T> dataStream, String fields);
+
+    // <T> void createTemporaryView(String path, DataStream<T> dataStream, Schema schema);
+
+    Parser getParser();
+
+    Planner getPlanner();
+
+    List<LineageRel> getLineage(String statement);
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/model/FlinkCDCConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/model/FlinkCDCConfig.java
new file mode 100644
index 0000000..2ed40a3
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/model/FlinkCDCConfig.java
@@ -0,0 +1,287 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.base.model; + +import net.srt.flink.common.model.Schema; + +import java.util.List; +import java.util.Map; + +/** + * FlinkCDCConfig + * + * @author zrx + * @since 2022/1/29 22:50 + */ +public class FlinkCDCConfig { + + private String type; + private String hostname; + private Integer port; + private String username; + private String password; + private Integer checkpoint; + private Integer parallelism; + private String database; + private String schema; + private String table; + private List schemaTableNameList; + private String startupMode; + private Map split; + private Map debezium; + private Map source; + private Map jdbc; + private Map sink; + private List schemaList; + private String schemaFieldName; + + public FlinkCDCConfig() { + } + + public FlinkCDCConfig(String type, String hostname, Integer port, String username, String password, Integer checkpoint, Integer parallelism, String database, String schema, String table, + String startupMode, + Map split, Map debezium, Map source, Map sink, Map jdbc) { + this.type = type; + this.hostname = hostname; + this.port = port; + this.username = username; + this.password = password; + this.checkpoint = checkpoint; + this.parallelism = parallelism; + this.database = database; + this.schema = schema; + this.table = table; + this.startupMode = startupMode; + this.split = split; + this.debezium = debezium; + this.source = source; + this.sink = sink; + this.jdbc = jdbc; + } + + public void init(String type, String hostname, Integer port, String username, String password, Integer checkpoint, Integer parallelism, String database, String schema, String table, + String startupMode, + Map split, Map debezium, Map source, Map sink, Map jdbc) { + this.type = type; + this.hostname = hostname; + this.port = port; + this.username = username; + this.password = password; + this.checkpoint = checkpoint; + this.parallelism = parallelism; + this.database = database; + this.schema = schema; + this.table = table; + this.startupMode = startupMode; + this.split = split; + this.debezium = debezium; + this.source = source; + this.sink = sink; + this.jdbc = jdbc; + } + + public String getType() { + return type; + } + + public void setType(String type) { + this.type = type; + } + + public String getHostname() { + return hostname; + } + + public void setHostname(String hostname) { + this.hostname = hostname; + } + + public Integer getPort() { + return port; + } + + public void setPort(Integer port) { + this.port = port; + } + + public String getUsername() { + return username; + } + + public void setUsername(String username) { + this.username = username; + } + + public String getPassword() { + return password; + } + + public void setPassword(String password) { + this.password = password; + } + + public Integer getCheckpoint() { + return checkpoint; + } + + public void setCheckpoint(Integer checkpoint) { + this.checkpoint = checkpoint; + } + + public Integer getParallelism() { + return parallelism; + } + + public void setParallelism(Integer parallelism) { + this.parallelism = parallelism; + } + + public String 
getDatabase() { + return database; + } + + public void setDatabase(String database) { + this.database = database; + } + + public String getSchema() { + return schema; + } + + public void setSchema(String schema) { + this.schema = schema; + } + + public String getTable() { + return table; + } + + public Map getSource() { + return source; + } + + public void setSource(Map source) { + this.source = source; + } + + public void setTable(String table) { + this.table = table; + } + + public Map getSink() { + return sink; + } + + public List getSchemaTableNameList() { + return schemaTableNameList; + } + + public void setSchemaTableNameList(List schemaTableNameList) { + this.schemaTableNameList = schemaTableNameList; + } + + private boolean skip(String key) { + switch (key) { + case "sink.db": + case "auto.create": + case "table.prefix": + case "table.suffix": + case "table.upper": + case "table.lower": + case "column.replace.line-break": + case "timezone": + return true; + default: + return false; + } + } + + public String getSinkConfigurationString() { + StringBuilder sb = new StringBuilder(); + int index = 0; + for (Map.Entry entry : sink.entrySet()) { + if (skip(entry.getKey())) { + continue; + } + if (index > 0) { + sb.append(","); + } + sb.append("'"); + sb.append(entry.getKey()); + sb.append("' = '"); + sb.append(entry.getValue()); + sb.append("'\n"); + index++; + } + return sb.toString(); + } + + public void setSink(Map sink) { + this.sink = sink; + } + + public String getStartupMode() { + return startupMode; + } + + public void setStartupMode(String startupMode) { + this.startupMode = startupMode; + } + + public List getSchemaList() { + return schemaList; + } + + public void setSchemaList(List schemaList) { + this.schemaList = schemaList; + } + + public String getSchemaFieldName() { + return schemaFieldName; + } + + public void setSchemaFieldName(String schemaFieldName) { + this.schemaFieldName = schemaFieldName; + } + + public Map getDebezium() { + return debezium; + } + + public Map getJdbc() { + return jdbc; + } + + public void setJdbc(Map jdbc) { + this.jdbc = jdbc; + } + + public void setDebezium(Map debezium) { + this.debezium = debezium; + } + + public Map getSplit() { + return split; + } + + public void setSplit(Map split) { + this.split = split; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/model/LineageRel.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/model/LineageRel.java new file mode 100644 index 0000000..67f110e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/model/LineageRel.java @@ -0,0 +1,111 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.base.model; + +/** + * LineageResult + * + * @author zrx + * @since 2022/8/20 21:09 + */ +public class LineageRel { + + private String sourceCatalog; + + private String sourceDatabase; + + private String sourceTable; + + private String sourceColumn; + + private String targetCatalog; + + private String targetDatabase; + + private String targetTable; + + private String targetColumn; + + private static final String DELIMITER = "."; + + public LineageRel(String sourceCatalog, String sourceDatabase, String sourceTable, String sourceColumn, String targetCatalog, String targetDatabase, String targetTable, + String targetColumn) { + this.sourceCatalog = sourceCatalog; + this.sourceDatabase = sourceDatabase; + this.sourceTable = sourceTable; + this.sourceColumn = sourceColumn; + this.targetCatalog = targetCatalog; + this.targetDatabase = targetDatabase; + this.targetTable = targetTable; + this.targetColumn = targetColumn; + } + + public static LineageRel build(String sourceTablePath, String sourceColumn, String targetTablePath, String targetColumn) { + String[] sourceItems = sourceTablePath.split("\\."); + String[] targetItems = targetTablePath.split("\\."); + + return new LineageRel(sourceItems[0], sourceItems[1], sourceItems[2], sourceColumn, targetItems[0], targetItems[1], targetItems[2], targetColumn); + } + + public static LineageRel build(String sourceCatalog, String sourceDatabase, String sourceTable, String sourceColumn, String targetCatalog, String targetDatabase, String targetTable, + String targetColumn) { + return new LineageRel(sourceCatalog, sourceDatabase, sourceTable, sourceColumn, targetCatalog, targetDatabase, targetTable, targetColumn); + } + + public String getSourceCatalog() { + return sourceCatalog; + } + + public String getSourceDatabase() { + return sourceDatabase; + } + + public String getSourceTable() { + return sourceTable; + } + + public String getSourceColumn() { + return sourceColumn; + } + + public String getTargetCatalog() { + return targetCatalog; + } + + public String getTargetDatabase() { + return targetDatabase; + } + + public String getTargetTable() { + return targetTable; + } + + public String getTargetColumn() { + return targetColumn; + } + + public String getSourceTablePath() { + return sourceCatalog + DELIMITER + sourceDatabase + DELIMITER + sourceTable; + } + + public String getTargetTablePath() { + return targetCatalog + DELIMITER + targetDatabase + DELIMITER + targetTable; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/sql/FlinkQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/sql/FlinkQuery.java new file mode 100644 index 0000000..1ab884b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/sql/FlinkQuery.java @@ -0,0 +1,105 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.client.base.sql; + +/** + * FlinkQuery + * + * @author zrx + * @since 2022/7/18 18:43 + **/ +public class FlinkQuery { + + public static String separator() { + return ";\n"; + } + + public static String defaultCatalog() { + return "default_catalog"; + } + + public static String defaultDatabase() { + return "default_database"; + } + + public static String showCatalogs() { + return "SHOW CATALOGS"; + } + + public static String useCatalog(String catalog) { + return String.format("USE CATALOG %s", catalog); + } + + public static String showDatabases() { + return "SHOW DATABASES"; + } + + public static String useDatabase(String database) { + return String.format("USE %s", database); + } + + public static String showTables() { + return "SHOW TABLES"; + } + + public static String showViews() { + return "SHOW VIEWS"; + } + + public static String showFunctions() { + return "SHOW FUNCTIONS"; + } + + public static String showUserFunctions() { + return "SHOW USER FUNCTIONS"; + } + + public static String showModules() { + return "SHOW MODULES"; + } + + public static String descTable(String table) { + return String.format("DESC %s", table); + } + + public static String columnName() { + return "name"; + } + + public static String columnType() { + return "type"; + } + + public static String columnNull() { + return "null"; + } + + public static String columnKey() { + return "key"; + } + + public static String columnExtras() { + return "extras"; + } + + public static String columnWatermark() { + return "watermark"; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/utils/FlinkBaseUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/utils/FlinkBaseUtil.java new file mode 100644 index 0000000..4ad66b8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-base/src/main/java/net/srt/flink/client/base/utils/FlinkBaseUtil.java @@ -0,0 +1,142 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
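A sketch of how the FlinkQuery statement templates compose into a catalog-browsing script (expected output shown in comments):

```java
import net.srt.flink.client.base.sql.FlinkQuery;

public class FlinkQueryDemo {
    public static void main(String[] args) {
        // separator() is ";\n", so the statements join into a runnable script.
        String script = FlinkQuery.useCatalog(FlinkQuery.defaultCatalog())
                + FlinkQuery.separator()
                + FlinkQuery.useDatabase(FlinkQuery.defaultDatabase())
                + FlinkQuery.separator()
                + FlinkQuery.showTables();
        // USE CATALOG default_catalog;
        // USE default_database;
        // SHOW TABLES
        System.out.println(script);
    }
}
```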
+ * + */ + +package net.srt.flink.client.base.utils; + +import net.srt.flink.client.base.constant.FlinkParamConstant; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.ColumnType; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.SqlUtil; +import org.apache.flink.api.java.utils.ParameterTool; +import org.apache.flink.runtime.util.EnvironmentInformation; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * FlinkBaseUtil + * + * @author zrx + * @since 2022/3/9 19:15 + */ +public class FlinkBaseUtil { + + public static Map getParamsFromArgs(String[] args) { + Map params = new HashMap<>(); + ParameterTool parameters = ParameterTool.fromArgs(args); + params.put(FlinkParamConstant.ID, parameters.get(FlinkParamConstant.ID, null)); + params.put(FlinkParamConstant.DRIVER, parameters.get(FlinkParamConstant.DRIVER, null)); + params.put(FlinkParamConstant.URL, parameters.get(FlinkParamConstant.URL, null)); + params.put(FlinkParamConstant.USERNAME, parameters.get(FlinkParamConstant.USERNAME, null)); + params.put(FlinkParamConstant.PASSWORD, parameters.get(FlinkParamConstant.PASSWORD, null)); + return params; + } + + public static String getCDCSqlInsert(Table table, String targetName, String sourceName, FlinkCDCConfig config) { + StringBuilder sb = new StringBuilder("INSERT INTO `"); + sb.append(targetName); + sb.append("` SELECT\n"); + for (int i = 0; i < table.getColumns().size(); i++) { + sb.append(" "); + if (i > 0) { + sb.append(","); + } + sb.append(getColumnProcessing(table.getColumns().get(i), config)).append(" \n"); + } + sb.append(" FROM `"); + sb.append(sourceName); + sb.append("`"); + return sb.toString(); + } + + public static String getFlinkDDL(Table table, String tableName, FlinkCDCConfig config, String sinkSchemaName, String sinkTableName, String pkList) { + StringBuilder sb = new StringBuilder(); + if (Integer.parseInt(EnvironmentInformation.getVersion().split("\\.")[1]) < 13) { + sb.append("CREATE TABLE `"); + } else { + sb.append("CREATE TABLE IF NOT EXISTS `"); + } + sb.append(tableName); + sb.append("` (\n"); + List pks = new ArrayList<>(); + for (int i = 0; i < table.getColumns().size(); i++) { + String type = table.getColumns().get(i).getFlinkType(); + sb.append(" "); + if (i > 0) { + sb.append(","); + } + sb.append("`"); + sb.append(table.getColumns().get(i).getName()); + sb.append("` "); + sb.append(convertSinkColumnType(type, config)); + sb.append("\n"); + if (table.getColumns().get(i).isKeyFlag()) { + pks.add(table.getColumns().get(i).getName()); + } + } + StringBuilder pksb = new StringBuilder("PRIMARY KEY ( "); + for (int i = 0; i < pks.size(); i++) { + if (i > 0) { + pksb.append(","); + } + pksb.append("`"); + pksb.append(pks.get(i)); + pksb.append("`"); + } + pksb.append(" ) NOT ENFORCED\n"); + if (pks.size() > 0) { + sb.append(" ,"); + sb.append(pksb); + } + sb.append(") WITH (\n"); + sb.append(getSinkConfigurationString(table, config, sinkSchemaName, sinkTableName, pkList)); + sb.append(")\n"); + return sb.toString(); + } + + public static String getSinkConfigurationString(Table table, FlinkCDCConfig config, String sinkSchemaName, String sinkTableName, String pkList) { + String configurationString = SqlUtil.replaceAllParam(config.getSinkConfigurationString(), "schemaName", sinkSchemaName); + configurationString = SqlUtil.replaceAllParam(configurationString, "tableName", sinkTableName); + if 
(configurationString.contains("${pkList}")) { + configurationString = SqlUtil.replaceAllParam(configurationString, "pkList", pkList); + } + return configurationString; + } + + public static String convertSinkColumnType(String type, FlinkCDCConfig config) { + if (config.getSink().get("connector").equals("hudi")) { + if (type.equals("TIMESTAMP")) { + return "TIMESTAMP(3)"; + } + } + return type; + } + + public static String getColumnProcessing(Column column, FlinkCDCConfig config) { + if ("true".equals(config.getSink().get("column.replace.line-break")) && ColumnType.STRING.equals(column.getJavaType())) { + return "REGEXP_REPLACE(`" + column.getName() + "`, '\\n', '') AS `" + column.getName() + "`"; + } else { + return "`" + column.getName() + "`"; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-hadoop/dependency-reduced-pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-hadoop/dependency-reduced-pom.xml new file mode 100644 index 0000000..ed70d12 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-hadoop/dependency-reduced-pom.xml @@ -0,0 +1,145 @@ + + + + flink-client + net.srt + 2.0.0 + + 4.0.0 + flink-client-hadoop + + + + maven-compiler-plugin + + + maven-shade-plugin + + + package + + shade + + + + + + + + + org.apache.hadoop + hadoop-common + 3.3.2 + provided + + + guava + com.google.guava + + + servlet-api + javax.servlet + + + + + org.apache.hadoop + hadoop-hdfs + 3.3.2 + provided + + + guava + com.google.guava + + + servlet-api + javax.servlet + + + + + org.apache.hadoop + hadoop-yarn-common + 3.3.2 + provided + + + guava + com.google.guava + + + servlet-api + javax.servlet + + + + + org.apache.hadoop + hadoop-client + 3.3.2 + provided + + + org.apache.hadoop + hadoop-yarn-client + 3.3.2 + provided + + + guava + com.google.guava + + + servlet-api + javax.servlet + + + + + org.apache.hadoop + hadoop-mapreduce-client-core + 3.3.2 + provided + + + guava + com.google.guava + + + servlet-api + javax.servlet + + + + + org.projectlombok + lombok + 1.18.24 + provided + true + + + org.mapstruct + mapstruct + 1.4.2.Final + provided + + + org.mapstruct + mapstruct-jdk8 + 1.4.2.Final + provided + + + org.mapstruct + mapstruct-processor + 1.4.2.Final + provided + + + + 3.3.2 + UTF-8 + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-hadoop/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-hadoop/pom.xml new file mode 100644 index 0000000..b076d67 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/flink-client-hadoop/pom.xml @@ -0,0 +1,146 @@ + + + + flink-client + net.srt + 2.0.0 + + 4.0.0 + + flink-client-hadoop + + + UTF-8 + 3.3.2 + + + + + org.apache.hadoop + hadoop-common + ${hadoop.version} + + + com.google.guava + guava + + + javax.servlet + servlet-api + + + + + + org.apache.hadoop + hadoop-hdfs + ${hadoop.version} + + + com.google.guava + guava + + + javax.servlet + servlet-api + + + + + + org.apache.hadoop + hadoop-yarn-common + ${hadoop.version} + + + com.google.guava + guava + + + javax.servlet + servlet-api + + + + + + org.apache.hadoop + hadoop-client + ${hadoop.version} + + + + org.apache.hadoop + hadoop-yarn-client + ${hadoop.version} + + + com.google.guava + guava + + + javax.servlet + servlet-api + + + + + + org.apache.hadoop + hadoop-mapreduce-client-core + ${hadoop.version} + + + com.google.guava + guava + + + javax.servlet + servlet-api + + + + + + + + + diff --git 
a/srt-cloud-framework/srt-cloud-flink/flink-client/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-client/pom.xml new file mode 100644 index 0000000..174bb93 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-client/pom.xml @@ -0,0 +1,33 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + flink-client + pom + + flink-client-base + flink-client-hadoop + + + + + flink-1.16 + + flink-client-1.16 + + + + flink-1.14 + + flink-client-1.14 + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-common/pom.xml new file mode 100644 index 0000000..edd9aa0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/pom.xml @@ -0,0 +1,57 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-common + + + + com.fasterxml.jackson.core + jackson-annotations + + + com.fasterxml.jackson.core + jackson-databind + + + org.slf4j + slf4j-api + + + com.fasterxml.jackson.datatype + jackson-datatype-jsr310 + 2.13.4 + + + com.github.docker-java + docker-java-core + + + commons-io + commons-io + + + org.apache.commons + commons-lang3 + + + org.bouncycastle + bcpkix-jdk15on + + + + + com.github.docker-java + docker-java-transport-httpclient5 + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/assertion/Asserts.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/assertion/Asserts.java new file mode 100644 index 0000000..5307bd5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/assertion/Asserts.java @@ -0,0 +1,122 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.assertion; + + +import net.srt.flink.common.exception.RunTimeException; + +import java.util.Arrays; +import java.util.Collection; +import java.util.Map; +import java.util.Objects; + +/** + * Asserts + * + * @author wenmo + * @since 2021/7/5 21:57 + */ +public class Asserts { + + private Asserts() { + } + + public static boolean isNotNull(Object object) { + return object != null; + } + + public static boolean isNull(Object object) { + return object == null; + } + + public static boolean isAllNotNull(Object... object) { + return Arrays.stream(object).allMatch(Asserts::isNotNull); + } + + public static boolean isNullString(String str) { + return isNull(str) || str.isEmpty(); + } + + public static boolean isAllNullString(String... str) { + return Arrays.stream(str).allMatch(Asserts::isNullString); + } + + public static boolean isNotNullString(String str) { + return !isNullString(str); + } + + public static boolean isAllNotNullString(String... 
str) { + return Arrays.stream(str).noneMatch(Asserts::isNullString); + } + + public static boolean isEquals(String str1, String str2) { + return Objects.equals(str1, str2); + } + + public static boolean isEqualsIgnoreCase(String str1, String str2) { + return (str1 == null && str2 == null) || (str1 != null && str1.equalsIgnoreCase(str2)); + } + + public static boolean isNullCollection(Collection collection) { + return isNull(collection) || collection.isEmpty(); + } + + public static boolean isNotNullCollection(Collection collection) { + return !isNullCollection(collection); + } + + public static boolean isNullMap(Map map) { + return isNull(map) || map.isEmpty(); + } + + public static boolean isNotNullMap(Map map) { + return !isNullMap(map); + } + + public static void checkNull(Object key, String msg) { + if (key == null) { + throw new RunTimeException(msg); + } + } + + public static void checkNotNull(Object object, String msg) { + if (isNull(object)) { + throw new RunTimeException(msg); + } + } + + public static void checkNullString(String key, String msg) { + if (isNull(key) || isEquals("", key)) { + throw new RunTimeException(msg); + } + } + + public static void checkNullCollection(Collection collection, String msg) { + if (isNullCollection(collection)) { + throw new RunTimeException(msg); + } + } + + public static void checkNullMap(Map map, String msg) { + if (isNullMap(map)) { + throw new RunTimeException(msg); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/classloader/DinkyClassLoader.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/classloader/DinkyClassLoader.java new file mode 100644 index 0000000..66450dd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/classloader/DinkyClassLoader.java @@ -0,0 +1,91 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
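An illustrative sketch of the Asserts helpers above; the messages and values are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

import net.srt.flink.common.assertion.Asserts;

public class AssertsDemo {
    public static void main(String[] args) {
        System.out.println(Asserts.isNullString(""));                 // true: null and "" are treated alike
        System.out.println(Asserts.isAllNotNullString("a", "b"));     // true
        System.out.println(Asserts.isEqualsIgnoreCase("Mysql", "MYSQL")); // true

        List<String> tables = Arrays.asList("ods_orders");
        Asserts.checkNullCollection(tables, "表列表不能为空"); // passes silently

        try {
            Asserts.checkNullString("", "数据源 url 不能为空");
        } catch (RuntimeException e) {
            // check* helpers throw RunTimeException with the supplied message
            System.out.println(e.getMessage()); // 数据源 url 不能为空
        }
    }
}
```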
+ * + */ + +package net.srt.flink.common.classloader; + +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.common.context.JarPathContextHolder; + +import java.io.File; +import java.net.MalformedURLException; +import java.net.URL; +import java.net.URLClassLoader; +import java.net.URLStreamHandlerFactory; +import java.util.Collection; +import java.util.List; + +/** + * @author ZackYoung + * @since 0.7.0 + */ +@Slf4j +public class DinkyClassLoader extends URLClassLoader { + + public DinkyClassLoader(URL[] urls, ClassLoader parent) { + super(new URL[]{}, parent); + //将传入的 urls 实际注册到类加载器 + addURL(urls); + } + + public DinkyClassLoader(Collection fileSet, ClassLoader parent) { + super(new URL[]{}, parent); + URL[] urls = fileSet.stream().map(x -> { + try { + return x.toURI().toURL(); + } catch (MalformedURLException e) { + throw new RuntimeException(e); + } + }).toArray(URL[]::new); + addURL(urls); + } + + public DinkyClassLoader(URL[] urls) { + super(new URL[]{}); + addURL(urls); + } + + public DinkyClassLoader(URL[] urls, ClassLoader parent, URLStreamHandlerFactory factory) { + super(new URL[]{}, parent, factory); + addURL(urls); + } + + public void addURL(URL... urls) { + for (URL url : urls) { + super.addURL(url); + } + } + + public void addURL(String[] paths, List notExistsFiles) { + for (String path : paths) { + File file = new File(path); + try { + if (!file.exists()) { + if (notExistsFiles != null) { + notExistsFiles.add(file.getAbsolutePath()); + } + //记录缺失文件后继续处理剩余路径,而不是直接返回 + continue; + } + super.addURL(file.toURI().toURL()); + JarPathContextHolder.addOtherPlugins(file); + } catch (MalformedURLException e) { + throw new RuntimeException(e); + } + } + } + + public void addURL(String... paths) { + this.addURL(paths, null); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/config/Dialect.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/config/Dialect.java new file mode 100644 index 0000000..5980def --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/config/Dialect.java @@ -0,0 +1,129 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
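A minimal sketch of loading plugin jars through DinkyClassLoader; the jar path is hypothetical, and missing paths are collected into the caller-supplied list instead of aborting the whole batch:

```java
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import net.srt.flink.common.classloader.DinkyClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) {
        DinkyClassLoader loader = new DinkyClassLoader(
                new URL[]{}, Thread.currentThread().getContextClassLoader());

        // Existing jars are also registered with JarPathContextHolder.addOtherPlugins();
        // paths that do not exist end up in the "missing" list.
        List<String> missing = new ArrayList<>();
        loader.addURL(new String[]{"/opt/flink/plugins/demo-udf.jar"}, missing); // hypothetical path
        System.out.println("missing jars: " + missing);
    }
}
```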
+ * + */ + +package net.srt.flink.common.config; + + +import net.srt.flink.common.assertion.Asserts; + +/** + * Dialect + * + * @author zrx + * @since 2021/12/13 + **/ +public enum Dialect { + + SQL(1, "Sql"), + FLINKSQL(2, "FlinkSql"), + FLINKJAR(3, "FlinkJar"), + FLINKSQLENV(4, "FlinkSqlEnv"), + JAVA(5, "Java"), + PYTHON(6, "Python"), + SCALA(7, "Scala"), + MYSQL(8, "Mysql"), + ORACLE(9, "Oracle"), + SQLSERVER(10, "SqlServer"), + POSTGRESQL(11, "PostgreSql"), + CLICKHOUSE(12, "ClickHouse"), + DORIS(13, "Doris"), + PHOENIX(14, "Phoenix"), + HIVE(15, "Hive"), + STARROCKS(16, "StarRocks"), + PRESTO(17, "Presto"), + KUBERNETES_APPLICATION(18, "KubernetesApplaction"); + + private Integer code; + private String value; + + public static final Dialect DEFAULT = Dialect.FLINKSQL; + + Dialect(Integer code, String value) { + this.code = code; + this.value = value; + } + + public Integer getCode() { + return code; + } + + public String getValue() { + return value; + } + + public boolean equalsVal(String valueText) { + return Asserts.isEqualsIgnoreCase(value, valueText); + } + + public static Dialect get(String value) { + for (Dialect type : Dialect.values()) { + if (Asserts.isEqualsIgnoreCase(type.getValue(), value)) { + return type; + } + } + return Dialect.FLINKSQL; + } + + public static Dialect getByCode(String code) { + for (Dialect type : Dialect.values()) { + if (type.getCode().toString().equals(code)) { + return type; + } + } + return Dialect.FLINKSQL; + } + + /** + * Judge sql dialect. + * + * @param value {@link Dialect} + * @return If is flink sql, return false, otherwise return true. + */ + public static boolean notFlinkSql(String value) { + Dialect dialect = Dialect.get(value); + switch (dialect) { + case SQL: + case MYSQL: + case ORACLE: + case SQLSERVER: + case POSTGRESQL: + case CLICKHOUSE: + case DORIS: + case PHOENIX: + case HIVE: + case STARROCKS: + case PRESTO: + return true; + default: + return false; + } + } + + public static boolean isUDF(String value) { + Dialect dialect = Dialect.get(value); + switch (dialect) { + case JAVA: + case SCALA: + case PYTHON: + return true; + default: + return false; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/constant/CommonConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/constant/CommonConstant.java new file mode 100644 index 0000000..83bafd1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/constant/CommonConstant.java @@ -0,0 +1,33 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
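A small sketch of how Dialect routes statements; lookup is case-insensitive and falls back to FLINKSQL for unknown values:

```java
import net.srt.flink.common.config.Dialect;

public class DialectDemo {
    public static void main(String[] args) {
        System.out.println(Dialect.get("mysql"));            // MYSQL
        System.out.println(Dialect.get("unknown"));          // FLINKSQL (default fallback)

        System.out.println(Dialect.notFlinkSql("Mysql"));    // true: route to a JDBC-style executor
        System.out.println(Dialect.notFlinkSql("FlinkSql")); // false: handled by the Flink engine
        System.out.println(Dialect.isUDF("Java"));           // true: Java/Scala/Python are UDF dialects
    }
}
```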
+ * + */ + +package net.srt.flink.common.constant; + +/** + * CommonConstant + * + * @author zrx + * @since 2021/5/28 9:35 + **/ +public interface CommonConstant { + /** + * 实例健康 + */ + String HEALTHY = "1"; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/constant/NetConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/constant/NetConstant.java new file mode 100644 index 0000000..02c5fe2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/constant/NetConstant.java @@ -0,0 +1,47 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.constant; + +public interface NetConstant { + /** + * http:// + */ + String HTTP = "http://"; + /** + * 冒号: + */ + String COLON = ":"; + /** + * 斜杠/ + */ + String SLASH = "/"; + /** + * 连接运行服务器超时时间 10000 + */ + Integer SERVER_TIME_OUT_ACTIVE = 10000; + /** + * 读取服务器超时时间 3000 + */ + Integer READ_TIME_OUT = 3000; + /** + * 连接FLINK历史服务器超时时间 3000 + */ + Integer SERVER_TIME_OUT_HISTORY = 3000; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/context/DinkyClassLoaderContextHolder.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/context/DinkyClassLoaderContextHolder.java new file mode 100644 index 0000000..65a8c26 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/context/DinkyClassLoaderContextHolder.java @@ -0,0 +1,60 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.common.context; + +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.common.classloader.DinkyClassLoader; + +import java.io.IOException; + +/** + * @author ZackYoung + * @since 0.7.0 + */ +@Slf4j +public class DinkyClassLoaderContextHolder { + + private static final ThreadLocal CLASS_LOADER_CONTEXT = new ThreadLocal<>(); + private static final ThreadLocal INIT_CLASS_LOADER_CONTEXT = new ThreadLocal<>(); + + public static void set(DinkyClassLoader classLoader) { + CLASS_LOADER_CONTEXT.set(classLoader); + INIT_CLASS_LOADER_CONTEXT.set(Thread.currentThread().getContextClassLoader()); + Thread.currentThread().setContextClassLoader(classLoader); + } + + public static DinkyClassLoader get() { + return CLASS_LOADER_CONTEXT.get(); + } + + public static void clear() { + DinkyClassLoader dinkyClassLoader = get(); + CLASS_LOADER_CONTEXT.remove(); + try { + dinkyClassLoader.close(); + } catch (IOException e) { + log.error("卸载类失败,reason: {}", e.getMessage()); + throw new RuntimeException(e); + } + dinkyClassLoader = null; + Thread.currentThread().setContextClassLoader(INIT_CLASS_LOADER_CONTEXT.get()); + INIT_CLASS_LOADER_CONTEXT.remove(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/context/JarPathContextHolder.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/context/JarPathContextHolder.java new file mode 100644 index 0000000..7bf1c3a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/context/JarPathContextHolder.java @@ -0,0 +1,61 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
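One plausible usage pattern for the holder above, sketched here as an assumption rather than taken from this commit: pair set() with clear() in a try/finally so the thread's context classloader is always restored and the jar classloader is closed:

```java
import java.net.URL;

import net.srt.flink.common.classloader.DinkyClassLoader;
import net.srt.flink.common.context.DinkyClassLoaderContextHolder;

public class ContextHolderDemo {
    public static void main(String[] args) {
        DinkyClassLoaderContextHolder.set(new DinkyClassLoader(
                new URL[]{}, Thread.currentThread().getContextClassLoader()));
        try {
            // work that should resolve classes through the swapped-in loader
        } finally {
            // clear() closes the DinkyClassLoader and restores the original context classloader
            DinkyClassLoaderContextHolder.clear();
        }
    }
}
```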
+ * + */ + +package net.srt.flink.common.context; + +import java.io.File; +import java.util.Set; +import java.util.concurrent.CopyOnWriteArraySet; + +/** + * @author ZackYoung + * @since 0.7.0 + */ +public class JarPathContextHolder { + + private static final ThreadLocal> UDF_PATH_CONTEXT = new ThreadLocal<>(); + private static final ThreadLocal> OTHER_PLUGINS_PATH_CONTEXT = new ThreadLocal<>(); + + public static void addUdfPath(File file) { + getUdfFile().add(file); + } + + public static void addOtherPlugins(File file) { + getOtherPluginsFiles().add(file); + } + + public static Set getUdfFile() { + if (UDF_PATH_CONTEXT.get() == null) { + UDF_PATH_CONTEXT.set(new CopyOnWriteArraySet<>()); + } + return UDF_PATH_CONTEXT.get(); + } + + public static Set getOtherPluginsFiles() { + if (OTHER_PLUGINS_PATH_CONTEXT.get() == null) { + OTHER_PLUGINS_PATH_CONTEXT.set(new CopyOnWriteArraySet<>()); + } + return OTHER_PLUGINS_PATH_CONTEXT.get(); + } + + public static void clear() { + UDF_PATH_CONTEXT.remove(); + OTHER_PLUGINS_PATH_CONTEXT.remove(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/JobException.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/JobException.java new file mode 100644 index 0000000..7515ee7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/JobException.java @@ -0,0 +1,37 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.exception; + +/** + * JobException + * + * @author zrx + * @since 2021/6/27 + **/ +public class JobException extends RuntimeException { + + public JobException(String message, Throwable cause) { + super(message, cause); + } + + public JobException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/MetaDataException.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/MetaDataException.java new file mode 100644 index 0000000..5a369b7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/MetaDataException.java @@ -0,0 +1,37 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.exception; + +/** + * JobException + * + * @author zrx + * @since 2021/6/27 + **/ +public class MetaDataException extends RuntimeException { + + public MetaDataException(String message, Throwable cause) { + super(message, cause); + } + + public MetaDataException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/RunTimeException.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/RunTimeException.java new file mode 100644 index 0000000..197c31a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/RunTimeException.java @@ -0,0 +1,37 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.exception; + +/** + * RunTimeException + * + * @author zrx + * @since 2021/6/27 + **/ +public class RunTimeException extends RuntimeException { + + public RunTimeException(String message, Throwable cause) { + super(message, cause); + } + + public RunTimeException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/SplitTableException.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/SplitTableException.java new file mode 100644 index 0000000..ee619a2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/SplitTableException.java @@ -0,0 +1,16 @@ +package net.srt.flink.common.exception; + +/** + * @author ZackYoung + * @version 1.0 + * @since 2022/9/2 + */ +public class SplitTableException extends RuntimeException { + public SplitTableException(String message, Throwable cause) { + super(message, cause); + } + + public SplitTableException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/SqlException.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/SqlException.java new file mode 100644 index 0000000..a6e8c96 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/exception/SqlException.java @@ -0,0 +1,37 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.exception; + +/** + * SqlException + * + * @author zrx + * @since 2021/6/22 + **/ +public class SqlException extends RuntimeException { + + public SqlException(String message, Throwable cause) { + super(message, cause); + } + + public SqlException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Catalog.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Catalog.java new file mode 100644 index 0000000..3e6e797 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Catalog.java @@ -0,0 +1,59 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.model; + +import lombok.Getter; +import lombok.Setter; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.List; + +/** + * Catalog + * + * @author zrx + * @since 2022/7/17 21:37 + */ +@Getter +@Setter +public class Catalog implements Serializable { + + private static final long serialVersionUID = -7535759384541414568L; + + private String name; + private List schemas = new ArrayList<>(); + + public Catalog() { + } + + public Catalog(String name) { + this.name = name; + } + + public Catalog(String name, List schemas) { + this.name = name; + this.schemas = schemas; + } + + public static Catalog build(String name) { + return new Catalog(name); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Column.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Column.java new file mode 100644 index 0000000..4a4df3b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Column.java @@ -0,0 +1,67 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.model; + +import lombok.Getter; +import lombok.Setter; + +import java.io.Serializable; + +/** + * Column + * + * @author zrx + * @since 2021/7/19 23:26 + */ +@Setter +@Getter +public class Column implements Serializable { + + private static final long serialVersionUID = 6438514547501611599L; + + private String name; + private String type; + private String comment; + private boolean keyFlag; + private boolean autoIncrement; + private String defaultValue; + private boolean isNullable; + private ColumnType javaType; + private String columnFamily; + private Integer position; + private Integer length; + private Integer precision; + private Integer scale; + private String characterSet; + private String collation; + + public String getFlinkType() { + String flinkType = javaType.getFlinkType(); + if (flinkType.equals("DECIMAL")) { + if (precision == null || precision == 0) { + return flinkType + "(" + 38 + "," + scale + ")"; + } else { + return flinkType + "(" + precision + "," + scale + ")"; + } + } else { + return flinkType; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/ColumnType.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/ColumnType.java new file mode 100644 index 0000000..25f2f1c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/ColumnType.java @@ -0,0 +1,75 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
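A sketch of the DECIMAL fallback in Column.getFlinkType(); when no precision was recorded the widest supported precision (38) is assumed. The column itself is hypothetical:

```java
import net.srt.flink.common.model.Column;
import net.srt.flink.common.model.ColumnType;

public class ColumnDemo {
    public static void main(String[] args) {
        Column amount = new Column();
        amount.setName("amount");
        amount.setJavaType(ColumnType.DECIMAL);
        amount.setScale(2);

        // precision is null -> fall back to DECIMAL(38, scale)
        System.out.println(amount.getFlinkType()); // DECIMAL(38,2)

        amount.setPrecision(10);
        System.out.println(amount.getFlinkType()); // DECIMAL(10,2)
    }
}
```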
+ * + */ + +package net.srt.flink.common.model; + +/** + * ColumnType + * + * @author zrx + * @since 2022/2/17 10:59 + **/ +public enum ColumnType { + + STRING("java.lang.String", "STRING"), + JAVA_LANG_BOOLEAN("java.lang.Boolean", "BOOLEAN"), + BOOLEAN("Boolean", "BOOLEAN NOT NULL"), + JAVA_LANG_BYTE("java.lang.Byte", "TINYINT"), + BYTE("byte", "TINYINT NOT NULL"), + JAVA_LANG_SHORT("java.lang.Short", "SMALLINT"), + SHORT("short", "SMALLINT NOT NULL"), + INTEGER("java.lang.Integer", "INT"), + INT("int", "INT NOT NULL"), + JAVA_LANG_LONG("java.lang.Long", "BIGINT"), + LONG("long", "BIGINT NOT NULL"), + JAVA_LANG_FLOAT("java.lang.Float", "FLOAT"), + FLOAT("float", "FLOAT NOT NULL"), + JAVA_LANG_DOUBLE("java.lang.Double", "DOUBLE"), + DOUBLE("double", "DOUBLE NOT NULL"), + DATE("java.sql.Date", "DATE"), + LOCALDATE("java.time.LocalDate", "DATE"), + TIME("java.sql.Time", "TIME"), + LOCALTIME("java.time.LocalTime", "TIME"), + TIMESTAMP("java.sql.Timestamp", "TIMESTAMP"), + LOCALDATETIME("java.time.LocalDateTime", "TIMESTAMP"), + OFFSETDATETIME("java.time.OffsetDateTime", "TIMESTAMP WITH TIME ZONE"), + INSTANT("java.time.Instant", "TIMESTAMP_LTZ"), + DURATION("java.time.Duration", "INTERVAL SECOND"), + PERIOD("java.time.Period", "INTERVAL YEAR TO MONTH"), + DECIMAL("java.math.BigDecimal", "DECIMAL"), + BYTES("byte[]", "BYTES"), + T("T[]", "ARRAY"), + MAP("java.util.Map", "MAP"); + + private String javaType; + private String flinkType; + + ColumnType(String javaType, String flinkType) { + this.javaType = javaType; + this.flinkType = flinkType; + } + + public String getJavaType() { + return javaType; + } + + public String getFlinkType() { + return flinkType; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/FlinkColumn.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/FlinkColumn.java new file mode 100644 index 0000000..f506cdb --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/FlinkColumn.java @@ -0,0 +1,62 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.common.model; + +import lombok.Getter; +import lombok.Setter; + +import java.io.Serializable; + +/** + * FlinkColumn + * + * @author zrx + * @since 2022/7/18 19:55 + **/ +@Getter +@Setter +public class FlinkColumn implements Serializable { + private static final long serialVersionUID = 4820196727157711974L; + + private int position; + private String name; + private String type; + private String key; + private String nullable; + private String extras; + private String watermark; + + public FlinkColumn() { + } + + public FlinkColumn(int position, String name, String type, String key, String nullable, String extras, String watermark) { + this.position = position; + this.name = name; + this.type = type; + this.key = key; + this.nullable = nullable; + this.extras = extras; + this.watermark = watermark; + } + + public static FlinkColumn build(int position, String name, String type, String key, String nullable, String extras, String watermark) { + return new FlinkColumn(position, name, type, key, nullable, extras, watermark); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/JobLifeCycle.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/JobLifeCycle.java new file mode 100644 index 0000000..67b851b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/JobLifeCycle.java @@ -0,0 +1,68 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.model; + +/** + * JobLifeCycle + * + * @author zrx + * @since 2022/2/1 16:37 + */ +public enum JobLifeCycle { + UNKNOWN(0, "未知"), + CREATE(1, "创建"), + DEVELOP(2, "开发"), + DEBUG(3, "调试"), + RELEASE(4, "发布"), + ONLINE(5, "上线"), + CANCEL(6, "注销"); + + private Integer value; + private String label; + + JobLifeCycle(Integer value, String label) { + this.value = value; + this.label = label; + } + + public Integer getValue() { + return value; + } + + public String getLabel() { + return label; + } + + public static JobLifeCycle get(Integer value) { + for (JobLifeCycle item : JobLifeCycle.values()) { + if (item.getValue().equals(value)) { + return item; + } + } + return JobLifeCycle.UNKNOWN; + } + + public boolean equalsValue(Integer step) { + if (value.equals(step)) { + return true; + } + return false; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/JobStatus.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/JobStatus.java new file mode 100644 index 0000000..55b8537 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/JobStatus.java @@ -0,0 +1,150 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.model; + + +import net.srt.flink.common.assertion.Asserts; + +import java.util.ArrayList; +import java.util.List; + +/** + * JobState + * + * @author zrx + * @since 2022/2/22 14:29 + **/ +public enum JobStatus { + + /** + * The job has been received by the Dispatcher, and is waiting for the job manager to receive + * leadership and to be created. + */ + INITIALIZING("INITIALIZING"), + + /** + * Job is newly created, no task has started to run. + */ + CREATED("CREATED"), + + /** + * Some tasks are scheduled or running, some may be pending, some may be finished. + */ + RUNNING("RUNNING"), + + /** + * The job has failed and is currently waiting for the cleanup to complete. + */ + FAILING("FAILING"), + + /** + * The job has failed with a non-recoverable task failure. + */ + FAILED("FAILED"), + + /** + * Job is being cancelled. + */ + CANCELLING("CANCELLING"), + + /** + * Job has been cancelled. + */ + CANCELED("CANCELED"), + + /** + * All of the job's tasks have successfully finished. + */ + FINISHED("FINISHED"), + + /** + * The job is currently undergoing a reset and total restart. + */ + RESTARTING("RESTARTING"), + + /** + * The job has been suspended which means that it has been stopped but not been removed from a + * potential HA job store. + */ + SUSPENDED("SUSPENDED"), + + /** + * The job is currently reconciling and waits for task execution report to recover state. 
+ */ + RECONCILING("RECONCILING"), + + /** + * The job can't get any info. + */ + UNKNOWN("UNKNOWN"); + + private String value; + + JobStatus(String value) { + this.value = value; + } + + public String getValue() { + return value; + } + + public static JobStatus get(String value) { + for (JobStatus type : JobStatus.values()) { + if (Asserts.isEqualsIgnoreCase(type.getValue(), value)) { + return type; + } + } + return JobStatus.UNKNOWN; + } + + public static boolean isDone(String value) { + switch (get(value)) { + case FAILED: + case CANCELED: + case FINISHED: + case UNKNOWN: + return true; + default: + return false; + } + } + + public boolean isDone() { + switch (this) { + case FAILED: + case CANCELED: + case FINISHED: + case UNKNOWN: + return true; + default: + return false; + } + } + + public static List getAllDoneStatus() { + final List list = new ArrayList<>(4); + list.add(FAILED); + list.add(CANCELED); + list.add(FINISHED); + list.add(UNKNOWN); + return list; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/ProjectSystemConfiguration.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/ProjectSystemConfiguration.java new file mode 100644 index 0000000..6026a2e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/ProjectSystemConfiguration.java @@ -0,0 +1,24 @@ +package net.srt.flink.common.model; + +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +/** + * @ClassName ProjectSystemConfiguration + * @Author zrx + * @Date 2022/12/26 16:54 + */ +public class ProjectSystemConfiguration { + public static final Map PROJECT_SYSTEM_CONFIGURATION = new ConcurrentHashMap<>(); + + public static SystemConfiguration getByProjectId(Long projectId) { + if (projectId == null) { + throw new RuntimeException("projectId is null!"); + } + //不存在,添加默认配置 + if (!PROJECT_SYSTEM_CONFIGURATION.containsKey(projectId)) { + PROJECT_SYSTEM_CONFIGURATION.put(projectId, SystemConfiguration.getInstances()); + } + return PROJECT_SYSTEM_CONFIGURATION.get(projectId); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/QueryData.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/QueryData.java new file mode 100644 index 0000000..167f928 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/QueryData.java @@ -0,0 +1,22 @@ +package net.srt.flink.common.model; + +import lombok.Data; + +@Data +public class QueryData { + private String id; + + private String schemaName; + + private String tableName; + + private Option option; + + @Data + public class Option { + private String where; + private String order; + private String limitStart; + private String limitEnd; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Schema.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Schema.java new file mode 100644 index 0000000..af0e0d8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Schema.java @@ -0,0 +1,71 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
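A brief sketch of the terminal-state helpers above and the per-project configuration cache; the project id is hypothetical:

```java
import net.srt.flink.common.model.JobStatus;
import net.srt.flink.common.model.ProjectSystemConfiguration;
import net.srt.flink.common.model.SystemConfiguration;

public class JobStatusDemo {
    public static void main(String[] args) {
        System.out.println(JobStatus.get("running"));     // RUNNING (lookup is case-insensitive)
        System.out.println(JobStatus.isDone("FINISHED")); // true: FAILED/CANCELED/FINISHED/UNKNOWN are terminal

        // Per-project (tenant) settings, lazily initialized with defaults on first access.
        SystemConfiguration conf = ProjectSystemConfiguration.getByProjectId(1L);
        System.out.println(conf.getSqlSeparator()); // ";\n"
    }
}
```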
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.model; + +import lombok.Getter; +import lombok.Setter; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.List; + +/** + * Schema + * + * @author zrx + * @since 2021/7/19 23:27 + */ +@Getter +@Setter +public class Schema implements Serializable, Comparable { + + private static final long serialVersionUID = 4278304357661271040L; + + private String name; + private List tables = new ArrayList<>(); + private List views = new ArrayList<>(); + private List functions = new ArrayList<>(); + private List userFunctions = new ArrayList<>(); + private List modules = new ArrayList<>(); + + /** + * 需要保留一个空构造方法,否则序列化有问题 + * */ + public Schema() { + } + + public Schema(String name) { + this.name = name; + } + + public Schema(String name, List
tables) { + this.name = name; + this.tables = tables; + } + + public static Schema build(String name) { + return new Schema(name); + } + + @Override + public int compareTo(Schema o) { + return this.name.compareTo(o.getName()); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/SystemConfiguration.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/SystemConfiguration.java new file mode 100644 index 0000000..7213cb3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/SystemConfiguration.java @@ -0,0 +1,205 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.model; + +import com.fasterxml.jackson.databind.JsonNode; +import net.srt.flink.common.assertion.Asserts; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +/** + * SystemConfiguration + * + * @author zrx + * @since 2021/11/18 + **/ +public class SystemConfiguration { + + + public static SystemConfiguration getInstances() { + return new SystemConfiguration(); + } + + private SystemConfiguration() { + CONFIGURATION_LIST.add(sqlSubmitJarPath); + CONFIGURATION_LIST.add(sqlSubmitJarParas); + CONFIGURATION_LIST.add(sqlSubmitJarMainAppClass); + CONFIGURATION_LIST.add(useRestAPI); + CONFIGURATION_LIST.add(sqlSeparator); + CONFIGURATION_LIST.add(jobIdWait); + } + + private final List CONFIGURATION_LIST = new ArrayList<>(); + + private Configuration sqlSubmitJarPath = new Configuration( + "sqlSubmitJarPath", + "FlinkSQL提交Jar路径", + ValueType.STRING, + "hdfs:///flink-app/jar/flink-app.jar", + "用于指定Application模式提交FlinkSQL的Jar的路径"); + private Configuration sqlSubmitJarParas = new Configuration( + "sqlSubmitJarParas", + "FlinkSQL提交Jar参数", + ValueType.STRING, + "", + "用于指定Application模式提交FlinkSQL的Jar的参数"); + private Configuration sqlSubmitJarMainAppClass = new Configuration( + "sqlSubmitJarMainAppClass", + "FlinkSQL提交Jar主类", + ValueType.STRING, + "net.srt.flink.app.MainApp", + "用于指定Application模式提交FlinkSQL的Jar的主类"); + private Configuration useRestAPI = new Configuration( + "useRestAPI", + "使用 RestAPI", + ValueType.BOOLEAN, + true, + "在运维 Flink 任务时是否使用 RestAPI"); + private Configuration sqlSeparator = new Configuration( + "sqlSeparator", + "FlinkSQL语句分割符", + ValueType.STRING, + ";\n", + "Flink SQL 的语句分割符"); + private Configuration jobIdWait = new Configuration( + "jobIdWait", + "获取 Job ID 的最大等待时间(秒)", + ValueType.INT, + 30, + "提交 Application 或 PerJob 任务时获取 Job ID 的最大等待时间(秒)"); + + public void setConfiguration(JsonNode jsonNode) { + for (Configuration item : CONFIGURATION_LIST) { + if (!jsonNode.has(item.getName())) { + continue; + } + switch (item.getType()) { + case BOOLEAN: +
+                    item.setValue(jsonNode.get(item.getName()).asBoolean());
+                    break;
+                case INT:
+                    item.setValue(jsonNode.get(item.getName()).asInt());
+                    break;
+                default:
+                    item.setValue(jsonNode.get(item.getName()).asText());
+            }
+        }
+    }
+
+    public void addConfiguration(Map<String, Object> map) {
+        for (Configuration item : CONFIGURATION_LIST) {
+            if (map.containsKey(item.getName()) && item.getType().equals(ValueType.BOOLEAN)) {
+                map.put(item.getName(), Asserts.isEqualsIgnoreCase("true", map.get(item.getName()).toString()));
+            }
+            if (!map.containsKey(item.getName())) {
+                map.put(item.getName(), item.getValue());
+            }
+        }
+    }
+
+    public String getSqlSubmitJarParas() {
+        return sqlSubmitJarParas.getValue().toString();
+    }
+
+    public void setSqlSubmitJarParas(String sqlSubmitJarParas) {
+        this.sqlSubmitJarParas.setValue(sqlSubmitJarParas);
+    }
+
+    public String getSqlSubmitJarPath() {
+        return sqlSubmitJarPath.getValue().toString();
+    }
+
+    public void setSqlSubmitJarPath(String sqlSubmitJarPath) {
+        this.sqlSubmitJarPath.setValue(sqlSubmitJarPath);
+    }
+
+    public String getSqlSubmitJarMainAppClass() {
+        return sqlSubmitJarMainAppClass.getValue().toString();
+    }
+
+    public void setSqlSubmitJarMainAppClass(String sqlSubmitJarMainAppClass) {
+        this.sqlSubmitJarMainAppClass.setValue(sqlSubmitJarMainAppClass);
+    }
+
+    public boolean isUseRestAPI() {
+        return (boolean) useRestAPI.getValue();
+    }
+
+    public void setUseRestAPI(boolean useRestAPI) {
+        this.useRestAPI.setValue(useRestAPI);
+    }
+
+    public String getSqlSeparator() {
+        return sqlSeparator.getValue().toString();
+    }
+
+    public void setSqlSeparator(String sqlSeparator) {
+        this.sqlSeparator.setValue(sqlSeparator);
+    }
+
+    public int getJobIdWait() {
+        return (int) jobIdWait.getValue();
+    }
+
+    public void setJobIdWait(int jobIdWait) {
+        this.jobIdWait.setValue(jobIdWait);
+    }
+
+    enum ValueType {
+        STRING, INT, DOUBLE, FLOAT, BOOLEAN, DATE
+    }
+
+    public class Configuration {
+
+        private String name;
+        private String label;
+        private ValueType type;
+        private Object defaultValue;
+        private Object value;
+        private String note;
+
+        public Configuration(String name, String label, ValueType type, Object defaultValue, String note) {
+            this.name = name;
+            this.label = label;
+            this.type = type;
+            this.defaultValue = defaultValue;
+            this.value = defaultValue;
+            this.note = note;
+        }
+
+        public void setValue(Object value) {
+            this.value = value;
+        }
+
+        public Object getValue() {
+            return value;
+        }
+
+        public ValueType getType() {
+            return type;
+        }
+
+        public String getName() {
+            return name;
+        }
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Table.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Table.java
new file mode 100644
index 0000000..3672d44
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/Table.java
@@ -0,0 +1,275 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.model; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.utils.SqlUtil; + +import java.beans.Transient; +import java.io.Serializable; +import java.util.ArrayList; +import java.util.Date; +import java.util.List; + + +/** + * Table + * + * @author zrx + * @since 2021/7/19 23:27 + */ +@Getter +@Setter +public class Table implements Serializable, Comparable
, Cloneable { + + private static final long serialVersionUID = 4209205512472367171L; + + private String name; + private String schema; + private String catalog; + private String comment; + private String type; + private String engine; + private String options; + private Long rows; + private Date createTime; + private Date updateTime; + /** + * 表类型 + */ + private TableType tableType = TableType.SINGLE_DATABASE_AND_TABLE; + /** + * 分库或分表对应的表名 + */ + private List schemaTableNameList; + + + private List columns; + + public Table() { + } + + public Table(String name, String schema, List columns) { + this.name = name; + this.schema = schema; + this.columns = columns; + } + + @Transient + public String getSchemaTableName() { + return Asserts.isNullString(schema) ? name : schema + "." + name; + } + + @Transient + public String getSchemaTableNameWithUnderline() { + return Asserts.isNullString(schema) ? name : schema + "_" + name; + } + + @Override + public int compareTo(Table o) { + return this.name.compareTo(o.getName()); + } + + public static Table build(String name) { + return new Table(name, null, null); + } + + public static Table build(String name, String schema) { + return new Table(name, schema, null); + } + + public static Table build(String name, String schema, List columns) { + return new Table(name, schema, columns); + } + + @Transient + public String getFlinkTableWith(String flinkConfig) { + String tableWithSql = ""; + if (Asserts.isNotNullString(flinkConfig)) { + tableWithSql = SqlUtil.replaceAllParam(flinkConfig, "schemaName", schema); + tableWithSql = SqlUtil.replaceAllParam(tableWithSql, "tableName", name); + } + return tableWithSql; + } + + @Transient + public String getFlinkTableSql(String flinkConfig) { + return getFlinkDDL(flinkConfig, name); + } + + @Transient + public String getFlinkDDL(String flinkConfig, String tableName) { + StringBuilder sb = new StringBuilder(); + sb.append("CREATE TABLE IF NOT EXISTS " + tableName + " (\n"); + List pks = new ArrayList<>(); + for (int i = 0; i < columns.size(); i++) { + String type = columns.get(i).getFlinkType(); + sb.append(" "); + if (i > 0) { + sb.append(","); + } + sb.append("`" + columns.get(i).getName() + "` " + type); + if (Asserts.isNotNullString(columns.get(i).getComment())) { + if (columns.get(i).getComment().contains("\'") | columns.get(i).getComment().contains("\"")) { + sb.append(" COMMENT '" + columns.get(i).getComment().replaceAll("\"|'", "") + "'"); + } else { + sb.append(" COMMENT '" + columns.get(i).getComment() + "'"); + } + } + sb.append("\n"); + if (columns.get(i).isKeyFlag()) { + pks.add(columns.get(i).getName()); + } + } + StringBuilder pksb = new StringBuilder("PRIMARY KEY ( "); + for (int i = 0; i < pks.size(); i++) { + if (i > 0) { + pksb.append(","); + } + pksb.append("`" + pks.get(i) + "`"); + } + pksb.append(" ) NOT ENFORCED\n"); + if (pks.size() > 0) { + sb.append(" ,"); + sb.append(pksb); + } + sb.append(")"); + if (Asserts.isNotNullString(comment)) { + if (comment.contains("\'") | comment.contains("\"")) { + sb.append(" COMMENT '" + comment.replaceAll("\"|'", "") + "'\n"); + } else { + sb.append(" COMMENT '" + comment + "'\n"); + } + } + sb.append(" WITH (\n"); + sb.append(flinkConfig); + sb.append(")\n"); + return sb.toString(); + } + + @Transient + public String getFlinkTableSql(String catalogName, String flinkConfig) { + StringBuilder sb = new StringBuilder("DROP TABLE IF EXISTS "); + String fullSchemaName = catalogName + "." + schema + "." 
+ name; + sb.append(name + ";\n"); + sb.append("CREATE TABLE IF NOT EXISTS " + name + " (\n"); + List pks = new ArrayList<>(); + for (int i = 0; i < columns.size(); i++) { + String type = columns.get(i).getFlinkType(); + sb.append(" "); + if (i > 0) { + sb.append(","); + } + sb.append("`" + columns.get(i).getName() + "` " + type); + if (Asserts.isNotNullString(columns.get(i).getComment())) { + if (columns.get(i).getComment().contains("\'") | columns.get(i).getComment().contains("\"")) { + sb.append(" COMMENT '" + columns.get(i).getComment().replaceAll("\"|'", "") + "'"); + } else { + sb.append(" COMMENT '" + columns.get(i).getComment() + "'"); + } + } + sb.append("\n"); + if (columns.get(i).isKeyFlag()) { + pks.add(columns.get(i).getName()); + } + } + StringBuilder pksb = new StringBuilder("PRIMARY KEY ( "); + for (int i = 0; i < pks.size(); i++) { + if (i > 0) { + pksb.append(","); + } + pksb.append("`" + pks.get(i) + "`"); + } + pksb.append(" ) NOT ENFORCED\n"); + if (pks.size() > 0) { + sb.append(" ,"); + sb.append(pksb); + } + sb.append(")"); + if (Asserts.isNotNullString(comment)) { + if (comment.contains("\'") | comment.contains("\"")) { + sb.append(" COMMENT '" + comment.replaceAll("\"|'", "") + "'\n"); + } else { + sb.append(" COMMENT '" + comment + "'\n"); + } + } + sb.append(" WITH (\n"); + sb.append(getFlinkTableWith(flinkConfig)); + sb.append("\n);\n"); + return sb.toString(); + } + + @Transient + public String getSqlSelect(String catalogName) { + StringBuilder sb = new StringBuilder("SELECT\n"); + for (int i = 0; i < columns.size(); i++) { + sb.append(" "); + if (i > 0) { + sb.append(","); + } + String columnComment = columns.get(i).getComment(); + if (Asserts.isNotNullString(columnComment)) { + if (columnComment.contains("\'") | columnComment.contains("\"")) { + columnComment = columnComment.replaceAll("\"|'", ""); + } + sb.append("`" + columns.get(i).getName() + "` -- " + columnComment + " \n"); + } else { + sb.append("`" + columns.get(i).getName() + "` \n"); + + } + } + if (Asserts.isNotNullString(comment)) { + sb.append(" FROM " + schema + "." + name + ";" + " -- " + comment + "\n"); + } else { + sb.append(" FROM " + schema + "." 
+ name + ";\n"); + } + return sb.toString(); + } + + @Transient + public String getCDCSqlInsert(String targetName, String sourceName) { + StringBuilder sb = new StringBuilder("INSERT INTO "); + sb.append(targetName); + sb.append(" SELECT\n"); + for (int i = 0; i < columns.size(); i++) { + sb.append(" "); + if (i > 0) { + sb.append(","); + } + sb.append("`" + columns.get(i).getName() + "` \n"); + } + sb.append(" FROM "); + sb.append(sourceName); + return sb.toString(); + } + + @Override + public Object clone() { + Table table = null; + try { + table = (Table) super.clone(); + } catch (CloneNotSupportedException e) { + e.printStackTrace(); + } + return table; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TableType.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TableType.java new file mode 100644 index 0000000..434da01 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TableType.java @@ -0,0 +1,36 @@ +package net.srt.flink.common.model; + +/** + * 分库分表的类型 + */ +public enum TableType { + /** + * 分库分表 + */ + SPLIT_DATABASE_AND_TABLE, + /** + * 分表单库 + */ + SPLIT_DATABASE_AND_SINGLE_TABLE, + /** + * 单库分表 + */ + SINGLE_DATABASE_AND_SPLIT_TABLE + /** + * 单库单表 + */ + , SINGLE_DATABASE_AND_TABLE; + + public static TableType type(boolean splitDatabase, boolean splitTable) { + if (splitDatabase && splitTable) { + return TableType.SPLIT_DATABASE_AND_TABLE; + } + if (splitTable) { + return TableType.SINGLE_DATABASE_AND_SPLIT_TABLE; + } + if (!splitDatabase) { + return TableType.SINGLE_DATABASE_AND_TABLE; + } + return TableType.SPLIT_DATABASE_AND_SINGLE_TABLE; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TaskOperatingSavepointSelect.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TaskOperatingSavepointSelect.java new file mode 100644 index 0000000..8cdc93c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TaskOperatingSavepointSelect.java @@ -0,0 +1,67 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.model; + +/** + * @author csz + * @version 1.0 + **/ +public enum TaskOperatingSavepointSelect { + + + DEFAULT_CONFIG(0, "defaultConfig", "默认保存点"), + + LATEST(1, "latest", "最新保存点"); + + + private Integer code; + + private String name; + + private String message; + + TaskOperatingSavepointSelect(Integer code, String name, String message) { + this.code = code; + this.name = name; + this.message = message; + } + + public Integer getCode() { + return code; + } + + public String getName() { + return name; + } + + public String getMessage() { + return message; + } + + public static TaskOperatingSavepointSelect valueByCode(Integer code) { + for (TaskOperatingSavepointSelect savepointSelect : TaskOperatingSavepointSelect.values()) { + if (savepointSelect.getCode().equals(code)) { + return savepointSelect; + } + } + return null; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TaskOperatingStatus.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TaskOperatingStatus.java new file mode 100644 index 0000000..641241b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/model/TaskOperatingStatus.java @@ -0,0 +1,65 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.model; + +/** + * @author mydq + * @version 1.0 + **/ +public enum TaskOperatingStatus { + + INIT(1, "init", "初始化"), + + OPERATING_BEFORE(4, "operatingBefore", "操作前准备"), + + TASK_STATUS_NO_DONE(8, "taskStatusNoDone", "任务不是完成状态"), + + OPERATING(12, "operating", "正在操作"), + + EXCEPTION(13, "exception", "异常"), + + SUCCESS(16, "success", "成功"), + FAIL(20, "fail", "失败"); + + private Integer code; + + private String name; + + private String message; + + TaskOperatingStatus(Integer code, String name, String message) { + this.code = code; + this.name = name; + this.message = message; + } + + public Integer getCode() { + return code; + } + + public String getName() { + return name; + } + + public String getMessage() { + return message; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/AbstractPool.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/AbstractPool.java new file mode 100644 index 0000000..89a67db --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/AbstractPool.java @@ -0,0 +1,53 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.common.pool;
+
+import java.util.Map;
+
+/**
+ * AbstractPool
+ *
+ * @author zrx
+ * @since 2022/5/28 19:40
+ */
+public abstract class AbstractPool<T> {
+
+    public abstract Map<String, T> getMap();
+
+    public boolean exist(String key) {
+        return getMap().containsKey(key);
+    }
+
+    public int push(String key, T entity) {
+        getMap().put(key, entity);
+        return getMap().size();
+    }
+
+    public int remove(String key) {
+        getMap().remove(key);
+        return getMap().size();
+    }
+
+    public T get(String key) {
+        return getMap().get(key);
+    }
+
+    public abstract void refresh(T entity);
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/ClassEntity.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/ClassEntity.java
new file mode 100644
index 0000000..2d46248
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/ClassEntity.java
@@ -0,0 +1,61 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.common.pool; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.assertion.Asserts; + +/** + * ClassEntity + * + * @author zrx + * @since 2022/1/12 23:52 + */ +@Getter +@Setter +public class ClassEntity { + private String name; + private String code; + private byte[] classByte; + + public ClassEntity(String name, String code) { + this.name = name; + this.code = code; + } + + public ClassEntity(String name, String code, byte[] classByte) { + this.name = name; + this.code = code; + this.classByte = classByte; + } + + public static ClassEntity build(String name, String code) { + return new ClassEntity(name, code); + } + + public boolean equals(ClassEntity entity) { + if (Asserts.isEquals(name, entity.getName()) && Asserts.isEquals(code, entity.getCode())) { + return true; + } else { + return false; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/ClassPool.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/ClassPool.java new file mode 100644 index 0000000..720d046 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/pool/ClassPool.java @@ -0,0 +1,80 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.pool; + +import java.util.List; +import java.util.Vector; + +/** + * ClassPool + * + * @author zrx + * @since 2022/1/12 23:52 + */ +public class ClassPool { + + private static volatile List classList = new Vector<>(); + + public static boolean exist(String name) { + for (ClassEntity executorEntity : classList) { + if (executorEntity.getName().equals(name)) { + return true; + } + } + return false; + } + + public static boolean exist(ClassEntity entity) { + for (ClassEntity executorEntity : classList) { + if (executorEntity.equals(entity)) { + return true; + } + } + return false; + } + + public static Integer push(ClassEntity executorEntity) { + if (exist(executorEntity.getName())) { + remove(executorEntity.getName()); + } + classList.add(executorEntity); + return classList.size(); + } + + public static Integer remove(String name) { + int count = classList.size(); + for (int i = 0; i < classList.size(); i++) { + if (name.equals(classList.get(i).getName())) { + classList.remove(i); + break; + } + } + return count - classList.size(); + } + + public static ClassEntity get(String name) { + for (ClassEntity executorEntity : classList) { + if (executorEntity.getName().equals(name)) { + return executorEntity; + } + } + return null; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/AbstractResult.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/AbstractResult.java new file mode 100644 index 0000000..f2cb2cd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/AbstractResult.java @@ -0,0 +1,105 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.result; + +import com.fasterxml.jackson.annotation.JsonFormat; +import com.fasterxml.jackson.databind.annotation.JsonDeserialize; +import com.fasterxml.jackson.databind.annotation.JsonSerialize; +import com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer; +import com.fasterxml.jackson.datatype.jsr310.ser.LocalDateTimeSerializer; + +import java.time.Duration; +import java.time.LocalDateTime; + +/** + * AbstractResult + * + * @author zrx + * @since 2021/6/29 22:49 + */ +public class AbstractResult { + + protected boolean success; + @JsonDeserialize(using = LocalDateTimeDeserializer.class) + @JsonSerialize(using = LocalDateTimeSerializer.class) + @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss") + protected LocalDateTime startTime; + @JsonDeserialize(using = LocalDateTimeDeserializer.class) + @JsonSerialize(using = LocalDateTimeSerializer.class) + @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss") + protected LocalDateTime endTime; + protected long time; + protected String error; + + public void success() { + this.setEndTime(LocalDateTime.now()); + this.setSuccess(true); + } + + public void error(String error) { + this.setEndTime(LocalDateTime.now()); + this.setSuccess(false); + this.setError(error); + } + + public void setStartTime(LocalDateTime startTime) { + this.startTime = startTime; + } + + public boolean isSuccess() { + return success; + } + + public void setSuccess(boolean success) { + this.success = success; + } + + public LocalDateTime getStartTime() { + return startTime; + } + + public LocalDateTime getEndTime() { + return endTime; + } + + public void setEndTime(LocalDateTime endTime) { + this.endTime = endTime; + if (startTime != null && endTime != null) { + Duration duration = Duration.between(startTime, endTime); + time = duration.toMillis(); + } + } + + public String getError() { + return error; + } + + public void setError(String error) { + this.error = error; + } + + public long getTime() { + return time; + } + + public void setTime(long time) { + this.time = time; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/ExplainResult.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/ExplainResult.java new file mode 100644 index 0000000..48781a2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/ExplainResult.java @@ -0,0 +1,64 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.result; + +import java.util.List; + +/** + * ExplainResult + * + * @author zrx + * @since 2021/12/12 13:11 + */ +public class ExplainResult { + private boolean correct; + private int total; + private List sqlExplainResults; + + public ExplainResult(boolean correct, int total, List sqlExplainResults) { + this.correct = correct; + this.total = total; + this.sqlExplainResults = sqlExplainResults; + } + + public boolean isCorrect() { + return correct; + } + + public void setCorrect(boolean correct) { + this.correct = correct; + } + + public int getTotal() { + return total; + } + + public void setTotal(int total) { + this.total = total; + } + + public List getSqlExplainResults() { + return sqlExplainResults; + } + + public void setSqlExplainResults(List sqlExplainResults) { + this.sqlExplainResults = sqlExplainResults; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/IResult.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/IResult.java new file mode 100644 index 0000000..5f4c893 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/IResult.java @@ -0,0 +1,35 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.result; + +import java.time.LocalDateTime; + +/** + * IResult + * + * @author zrx + * @since 2021/5/25 16:22 + **/ +public interface IResult { + + void setStartTime(LocalDateTime startTime); + + String getJobId(); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/SqlExplainResult.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/SqlExplainResult.java new file mode 100644 index 0000000..e167e32 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/result/SqlExplainResult.java @@ -0,0 +1,159 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.result; + +import com.fasterxml.jackson.annotation.JsonFormat; +import com.fasterxml.jackson.databind.annotation.JsonDeserialize; +import com.fasterxml.jackson.databind.annotation.JsonSerialize; +import com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer; +import com.fasterxml.jackson.datatype.jsr310.ser.LocalDateTimeSerializer; + +import java.time.LocalDateTime; + +/** + * 解释结果 + * + * @author zrx + * @since 2021/6/7 22:06 + **/ +public class SqlExplainResult { + private Integer index; + private String type; + private String sql; + private String parse; + private String explain; + private String error; + private boolean parseTrue; + private boolean explainTrue; + @JsonDeserialize(using = LocalDateTimeDeserializer.class) + @JsonSerialize(using = LocalDateTimeSerializer.class) + @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss") + private LocalDateTime explainTime; + + public SqlExplainResult() { + } + + public SqlExplainResult(Integer index, String type, String sql, String parse, String explain, String error, boolean parseTrue, boolean explainTrue, LocalDateTime explainTime) { + this.index = index; + this.type = type; + this.sql = sql; + this.parse = parse; + this.explain = explain; + this.error = error; + this.parseTrue = parseTrue; + this.explainTrue = explainTrue; + this.explainTime = explainTime; + } + + public static SqlExplainResult success(String type, String sql, String explain) { + return new SqlExplainResult(1, type, sql, null, explain, null, true, true, LocalDateTime.now()); + } + + public static SqlExplainResult fail(String sql, String error) { + return new SqlExplainResult(1, null, sql, null, null, error, false, false, LocalDateTime.now()); + } + + public Integer getIndex() { + return index; + } + + public void setIndex(Integer index) { + this.index = index; + } + + public String getType() { + return type; + } + + public void setType(String type) { + this.type = type; + } + + public String getSql() { + return sql; + } + + public void setSql(String sql) { + this.sql = sql; + } + + public String getParse() { + return parse; + } + + public void setParse(String parse) { + this.parse = parse; + } + + public String getExplain() { + return explain; + } + + public void setExplain(String explain) { + this.explain = explain; + } + + public String getError() { + return error; + } + + public void setError(String error) { + this.error = error; + } + + public boolean isParseTrue() { + return parseTrue; + } + + public void setParseTrue(boolean parseTrue) { + this.parseTrue = parseTrue; + } + + public boolean isExplainTrue() { + return explainTrue; + } + + public void setExplainTrue(boolean explainTrue) { + this.explainTrue = explainTrue; + } + + public LocalDateTime getExplainTime() { + return explainTime; + } + + public void setExplainTime(LocalDateTime explainTime) { + this.explainTime = explainTime; + } + + @Override + public String toString() { + return "SqlExplainRecord{" + + "index=" + index + + ", type='" + type + '\'' + + ", sql='" + sql + '\'' + + ", parse='" + parse + '\'' + + ", explain='" + explain + '\'' + + ", error='" + error + '\'' + + ", parseTrue=" + parseTrue + + ", explainTrue=" + explainTrue + + ", explainTime=" + explainTime + + '}'; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/JSONUtil.java 
b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/JSONUtil.java new file mode 100644 index 0000000..eafd55d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/JSONUtil.java @@ -0,0 +1,202 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.utils; + +import com.fasterxml.jackson.core.JsonParser; +import com.fasterxml.jackson.core.type.TypeReference; +import com.fasterxml.jackson.databind.DeserializationContext; +import com.fasterxml.jackson.databind.JsonDeserializer; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.ObjectWriter; +import com.fasterxml.jackson.databind.SerializationFeature; +import com.fasterxml.jackson.databind.node.ArrayNode; +import com.fasterxml.jackson.databind.node.ObjectNode; +import com.fasterxml.jackson.databind.node.TextNode; +import com.fasterxml.jackson.databind.type.CollectionType; +import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule; +import net.srt.flink.common.assertion.Asserts; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.TimeZone; + +import static com.fasterxml.jackson.databind.DeserializationFeature.ACCEPT_EMPTY_ARRAY_AS_NULL_OBJECT; +import static com.fasterxml.jackson.databind.DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES; +import static com.fasterxml.jackson.databind.DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_AS_NULL; +import static com.fasterxml.jackson.databind.MapperFeature.REQUIRE_SETTERS_FOR_GETTERS; +import static java.nio.charset.StandardCharsets.UTF_8; + +/** + * JSONUtil + * + * @author zrx + * @since 2022/2/23 19:57 + **/ +public class JSONUtil { + private static final Logger logger = LoggerFactory.getLogger(JSONUtil.class); + + private static final ObjectMapper objectMapper = new ObjectMapper() + .configure(FAIL_ON_UNKNOWN_PROPERTIES, false) + .configure(ACCEPT_EMPTY_ARRAY_AS_NULL_OBJECT, true) + .configure(READ_UNKNOWN_ENUM_VALUES_AS_NULL, true) + .configure(REQUIRE_SETTERS_FOR_GETTERS, true) + .registerModule(new JavaTimeModule()) + .setTimeZone(TimeZone.getDefault()); + + public static ArrayNode createArrayNode() { + return objectMapper.createArrayNode(); + } + + public static ObjectNode createObjectNode() { + return objectMapper.createObjectNode(); + } + + public static JsonNode toJsonNode(Object obj) { + return objectMapper.valueToTree(obj); + } + + public static String toJsonString(Object object, SerializationFeature feature) { + try { + ObjectWriter writer = 
objectMapper.writer(feature); + return writer.writeValueAsString(object); + } catch (Exception e) { + logger.error("object to json exception!", e); + } + + return null; + } + + public static T parseObject(String json, Class clazz) { + if (Asserts.isNullString(json)) { + return null; + } + try { + return objectMapper.readValue(json, clazz); + } catch (Exception e) { + logger.error("parse object exception!", e); + } + return null; + } + + public static T parseObject(byte[] src, Class clazz) { + if (src == null) { + return null; + } + String json = new String(src, UTF_8); + return parseObject(json, clazz); + } + + public static List toList(String json, Class clazz) { + if (Asserts.isNullString(json)) { + return Collections.emptyList(); + } + try { + CollectionType listType = objectMapper.getTypeFactory().constructCollectionType(ArrayList.class, clazz); + return objectMapper.readValue(json, listType); + } catch (Exception e) { + logger.error("parse list exception!", e); + } + return Collections.emptyList(); + } + + public static Map toMap(String json) { + return parseObject(json, new TypeReference>() { + }); + } + + public static Map toMap(String json, Class classK, Class classV) { + return parseObject(json, new TypeReference>() { + }); + } + + public static T parseObject(String json, TypeReference type) { + if (Asserts.isNullString(json)) { + return null; + } + try { + return objectMapper.readValue(json, type); + } catch (Exception e) { + logger.error("json to map exception!", e); + } + return null; + } + + public static String toJsonString(Object object) { + if (Asserts.isNull(object)) { + return null; + } + try { + return objectMapper.writeValueAsString(object); + } catch (Exception e) { + throw new RuntimeException("Object json deserialization exception.", e); + } + } + + public static byte[] toJsonByteArray(T obj) { + if (obj == null) { + return null; + } + String json = ""; + try { + json = toJsonString(obj); + } catch (Exception e) { + logger.error("json serialize exception.", e); + } + return json.getBytes(UTF_8); + } + + public static ObjectNode parseObject(String text) { + try { + if (text.isEmpty()) { + return parseObject(text, ObjectNode.class); + } else { + return (ObjectNode) objectMapper.readTree(text); + } + } catch (Exception e) { + throw new RuntimeException("String json deserialization exception.", e); + } + } + + public static ArrayNode parseArray(String text) { + try { + return (ArrayNode) objectMapper.readTree(text); + } catch (Exception e) { + throw new RuntimeException("Json deserialization exception.", e); + } + } + + public static class JsonDataDeserializer extends JsonDeserializer { + @Override + public String deserialize(JsonParser p, DeserializationContext ctxt) throws IOException { + JsonNode node = p.getCodec().readTree(p); + if (node instanceof TextNode) { + return node.asText(); + } else { + return node.toString(); + } + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/LogUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/LogUtil.java new file mode 100644 index 0000000..ffbd59a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/LogUtil.java @@ -0,0 +1,68 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.common.utils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.io.StringWriter;
+import java.time.LocalDateTime;
+
+/**
+ * LogUtil
+ *
+ * @author zrx
+ * @since 2022/2/11 15:46
+ **/
+public class LogUtil {
+
+    private static final Logger logger = LoggerFactory.getLogger(LogUtil.class);
+
+    public static String getError(Throwable e) {
+        String error = null;
+        try (StringWriter sw = new StringWriter();
+             PrintWriter pw = new PrintWriter(sw)) {
+            e.printStackTrace(pw);
+            error = sw.toString();
+            logger.error(error);
+        } catch (IOException ioe) {
+            logger.error("Unable to capture the stack trace", ioe);
+        }
+        // return outside finally, so an exception thrown in the try block is not silently swallowed
+        return error;
+    }
+
+    public static String getError(String msg, Throwable e) {
+        String error = null;
+        try (StringWriter sw = new StringWriter();
+             PrintWriter pw = new PrintWriter(sw)) {
+            e.printStackTrace(pw);
+            LocalDateTime now = LocalDateTime.now();
+            error = now.toString() + ": " + msg + " \nError message:\n " + sw.toString();
+            logger.error(error);
+        } catch (IOException ioe) {
+            logger.error("Unable to capture the stack trace", ioe);
+        }
+        // same pattern as above: never return from a finally block
+        return error;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/RunTimeUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/RunTimeUtil.java
new file mode 100644
index 0000000..2a96456
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/RunTimeUtil.java
@@ -0,0 +1,36 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.common.utils;
+
+/**
+ * RunTimeUtil
+ *
+ * @author zrx
+ * @since 2021/12/11
+ **/
+public class RunTimeUtil {
+
+    public static void recovery(Object obj) {
+        // nulling the parameter only clears this method's local copy of the reference,
+        // and System.gc() is merely a hint; collection is still at the JVM's discretion
+        obj = null;
+        System.gc();
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/SplitUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/SplitUtil.java
new file mode 100644
index 0000000..7e77940
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/SplitUtil.java
@@ -0,0 +1,78 @@
+package net.srt.flink.common.utils;
+
+import lombok.extern.slf4j.Slf4j;
+
+import java.util.Map;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * Utility class for resolving split-database / split-table (sharded) names
+ *
+ * @author ZackYoung
+ * @version 1.0
+ * @since 2022/9/2
+ */
+@Slf4j
+public class SplitUtil {
+
+    public static boolean contains(String regex, String sourceData) {
+        return Pattern.matches(regex, sourceData);
+    }
+
+    public static boolean isSplit(String value, Map<String, String> splitConfig) {
+        String matchNumberRegex = splitConfig.get("match_number_regex");
+        Pattern pattern = Pattern.compile(matchNumberRegex);
+        Matcher matcher = pattern.matcher(value);
+        if (matcher.find()) {
+            int splitNum = Integer.parseInt(matcher.group(0).replaceFirst("_", ""));
+            int maxMatchValue = Integer.parseInt(splitConfig.get("max_match_value"));
+            return splitNum <= maxMatchValue;
+        }
+        return false;
+    }
+
+    public static String getReValue(String value, Map<String, String> splitConfig) {
+        if (isEnabled(splitConfig)) {
+            try {
+                String matchNumberRegex = splitConfig.get("match_number_regex");
+                String matchWay = splitConfig.get("match_way");
+                Pattern pattern = Pattern.compile(matchNumberRegex);
+                Matcher matcher = pattern.matcher(value);
+                // Determine whether it is a prefix or a suffix
+                if ("prefix".equalsIgnoreCase(matchWay)) {
+                    if (matcher.find()) {
+                        String num = matcher.group(0);
+                        int splitNum = Integer.parseInt(num.replaceFirst("_", ""));
+                        int maxMatchValue = Integer.parseInt(splitConfig.get("max_match_value"));
+                        if (splitNum <= maxMatchValue) {
+                            return value.substring(0, value.lastIndexOf(num));
+                        }
+                    }
+                } else {
+                    String num = null;
+                    while (matcher.find()) {
+                        num = matcher.group(0);
+                    }
+                    if (num == null) {
+                        return value;
+                    }
+                    int splitNum = Integer.parseInt(num.replaceFirst("_", ""));
+                    int maxMatchValue = Integer.parseInt(splitConfig.get("max_match_value"));
+                    if (splitNum <= maxMatchValue) {
+                        return value.substring(0, value.lastIndexOf(num));
+                    }
+                }
+
+            } catch (Exception ignored) {
+                log.warn("Unable to determine sub-database sub-table");
+            }
+        }
+        return value;
+    }
+
+    public static boolean isEnabled(Map<String, String> split) {
+        return Boolean.parseBoolean(split.get("enable"));
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/SqlUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/SqlUtil.java
new file mode 100644
index 0000000..a3e02db
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/SqlUtil.java
@@ -0,0 +1,70 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.utils; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.model.SystemConfiguration; + +/** + * SqlUtil + * + * @author zrx + * @since 2021/7/14 21:57 + */ +public class SqlUtil { + + private static final String SEMICOLON = ";"; + + /*public static String[] getStatements(String sql) { + return getStatements(sql, SystemConfiguration.getInstances().getSqlSeparator()); + }*/ + + public static String[] getStatements(String sql, Long projectId) { + return getStatements(sql, ProjectSystemConfiguration.getByProjectId(projectId).getSqlSeparator()); + } + + public static String[] getStatements(String sql, String sqlSeparator) { + if (Asserts.isNullString(sql)) { + return new String[0]; + } + + String[] splits = sql.replace(";\r\n", ";\n").split(sqlSeparator); + String lastStatement = splits[splits.length - 1].trim(); + if (lastStatement.endsWith(SEMICOLON)) { + splits[splits.length - 1] = lastStatement.substring(0, lastStatement.length() - 1); + } + + return splits; + } + + public static String removeNote(String sql) { + if (Asserts.isNotNullString(sql)) { + sql = sql.replaceAll("\u00A0", " ") + .replaceAll("[\r\n]+", "\n") + .replaceAll("--([^'\n]{0,}('[^'\n]{0,}'){0,1}[^'\n]{0,}){0,}", "").trim(); + } + return sql; + } + + public static String replaceAllParam(String sql, String name, String value) { + return sql.replaceAll("\\$\\{" + name + "\\}", value); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/TextUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/TextUtil.java new file mode 100644 index 0000000..02ec090 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/TextUtil.java @@ -0,0 +1,87 @@ +package net.srt.flink.common.utils; + +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.regex.Pattern; + +public class TextUtil { + + /** + * 判断字符串是否为空 + */ + public static boolean isEmpty(CharSequence s) { + if (s == null) { + return true; + } else { + return s.length() == 0; + } + } + + /** + * 整个字符是否是数字 + * + * @param str 字符串 + * @return true 数字 false 字符串 + */ + public static boolean isInteger(String str) { + Pattern pattern = Pattern.compile("^[-\\+]?[\\d]*$"); + return pattern.matcher(str).matches(); + } + + /** + * 将字符串转化为sha1加密字符串 + * + * @param data 要加密的字符串 + */ + public static String sha1(String data) { + MessageDigest md = null; + try { + md = MessageDigest.getInstance("SHA1"); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + md.update(data.getBytes()); + StringBuffer buf = new StringBuffer(); + byte[] bits = md.digest(); + for (int i = 0; i < bits.length; i++) { + int a = bits[i]; + if (a < 0) { + a += 256; + } + if (a < 16) { + 
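+                // a byte below 0x10 renders as a single hex digit, so pad with a leading
+                // zero to keep two characters per byte (e.g. 0x0a becomes "0a", not "a")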
buf.append("0"); + } + buf.append(Integer.toHexString(a)); + } + return buf.toString(); + } + + /** + * 将字符串转化为md5加密字符串 + * + * @param data 要加密的字符串 + */ + public static String md5(String data) { + MessageDigest md = null; + try { + md = MessageDigest.getInstance("MD5"); + } catch (NoSuchAlgorithmException e) { + throw new RuntimeException(e); + } + md.update(data.getBytes()); + StringBuffer buf = new StringBuffer(); + byte[] bits = md.digest(); + for (int i = 0; i < bits.length; i++) { + int a = bits[i]; + if (a < 0) { + a += 256; + } + if (a < 16) { + buf.append("0"); + } + buf.append(Integer.toHexString(a)); + } + return buf.toString(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/ThreadUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/ThreadUtil.java new file mode 100644 index 0000000..ff523ce --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/ThreadUtil.java @@ -0,0 +1,30 @@ +package net.srt.flink.common.utils; + +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; + +/** + * @ClassName ThreadUtil + * @Author zrx + * @Date 2023/1/9 16:51 + */ +public class ThreadUtil { + + + public static ThreadPoolExecutor threadPool = new ThreadPoolExecutor( + 10, + 100, + 60, + TimeUnit.SECONDS, new ArrayBlockingQueue<>(10) + , new ThreadPoolExecutor.AbortPolicy() + ); + + public static void sleep(Integer sleepMills) { + try { + Thread.sleep(sleepMills); + } catch (InterruptedException e) { + e.printStackTrace(); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/URLUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/URLUtils.java new file mode 100644 index 0000000..e1d3a25 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/URLUtils.java @@ -0,0 +1,66 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.common.utils; + +import java.io.File; +import java.net.MalformedURLException; +import java.net.URL; +import java.util.Arrays; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * @author zrx + */ +public class URLUtils { + + /** + * 获得URL,常用于使用绝对路径时的情况 + * + * @param files URL对应的文件对象 + * @return URL + */ + public static URL[] getURLs(File... 
files) { + final URL[] urls = new URL[files.length]; + try { + for (int i = 0; i < files.length; i++) { + urls[i] = files[i].toURI().toURL(); + } + } catch (MalformedURLException e) { + throw new RuntimeException(e); + } + + return urls; + } + + /** + * 获得URL,常用于使用绝对路径时的情况 + * + * @param files URL对应的文件对象 + * @return URL + */ + public static URL[] getURLs(Set files) { + return getURLs(files.stream().filter(File::exists).toArray(File[]::new)); + } + + public static String toString(URL[] urls) { + return Arrays.stream(urls).map(URL::toString).collect(Collectors.joining(",")); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/ZipUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/ZipUtils.java new file mode 100644 index 0000000..d91fc41 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-common/src/main/java/net/srt/flink/common/utils/ZipUtils.java @@ -0,0 +1,71 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.common.utils; + +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.compress.archivers.zip.ZipArchiveEntry; +import org.apache.commons.compress.archivers.zip.ZipFile; + +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.Enumeration; + +/** + * @author ZackYoung + * @since 0.7.0 + */ +@Slf4j +public class ZipUtils { + + public static void unzip(String zipFile, String dir) { + try (ZipFile zip = new ZipFile(zipFile)) { + Enumeration entries = zip.getEntries(); + while (entries.hasMoreElements()) { + ZipArchiveEntry zipArchiveEntry = entries.nextElement(); + File file = new File(dir, zipArchiveEntry.getName()); + writeFile(file, zip.getInputStream(zipArchiveEntry)); + log.info("======解压成功=======,file:{}", file.getAbsolutePath()); + } + } catch (IOException e) { + log.error("压缩包处理异常,异常信息:", e); + } + } + + private static void writeFile(File file, InputStream inputStream) { + if (!file.exists()) { + file.getParentFile().mkdirs(); + } + if (file.isDirectory()) { + return; + } + try (OutputStream outputStream = new FileOutputStream(file)) { + byte[] bytes = new byte[4096]; + int len; + while ((len = inputStream.read(bytes)) != -1) { + outputStream.write(bytes, 0, len); + } + } catch (IOException e) { + throw new RuntimeException(e); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-core-all/pom.xml new file mode 100644 index 0000000..0154d40 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/pom.xml @@ -0,0 +1,164 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-core-all + + + + com.alibaba + druid-spring-boot-starter + + + net.srt + flink-common + ${project.version} + + + org.freemarker + freemarker + + + com.fasterxml.jackson.core + jackson-annotations + + + com.fasterxml.jackson.core + jackson-databind + + + cn.hutool + hutool-all + + + org.codehaus.groovy + groovy + 3.0.9 + + + junit + junit + provided + + + net.srt + flink-executor + ${project.version} + + + net.srt + flink-gateway + + + net.srt + flink-client-hadoop + provided + + + net.srt + flink-metadata-base + + + net.srt + flink-alert-dingtalk + + + net.srt + flink-alert-wechat + + + net.srt + flink-alert-feishu + + + net.srt + flink-alert-email + + + net.srt + flink-metadata-mysql + + + net.srt + flink-metadata-oracle + + + net.srt + flink-metadata-hive + + + net.srt + flink-metadata-postgresql + + + net.srt + flink-metadata-sqlserver + + + net.srt + flink-process + + + net.srt + flink-function + + + + + + flink-1.16 + + + net.srt + flink-client-1.16 + ${project.version} + provided + + + net.srt + flink-catalog-mysql-1.16 + ${project.version} + provided + + + net.srt + flink-1.16 + ${project.version} + provided + + + + + flink-1.14 + + + net.srt + flink-client-1.14 + ${project.version} + provided + + + net.srt + flink-catalog-mysql-1.14 + ${project.version} + provided + + + net.srt + flink-1.14 + ${project.version} + provided + + + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/api/FlinkAPI.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/api/FlinkAPI.java new file mode 100644 index 0000000..b67e4d9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/api/FlinkAPI.java @@ -0,0 +1,378 @@ +/* + * + * Licensed to the Apache 
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/api/FlinkAPI.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/api/FlinkAPI.java
new file mode 100644
index 0000000..b67e4d9
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/api/FlinkAPI.java
@@ -0,0 +1,378 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.api;
+
+import cn.hutool.http.HttpUtil;
+import cn.hutool.http.Method;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.constant.NetConstant;
+import net.srt.flink.common.utils.JSONUtil;
+import net.srt.flink.common.utils.LogUtil;
+import net.srt.flink.common.utils.ThreadUtil;
+import net.srt.flink.core.constant.FlinkRestAPIConstant;
+import net.srt.flink.core.constant.FlinkRestResultConstant;
+import net.srt.flink.gateway.GatewayType;
+import net.srt.flink.gateway.config.SavePointType;
+import net.srt.flink.gateway.model.JobInfo;
+import net.srt.flink.gateway.result.SavePointResult;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.rmi.ServerException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * FlinkAPI
+ *
+ * @author zrx
+ * @since 2021/6/24 13:56
+ **/
+public class FlinkAPI {
+
+	private static final Logger logger = LoggerFactory.getLogger(FlinkAPI.class);
+
+	private String address;
+
+	public FlinkAPI(String address) {
+		this.address = address;
+	}
+
+	public static FlinkAPI build(String address) {
+		return new FlinkAPI(address);
+	}
+
+	private JsonNode parse(String res) {
+		ObjectMapper mapper = new ObjectMapper();
+		JsonNode result = null;
+		try {
+			result = mapper.readTree(res);
+		} catch (JsonProcessingException e) {
+			e.printStackTrace();
+		}
+		return result;
+	}
+
+	private JsonNode get(String route) {
+		try {
+			String res = HttpUtil.get(NetConstant.HTTP + address + NetConstant.SLASH + route,
+					NetConstant.SERVER_TIME_OUT_ACTIVE);
+			return parse(res);
+		} catch (Exception e) {
+			logger.info("Unable to connect to Flink JobManager: {}", NetConstant.HTTP + address);
+		}
+		return null;
+	}
+
+	/**
+	 * get请求获取jobManager/TaskManager的日志 (结果为字符串并不是json格式)
+	 *
+	 * @param route
+	 * @return
+	 */
+	private String getResult(String route) {
+		String res = HttpUtil.get(NetConstant.HTTP + address + NetConstant.SLASH + route,
+				NetConstant.SERVER_TIME_OUT_ACTIVE);
+		return res;
+	}
+
+	private JsonNode post(String route, String body) {
+		String res = HttpUtil.post(NetConstant.HTTP + address + NetConstant.SLASH + route, body,
+				NetConstant.SERVER_TIME_OUT_ACTIVE);
+		return parse(res);
+	}
+
+	private JsonNode patch(String route, String body) {
+		String res = HttpUtil.createRequest(Method.PATCH, NetConstant.HTTP + address + NetConstant.SLASH + route)
+				.timeout(NetConstant.SERVER_TIME_OUT_ACTIVE).body(body).execute().body();
+		return parse(res);
+	}
+
+	public List<JsonNode> listJobs() {
+		JsonNode result = get(FlinkRestAPIConstant.JOBSLIST);
+		JsonNode jobs = result.get("jobs");
+		List<JsonNode> joblist = new ArrayList<>();
+		if (jobs.isArray()) {
+			for (final JsonNode objNode : jobs) {
+				joblist.add(objNode);
+			}
+		}
+		return joblist;
+	}
+
+	public boolean stop(String jobId) {
+		get(FlinkRestAPIConstant.JOBS + jobId + FlinkRestAPIConstant.CANCEL);
+		return true;
+	}
+
+	public SavePointResult savepoints(String jobId, String savePointType) {
+		SavePointType type = SavePointType.get(savePointType);
+		String paramType = null;
+		SavePointResult result = SavePointResult.build(GatewayType.YARN_PER_JOB);
+		JobInfo jobInfo = new JobInfo(jobId);
+		Map<String, Object> paramMap = new HashMap<>();
+		switch (type) {
+			case CANCEL:
+				paramMap.put("cancel-job", true);
+				paramType = FlinkRestAPIConstant.SAVEPOINTS;
+				jobInfo.setStatus(JobInfo.JobStatus.CANCEL);
+				break;
+			case STOP:
+				paramMap.put("drain", false);
+				paramType = FlinkRestAPIConstant.STOP;
+				jobInfo.setStatus(JobInfo.JobStatus.STOP);
+				break;
+			case TRIGGER:
+				paramMap.put("cancel-job", false);
+				// paramMap.put("target-directory","hdfs:///flink13/ss1");
+				paramType = FlinkRestAPIConstant.SAVEPOINTS;
+				jobInfo.setStatus(JobInfo.JobStatus.RUN);
+				break;
+			default:
+		}
+
+		String s = JSONUtil.toJsonString(paramMap);
+		JsonNode json = post(FlinkRestAPIConstant.JOBS + jobId + paramType, s);
+		if (json.has(FlinkRestResultConstant.ERRORS)) {
+			throw new RuntimeException(json.get(FlinkRestResultConstant.ERRORS).toString());
+		}
+		String triggerid = json.get("request-id").asText();
+		while (triggerid != null) {
+			ThreadUtil.sleep(1000);
+			JsonNode node = get(FlinkRestAPIConstant.JOBS + jobId + FlinkRestAPIConstant.SAVEPOINTS
+					+ NetConstant.SLASH + triggerid);
+			String status = node.get("status").get("id").asText();
+			if (Asserts.isEquals(status, "IN_PROGRESS")) {
+				continue;
+			}
+			if (node.get("operation").has("failure-cause")) {
+				String failureCause = node.get("operation").get("failure-cause").toString();
+				if (Asserts.isNotNullString(failureCause)) {
+					throw new RuntimeException(failureCause);
+					//break;
+				}
+			}
+			if (node.get("operation").has("location")) {
+				String location = node.get("operation").get("location").asText();
+				List<JobInfo> jobInfos = new ArrayList<>();
+				jobInfo.setSavePoint(location);
+				jobInfos.add(jobInfo);
+				result.setJobInfos(jobInfos);
+				break;
+			}
+		}
+		return result;
+	}
+
+	public String getVersion() {
+		JsonNode result = get(FlinkRestAPIConstant.FLINK_CONFIG);
+		return result.get("flink-version").asText();
+	}
+
+	public JsonNode getOverview() {
+		return get(FlinkRestAPIConstant.OVERVIEW);
+	}
+
+	public JsonNode getJobInfo(String jobId) {
+		return get(FlinkRestAPIConstant.JOBS + jobId);
+	}
+
+	public JsonNode getException(String jobId) {
+		return get(FlinkRestAPIConstant.JOBS + jobId + FlinkRestAPIConstant.EXCEPTIONS);
+	}
+
+	public JsonNode getCheckPoints(String jobId) {
+		return get(FlinkRestAPIConstant.JOBS + jobId + FlinkRestAPIConstant.CHECKPOINTS);
+	}
+
+	public JsonNode getCheckPointsConfig(String jobId) {
+		return get(FlinkRestAPIConstant.JOBS + jobId + FlinkRestAPIConstant.CHECKPOINTS_CONFIG);
+	}
+
+	public JsonNode getJobsConfig(String jobId) {
+		return get(FlinkRestAPIConstant.JOBS + jobId + FlinkRestAPIConstant.CONFIG);
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getJobManagerMetrics 获取jobManager的监控信息
+	 */
+	public JsonNode getJobManagerMetrics() {
+		return get(FlinkRestAPIConstant.JOB_MANAGER + FlinkRestAPIConstant.METRICS + FlinkRestAPIConstant.GET
+				+ buildMetricsParms(FlinkRestAPIConstant.JOB_MANAGER));
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getJobManagerConfig 获取jobManager的配置信息
+	 */
+	public JsonNode getJobManagerConfig() {
+		return get(FlinkRestAPIConstant.JOB_MANAGER + FlinkRestAPIConstant.CONFIG);
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getJobManagerLog 获取jobManager的日志信息
+	 */
+	public String getJobManagerLog() {
+		return getResult(FlinkRestAPIConstant.JOB_MANAGER + FlinkRestAPIConstant.LOG);
+	}
+
+	/**
+	 * @return String
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getJobManagerStdOut 获取jobManager的控制台输出日志
+	 */
+	public String getJobManagerStdOut() {
+		return getResult(FlinkRestAPIConstant.JOB_MANAGER + FlinkRestAPIConstant.STDOUT);
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getJobManagerLogList 获取jobManager的日志列表
+	 */
+	public JsonNode getJobManagerLogList() {
+		return get(FlinkRestAPIConstant.JOB_MANAGER + FlinkRestAPIConstant.LOGS);
+	}
+
+	/**
+	 * @param logName 日志文件名
+	 * @return String
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getJobManagerLogFileDetail 获取jobManager的日志文件的具体信息
+	 */
+	public String getJobManagerLogFileDetail(String logName) {
+		return getResult(FlinkRestAPIConstant.JOB_MANAGER + FlinkRestAPIConstant.LOGS + logName);
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagers 获取taskManager的列表
+	 */
+	public JsonNode getTaskManagers() {
+		return get(FlinkRestAPIConstant.TASK_MANAGER);
+	}
+
+	/**
+	 * @return String
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: buildMetricsParms 构建metrics参数
+	 * @Params: type: 入参类型 可选值:task-manager, job-manager
+	 */
+	public String buildMetricsParms(String type) {
+		JsonNode jsonNode = get(type + FlinkRestAPIConstant.METRICS);
+		StringBuilder sb = new StringBuilder();
+		Iterator<JsonNode> jsonNodeIterator = jsonNode.elements();
+		while (jsonNodeIterator.hasNext()) {
+			JsonNode node = jsonNodeIterator.next();
+			if (Asserts.isNotNull(node) && Asserts.isNotNull(node.get("id"))) {
+				if (sb.length() > 0) {
+					sb.append(",");
+				}
+				sb.append(node.get("id").asText());
+			}
+		}
+		return sb.toString();
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagerMetrics 获取taskManager的监控信息
+	 */
+	public JsonNode getTaskManagerMetrics(String containerId) {
+		return get(FlinkRestAPIConstant.TASK_MANAGER + containerId + FlinkRestAPIConstant.METRICS
+				+ FlinkRestAPIConstant.GET + buildMetricsParms(FlinkRestAPIConstant.JOB_MANAGER));
+	}
+
+	/**
+	 * @param containerId 容器id
+	 * @return String
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagerLog 获取taskManager的日志信息
+	 */
+	public String getTaskManagerLog(String containerId) {
+		return getResult(FlinkRestAPIConstant.TASK_MANAGER + containerId + FlinkRestAPIConstant.LOG);
+	}
+
+	/**
+	 * @param containerId 容器id
+	 * @return String
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagerStdOut 获取taskManager的StdOut日志信息
+	 */
+	public String getTaskManagerStdOut(String containerId) {
+		return getResult(FlinkRestAPIConstant.TASK_MANAGER + containerId + FlinkRestAPIConstant.STDOUT);
+	}
+
+	/**
+	 * @param containerId 容器id
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagerLogList 获取taskManager的日志列表
+	 */
+	public JsonNode getTaskManagerLogList(String containerId) {
+		return get(FlinkRestAPIConstant.TASK_MANAGER + containerId + FlinkRestAPIConstant.LOGS);
+	}
+
+	/**
+	 * @param logName 日志名称
+	 * @return String
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagerLogFileDeatil 获取具体日志的详细信息
+	 */
+	public String getTaskManagerLogFileDeatil(String containerId, String logName) {
+		return getResult(FlinkRestAPIConstant.TASK_MANAGER + containerId + FlinkRestAPIConstant.LOGS + logName);
+	}
+
+	/**
+	 * @return JsonNode
+	 * @Author: zhumingye
+	 * @date: 2022/6/24
+	 * @Description: getTaskManagerThreadDump 获取taskManager的线程信息
+	 */
+	public JsonNode getTaskManagerThreadDump(String containerId) {
+		return get(FlinkRestAPIConstant.TASK_MANAGER + containerId + FlinkRestAPIConstant.THREAD_DUMP);
+	}
+}
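For orientation, a minimal sketch of how this REST client is typically driven. The address is a placeholder (the scheme is prepended internally via `NetConstant.HTTP`), and `"jid"`/`"state"` are field names returned by Flink's `/jobs/overview` endpoint; the sketch assumes the JobManager is reachable, since `get` returns `null` on connection failure:

```java
import com.fasterxml.jackson.databind.JsonNode;
import net.srt.flink.core.api.FlinkAPI;

import java.util.List;

public class FlinkApiDemo {
    public static void main(String[] args) {
        // Hypothetical REST endpoint of a running Flink JobManager.
        FlinkAPI api = FlinkAPI.build("127.0.0.1:8081");

        // The version lookup doubles as a connectivity probe (FlinkCluster below relies on this).
        System.out.println("Flink version: " + api.getVersion());

        // Each entry is the raw JSON node of one job from /jobs/overview.
        List<JsonNode> jobs = api.listJobs();
        for (JsonNode job : jobs) {
            System.out.println(job.get("jid").asText() + " -> " + job.get("state").asText());
        }
    }
}
```

Note that `stop(jobId)` goes through the `/yarn-cancel` route rather than a savepoint, while `savepoints(jobId, type)` polls the trigger id once per second until the savepoint either fails or reports a location.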
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/cluster/FlinkCluster.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/cluster/FlinkCluster.java
new file mode 100644
index 0000000..62236e0
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/cluster/FlinkCluster.java
@@ -0,0 +1,69 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.cluster;
+
+import cn.hutool.core.io.IORuntimeException;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.core.api.FlinkAPI;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * FlinkCluster
+ *
+ * @author zrx
+ * @since 2021/5/25 15:08
+ **/
+public class FlinkCluster {
+
+	private static Logger logger = LoggerFactory.getLogger(FlinkCluster.class);
+
+	public static FlinkClusterInfo testFlinkJobManagerIP(String hosts, String host) {
+		if (Asserts.isNotNullString(host)) {
+			FlinkClusterInfo info = executeSocketTest(host);
+			if (info.isEffective()) {
+				return info;
+			}
+		}
+		String[] servers = hosts.split(",");
+		for (String server : servers) {
+			FlinkClusterInfo info = executeSocketTest(server);
+			if (info.isEffective()) {
+				return info;
+			}
+		}
+		return FlinkClusterInfo.INEFFECTIVE;
+	}
+
+	private static FlinkClusterInfo executeSocketTest(String host) {
+		try {
+			String res = FlinkAPI.build(host).getVersion();
+			if (Asserts.isNotNullString(res)) {
+				return FlinkClusterInfo.build(host, res);
+			}
+		} catch (IORuntimeException e) {
+			logger.info("Flink jobManager 地址排除 -- " + host);
+		} catch (Exception e) {
+			logger.error(e.getMessage(), e);
+		}
+		return FlinkClusterInfo.INEFFECTIVE;
+	}
+
+}
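A quick sketch of the fallback probe in use. The addresses are placeholders; the preferred host is tried first, then each entry of the comma-separated list, until one answers the version probe:

```java
import net.srt.flink.core.cluster.FlinkCluster;
import net.srt.flink.core.cluster.FlinkClusterInfo;

public class ClusterProbeDemo {
    public static void main(String[] args) {
        // Probes 10.0.0.1 first, then falls back to the full list.
        FlinkClusterInfo info = FlinkCluster.testFlinkJobManagerIP(
                "10.0.0.1:8081,10.0.0.2:8081", "10.0.0.1:8081");
        if (info.isEffective()) {
            System.out.println(info.getJobManagerAddress() + " -> Flink " + info.getVersion());
        } else {
            System.out.println("No reachable JobManager.");
        }
    }
}
```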
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/cluster/FlinkClusterInfo.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/cluster/FlinkClusterInfo.java
new file mode 100644
index 0000000..2a7465a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/cluster/FlinkClusterInfo.java
@@ -0,0 +1,53 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.cluster;
+
+import lombok.Getter;
+import lombok.Setter;
+
+/**
+ * FlinkClusterInfo
+ *
+ * @author zrx
+ * @since 2021/10/20 9:10
+ **/
+@Getter
+@Setter
+public class FlinkClusterInfo {
+	private boolean isEffective;
+	private String jobManagerAddress;
+	private String version;
+
+	public static final FlinkClusterInfo INEFFECTIVE = new FlinkClusterInfo(false);
+
+	public FlinkClusterInfo(boolean isEffective) {
+		this.isEffective = isEffective;
+	}
+
+	public FlinkClusterInfo(boolean isEffective, String jobManagerAddress, String version) {
+		this.isEffective = isEffective;
+		this.jobManagerAddress = jobManagerAddress;
+		this.version = version;
+	}
+
+	public static FlinkClusterInfo build(String jobManagerAddress, String version) {
+		return new FlinkClusterInfo(true, jobManagerAddress, version);
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkHistoryConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkHistoryConstant.java
new file mode 100644
index 0000000..c0343bc
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkHistoryConstant.java
@@ -0,0 +1,98 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.core.constant; + +public interface FlinkHistoryConstant { + /** + * history端口 + */ + String PORT = "8082"; + + /** + * 逗号, + */ + String COMMA = ","; + /** + * 任务复数 jobs + */ + String JOBS = "jobs"; + /** + * 任务单数 job + */ + String JOB = "job"; + /** + * 总览 overview + */ + String OVERVIEW = "overview"; + /** + * 错误 error + */ + String ERROR = "error"; + /** + * 起始时间 start-time + */ + String START_TIME = "start-time"; + /** + * 任务名称 name + */ + String NAME = "name"; + /** + * 任务状态 state + */ + String STATE = "state"; + /** + * 异常 获取任务数据失败 + */ + String EXCEPTION_DATA_NOT_FOUND = "获取任务数据失败"; + /** + * 30天时间戳的大小 + */ + Long THIRTY_DAY = 30L * 24 * 60 * 60 * 1000; + /** + * 一天时间戳 + */ + Integer ONE_DAY = 24 * 60 * 60 * 1000; + /** + * 运行active + */ + String ACTIVE = "active"; + /** + * 查询记录的条数 + */ + String COUNT = "count"; + /** + * 当前页码 page + */ + String PAGE = "page"; + /** + * 每一页的大小 SIZE + */ + String SIZE = "size"; + /** + * 当前页的条数 pageCount + */ + String PAGE_COUNT = "pageCount"; + /** + * 返回数据集 resList + */ + String RES_LIST = "resList"; + + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkRestAPIConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkRestAPIConstant.java new file mode 100644 index 0000000..f8c9236 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkRestAPIConstant.java @@ -0,0 +1,69 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.core.constant; + +/** + * FlinkRestAPIConstant + * + * @author zrx + * @since 2021/6/24 14:04 + **/ +public final class FlinkRestAPIConstant { + + public static final String OVERVIEW = "overview"; + + public static final String FLINK_CONFIG = "config"; + + public static final String CONFIG = "/config"; + + public static final String JOBS = "jobs/"; + + public static final String JOBSLIST = "jobs/overview"; + + public static final String CANCEL = "/yarn-cancel"; + + public static final String CHECKPOINTS = "/checkpoints"; + + public static final String CHECKPOINTS_CONFIG = "/checkpoints/config"; + + public static final String SAVEPOINTS = "/savepoints"; + + public static final String STOP = "/stop"; + + public static final String EXCEPTIONS = "/exceptions?maxExceptions=10"; + + public static final String JOB_MANAGER = "/jobmanager"; + + public static final String TASK_MANAGER = "/taskmanagers/"; + + public static final String METRICS = "/metrics"; + + public static final String LOG = "/log"; + + public static final String LOGS = "/logs/"; + + public static final String STDOUT = "/stdout"; + + public static final String THREAD_DUMP = "/thread-dump"; + + public static final String GET = "?get="; + + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkRestResultConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkRestResultConstant.java new file mode 100644 index 0000000..21da1a2 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/constant/FlinkRestResultConstant.java @@ -0,0 +1,35 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.constant; + +/** + * FlinkRestAPIConstant + * + * @author zrx + * @since 2022/3/2 20:04 + **/ +public final class FlinkRestResultConstant { + + public static final String ERRORS = "errors"; + public static final String JOB_DURATION = "duration"; + public static final String JOB_STATE = "state"; + public static final String ROOT_EXCEPTION = "root-exception"; + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/Explainer.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/Explainer.java new file mode 100644 index 0000000..d600b60 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/Explainer.java @@ -0,0 +1,411 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.explainer; + +import cn.hutool.core.collection.CollUtil; +import cn.hutool.core.util.StrUtil; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ObjectNode; +import net.srt.flink.client.base.model.LineageRel; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.context.DinkyClassLoaderContextHolder; +import net.srt.flink.common.context.JarPathContextHolder; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.result.ExplainResult; +import net.srt.flink.common.result.SqlExplainResult; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.common.utils.SqlUtil; +import net.srt.flink.common.utils.URLUtils; +import net.srt.flink.core.job.JobConfig; +import net.srt.flink.core.job.JobManager; +import net.srt.flink.core.job.JobParam; +import net.srt.flink.core.job.StatementParam; +import net.srt.flink.executor.constant.FlinkSQLConstant; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.interceptor.FlinkInterceptor; +import net.srt.flink.executor.parser.AddJarSqlParser; +import net.srt.flink.executor.parser.SqlType; +import net.srt.flink.executor.trans.Operations; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.util.UDFUtil; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.runtime.rest.messages.JobPlanInfo; + +import java.time.LocalDateTime; +import java.util.ArrayList; +import java.util.List; + +/** + * Explainer + * + * @author zrx + * @since 2021/6/22 + **/ +public class Explainer { + + private Executor executor; + private boolean useStatementSet; + private String sqlSeparator = FlinkSQLConstant.SEPARATOR; + private ObjectMapper mapper = new ObjectMapper(); + + public Explainer(Executor executor) { + this.executor = executor; + this.useStatementSet = true; + init(); + } + + public Explainer(Executor executor, boolean useStatementSet) { + this.executor = executor; + this.useStatementSet = useStatementSet; + init(); + } + + public Explainer(Executor executor, boolean useStatementSet, String sqlSeparator) { + this.executor = executor; + this.useStatementSet = useStatementSet; + this.sqlSeparator = sqlSeparator; + } + + public void init() { + //zrx ProjectSystemConfiguration + ProcessEntity process = ProcessContextHolder.getProcess(); + sqlSeparator = ProjectSystemConfiguration.getByProjectId(process.getProjectId()).getSqlSeparator(); + } + + public static Explainer build(Executor executor) { + return new Explainer(executor, false, ";"); + } + + public static Explainer build(Executor executor, boolean useStatementSet, String sqlSeparator) { + return new Explainer(executor, useStatementSet, sqlSeparator); + } + + 
public Explainer initialize(JobManager jobManager, JobConfig config, String statement) { + jobManager.initClassLoader(config); + String[] statements = SqlUtil.getStatements(SqlUtil.removeNote(statement), sqlSeparator); + jobManager.initUDF(parseUDFFromStatements(statements)); + return this; + } + + public JobParam pretreatStatements(String[] statements) { + List ddl = new ArrayList<>(); + List trans = new ArrayList<>(); + List execute = new ArrayList<>(); + List statementList = new ArrayList<>(); + List udfList = new ArrayList<>(); + for (String item : statements) { + String statement = executor.pretreatStatement(item); + if (statement.isEmpty()) { + continue; + } + SqlType operationType = Operations.getOperationType(statement); + if (operationType.equals(SqlType.ADD)) { + AddJarSqlParser.getAllFilePath(statement).forEach(JarPathContextHolder::addOtherPlugins); + DinkyClassLoaderContextHolder.get() + .addURL(URLUtils.getURLs(JarPathContextHolder.getOtherPluginsFiles())); + } else if (operationType.equals(SqlType.INSERT) || operationType.equals(SqlType.SELECT) + || operationType.equals(SqlType.SHOW) + || operationType.equals(SqlType.DESCRIBE) || operationType.equals(SqlType.DESC)) { + trans.add(new StatementParam(statement, operationType)); + statementList.add(statement); + //zrx + /*if (!useStatementSet) { + break; + }*/ + } else if (operationType.equals(SqlType.EXECUTE)) { + execute.add(new StatementParam(statement, operationType)); + } else { + UDF udf = UDFUtil.toUDF(statement); + if (Asserts.isNotNull(udf)) { + udfList.add(UDFUtil.toUDF(statement)); + } + ddl.add(new StatementParam(statement, operationType)); + statementList.add(statement); + } + } + return new JobParam(statementList, ddl, trans, execute, CollUtil.removeNull(udfList)); + } + + public List parseUDFFromStatements(String[] statements) { + List udfList = new ArrayList<>(); + for (String statement : statements) { + if (statement.isEmpty()) { + continue; + } + UDF udf = UDFUtil.toUDF(statement); + if (Asserts.isNotNull(udf)) { + udfList.add(UDFUtil.toUDF(statement)); + } + } + return udfList; + } + + public List explainSqlResult(String statement) { + String[] sqls = SqlUtil.getStatements(statement, sqlSeparator); + List sqlExplainRecords = new ArrayList<>(); + int index = 1; + for (String item : sqls) { + SqlExplainResult record = new SqlExplainResult(); + String sql = ""; + try { + sql = FlinkInterceptor.pretreatStatement(executor, item, sqlSeparator); + if (Asserts.isNullString(sql)) { + continue; + } + SqlType operationType = Operations.getOperationType(sql); + if (operationType.equals(SqlType.INSERT) || operationType.equals(SqlType.SELECT)) { + record = executor.explainSqlRecord(sql); + if (Asserts.isNull(record)) { + continue; + } + } else { + record = executor.explainSqlRecord(sql); + if (Asserts.isNull(record)) { + continue; + } + executor.executeSql(sql); + } + } catch (Exception e) { + e.printStackTrace(); + record.setError(e.getMessage()); + record.setExplainTrue(false); + record.setExplainTime(LocalDateTime.now()); + record.setSql(sql); + record.setIndex(index); + sqlExplainRecords.add(record); + break; + } + record.setExplainTrue(true); + record.setExplainTime(LocalDateTime.now()); + record.setSql(sql); + record.setIndex(index++); + sqlExplainRecords.add(record); + } + return sqlExplainRecords; + } + + public ExplainResult explainSql(String statement) { + ProcessEntity process = ProcessContextHolder.getProcess(); + process.info("Start explain FlinkSQL..."); + JobParam jobParam = 
pretreatStatements(SqlUtil.getStatements(statement, sqlSeparator)); + List sqlExplainRecords = new ArrayList<>(); + int index = 1; + boolean correct = true; + for (StatementParam item : jobParam.getDdl()) { + SqlExplainResult record = new SqlExplainResult(); + try { + record = executor.explainSqlRecord(item.getValue()); + if (Asserts.isNull(record)) { + continue; + } + executor.executeSql(item.getValue()); + } catch (Exception e) { + String error = LogUtil.getError(e); + record.setError(error); + record.setExplainTrue(false); + record.setExplainTime(LocalDateTime.now()); + record.setSql(item.getValue()); + record.setIndex(index); + sqlExplainRecords.add(record); + correct = false; + process.error(error); + break; + } + record.setExplainTrue(true); + record.setExplainTime(LocalDateTime.now()); + record.setSql(item.getValue()); + record.setIndex(index++); + sqlExplainRecords.add(record); + } + if (correct && jobParam.getTrans().size() > 0) { + if (useStatementSet) { + SqlExplainResult record = new SqlExplainResult(); + List inserts = new ArrayList<>(); + for (StatementParam item : jobParam.getTrans()) { + if (item.getType().equals(SqlType.INSERT)) { + inserts.add(item.getValue()); + } + } + if (inserts.size() > 0) { + String sqlSet = String.join(";\r\n ", inserts); + try { + record.setExplain(executor.explainStatementSet(inserts)); + record.setParseTrue(true); + record.setExplainTrue(true); + } catch (Exception e) { + String error = LogUtil.getError(e); + record.setError(error); + record.setParseTrue(false); + record.setExplainTrue(false); + correct = false; + process.error(error); + } finally { + record.setType("Modify DML"); + record.setExplainTime(LocalDateTime.now()); + record.setSql(sqlSet); + record.setIndex(index); + sqlExplainRecords.add(record); + } + } + } else { + for (StatementParam item : jobParam.getTrans()) { + SqlExplainResult record = new SqlExplainResult(); + try { + record = executor.explainSqlRecord(item.getValue()); + // zrx + if (Asserts.isNull(record)) { + record = new SqlExplainResult(); + } + record.setParseTrue(true); + record.setExplainTrue(true); + } catch (Exception e) { + String error = LogUtil.getError(e); + record.setError(error); + record.setParseTrue(false); + record.setExplainTrue(false); + correct = false; + process.error(error); + } finally { + record.setType("Modify DML"); + record.setExplainTime(LocalDateTime.now()); + record.setSql(item.getValue()); + record.setIndex(index++); + sqlExplainRecords.add(record); + } + } + } + } + for (StatementParam item : jobParam.getExecute()) { + SqlExplainResult record = new SqlExplainResult(); + try { + record = executor.explainSqlRecord(item.getValue()); + if (Asserts.isNull(record)) { + record = new SqlExplainResult(); + } else { + executor.executeSql(item.getValue()); + } + record.setType("DATASTREAM"); + record.setParseTrue(true); + } catch (Exception e) { + String error = LogUtil.getError(e); + record.setError(error); + record.setExplainTrue(false); + record.setExplainTime(LocalDateTime.now()); + record.setSql(item.getValue()); + record.setIndex(index); + sqlExplainRecords.add(record); + correct = false; + process.error(error); + break; + } + record.setExplainTrue(true); + record.setExplainTime(LocalDateTime.now()); + record.setSql(item.getValue()); + record.setIndex(index++); + sqlExplainRecords.add(record); + } + process.info(StrUtil.format("A total of {} FlinkSQL have been Explained.", sqlExplainRecords.size())); + return new ExplainResult(correct, sqlExplainRecords.size(), sqlExplainRecords); + } + + public 
ObjectNode getStreamGraph(String statement) { + JobParam jobParam = pretreatStatements(SqlUtil.getStatements(statement, sqlSeparator)); + if (jobParam.getDdl().size() > 0) { + for (StatementParam statementParam : jobParam.getDdl()) { + executor.executeSql(statementParam.getValue()); + } + } + if (jobParam.getTrans().size() > 0) { + return executor.getStreamGraph(jobParam.getTransStatement()); + } else if (jobParam.getExecute().size() > 0) { + List datastreamPlans = new ArrayList<>(); + for (StatementParam item : jobParam.getExecute()) { + datastreamPlans.add(item.getValue()); + } + return executor.getStreamGraphFromDataStream(datastreamPlans); + } else { + return mapper.createObjectNode(); + } + } + + public JobPlanInfo getJobPlanInfo(String statement) { + JobParam jobParam = pretreatStatements(SqlUtil.getStatements(statement, sqlSeparator)); + if (jobParam.getDdl().size() > 0) { + for (StatementParam statementParam : jobParam.getDdl()) { + executor.executeSql(statementParam.getValue()); + } + } + if (jobParam.getTrans().size() > 0) { + return executor.getJobPlanInfo(jobParam.getTransStatement()); + } else if (jobParam.getExecute().size() > 0) { + List datastreamPlans = new ArrayList<>(); + for (StatementParam item : jobParam.getExecute()) { + datastreamPlans.add(item.getValue()); + } + return executor.getJobPlanInfoFromDataStream(datastreamPlans); + } else { + throw new RuntimeException("Creating job plan fails because this job doesn't contain an insert statement."); + } + } + + private ObjectNode translateObjectNode(String statement) { + return executor.getStreamGraph(statement); + } + + private ObjectNode translateObjectNode(List statement) { + return executor.getStreamGraph(statement); + } + + public List getLineage(String statement) { + JobConfig jobConfig = + new JobConfig( + "local", + false, + false, + true, + useStatementSet, + 1, + executor.getTableConfig().getConfiguration().toMap()); + JobManager jm = JobManager.buildPlanMode(jobConfig); + this.initialize(jm, jobConfig, statement); + String[] sqls = SqlUtil.getStatements(statement, sqlSeparator); + List lineageRelList = new ArrayList<>(); + for (String item : sqls) { + String sql = ""; + try { + sql = FlinkInterceptor.pretreatStatement(executor, item, sqlSeparator); + if (Asserts.isNullString(sql)) { + continue; + } + SqlType operationType = Operations.getOperationType(sql); + if (operationType.equals(SqlType.INSERT)) { + lineageRelList.addAll(executor.getLineage(sql)); + } else if (!operationType.equals(SqlType.SELECT)) { + executor.executeSql(sql); + } + } catch (Exception e) { + e.printStackTrace(); + break; + } + } + return lineageRelList; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageBuilder.java new file mode 100644 index 0000000..cf1fc0c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageBuilder.java @@ -0,0 +1,107 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.explainer.lineage; + +import net.srt.flink.client.base.model.LineageRel; +import net.srt.flink.core.explainer.Explainer; +import net.srt.flink.executor.executor.Executor; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * LineageBuilder + * + * @since 2022/3/15 22:58 + */ +public class LineageBuilder { + + public static LineageResult getColumnLineageByLogicalPlan(String statement) { + Explainer explainer = new Explainer(Executor.build(), false); + List lineageRelList = explainer.getLineage(statement); + List relations = new ArrayList<>(); + Map tableMap = new HashMap<>(); + int tableIndex = 1; + int relIndex = 1; + for (LineageRel lineageRel : lineageRelList) { + String sourceTablePath = lineageRel.getSourceTablePath(); + String sourceTableId = null; + String targetTableId = null; + if (tableMap.containsKey(sourceTablePath)) { + LineageTable lineageTable = tableMap.get(sourceTablePath); + LineageColumn lineageColumn = + LineageColumn.build( + lineageRel.getSourceColumn(), lineageRel.getSourceColumn()); + if (!lineageTable.getColumns().contains(lineageColumn)) { + lineageTable.getColumns().add(lineageColumn); + } + sourceTableId = lineageTable.getId(); + } else { + tableIndex++; + LineageTable lineageTable = LineageTable.build(tableIndex + "", sourceTablePath); + lineageTable + .getColumns() + .add( + LineageColumn.build( + lineageRel.getSourceColumn(), + lineageRel.getSourceColumn())); + tableMap.put(sourceTablePath, lineageTable); + sourceTableId = lineageTable.getId(); + } + String targetTablePath = lineageRel.getTargetTablePath(); + if (tableMap.containsKey(targetTablePath)) { + LineageTable lineageTable = tableMap.get(targetTablePath); + LineageColumn lineageColumn = + LineageColumn.build( + lineageRel.getTargetColumn(), lineageRel.getTargetColumn()); + if (!lineageTable.getColumns().contains(lineageColumn)) { + lineageTable.getColumns().add(lineageColumn); + } + targetTableId = lineageTable.getId(); + } else { + tableIndex++; + LineageTable lineageTable = LineageTable.build(tableIndex + "", targetTablePath); + lineageTable + .getColumns() + .add( + LineageColumn.build( + lineageRel.getTargetColumn(), + lineageRel.getTargetColumn())); + tableMap.put(targetTablePath, lineageTable); + targetTableId = lineageTable.getId(); + } + LineageRelation lineageRelation = + LineageRelation.build( + sourceTableId, + targetTableId, + lineageRel.getSourceColumn(), + lineageRel.getTargetColumn()); + if (!relations.contains(lineageRelation)) { + relIndex++; + lineageRelation.setId(relIndex + ""); + relations.add(lineageRelation); + } + } + List tables = new ArrayList<>(tableMap.values()); + return LineageResult.build(tables, relations); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageColumn.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageColumn.java new file mode 100644 index 0000000..76cc139 --- /dev/null +++ 
b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageColumn.java @@ -0,0 +1,77 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.explainer.lineage; + +import java.util.Objects; + +/** + * LineageColumn + * + * @since 2022/3/15 22:55 + */ +public class LineageColumn { + + private String name; + private String title; + + public LineageColumn() {} + + public LineageColumn(String name, String title) { + this.name = name; + this.title = title; + } + + public static LineageColumn build(String name, String title) { + return new LineageColumn(name, title); + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public String getTitle() { + return title; + } + + public void setTitle(String title) { + this.title = title; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + LineageColumn that = (LineageColumn) o; + return Objects.equals(name, that.name); + } + + @Override + public int hashCode() { + return Objects.hash(name); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageRelation.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageRelation.java new file mode 100644 index 0000000..6d7fbef --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageRelation.java @@ -0,0 +1,133 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.core.explainer.lineage; + +import java.util.Objects; + +/** + * LineageRelation + * + * @since 2022/3/15 23:00 + */ +public class LineageRelation { + + private String id; + private String srcTableId; + private String tgtTableId; + private String srcTableColName; + private String tgtTableColName; + + public LineageRelation() {} + + public LineageRelation( + String srcTableId, String tgtTableId, String srcTableColName, String tgtTableColName) { + this.srcTableId = srcTableId; + this.tgtTableId = tgtTableId; + this.srcTableColName = srcTableColName; + this.tgtTableColName = tgtTableColName; + } + + public LineageRelation( + String id, + String srcTableId, + String tgtTableId, + String srcTableColName, + String tgtTableColName) { + this.id = id; + this.srcTableId = srcTableId; + this.tgtTableId = tgtTableId; + this.srcTableColName = srcTableColName; + this.tgtTableColName = tgtTableColName; + } + + public static LineageRelation build( + String srcTableId, String tgtTableId, String srcTableColName, String tgtTableColName) { + return new LineageRelation(srcTableId, tgtTableId, srcTableColName, tgtTableColName); + } + + public static LineageRelation build( + String id, + String srcTableId, + String tgtTableId, + String srcTableColName, + String tgtTableColName) { + return new LineageRelation(id, srcTableId, tgtTableId, srcTableColName, tgtTableColName); + } + + public String getId() { + return id; + } + + public void setId(String id) { + this.id = id; + } + + public String getSrcTableId() { + return srcTableId; + } + + public void setSrcTableId(String srcTableId) { + this.srcTableId = srcTableId; + } + + public String getTgtTableId() { + return tgtTableId; + } + + public void setTgtTableId(String tgtTableId) { + this.tgtTableId = tgtTableId; + } + + public String getSrcTableColName() { + return srcTableColName; + } + + public void setSrcTableColName(String srcTableColName) { + this.srcTableColName = srcTableColName; + } + + public String getTgtTableColName() { + return tgtTableColName; + } + + public void setTgtTableColName(String tgtTableColName) { + this.tgtTableColName = tgtTableColName; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + LineageRelation that = (LineageRelation) o; + return Objects.equals(srcTableId, that.srcTableId) + && Objects.equals(tgtTableId, that.tgtTableId) + && Objects.equals(srcTableColName, that.srcTableColName) + && Objects.equals(tgtTableColName, that.tgtTableColName); + } + + @Override + public int hashCode() { + return Objects.hash(srcTableId, tgtTableId, srcTableColName, tgtTableColName); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageResult.java new file mode 100644 index 0000000..f491c6c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageResult.java @@ -0,0 +1,60 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.explainer.lineage;
+
+import java.util.List;
+
+/**
+ * LineageResult
+ *
+ * @since 2022/3/15 22:59
+ */
+public class LineageResult {
+
+    private List<LineageTable> tables;
+    private List<LineageRelation> relations;
+
+    public LineageResult() {}
+
+    public LineageResult(List<LineageTable> tables, List<LineageRelation> relations) {
+        this.tables = tables;
+        this.relations = relations;
+    }
+
+    public static LineageResult build(List<LineageTable> tables, List<LineageRelation> relations) {
+        return new LineageResult(tables, relations);
+    }
+
+    public List<LineageTable> getTables() {
+        return tables;
+    }
+
+    public void setTables(List<LineageTable> tables) {
+        this.tables = tables;
+    }
+
+    public List<LineageRelation> getRelations() {
+        return relations;
+    }
+
+    public void setRelations(List<LineageRelation> relations) {
+        this.relations = relations;
+    }
+}
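To make the shape of the lineage model concrete, here is a hand-assembled one-edge graph; the ids and table names are chosen purely for illustration and follow the builders' "table index as string" id scheme:

```java
import net.srt.flink.core.explainer.lineage.LineageColumn;
import net.srt.flink.core.explainer.lineage.LineageRelation;
import net.srt.flink.core.explainer.lineage.LineageResult;
import net.srt.flink.core.explainer.lineage.LineageTable;

import java.util.Arrays;

public class LineageModelDemo {
    public static void main(String[] args) {
        // Two tables, one column each.
        LineageTable source = LineageTable.build("1", "ods.user");
        source.getColumns().add(LineageColumn.build("id", "id"));
        LineageTable target = LineageTable.build("2", "dwd.user");
        target.getColumns().add(LineageColumn.build("id", "id"));

        // One edge: ods.user.id feeds dwd.user.id.
        LineageRelation relation = LineageRelation.build("1", "1", "2", "id", "id");

        LineageResult result = LineageResult.build(
                Arrays.asList(source, target), Arrays.asList(relation));
        System.out.println(result.getTables().size() + " tables, "
                + result.getRelations().size() + " relations");
    }
}
```

Note that `LineageColumn#equals` compares only the column name, which is what lets the builders deduplicate columns per table.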
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageTable.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageTable.java
new file mode 100644
index 0000000..1793e1b
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/lineage/LineageTable.java
@@ -0,0 +1,71 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.explainer.lineage;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * LineageTable
+ *
+ * @since 2022/3/15 22:55
+ */
+public class LineageTable {
+
+    private String id;
+    private String name;
+    private List<LineageColumn> columns;
+
+    public LineageTable() {}
+
+    public LineageTable(String id, String name) {
+        this.id = id;
+        this.name = name;
+        this.columns = new ArrayList<>();
+    }
+
+    public static LineageTable build(String id, String name) {
+        return new LineageTable(id, name);
+    }
+
+    public String getId() {
+        return id;
+    }
+
+    public void setId(String id) {
+        this.id = id;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public List<LineageColumn> getColumns() {
+        return columns;
+    }
+
+    public void setColumns(List<LineageColumn> columns) {
+        this.columns = columns;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageBuilder.java
new file mode 100644
index 0000000..df90303
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageBuilder.java
@@ -0,0 +1,369 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.core.explainer.sqllineage; + +import com.alibaba.druid.sql.SQLUtils; +import com.alibaba.druid.sql.ast.SQLExpr; +import com.alibaba.druid.sql.ast.SQLStatement; +import com.alibaba.druid.sql.ast.expr.SQLIdentifierExpr; +import com.alibaba.druid.sql.ast.expr.SQLPropertyExpr; +import com.alibaba.druid.sql.ast.statement.SQLInsertStatement; +import com.alibaba.druid.stat.TableStat; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.core.explainer.lineage.LineageRelation; +import net.srt.flink.core.explainer.lineage.LineageResult; +import net.srt.flink.core.explainer.lineage.LineageTable; +import net.srt.flink.metadata.base.driver.Driver; +import net.srt.flink.metadata.base.driver.DriverConfig; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +public class LineageBuilder { + + protected static final Logger logger = LoggerFactory.getLogger(LineageBuilder.class); + + public static LineageResult getSqlLineageByOne(String statement, String type) { + ProcessEntity process = ProcessContextHolder.getProcess(); + List tables = new ArrayList<>(); + List relations = new ArrayList<>(); + try { + List sqlStatements = + SQLUtils.parseStatements(statement.toLowerCase(), type); + // 只考虑一条语句 + SQLStatement sqlStatement = sqlStatements.get(0); + List> srcLists = new ArrayList<>(); + List tgtList = new ArrayList<>(); + // 只考虑insert语句 + if (sqlStatement instanceof SQLInsertStatement) { + String targetTable = ((SQLInsertStatement) sqlStatement).getTableName().toString(); + List columns = ((SQLInsertStatement) sqlStatement).getColumns(); + // 处理target表中字段 + for (SQLExpr column : columns) { + if (column instanceof SQLPropertyExpr) { + tgtList.add( + new TableStat.Column( + targetTable, + ((SQLPropertyExpr) column) + .getName() + .replace("`", "") + .replace("\"", ""))); + } else if (column instanceof SQLIdentifierExpr) { + tgtList.add( + new TableStat.Column( + targetTable, + ((SQLIdentifierExpr) column) + .getName() + .replace("`", "") + .replace("\"", ""))); + } + } + // 处理select 生成srcLists + LineageColumn root = new LineageColumn(); + TreeNode rootNode = new TreeNode<>(root); + LineageUtils.columnLineageAnalyzer( + ((SQLInsertStatement) sqlStatement).getQuery().toString(), type, rootNode); + for (TreeNode e : rootNode.getChildren()) { + Set leafNodes = e.getAllLeafData(); + List srcList = new ArrayList<>(); + for (LineageColumn column : leafNodes) { + String tableName = + Asserts.isNotNullString(column.getSourceTableName()) + ? (Asserts.isNotNullString(column.getSourceDbName()) + ? column.getSourceDbName() + + "." 
+ + column.getSourceTableName() + : column.getSourceTableName()) + : ""; + srcList.add(new TableStat.Column(tableName, column.getTargetColumnName())); + } + srcLists.add(srcList); + } + // 构建 List + Map tableMap = new HashMap<>(); + List allColumnList = new ArrayList<>(); + int tid = 100; + for (TableStat.Column column : tgtList) { + if (Asserts.isNotNullString(column.getTable()) + && !tableMap.containsKey(column.getTable())) { + tableMap.put(column.getTable(), String.valueOf(tid++)); + } + } + for (List columnList : srcLists) { + allColumnList.addAll(columnList); + for (TableStat.Column column : columnList) { + if (Asserts.isNotNullString(column.getTable()) + && !tableMap.containsKey(column.getTable())) { + tableMap.put(column.getTable(), String.valueOf(tid++)); + } + } + } + allColumnList.addAll(tgtList); + for (String tableName : tableMap.keySet()) { + LineageTable table = new LineageTable(); + table.setId(tableMap.get(tableName)); + table.setName(tableName); + List tableColumns = + new ArrayList<>(); + Set tableSet = new HashSet<>(); + for (TableStat.Column column : allColumnList) { + if (tableName.equals(column.getTable()) + && !tableSet.contains(column.getName())) { + tableColumns.add( + new net.srt.flink.core.explainer.lineage.LineageColumn( + column.getName(), column.getName())); + tableSet.add(column.getName()); + } + } + table.setColumns(tableColumns); + tables.add(table); + } + // 构建 LineageRelation + int tSize = tgtList.size(); + int sSize = srcLists.size(); + if (tSize != sSize && tSize * 2 != sSize) { + logger.error("Target table fields do not match!"); + process.error("Target table fields do not match!"); + return null; + } + for (int i = 0; i < tSize; i++) { + for (TableStat.Column column : srcLists.get(i)) { + if (Asserts.isNotNullString(column.getTable())) { + relations.add( + LineageRelation.build( + i + "", + tableMap.get(column.getTable()), + tableMap.get(tgtList.get(i).getTable()), + column.getName(), + tgtList.get(i).getName())); + } + } + if (tSize * 2 == sSize) { + for (TableStat.Column column : srcLists.get(i + tSize)) { + if (Asserts.isNotNullString(column.getTable())) { + relations.add( + LineageRelation.build( + (i + tSize) + "", + tableMap.get(column.getTable()), + tableMap.get(tgtList.get(i).getTable()), + column.getName(), + tgtList.get(i).getName())); + } + } + } + } + } else { + process.info("Does not contain an insert statement, cannot analyze the lineage."); + return null; + } + } catch (Exception e) { + e.printStackTrace(); + process.error("Unexpected exceptions occur! 
" + e.getMessage()); + return null; + } + return LineageResult.build(tables, relations); + } + + public static LineageResult getSqlLineage( + String statement, String type, DriverConfig driverConfig) { + ProcessEntity process = ProcessContextHolder.getProcess(); + List tables = new ArrayList<>(); + List relations = new ArrayList<>(); + Map>> srcMap = new HashMap<>(); + Map> tgtMap = new HashMap<>(); + Map tableMap = new HashMap<>(); + List allColumnList = new ArrayList<>(); + String[] sqls = statement.split(";"); + try { + List sqlStatements = SQLUtils.parseStatements(statement, type); + for (int n = 0; n < sqlStatements.size(); n++) { + SQLStatement sqlStatement = sqlStatements.get(n); + List> srcLists = new ArrayList<>(); + List tgtList = new ArrayList<>(); + // 只考虑insert语句 + if (sqlStatement instanceof SQLInsertStatement) { + String targetTable = + ((SQLInsertStatement) sqlStatement) + .getTableName() + .toString() + .replace("`", "") + .replace("\"", ""); + List columns = ((SQLInsertStatement) sqlStatement).getColumns(); + // 处理target表中字段 + if (columns.size() <= 0 || sqls[n].contains("*")) { + Driver driver = Driver.build(driverConfig); + if (!targetTable.contains(".")) { + process.error("Target table not specified database!"); + return null; + } + List columns1 = + driver.listColumns( + targetTable.split("\\.")[0], targetTable.split("\\.")[1]); + for (Column column : columns1) { + tgtList.add(new TableStat.Column(targetTable, column.getName())); + } + } else { + for (SQLExpr column : columns) { + if (column instanceof SQLPropertyExpr) { + tgtList.add( + new TableStat.Column( + targetTable, + ((SQLPropertyExpr) column) + .getName() + .replace("`", "") + .replace("\"", ""))); + } else if (column instanceof SQLIdentifierExpr) { + tgtList.add( + new TableStat.Column( + targetTable, + ((SQLIdentifierExpr) column) + .getName() + .replace("`", "") + .replace("\"", ""))); + } + } + } + // 处理select 生成srcLists + LineageColumn root = new LineageColumn(); + TreeNode rootNode = new TreeNode<>(root); + LineageUtils.columnLineageAnalyzer( + ((SQLInsertStatement) sqlStatement).getQuery().toString(), + type, + rootNode); + for (TreeNode e : rootNode.getChildren()) { + Set leafNodes = e.getAllLeafData(); + List srcList = new ArrayList<>(); + for (LineageColumn column : leafNodes) { + String tableName = + Asserts.isNotNullString(column.getSourceTableName()) + ? (Asserts.isNotNullString(column.getSourceDbName()) + ? column.getSourceDbName() + + "." 
+ + column.getSourceTableName() + : column.getSourceTableName()) + : ""; + srcList.add( + new TableStat.Column(tableName, column.getTargetColumnName())); + } + srcLists.add(srcList); + } + srcMap.put(n, srcLists); + tgtMap.put(n, tgtList); + } else { + process.info( + "Does not contain an insert statement, cannot analyze the lineage."); + return null; + } + } + // 构建 List + int tid = 100; + for (Integer i : tgtMap.keySet()) { + allColumnList.addAll(tgtMap.get(i)); + for (TableStat.Column column : tgtMap.get(i)) { + if (Asserts.isNotNullString(column.getTable()) + && !tableMap.containsKey(column.getTable())) { + tableMap.put(column.getTable(), String.valueOf(tid++)); + } + } + } + for (Integer i : srcMap.keySet()) { + for (List columnList : srcMap.get(i)) { + allColumnList.addAll(columnList); + for (TableStat.Column column : columnList) { + if (Asserts.isNotNullString(column.getTable()) + && !tableMap.containsKey(column.getTable())) { + tableMap.put(column.getTable(), String.valueOf(tid++)); + } + } + } + } + for (String tableName : tableMap.keySet()) { + LineageTable table = new LineageTable(); + table.setId(tableMap.get(tableName)); + table.setName(tableName); + List tableColumns = new ArrayList<>(); + Set tableSet = new HashSet<>(); + for (TableStat.Column column : allColumnList) { + if (tableName.equals(column.getTable()) + && !tableSet.contains(column.getName())) { + tableColumns.add( + new net.srt.flink.core.explainer.lineage.LineageColumn( + column.getName(), column.getName())); + tableSet.add(column.getName()); + } + } + table.setColumns(tableColumns); + tables.add(table); + } + // 构建 LineageRelation + for (Integer n : srcMap.keySet()) { + List> srcLists = srcMap.get(n); + List tgtList = tgtMap.get(n); + int tSize = tgtList.size(); + int sSize = srcLists.size(); + if (tSize != sSize && tSize * 2 != sSize) { + logger.error("Target table fields do not match!"); + process.error("Target table fields do not match!"); + return null; + } + for (int i = 0; i < tSize; i++) { + for (TableStat.Column column : srcLists.get(i)) { + if (Asserts.isNotNullString(column.getTable())) { + relations.add( + LineageRelation.build( + n + "_" + i, + tableMap.get(column.getTable()), + tableMap.get(tgtList.get(i).getTable()), + column.getName(), + tgtList.get(i).getName())); + } + } + if (tSize * 2 == sSize) { + for (TableStat.Column column : srcLists.get(i + tSize)) { + if (Asserts.isNotNullString(column.getTable())) { + relations.add( + LineageRelation.build( + n + "_" + (i + tSize), + tableMap.get(column.getTable()), + tableMap.get(tgtList.get(i).getTable()), + column.getName(), + tgtList.get(i).getName())); + } + } + } + } + } + } catch (Exception e) { + e.printStackTrace(); + process.error("Unexpected exceptions occur! " + e.getMessage()); + return null; + } + return LineageResult.build(tables, relations); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageColumn.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageColumn.java new file mode 100644 index 0000000..22715df --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageColumn.java @@ -0,0 +1,170 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.explainer.sqllineage;
+
+import lombok.Data;
+import net.srt.flink.common.assertion.Asserts;
+
+@Data
+public class LineageColumn implements Comparable<LineageColumn> {
+
+    private String targetColumnName;
+
+    private String sourceDbName;
+
+    private String sourceTableName;
+
+    private String sourceColumnName;
+
+    private String expression;
+
+    private Boolean isEnd = false;
+
+    public String getTargetColumnName() {
+        return targetColumnName;
+    }
+
+    public void setTargetColumnName(String targetColumnName) {
+        this.targetColumnName = targetColumnName;
+    }
+
+    public String getSourceDbName() {
+        return sourceDbName;
+    }
+
+    public void setSourceDbName(String sourceDbName) {
+        this.sourceDbName = sourceDbName;
+    }
+
+    public String getSourceTableName() {
+        return sourceTableName;
+    }
+
+    public String getSourceColumnName() {
+        return sourceColumnName;
+    }
+
+    public void setSourceColumnName(String sourceColumnName) {
+        this.sourceColumnName = sourceColumnName;
+    }
+
+    public String getExpression() {
+        return expression;
+    }
+
+    public void setExpression(String expression) {
+        this.expression = expression;
+    }
+
+    public Boolean getIsEnd() {
+        return isEnd;
+    }
+
+    public void setIsEnd(Boolean end) {
+        isEnd = end;
+    }
+
+    /**
+     * Normalizes the raw table reference before storing it: strips backticks,
+     * drops an alias that follows a space, and splits a leading "db." prefix
+     * off into sourceDbName.
+     */
+    public void setSourceTableName(String sourceTableName) {
+        if (Asserts.isNullString(sourceTableName)) {
+            this.sourceTableName = sourceTableName;
+            return;
+        }
+        sourceTableName = sourceTableName.replace("`", "");
+        if (sourceTableName.contains(" ")) {
+            sourceTableName = sourceTableName.substring(0, sourceTableName.indexOf(" "));
+        }
+        if (sourceTableName.contains(".")) {
+            if (Asserts.isNullString(this.sourceDbName)) {
+                this.sourceDbName = sourceTableName.substring(0, sourceTableName.indexOf("."));
+            }
+            this.sourceTableName = sourceTableName.substring(sourceTableName.indexOf(".") + 1);
+        } else {
+            this.sourceTableName = sourceTableName;
+        }
+    }
+
+    /**
+     * Used only as an equality probe by TreeNode#findChildNode: returns 0 when
+     * the columns match on every attribute that is present, -1 otherwise. It
+     * does not define a total order.
+     */
+    @Override
+    public int compareTo(LineageColumn o) {
+        if (Asserts.isNotNullString(this.getSourceDbName())
+                && Asserts.isNotNullString(this.getSourceTableName())) {
+            if (this.getSourceDbName().equals(o.getSourceDbName())
+                    && this.getSourceTableName().equals(o.getSourceTableName())
+                    && this.getTargetColumnName().equals(o.getTargetColumnName())) {
+                return 0;
+            }
+        } else if (Asserts.isNotNullString(this.getSourceTableName())) {
+            if (this.getSourceTableName().equals(o.getSourceTableName())
+                    && this.getTargetColumnName().equals(o.getTargetColumnName())) {
+                return 0;
+            }
+        } else {
+            if (this.getTargetColumnName().equals(o.getTargetColumnName())) {
+                return 0;
+            }
+        }
+        return -1;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) {
+            return true;
+        }
+
+        if (o == null || getClass() != o.getClass()) {
+            return false;
+        }
+
+        LineageColumn myColumn = (LineageColumn) o;
+
+        if (!this.getTargetColumnName().equals(myColumn.getTargetColumnName())) {
+            return false;
+        }
+
+        if (Asserts.isNotNullString(sourceTableName)
+                && !sourceTableName.equals(myColumn.sourceTableName)) {
+            return false;
+        }
+
+        if (Asserts.isNotNullString(sourceColumnName)) {
+            return sourceColumnName.equals(myColumn.sourceColumnName);
+        }
+
+        return true;
+    }
+
+    @Override
+    public int hashCode() {
+        int result = getTargetColumnName().hashCode();
+
+        if (Asserts.isNotNullString(sourceTableName)) {
+            result = 31 * result + sourceTableName.hashCode();
+        }
+
+        if (Asserts.isNotNullString(sourceColumnName)) {
+            result = 31 * result + sourceColumnName.hashCode();
+        }
+
+        if (Asserts.isNotNullString(sourceDbName)) {
+            result = 31 * result + sourceDbName.hashCode();
+        }
+
+        return result;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageUtils.java
new file mode 100644
index 0000000..f3680a0
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/LineageUtils.java
@@ -0,0 +1,481 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.explainer.sqllineage; + +import com.alibaba.druid.sql.SQLUtils; +import com.alibaba.druid.sql.ast.SQLExpr; +import com.alibaba.druid.sql.ast.SQLStatement; +import com.alibaba.druid.sql.ast.expr.SQLAggregateExpr; +import com.alibaba.druid.sql.ast.expr.SQLBinaryOpExpr; +import com.alibaba.druid.sql.ast.expr.SQLCaseExpr; +import com.alibaba.druid.sql.ast.expr.SQLCharExpr; +import com.alibaba.druid.sql.ast.expr.SQLIdentifierExpr; +import com.alibaba.druid.sql.ast.expr.SQLIntegerExpr; +import com.alibaba.druid.sql.ast.expr.SQLMethodInvokeExpr; +import com.alibaba.druid.sql.ast.expr.SQLNumberExpr; +import com.alibaba.druid.sql.ast.expr.SQLPropertyExpr; +import com.alibaba.druid.sql.ast.statement.SQLExprTableSource; +import com.alibaba.druid.sql.ast.statement.SQLJoinTableSource; +import com.alibaba.druid.sql.ast.statement.SQLSelectItem; +import com.alibaba.druid.sql.ast.statement.SQLSelectQuery; +import com.alibaba.druid.sql.ast.statement.SQLSelectQueryBlock; +import com.alibaba.druid.sql.ast.statement.SQLSelectStatement; +import com.alibaba.druid.sql.ast.statement.SQLSubqueryTableSource; +import com.alibaba.druid.sql.ast.statement.SQLTableSource; +import com.alibaba.druid.sql.ast.statement.SQLUnionQuery; +import com.alibaba.druid.sql.ast.statement.SQLUnionQueryTableSource; +import net.srt.flink.common.assertion.Asserts; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicReference; + +public class LineageUtils { + + protected static final Logger logger = LoggerFactory.getLogger(LineageUtils.class); + + public static void columnLineageAnalyzer( + String sql, String type, TreeNode node) { + if (Asserts.isNullString(sql)) { + return; + } + + AtomicReference isContinue = new AtomicReference<>(false); + List statements = new ArrayList<>(); + + // 解析 + try { + statements = SQLUtils.parseStatements(sql, type); + } catch (Exception e) { + logger.info("can't parser by druid {}", type, e); + } + + // 只考虑一条语句 + SQLStatement statement = statements.get(0); + // 只考虑查询语句 + SQLSelectStatement sqlSelectStatement = (SQLSelectStatement) statement; + SQLSelectQuery sqlSelectQuery = sqlSelectStatement.getSelect().getQuery(); + + // 非union的查询语句 + if (sqlSelectQuery instanceof SQLSelectQueryBlock) { + SQLSelectQueryBlock sqlSelectQueryBlock = (SQLSelectQueryBlock) sqlSelectQuery; + // 获取字段列表 + List selectItems = sqlSelectQueryBlock.getSelectList(); + selectItems.forEach( + x -> { + // 处理--------------------- + String column = + Asserts.isNullString(x.getAlias()) ? 
x.toString() : x.getAlias(); + + if (column.contains(".")) { + column = column.substring(column.indexOf(".") + 1); + } + column = column.replace("`", "").replace("\"", ""); + + String expr = x.getExpr().toString(); + LineageColumn myColumn = new LineageColumn(); + myColumn.setTargetColumnName(column); + myColumn.setExpression(expr); + + TreeNode itemNode = new TreeNode<>(myColumn); + SQLExpr expr1 = x.getExpr(); + // 解析表达式,添加解析结果子节点 + handlerExpr(expr1, itemNode); + + if (node.getLevel() == 0 + || node.getData().getTargetColumnName().equals(column)) { + node.addChild(itemNode); + isContinue.set(true); + } + }); + + if (isContinue.get()) { + // 获取表 + SQLTableSource table = sqlSelectQueryBlock.getFrom(); + + // 普通单表 + if (table instanceof SQLExprTableSource) { + // 处理最终表--------------------- + handlerSQLExprTableSource(node, (SQLExprTableSource) table); + } else if (table instanceof SQLJoinTableSource) { + // 处理join + handlerSQLJoinTableSource(node, (SQLJoinTableSource) table, type); + } else if (table instanceof SQLSubqueryTableSource) { + // 处理 subquery --------------------- + handlerSQLSubqueryTableSource(node, table, type); + } else if (table instanceof SQLUnionQueryTableSource) { + // 处理 union --------------------- + handlerSQLUnionQueryTableSource(node, (SQLUnionQueryTableSource) table, type); + } + } + + // 处理--------------------- + // union的查询语句 + } else if (sqlSelectQuery instanceof SQLUnionQuery) { + // 处理--------------------- + columnLineageAnalyzer( + ((SQLUnionQuery) sqlSelectQuery).getLeft().toString(), type, node); + columnLineageAnalyzer( + ((SQLUnionQuery) sqlSelectQuery).getRight().toString(), type, node); + } + } + + /** + * 处理UNION子句 + * + * @param node + * @param table + */ + private static void handlerSQLUnionQueryTableSource( + TreeNode node, SQLUnionQueryTableSource table, String type) { + node.getAllLeafs().stream() + .filter(e -> !e.getData().getIsEnd()) + .forEach( + e -> { + columnLineageAnalyzer(table.getUnion().toString(), type, e); + }); + } + + /** + * 处理sub子句 + * + * @param node + * @param table + */ + private static void handlerSQLSubqueryTableSource( + TreeNode node, SQLTableSource table, String type) { + node.getAllLeafs().stream() + .filter(e -> !e.getData().getIsEnd()) + .forEach( + e -> { + if (Asserts.isNotNullString(e.getData().getSourceTableName())) { + if (e.getData().getSourceTableName().equals(table.getAlias())) { + columnLineageAnalyzer( + ((SQLSubqueryTableSource) table).getSelect().toString(), + type, + e); + } + } else { + columnLineageAnalyzer( + ((SQLSubqueryTableSource) table).getSelect().toString(), + type, + e); + } + }); + } + + /** + * 处理JOIN + * + * @param node + * @param table + */ + private static void handlerSQLJoinTableSource( + TreeNode node, SQLJoinTableSource table, String type) { + // 处理--------------------- + // 子查询作为表 + node.getAllLeafs().stream() + .filter(e -> !e.getData().getIsEnd()) + .forEach( + e -> { + if (table.getLeft() instanceof SQLJoinTableSource) { + handlerSQLJoinTableSource( + node, (SQLJoinTableSource) table.getLeft(), type); + } else if (table.getLeft() instanceof SQLExprTableSource) { + handlerSQLExprTableSource( + node, (SQLExprTableSource) table.getLeft()); + } else if (table.getLeft() instanceof SQLSubqueryTableSource) { + // 处理--------------------- + handlerSQLSubqueryTableSource(node, table.getLeft(), type); + } else if (table.getLeft() instanceof SQLUnionQueryTableSource) { + // 处理--------------------- + handlerSQLUnionQueryTableSource( + node, (SQLUnionQueryTableSource) table.getLeft(), type); + } + 
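+                            // the right-hand side of the join is walked symmetrically below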
}); + node.getAllLeafs().stream() + .filter(e -> !e.getData().getIsEnd()) + .forEach( + e -> { + if (table.getRight() instanceof SQLJoinTableSource) { + handlerSQLJoinTableSource( + node, (SQLJoinTableSource) table.getRight(), type); + } else if (table.getRight() instanceof SQLExprTableSource) { + handlerSQLExprTableSource( + node, (SQLExprTableSource) table.getRight()); + } else if (table.getRight() instanceof SQLSubqueryTableSource) { + // 处理--------------------- + handlerSQLSubqueryTableSource(node, table.getRight(), type); + } else if (table.getRight() instanceof SQLUnionQueryTableSource) { + // 处理--------------------- + handlerSQLUnionQueryTableSource( + node, (SQLUnionQueryTableSource) table.getRight(), type); + } + }); + } + + /** + * 处理最终表 + * + * @param node + * @param table + */ + private static void handlerSQLExprTableSource( + TreeNode node, SQLExprTableSource table) { + SQLExprTableSource tableSource = table; + String tableName = + tableSource.getExpr() instanceof SQLPropertyExpr + ? ((SQLPropertyExpr) tableSource.getExpr()) + .getName() + .replace("`", "") + .replace("\"", "") + : ""; + String alias = + Asserts.isNotNullString(tableSource.getAlias()) + ? tableSource.getAlias().replace("`", "").replace("\"", "") + : ""; + node.getChildren() + .forEach( + e -> { + e.getChildren() + .forEach( + f -> { + if (!f.getData().getIsEnd() + && (f.getData().getSourceTableName() == null + || f.getData() + .getSourceTableName() + .equals(tableName) + || f.getData() + .getSourceTableName() + .equals(alias))) { + f.getData() + .setSourceTableName( + tableSource.toString()); + f.getData().setIsEnd(true); + f.getData() + .setExpression( + e.getData().getExpression()); + } + }); + }); + } + + /** + * 处理表达式 + * + * @param sqlExpr + * @param itemNode + */ + private static void handlerExpr(SQLExpr sqlExpr, TreeNode itemNode) { + // 聚合 + if (sqlExpr instanceof SQLAggregateExpr) { + visitSQLAggregateExpr((SQLAggregateExpr) sqlExpr, itemNode); + } + // 方法 + else if (sqlExpr instanceof SQLMethodInvokeExpr) { + visitSQLMethodInvoke((SQLMethodInvokeExpr) sqlExpr, itemNode); + } + // case + else if (sqlExpr instanceof SQLCaseExpr) { + visitSQLCaseExpr((SQLCaseExpr) sqlExpr, itemNode); + } + // 比较 + else if (sqlExpr instanceof SQLBinaryOpExpr) { + visitSQLBinaryOpExpr((SQLBinaryOpExpr) sqlExpr, itemNode); + } + // 表达式 + else if (sqlExpr instanceof SQLPropertyExpr) { + visitSQLPropertyExpr((SQLPropertyExpr) sqlExpr, itemNode); + } + // 列 + else if (sqlExpr instanceof SQLIdentifierExpr) { + visitSQLIdentifierExpr((SQLIdentifierExpr) sqlExpr, itemNode); + } + // 赋值表达式 + else if (sqlExpr instanceof SQLIntegerExpr) { + visitSQLIntegerExpr((SQLIntegerExpr) sqlExpr, itemNode); + } + // 数字 + else if (sqlExpr instanceof SQLNumberExpr) { + visitSQLNumberExpr((SQLNumberExpr) sqlExpr, itemNode); + } + // 字符 + else if (sqlExpr instanceof SQLCharExpr) { + visitSQLCharExpr((SQLCharExpr) sqlExpr, itemNode); + } + } + + /** + * 方法 + * + * @param expr + * @param node + */ + public static void visitSQLMethodInvoke( + SQLMethodInvokeExpr expr, TreeNode node) { + if (expr.getArguments().size() == 0) { + // 计算表达式,没有更多列,结束循环 + if (node.getData().getExpression().equals(expr.toString())) { + node.getData().setIsEnd(true); + } + } else { + expr.getArguments() + .forEach( + expr1 -> { + handlerExpr(expr1, node); + }); + } + } + + /** + * 聚合 + * + * @param expr + * @param node + */ + public static void visitSQLAggregateExpr(SQLAggregateExpr expr, TreeNode node) { + expr.getArguments() + .forEach( + expr1 -> { + handlerExpr(expr1, 
node); + }); + } + + /** + * 选择 + * + * @param expr + * @param node + */ + public static void visitSQLCaseExpr(SQLCaseExpr expr, TreeNode node) { + handlerExpr(expr.getValueExpr(), node); + expr.getItems() + .forEach( + expr1 -> { + handlerExpr(expr1.getValueExpr(), node); + }); + handlerExpr(expr.getElseExpr(), node); + } + + /** + * 判断 + * + * @param expr + * @param node + */ + public static void visitSQLBinaryOpExpr(SQLBinaryOpExpr expr, TreeNode node) { + handlerExpr(expr.getLeft(), node); + handlerExpr(expr.getRight(), node); + } + + /** + * 表达式列 + * + * @param expr + * @param node + */ + public static void visitSQLPropertyExpr(SQLPropertyExpr expr, TreeNode node) { + LineageColumn project = new LineageColumn(); + String columnName = expr.getName().replace("`", "").replace("\"", ""); + project.setTargetColumnName(columnName); + project.setSourceTableName(expr.getOwner().toString()); + TreeNode search = node.findChildNode(project); + + if (Asserts.isNull(search)) { + node.addChild(project); + } + } + + /** + * 列 + * + * @param expr + * @param node + */ + public static void visitSQLIdentifierExpr( + SQLIdentifierExpr expr, TreeNode node) { + LineageColumn project = new LineageColumn(); + project.setTargetColumnName(expr.getName().replace("`", "").replace("\"", "")); + TreeNode search = node.findChildNode(project); + + if (Asserts.isNull(search)) { + node.addChild(project); + } + } + + /** + * 整型赋值 + * + * @param expr + * @param node + */ + public static void visitSQLIntegerExpr(SQLIntegerExpr expr, TreeNode node) { + LineageColumn project = new LineageColumn(); + project.setTargetColumnName(expr.getNumber().toString()); + // 常量不设置表信息 + project.setSourceTableName(""); + project.setIsEnd(true); + TreeNode search = node.findChildNode(project); + + if (Asserts.isNull(search)) { + node.addChild(project); + } + } + + /** + * 数字 + * + * @param expr + * @param node + */ + public static void visitSQLNumberExpr(SQLNumberExpr expr, TreeNode node) { + LineageColumn project = new LineageColumn(); + project.setTargetColumnName(expr.getNumber().toString()); + // 常量不设置表信息 + project.setSourceTableName(""); + project.setIsEnd(true); + TreeNode search = node.findChildNode(project); + + if (Asserts.isNull(search)) { + node.addChild(project); + } + } + + /** + * 字符 + * + * @param expr + * @param node + */ + public static void visitSQLCharExpr(SQLCharExpr expr, TreeNode node) { + LineageColumn project = new LineageColumn(); + project.setTargetColumnName(expr.toString()); + // 常量不设置表信息 + project.setSourceTableName(""); + project.setIsEnd(true); + TreeNode search = node.findChildNode(project); + + if (Asserts.isNull(search)) { + node.addChild(project); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/TreeNode.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/TreeNode.java new file mode 100644 index 0000000..ae3cdaa --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/TreeNode.java @@ -0,0 +1,209 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.explainer.sqllineage;
+
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+
+public class TreeNode<T> implements Iterable<TreeNode<T>> {
+
+    /** Data held by this node. */
+    public T data;
+
+    /** Parent node; the root has no parent. */
+    public TreeNode<T> parent;
+
+    /** Child nodes; a leaf has no children. */
+    public List<TreeNode<T>> children;
+
+    /** Index of this node and all of its descendants, kept for fast lookup. */
+    private List<TreeNode<T>> elementsIndex;
+
+    public TreeNode(T data) {
+        this.data = data;
+        this.children = new LinkedList<>();
+        this.elementsIndex = new LinkedList<>();
+        this.elementsIndex.add(this);
+    }
+
+    public T getData() {
+        return data;
+    }
+
+    public List<TreeNode<T>> getChildren() {
+        return children;
+    }
+
+    /** A node is the root when it has no parent. */
+    public boolean isRoot() {
+        return parent == null;
+    }
+
+    /** A node is a leaf when it has no children. */
+    public boolean isLeaf() {
+        return children.isEmpty();
+    }
+
+    /** Wraps the given data in a new node and attaches it as a child. */
+    public TreeNode<T> addChild(T child) {
+        TreeNode<T> childNode = new TreeNode<>(child);
+        childNode.parent = this;
+        this.children.add(childNode);
+        this.registerChildForSearch(childNode);
+        return childNode;
+    }
+
+    public TreeNode<T> addChild(TreeNode<T> childNode) {
+        childNode.parent = this;
+        this.children.add(childNode);
+        this.registerChildForSearch(childNode);
+        return childNode;
+    }
+
+    /** Returns the depth of this node; the root is at level 0. */
+    public int getLevel() {
+        if (this.isRoot()) {
+            return 0;
+        } else {
+            return parent.getLevel() + 1;
+        }
+    }
+
+    /** Registers a new node with this node and, recursively, all of its ancestors. */
+    private void registerChildForSearch(TreeNode<T> node) {
+        elementsIndex.add(node);
+
+        if (parent != null) {
+            parent.registerChildForSearch(node);
+        }
+    }
+
+    /** Searches this node and all of its descendants for a matching node. */
+    public TreeNode<T> findTreeNode(Comparable<T> cmp) {
+        for (TreeNode<T> element : this.elementsIndex) {
+            T elData = element.data;
+
+            if (cmp.compareTo(elData) == 0) {
+                return element;
+            }
+        }
+
+        return null;
+    }
+
+    /** Searches only the direct children for a matching node. */
+    public TreeNode<T> findChildNode(Comparable<T> cmp) {
+        for (TreeNode<T> element : this.getChildren()) {
+            T elData = element.data;
+
+            if (cmp.compareTo(elData) == 0) {
+                return element;
+            }
+        }
+
+        return null;
+    }
+
+    /** Returns a depth-first iterator over this node and its descendants. */
+    @Override
+    public Iterator<TreeNode<T>> iterator() {
+        return new TreeNodeIterator<>(this);
+    }
+
+    @Override
+    public String toString() {
+        return data != null ? data.toString() : "[tree data null]";
+    }
+
+    /** Collects all leaf nodes under this node (including this node if it is a leaf). */
+    public Set<TreeNode<T>> getAllLeafs() {
+        Set<TreeNode<T>> leafNodes = new HashSet<>();
+
+        if (this.children.isEmpty()) {
+            leafNodes.add(this);
+        } else {
+            for (TreeNode<T> child : this.children) {
+                leafNodes.addAll(child.getAllLeafs());
+            }
+        }
+
+        return leafNodes;
+    }
+
+    /** Collects the data of all leaf nodes under this node. */
+    public Set<T> getAllLeafData() {
+        Set<T> leafNodes = new HashSet<>();
+
+        if (this.children.isEmpty()) {
+            leafNodes.add(this.data);
+        } else {
+            for (TreeNode<T> child : this.children) {
+                leafNodes.addAll(child.getAllLeafData());
+            }
+        }
+
+        return leafNodes;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/TreeNodeIterator.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/TreeNodeIterator.java
new file mode 100644
index 0000000..507f02c
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/sqllineage/TreeNodeIterator.java
@@ -0,0 +1,85 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.explainer.sqllineage;
+
+import java.util.Iterator;
+
+public class TreeNodeIterator<T> implements Iterator<TreeNode<T>> {
+
+    private ProcessStages doNext;
+    private TreeNode<T> next;
+    private Iterator<TreeNode<T>> childrenCurNodeIter;
+    private Iterator<TreeNode<T>> childrenSubNodeIter;
+    private TreeNode<T> treeNode;
+
+    public TreeNodeIterator(TreeNode<T> treeNode) {
+        this.treeNode = treeNode;
+        this.doNext = ProcessStages.ProcessParent;
+        this.childrenCurNodeIter = treeNode.children.iterator();
+    }
+
+    @Override
+    public boolean hasNext() {
+        // The node itself is returned first.
+        if (this.doNext == ProcessStages.ProcessParent) {
+            this.next = this.treeNode;
+            this.doNext = ProcessStages.ProcessChildCurNode;
+            return true;
+        }
+
+        // Each direct child is then visited through its own depth-first sub-iterator.
+        if (this.doNext == ProcessStages.ProcessChildCurNode) {
+            if (childrenCurNodeIter.hasNext()) {
+                TreeNode<T> childDirect = childrenCurNodeIter.next();
+                childrenSubNodeIter = childDirect.iterator();
+                this.doNext = ProcessStages.ProcessChildSubNode;
+                return hasNext();
+            } else {
+                this.doNext = null;
+                return false;
+            }
+        }
+
+        if (this.doNext == ProcessStages.ProcessChildSubNode) {
+            if (childrenSubNodeIter.hasNext()) {
+                this.next = childrenSubNodeIter.next();
+                return true;
+            } else {
+                this.next = null;
+                this.doNext = ProcessStages.ProcessChildCurNode;
+                return hasNext();
+            }
+        }
+
+        return false;
+    }
+
+    @Override
+    public TreeNode<T> next() {
+        return this.next;
+    }
+
+    /** Node removal is not supported. */
+    @Override
+    public void remove() {
+        throw new UnsupportedOperationException();
+    }
+
+    enum ProcessStages {
+        ProcessParent,
+        ProcessChildCurNode,
+        ProcessChildSubNode
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/watchTable/WatchStatementExplainer.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/watchTable/WatchStatementExplainer.java
new file mode 100644
index 0000000..0946d90
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/explainer/watchTable/WatchStatementExplainer.java
@@ -0,0 +1,71 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.explainer.watchTable;
+
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.text.MessageFormat;
+import java.util.Optional;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class WatchStatementExplainer {
+
+    public static final String PATTERN_STR = "WATCH (.+)";
+    public static final Pattern PATTERN = Pattern.compile(PATTERN_STR, Pattern.CASE_INSENSITIVE);
+
+    public static final String CREATE_SQL_TEMPLATE =
+            "CREATE TABLE print_{0} WITH (''connector'' = ''printnet'', "
+                    + "''port''=''{2,number,#}'', ''hostName'' = ''{1}'')\n"
+                    + "AS SELECT * FROM {0}";
+    public static final int PORT = 7125;
+
+    private final String statement;
+
+    public WatchStatementExplainer(String statement) {
+        this.statement = statement;
+    }
+
+    public String[] getTableNames() {
+        return splitTableNames(statement);
+    }
+
+    /** Extracts the comma-separated table names that follow the WATCH keyword. */
+    public static String[] splitTableNames(String statement) {
+        Matcher matcher = PATTERN.matcher(statement);
+        if (!matcher.find()) {
+            throw new IllegalArgumentException("Not a valid WATCH statement: " + statement);
+        }
+        String tableNames = matcher.group(1);
+        return tableNames.replaceAll(" ", "").split(",");
+    }
+
+    public static String getCreateStatement(String tableName) {
+        Optional<InetAddress> address = getSystemLocalIp();
+        String ip = address.isPresent() ? address.get().getHostAddress() : "124.223.48.209";
+        return MessageFormat.format(CREATE_SQL_TEMPLATE, tableName, ip, PORT);
+    }
+
+    private static Optional<InetAddress> getSystemLocalIp() {
+        try {
+            InetAddress ip = InetAddress.getLocalHost();
+            return Optional.of(ip);
+        } catch (UnknownHostException e) {
+            // fall back to the configured default address instead of failing outright
+            return Optional.empty();
+        }
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/Job.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/Job.java
new file mode 100644
index 0000000..85d2fb5
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/Job.java
@@ -0,0 +1,103 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.core.job; + +import cn.hutool.core.date.DateUtil; +import cn.hutool.core.util.RandomUtil; +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.result.IResult; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.executor.ExecutorSetting; +import net.srt.flink.gateway.GatewayType; + +import java.time.LocalDateTime; +import java.util.Date; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.UUID; + +/** + * Job + * + * @author zrx + * @since 2021/6/26 23:39 + */ +@Getter +@Setter +public class Job { + //历史id + private Integer id; + private Integer jobInstanceId; + private JobConfig jobConfig; + private String jobManagerAddress; + private JobStatus status; + private GatewayType type; + private String statement; + private String jobId; + private String error; + private IResult result; + private ExecutorSetting executorSetting; + private LocalDateTime startTime; + private LocalDateTime endTime; + private Executor executor; + private boolean useGateway; + private List jids; + private boolean endByNoInsert; + private String executeNo; + private String executeSql; + /** + * 节点调度的日志id + */ + private Integer nodeRecordId; + + public enum JobStatus { + INITIALIZE, + RUNNING, + SUCCESS, + FAILED, + CANCEL + } + + public Job(JobConfig jobConfig, GatewayType type, JobStatus status, String statement, ExecutorSetting executorSetting, Executor executor, boolean useGateway) { + this.jobConfig = jobConfig; + this.type = type; + this.status = status; + this.statement = statement; + this.executorSetting = executorSetting; + this.startTime = LocalDateTime.now(); + this.executor = executor; + this.useGateway = useGateway; + } + + public static Job init(GatewayType type, JobConfig jobConfig, ExecutorSetting executorSetting, Executor executor, String statement, boolean useGateway) { + Job job = new Job(jobConfig, type, JobStatus.INITIALIZE, statement, executorSetting, executor, useGateway); + job.setExecuteNo(UUID.randomUUID().toString().replaceAll("-", "")); + return job; + } + + public JobResult getJobResult() { + return new JobResult(id, jobInstanceId, jobConfig, jobManagerAddress, status, statement, jobId, error, result, startTime, endTime); + } + + public boolean isFailed() { + return status.equals(JobStatus.FAILED); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobConfig.java new file mode 100644 index 0000000..39dd039 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobConfig.java @@ -0,0 +1,297 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.constant.NetConstant; +import net.srt.flink.core.session.SessionConfig; +import net.srt.flink.executor.executor.ExecutorSetting; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.gateway.GatewayType; +import net.srt.flink.gateway.config.AppConfig; +import net.srt.flink.gateway.config.ClusterConfig; +import net.srt.flink.gateway.config.FlinkConfig; +import net.srt.flink.gateway.config.GatewayConfig; +import net.srt.flink.gateway.config.SavePointStrategy; +import org.apache.flink.configuration.CoreOptions; +import org.apache.flink.configuration.RestOptions; +import org.apache.http.util.TextUtils; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +/** + * JobConfig + * + * @author zrx + * @since 2021/6/27 18:45 + */ +@Getter +@Setter +public class JobConfig { + + // flink run mode + private String type; + // task JobLifeCycle + private Integer step; + private boolean useResult; + private boolean useChangeLog; + private boolean useAutoCancel; + private boolean useSession; + private String session; + private boolean useRemote; + private Integer clusterId; + private Integer clusterConfigurationId; + private Integer jarId; + private boolean isJarTask = false; + private String address; + private Integer taskId; + private List udfList; + private String[] jarFiles; + private String[] pyFiles; + private String jobName; + private boolean useSqlFragment; + private boolean useStatementSet; + private boolean useBatchModel; + private Integer maxRowNum; + private Integer checkpoint; + private Integer parallelism; + private SavePointStrategy savePointStrategy; + private String savePointPath; + private GatewayConfig gatewayConfig; + private Map variables; + private Map config; + + public JobConfig() { + } + + public void setAddress(String address) { + if (GatewayType.LOCAL.equalsValue(type) && Asserts.isNotNull(config) + && config.containsKey(RestOptions.PORT.key())) { + this.address = address + NetConstant.COLON + config.get(RestOptions.PORT.key()); + } else { + this.address = address; + } + } + + public JobConfig(String type, boolean useSession, boolean useRemote, boolean useSqlFragment, + boolean useStatementSet, Integer parallelism, Map config) { + this.type = type; + this.useSession = useSession; + this.useRemote = useRemote; + this.useSqlFragment = useSqlFragment; + this.useStatementSet = useStatementSet; + this.parallelism = parallelism; + this.config = config; + } + + public JobConfig(String type, boolean useResult, boolean useChangeLog, boolean useAutoCancel, boolean useSession, + String session, Integer clusterId, + Integer clusterConfigurationId, Integer jarId, Integer taskId, String jobName, + boolean useSqlFragment, + boolean useStatementSet, boolean useBatchModel, Integer maxRowNum, Integer checkpoint, + Integer parallelism, + Integer savePointStrategyValue, String savePointPath, Map variables, + Map config) { + this.type = type; + this.useResult = useResult; + this.useChangeLog = useChangeLog; + this.useAutoCancel = useAutoCancel; + this.useSession = useSession; + this.session = session; + this.useRemote = true; + this.clusterId = clusterId; + this.clusterConfigurationId = clusterConfigurationId; + this.jarId = jarId; + this.taskId = taskId; + this.jobName = jobName; + 
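+        // the remaining flags are passed through unchanged; the integer savepoint
+        // strategy from the caller is mapped to the SavePointStrategy enum below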
this.useSqlFragment = useSqlFragment; + this.useStatementSet = useStatementSet; + this.useBatchModel = useBatchModel; + this.maxRowNum = maxRowNum; + this.checkpoint = checkpoint; + this.parallelism = parallelism; + this.savePointStrategy = SavePointStrategy.get(savePointStrategyValue); + this.savePointPath = savePointPath; + this.variables = variables; + this.config = config; + } + + public JobConfig(String type, boolean useResult, boolean useChangeLog, boolean useAutoCancel, boolean useSession, + String session, boolean useRemote, String address, + String jobName, boolean useSqlFragment, + boolean useStatementSet, Integer maxRowNum, Integer checkpoint, Integer parallelism, + Integer savePointStrategyValue, String savePointPath, Map config, + GatewayConfig gatewayConfig) { + this.type = type; + this.useResult = useResult; + this.useChangeLog = useChangeLog; + this.useAutoCancel = useAutoCancel; + this.useSession = useSession; + this.session = session; + this.useRemote = useRemote; + this.jobName = jobName; + this.useSqlFragment = useSqlFragment; + this.useStatementSet = useStatementSet; + this.maxRowNum = maxRowNum; + this.checkpoint = checkpoint; + this.parallelism = parallelism; + this.savePointStrategy = SavePointStrategy.get(savePointStrategyValue); + this.savePointPath = savePointPath; + this.config = config; + this.gatewayConfig = gatewayConfig; + setAddress(address); + } + + public JobConfig(String type, boolean useResult, boolean useSession, String session, boolean useRemote, + Integer clusterId, Integer maxRowNum) { + this.type = type; + this.useResult = useResult; + this.useSession = useSession; + this.session = session; + this.useRemote = useRemote; + this.clusterId = clusterId; + this.maxRowNum = maxRowNum; + } + + public JobConfig(String type, Integer step, boolean useResult, boolean useSession, boolean useRemote, + Integer clusterId, + Integer clusterConfigurationId, Integer jarId, Integer taskId, String jobName, + boolean useSqlFragment, + boolean useStatementSet, boolean useBatchModel, Integer checkpoint, Integer parallelism, + Integer savePointStrategyValue, + String savePointPath, Map config) { + this.type = type; + this.step = step; + this.useResult = useResult; + this.useSession = useSession; + this.useRemote = useRemote; + this.clusterId = clusterId; + this.clusterConfigurationId = clusterConfigurationId; + this.jarId = jarId; + this.taskId = taskId; + this.jobName = jobName; + this.useSqlFragment = useSqlFragment; + this.useStatementSet = useStatementSet; + this.useBatchModel = useBatchModel; + this.checkpoint = checkpoint; + this.parallelism = parallelism; + this.savePointStrategy = SavePointStrategy.get(savePointStrategyValue); + this.savePointPath = savePointPath; + this.config = config; + } + + public ExecutorSetting getExecutorSetting() { + return new ExecutorSetting(checkpoint, parallelism, useSqlFragment, useStatementSet, useBatchModel, + savePointPath, jobName, config); + } + + public void setSessionConfig(SessionConfig sessionConfig) { + if (sessionConfig != null) { + address = sessionConfig.getAddress(); + clusterId = sessionConfig.getClusterId(); + useRemote = sessionConfig.isUseRemote(); + } + } + + public void buildGatewayConfig(Map config) { + gatewayConfig = new GatewayConfig(); + if (config.containsKey("hadoopConfigPath")) { + gatewayConfig.setClusterConfig(ClusterConfig.build(config.get("flinkConfigPath").toString(), + config.get("flinkLibPath").toString(), + config.get("hadoopConfigPath").toString())); + } else { + 
gatewayConfig.setClusterConfig(ClusterConfig.build(config.get("flinkConfigPath").toString())); + } + AppConfig appConfig = new AppConfig(); + if (config.containsKey("userJarPath") && Asserts.isNotNullString((String) config.get("userJarPath"))) { + appConfig.setUserJarPath(config.get("userJarPath").toString()); + if (config.containsKey("userJarMainAppClass") + && Asserts.isNotNullString((String) config.get("userJarMainAppClass"))) { + appConfig.setUserJarMainAppClass(config.get("userJarMainAppClass").toString()); + } + if (config.containsKey("userJarParas") && Asserts.isNotNullString((String) config.get("userJarParas"))) { + // There may be multiple spaces between the parameter and value during user input, + // which will directly lead to a parameter passing error and needs to be eliminated + String[] temp = config.get("userJarParas").toString().split(" "); + List paraSplit = new ArrayList<>(); + for (String s : temp) { + if (!TextUtils.isEmpty(s.trim())) { + paraSplit.add(s); + } + } + appConfig.setUserJarParas(paraSplit.toArray(new String[0])); + } + gatewayConfig.setAppConfig(appConfig); + } + if (config.containsKey("flinkConfig") + && Asserts.isNotNullMap((Map) config.get("flinkConfig"))) { + gatewayConfig.setFlinkConfig(FlinkConfig.build((Map) config.get("flinkConfig"))); + gatewayConfig.getFlinkConfig().getConfiguration().put(CoreOptions.DEFAULT_PARALLELISM.key(), + String.valueOf(parallelism)); + } + if (config.containsKey("kubernetesConfig")) { + Map kubernetesConfig = (Map) config.get("kubernetesConfig"); + gatewayConfig.getFlinkConfig().getConfiguration().putAll(kubernetesConfig); + } + // at present only k8s task have this + if (config.containsKey("taskCustomConfig")) { + Map> taskCustomConfig = (Map>) config + .get("taskCustomConfig"); + if (taskCustomConfig.containsKey("kubernetesConfig")) { + gatewayConfig.getFlinkConfig().getConfiguration().putAll(taskCustomConfig.get("kubernetesConfig")); + } + if (taskCustomConfig.containsKey("flinkConfig")) { + gatewayConfig.getFlinkConfig().getConfiguration().putAll(taskCustomConfig.get("flinkConfig")); + } + } + } + + public void addGatewayConfig(List> configList) { + if (Asserts.isNull(gatewayConfig)) { + gatewayConfig = new GatewayConfig(); + } + for (Map item : configList) { + if (Asserts.isNotNull(item)) { + gatewayConfig.getFlinkConfig().getConfiguration().put(item.get("key"), item.get("value")); + } + } + } + + public void addGatewayConfig(Map config) { + if (Asserts.isNull(gatewayConfig)) { + gatewayConfig = new GatewayConfig(); + } + for (Map.Entry entry : config.entrySet()) { + gatewayConfig.getFlinkConfig().getConfiguration().put(entry.getKey(), (String) entry.getValue()); + } + } + + public boolean isUseRemote() { + return !GatewayType.LOCAL.equalsValue(type); + } + + public void buildLocal() { + type = GatewayType.LOCAL.getLongValue(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobContextHolder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobContextHolder.java new file mode 100644 index 0000000..55c0ae8 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobContextHolder.java @@ -0,0 +1,42 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + +/** + * JobContextHolder + * + * @author zrx + * @since 2021/6/26 23:29 + */ +public class JobContextHolder { + private static final ThreadLocal CONTEXT = new ThreadLocal<>(); + + public static void setJob(Job job) { + CONTEXT.set(job); + } + + public static Job getJob() { + return CONTEXT.get(); + } + + public static void clear() { + CONTEXT.remove(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobHandler.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobHandler.java new file mode 100644 index 0000000..2a49332 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobHandler.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + + +import net.srt.flink.common.exception.JobException; + +import java.util.ServiceLoader; + +/** + * jobHandler + * + * @author zrx + * @since 2021/6/26 23:22 + */ +public interface JobHandler { + boolean init(); + + boolean ready(); + + boolean running(); + + boolean success(); + + boolean failed(); + + boolean callback(); + + boolean close(); + + static JobHandler build() { + ServiceLoader jobHandlers = ServiceLoader.load(JobHandler.class); + for (JobHandler jobHandler : jobHandlers) { + return jobHandler; + } + throw new JobException("There is no corresponding implementation class for this interface!"); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobManager.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobManager.java new file mode 100644 index 0000000..dd2d139 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobManager.java @@ -0,0 +1,883 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + +import cn.hutool.core.collection.CollUtil; +import cn.hutool.core.io.FileUtil; +import cn.hutool.core.util.ArrayUtil; +import cn.hutool.core.util.RandomUtil; +import cn.hutool.core.util.StrUtil; +import cn.hutool.json.JSONArray; +import cn.hutool.json.JSONObject; +import com.fasterxml.jackson.databind.node.ObjectNode; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.classloader.DinkyClassLoader; +import net.srt.flink.common.context.DinkyClassLoaderContextHolder; +import net.srt.flink.common.context.JarPathContextHolder; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.result.ExplainResult; +import net.srt.flink.common.result.IResult; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.common.utils.SqlUtil; +import net.srt.flink.common.utils.URLUtils; +import net.srt.flink.core.api.FlinkAPI; +import net.srt.flink.core.explainer.Explainer; +import net.srt.flink.core.result.ErrorResult; +import net.srt.flink.core.result.InsertResult; +import net.srt.flink.core.result.ResultBuilder; +import net.srt.flink.core.result.ResultPool; +import net.srt.flink.core.result.SelectResult; +import net.srt.flink.core.session.ExecutorEntity; +import net.srt.flink.core.session.SessionConfig; +import net.srt.flink.core.session.SessionInfo; +import net.srt.flink.core.session.SessionPool; +import net.srt.flink.executor.constant.FlinkSQLConstant; +import net.srt.flink.executor.executor.EnvironmentSetting; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.executor.ExecutorSetting; +import net.srt.flink.executor.interceptor.FlinkInterceptor; +import net.srt.flink.executor.interceptor.FlinkInterceptorResult; +import net.srt.flink.executor.parser.SqlType; +import net.srt.flink.executor.trans.Operations; +import net.srt.flink.function.constant.PathConstant; +import net.srt.flink.function.context.UdfPathContextHolder; +import net.srt.flink.function.data.model.Env; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.util.UDFUtil; +import net.srt.flink.gateway.Gateway; +import net.srt.flink.gateway.GatewayType; +import net.srt.flink.gateway.config.ActionType; +import net.srt.flink.gateway.config.FlinkConfig; +import net.srt.flink.gateway.config.GatewayConfig; +import net.srt.flink.gateway.result.GatewayResult; +import net.srt.flink.gateway.result.SavePointResult; +import net.srt.flink.gateway.result.TestResult; +import net.srt.flink.gateway.result.YarnResult; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.CoreOptions; +import org.apache.flink.configuration.DeploymentOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.core.execution.JobClient; +import 
org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.runtime.jobgraph.SavepointConfigOptions; +import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings; +import org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.graph.StreamGraph; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.yarn.configuration.YarnConfigOptions; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.File; +import java.lang.reflect.Field; +import java.net.URL; +import java.time.LocalDateTime; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +import static net.srt.flink.function.util.UDFUtil.GATEWAY_TYPE_MAP; +import static net.srt.flink.function.util.UDFUtil.SESSION; +import static net.srt.flink.function.util.UDFUtil.YARN; + +/** + * JobManager + * + * @author wenmo + * @since 2021/5/25 15:27 + **/ +public class JobManager { + + private static final Logger logger = LoggerFactory.getLogger(JobManager.class); + + private JobHandler handler; + private EnvironmentSetting environmentSetting; + private ExecutorSetting executorSetting; + private JobConfig config; + private Executor executor; + private Configuration configuration; + private boolean useGateway = false; + private boolean isPlanMode = false; + private boolean useStatementSet = false; + private boolean useRestAPI = false; + private String sqlSeparator = FlinkSQLConstant.SEPARATOR; + private GatewayType runMode = GatewayType.LOCAL; + + public JobManager() { + } + + public void setUseGateway(boolean useGateway) { + this.useGateway = useGateway; + } + + public boolean isUseGateway() { + return useGateway; + } + + public void setPlanMode(boolean planMode) { + isPlanMode = planMode; + } + + public boolean isPlanMode() { + return isPlanMode; + } + + public boolean isUseStatementSet() { + return useStatementSet; + } + + public void setUseStatementSet(boolean useStatementSet) { + this.useStatementSet = useStatementSet; + } + + public boolean isUseRestAPI() { + return useRestAPI; + } + + public void setUseRestAPI(boolean useRestAPI) { + this.useRestAPI = useRestAPI; + } + + public String getSqlSeparator() { + return sqlSeparator; + } + + public void setSqlSeparator(String sqlSeparator) { + this.sqlSeparator = sqlSeparator; + } + + public JobManager(JobConfig config) { + this.config = config; + } + + public static JobManager build() { + JobManager manager = new JobManager(); + manager.init(); + return manager; + } + + public static JobManager build(JobConfig config) { + ProcessEntity process = ProcessContextHolder.getProcess(); + try { + initGatewayConfig(config); + JobManager manager = new JobManager(config); + manager.init(); + return manager; + } catch (Exception e) { + process.error(LogUtil.getError(e)); + process.infoEnd(); + ProcessContextHolder.clear(); + throw new RuntimeException(e.getMessage()); + } + } + + public static JobManager buildPlanMode(JobConfig config) { + JobManager manager = new JobManager(config); + manager.setPlanMode(true); + manager.init(); + ProcessContextHolder.getProcess().info("Build Flink plan mode success."); + return manager; + } + + private static void initGatewayConfig(JobConfig config) { + if (useGateway(config.getType())) { + 
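+            // yarn per-job/application and kubernetes application modes submit through
+            // a gateway, so a gateway config must be present (see useGateway below)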
Asserts.checkNull(config.getGatewayConfig(), "GatewayConfig cannot be null");
+            config.getGatewayConfig().setType(GatewayType.get(config.getType()));
+            config.getGatewayConfig().setTaskId(config.getTaskId());
+            config.getGatewayConfig().getFlinkConfig().setJobName(config.getJobName());
+            config.getGatewayConfig().getFlinkConfig().setSavePoint(config.getSavePointPath());
+            config.setUseRemote(false);
+        }
+    }
+
+    public static boolean useGateway(String type) {
+        return (GatewayType.YARN_PER_JOB.equalsValue(type) || GatewayType.YARN_APPLICATION.equalsValue(type)
+                || GatewayType.KUBERNETES_APPLICATION.equalsValue(type));
+    }
+
+    private Executor createExecutor() {
+        initEnvironmentSetting();
+        if (!runMode.equals(GatewayType.LOCAL) && !useGateway && config.isUseRemote()) {
+            executor = Executor.buildRemoteExecutor(environmentSetting, config.getExecutorSetting());
+            return executor;
+        } else {
+            if (ArrayUtil.isNotEmpty(config.getJarFiles())) {
+                config.getExecutorSetting().getConfig().put(PipelineOptions.JARS.key(),
+                        Stream.of(config.getJarFiles()).map(FileUtil::getAbsolutePath)
+                                .collect(Collectors.joining(",")));
+            }
+
+            executor = Executor.buildLocalExecutor(config.getExecutorSetting());
+            return executor;
+        }
+    }
+
+    private Executor createExecutorWithSession() {
+        if (config.isUseSession()) {
+            ExecutorEntity executorEntity = SessionPool.get(config.getSession());
+            if (Asserts.isNotNull(executorEntity)) {
+                executor = executorEntity.getExecutor();
+                config.setSessionConfig(executorEntity.getSessionConfig());
+                initEnvironmentSetting();
+                executor.update(executorSetting);
+            } else {
+                createExecutor();
+                SessionPool.push(new ExecutorEntity(config.getSession(), executor));
+            }
+        } else {
+            createExecutor();
+        }
+        executor.getSqlManager().registerSqlFragment(config.getVariables());
+        return executor;
+    }
+
+    private void initEnvironmentSetting() {
+        if (Asserts.isNotNullString(config.getAddress())) {
+            environmentSetting = EnvironmentSetting.build(config.getAddress(), config.getJarFiles());
+        }
+    }
+
+    private void initExecutorSetting() {
+        executorSetting = config.getExecutorSetting();
+    }
+
+    public boolean init() {
+        ProcessEntity process = ProcessContextHolder.getProcess();
+        if (!isPlanMode) {
+            runMode = GatewayType.get(config.getType());
+            useGateway = useGateway(config.getType());
+            handler = JobHandler.build();
+        }
+        useStatementSet = config.isUseStatementSet();
+        // zrx: read the per-project system configuration
+        useRestAPI = ProjectSystemConfiguration.getByProjectId(process.getProjectId()).isUseRestAPI();
+        sqlSeparator = ProjectSystemConfiguration.getByProjectId(process.getProjectId()).getSqlSeparator();
+
+        initExecutorSetting();
+        createExecutorWithSession();
+
+        return false;
+    }
+
+    private void addConfigurationClsAndJars(List<URL> jarList, List<URL> classpaths)
+            throws Exception {
+        Field configuration = StreamExecutionEnvironment.class.getDeclaredField("configuration");
+        configuration.setAccessible(true);
+        Configuration o =
+                (Configuration) configuration.get(executor.getStreamExecutionEnvironment());
+
+        Field confData = Configuration.class.getDeclaredField("confData");
+        confData.setAccessible(true);
+        Map<String, Object> temp = (Map<String, Object>) confData.get(o);
+        temp.put(
+                PipelineOptions.CLASSPATHS.key(),
+                classpaths.stream().map(URL::toString).collect(Collectors.toList()));
+        temp.put(
+                PipelineOptions.JARS.key(),
+                jarList.stream().map(URL::toString).collect(Collectors.toList()));
+    }
+
+    public void initUDF(List<UDF> udfList) {
+        if (Asserts.isNotNullCollection(udfList)) {
+            initUDF(udfList, runMode, config.getTaskId());
+        }
+    }
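The reflective writes in `addConfigurationClsAndJars` above exist because `StreamExecutionEnvironment` offers no public mutator for its internal `Configuration` once the environment has been created. For comparison, a minimal sketch of setting the same two pipeline options through Flink's public API at construction time (the jar paths are hypothetical):

```java
import java.util.Arrays;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.PipelineOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PipelineJarsSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The same two entries the reflective puts write into confData.
        conf.set(PipelineOptions.JARS, Arrays.asList("file:///tmp/udf/my-udf.jar"));
        conf.set(PipelineOptions.CLASSPATHS, Arrays.asList("file:///tmp/plugins/extra.jar"));
        // Only possible before the environment exists, which is why JobManager
        // falls back to reflection on an already-built environment.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
        System.out.println(env);
    }
}
```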
+
+    public void initUDF(List<UDF> udfList, GatewayType runMode, Integer taskId) {
+        if (taskId == null) {
+            taskId = -RandomUtil.randomInt(0, 1000);
+        }
+        ProcessEntity process = ProcessContextHolder.getProcess();
+
+        // The jar handling happens in two separate steps.
+        // 1. Collect the jar paths and inject them into the remote environment.
+        Set<File> jarFiles = JarPathContextHolder.getUdfFile();
+
+        Set<File> otherPluginsFiles = JarPathContextHolder.getOtherPluginsFiles();
+        jarFiles.addAll(otherPluginsFiles);
+
+        List<File> udfJars =
+                Arrays.stream(UDFUtil.initJavaUDF(udfList, runMode, taskId))
+                        .map(File::new)
+                        .collect(Collectors.toList());
+        jarFiles.addAll(udfJars);
+
+        String[] jarPaths =
+                CollUtil.removeNull(jarFiles).stream()
+                        .map(File::getAbsolutePath)
+                        .toArray(String[]::new);
+
+        if (GATEWAY_TYPE_MAP.get(SESSION).contains(runMode)) {
+            config.setJarFiles(jarPaths);
+        }
+        try {
+            List<URL> jarList = CollUtil.newArrayList(URLUtils.getURLs(jarFiles));
+            writeManifest(taskId, jarList);
+
+            addConfigurationClsAndJars(
+                    jarList, CollUtil.newArrayList(URLUtils.getURLs(otherPluginsFiles)));
+        } catch (Exception e) {
+            logger.error("add configuration failed; reason: {}", LogUtil.getError(e));
+            throw new RuntimeException(e);
+        }
+
+        // 2. Compile the Python UDFs.
+        String[] pyPaths =
+                UDFUtil.initPythonUDF(
+                        udfList,
+                        runMode,
+                        config.getTaskId(),
+                        executor.getTableConfig().getConfiguration());
+
+        executor.initUDF(jarPaths);
+
+        executor.initPyUDF(Env.getPath(), pyPaths);
+        if (GATEWAY_TYPE_MAP.get(YARN).contains(runMode)) {
+            config.getGatewayConfig().setJarPaths(ArrayUtil.append(jarPaths, pyPaths));
+        }
+        process.info(StrUtil.format("A total of {} UDFs have been initialized.", udfList.size()));
+        process.info("Initializing Flink UDF... Finished");
+    }
+
+    private void writeManifest(Integer taskId, List<URL> jarPaths) {
+        JSONArray array =
+                jarPaths.stream().map(URL::getFile).collect(Collectors.toCollection(JSONArray::new));
+        JSONObject object = new JSONObject();
+        object.set("jars", array);
+        FileUtil.writeUtf8String(
+                object.toStringPretty(),
+                PathConstant.getUdfPackagePath(taskId) + PathConstant.DEP_MANIFEST);
+    }
+
+    private boolean ready() {
+        return handler.init();
+    }
+
+    private boolean success() {
+        return handler.success();
+    }
+
+    private boolean failed() {
+        return handler.failed();
+    }
+
+    public boolean close() {
+        JobContextHolder.clear();
+        return false;
+    }
+
+    public void initClassLoader(JobConfig config) {
+        if (CollUtil.isNotEmpty(config.getConfig())) {
+            String pipelineJars = config.getConfig().get(PipelineOptions.JARS.key());
+            String classpaths = config.getConfig().get(PipelineOptions.CLASSPATHS.key());
+            // add custom jar path
+            if (StrUtil.isNotBlank(pipelineJars)) {
+                String[] paths = pipelineJars.split(",");
+                for (String path : paths) {
+                    File file = FileUtil.file(path);
+                    if (!file.exists()) {
+                        throw new RuntimeException("file: " + path + " does not exist!");
+                    }
+                    JarPathContextHolder.addUdfPath(file);
+                }
+            }
+            // add custom classpath
+            if (StrUtil.isNotBlank(classpaths)) {
+                String[] paths = classpaths.split(",");
+                for (String path : paths) {
+                    File file = FileUtil.file(path);
+                    if (!file.exists()) {
+                        throw new RuntimeException("file: " + path + " does not exist!");
"); + } + JarPathContextHolder.addOtherPlugins(file); + } + } + } + + DinkyClassLoader classLoader = + new DinkyClassLoader( + CollUtil.addAll( + JarPathContextHolder.getUdfFile(), + JarPathContextHolder.getOtherPluginsFiles()), + Thread.currentThread().getContextClassLoader()); + DinkyClassLoaderContextHolder.set(classLoader); + } + + public JobResult executeSql(String statement) { + initClassLoader(config); + ProcessEntity process = ProcessContextHolder.getProcess(); + Job job = Job.init(runMode, config, executorSetting, executor, statement, useGateway); + job.setNodeRecordId(process.getNodeRecordId()); + if (!useGateway) { + job.setJobManagerAddress(environmentSetting.getAddress()); + } + JobContextHolder.setJob(job); + ready(); + boolean hasReady = true; + String currentSql = ""; + JobParam jobParam = Explainer.build(executor, useStatementSet, sqlSeparator) + .pretreatStatements(SqlUtil.getStatements(statement, sqlSeparator)); + try { + initUDF(jobParam.getUdfList(), runMode, config.getTaskId()); + + for (StatementParam item : jobParam.getDdl()) { + currentSql = item.getValue(); + executor.executeSql(item.getValue()); + } + if (!jobParam.getTrans().isEmpty()) { + // Use statement set or gateway only submit inserts. + if (useStatementSet && useGateway) { + List inserts = new ArrayList<>(); + for (StatementParam item : jobParam.getTrans()) { + if (item.getType().isInsert()) { + inserts.add(item.getValue()); + } + } + if (!inserts.isEmpty()) { + // Use statement set need to merge all insert sql into a sql. + currentSql = String.join(sqlSeparator, inserts); + //zrx + job.setExecuteSql(currentSql); + GatewayResult gatewayResult = submitByGateway(inserts); + // Use statement set only has one jid. + job.setResult(InsertResult.success(gatewayResult.getAppId())); + job.setJobId(gatewayResult.getAppId()); + job.setJids(gatewayResult.getJids()); + job.setJobManagerAddress(formatAddress(gatewayResult.getWebURL())); + } else { + job.setEndByNoInsert(true); + } + job.setEndTime(LocalDateTime.now()); + job.setStatus(Job.JobStatus.SUCCESS); + success(); + hasReady = false; + } else if (useStatementSet) { + List inserts = new ArrayList<>(); + for (StatementParam item : jobParam.getTrans()) { + if (item.getType().isInsert()) { + inserts.add(item.getValue()); + } + } + if (inserts.size() > 0) { + currentSql = String.join(sqlSeparator, inserts); + //zrx + job.setExecuteSql(currentSql); + // Remote mode can get the table result. + TableResult tableResult = executor.executeStatementSet(inserts); + if (tableResult.getJobClient().isPresent()) { + job.setJobId(tableResult.getJobClient().get().getJobID().toHexString()); + job.setJids(new ArrayList() { + + { + add(job.getJobId()); + } + }); + } + if (config.isUseResult()) { + // Build insert result. + IResult result = ResultBuilder + .build(SqlType.INSERT, config.getMaxRowNum(), config.isUseChangeLog(), + config.isUseAutoCancel(), executor.getTimeZone()) + .getResult(tableResult); + job.setResult(result); + } + } + job.setEndTime(LocalDateTime.now()); + job.setStatus(Job.JobStatus.SUCCESS); + success(); + hasReady = false; + } else if (useGateway) { + List inserts = new ArrayList<>(); + for (StatementParam item : jobParam.getTrans()) { + inserts.add(item.getValue()); + // Only can submit the first of insert sql, when not use statement set. 
+                } else if (useGateway) {
+                    List<String> inserts = new ArrayList<>();
+                    for (StatementParam item : jobParam.getTrans()) {
+                        inserts.add(item.getValue());
+                        // Without a statement set, only the first INSERT used to be submitted.
+                        // zrx: the break is disabled so that every statement is submitted.
+                        //break;
+                    }
+                    currentSql = String.join(sqlSeparator, inserts);
+                    //zrx
+                    job.setExecuteSql(currentSql);
+                    GatewayResult gatewayResult = submitByGateway(inserts);
+                    job.setResult(InsertResult.success(gatewayResult.getAppId()));
+                    job.setJobId(gatewayResult.getAppId());
+                    job.setJids(gatewayResult.getJids());
+                    job.setJobManagerAddress(formatAddress(gatewayResult.getWebURL()));
+                    job.setEndTime(LocalDateTime.now());
+                    job.setStatus(Job.JobStatus.SUCCESS);
+                    success();
+                    hasReady = false;
+                } else {
+                    int i = 0;
+                    for (StatementParam item : jobParam.getTrans()) {
+                        if (!hasReady) {
+                            ready();
+                        }
+                        hasReady = false;
+                        i++;
+                        currentSql = item.getValue();
+                        // zrx: each SQL statement is effectively its own job
+                        job.setExecuteSql(currentSql);
+                        FlinkInterceptorResult flinkInterceptorResult = FlinkInterceptor.build(executor,
+                                item.getValue());
+                        if (Asserts.isNotNull(flinkInterceptorResult.getTableResult())) {
+                            // Only the result of the last statement is kept.
+                            if (config.isUseResult() && i == jobParam.getTrans().size()) {
+                                IResult result = ResultBuilder
+                                        .build(item.getType(), config.getMaxRowNum(), config.isUseChangeLog(),
+                                                config.isUseAutoCancel(), executor.getTimeZone())
+                                        .getResult(flinkInterceptorResult.getTableResult());
+                                job.setResult(result);
+                            }
+                        } else {
+                            if (!flinkInterceptorResult.isNoExecute()) {
+                                TableResult tableResult = executor.executeSql(item.getValue());
+                                if (tableResult.getJobClient().isPresent()) {
+                                    job.setJobId(tableResult.getJobClient().get().getJobID().toHexString());
+                                    job.setJids(new ArrayList<String>() {
+                                        {
+                                            add(job.getJobId());
+                                        }
+                                    });
+                                }
+                                // Only the result of the last statement is kept.
+                                if (config.isUseResult() && i == jobParam.getTrans().size()) {
+                                    IResult result = ResultBuilder.build(item.getType(), config.getMaxRowNum(),
+                                            config.isUseChangeLog(), config.isUseAutoCancel(),
+                                            executor.getTimeZone()).getResult(tableResult);
+                                    job.setResult(result);
+                                }
+                            }
+                        }
+                        job.setEndTime(LocalDateTime.now());
+                        job.setStatus(Job.JobStatus.SUCCESS);
+                        success();
+                        // Without a statement set, only the first INSERT used to be submitted.
+                        // zrx: the break is disabled so that every statement runs.
+                        //break;
+                    }
+                }
+            }
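Without a statement set and without a gateway, the loop above submits each transformation statement as its own job and, when `useResult` is enabled, materializes only the last statement's result. A caller-side sketch of that contract (the statements and the `JobConfig` wiring are hypothetical):

```java
package net.srt.flink.core.job;

public class PerStatementSketch {

    // Assumes a JobConfig populated elsewhere; its construction is project-specific.
    static JobResult run(JobConfig config) {
        JobManager manager = JobManager.build(config);
        // The INSERT runs as its own job; only the trailing SELECT's
        // rows are kept in the returned JobResult.
        return manager.executeSql(
                "INSERT INTO dwd_orders SELECT * FROM ods_orders;\n"
                        + "SELECT count(*) FROM dwd_orders");
    }
}
```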
+            if (!jobParam.getExecute().isEmpty()) {
+                if (useGateway) {
+                    if (!hasReady) {
+                        ready();
+                    }
+                    List<String> sqls = new ArrayList<>();
+                    String lastSql = "";
+                    for (StatementParam item : jobParam.getExecute()) {
+                        currentSql = item.getValue();
+                        executor.executeSql(item.getValue());
+                        sqls.add(item.getValue());
+                        //zrx
+                        /*if (!useStatementSet) {
+                            break;
+                        }*/
+                    }
+                    lastSql = String.join(sqlSeparator, sqls);
+                    //zrx
+                    job.setExecuteSql(lastSql);
+                    GatewayResult gatewayResult;
+                    config.addGatewayConfig(executor.getSetConfig());
+                    if (runMode.isApplicationMode()) {
+                        gatewayResult = Gateway.build(config.getGatewayConfig()).submitJar();
+                    } else {
+                        StreamGraph streamGraph = executor.getStreamGraph();
+                        streamGraph.setJobName(config.getJobName());
+                        JobGraph jobGraph = streamGraph.getJobGraph();
+                        if (Asserts.isNotNullString(config.getSavePointPath())) {
+                            jobGraph.setSavepointRestoreSettings(
+                                    SavepointRestoreSettings.forPath(config.getSavePointPath(), true));
+                        }
+                        gatewayResult = Gateway.build(config.getGatewayConfig()).submitJobGraph(jobGraph);
+                    }
+                    job.setResult(InsertResult.success(gatewayResult.getAppId()));
+                    job.setJobId(gatewayResult.getAppId());
+                    job.setJids(gatewayResult.getJids());
+                    job.setJobManagerAddress(formatAddress(gatewayResult.getWebURL()));
+
+                    job.setEndTime(LocalDateTime.now());
+                    job.setStatus(Job.JobStatus.SUCCESS);
+                    success();
+                } else {
+                    if (!hasReady) {
+                        ready();
+                    }
+                    String lastSql = "";
+                    List<String> sqls = new ArrayList<>();
+                    for (StatementParam item : jobParam.getExecute()) {
+                        currentSql = item.getValue();
+                        executor.executeSql(item.getValue());
+                        sqls.add(item.getValue());
+                        //zrx
+                        /*if (!useStatementSet) {
+                            break;
+                        }*/
+                    }
+                    lastSql = String.join(sqlSeparator, sqls);
+                    //zrx
+                    job.setExecuteSql(lastSql);
+                    JobClient jobClient = executor.executeAsync(config.getJobName());
+                    if (Asserts.isNotNull(jobClient)) {
+                        job.setJobId(jobClient.getJobID().toHexString());
+                        job.setJids(new ArrayList<String>() {
+                            {
+                                add(job.getJobId());
+                            }
+                        });
+                    }
+                    if (config.isUseResult()) {
+                        IResult result = ResultBuilder
+                                .build(SqlType.EXECUTE, config.getMaxRowNum(), config.isUseChangeLog(),
+                                        config.isUseAutoCancel(), executor.getTimeZone())
+                                .getResult(null);
+                        job.setResult(result);
+                    }
+
+                    job.setEndTime(LocalDateTime.now());
+                    job.setStatus(Job.JobStatus.SUCCESS);
+                    success();
+                }
+            }
+            /*job.setEndTime(LocalDateTime.now());
+            job.setStatus(Job.JobStatus.SUCCESS);
+            success();*/
+        } catch (Exception e) {
+            String error = LogUtil.getError("Exception in executing FlinkSQL:\n" + currentSql, e);
+            job.setEndTime(LocalDateTime.now());
+            job.setStatus(Job.JobStatus.FAILED);
+            job.setError(error);
+            process.error(error);
+            failed();
+        } finally {
+            close();
+        }
+        return job.getJobResult();
+    }
+
+    private GatewayResult submitByGateway(List<String> inserts) {
+        GatewayResult gatewayResult = null;
+
+        // Gateway mode needs the gateway config built first, including the Flink configuration.
+        config.addGatewayConfig(executor.getSetConfig());
+
+        if (runMode.isApplicationMode()) {
+            // Application mode submits the dlink-app.jar that lives in HDFS or in the image.
+            gatewayResult = Gateway.build(config.getGatewayConfig()).submitJar();
+        } else {
+            JobGraph jobGraph = executor.getJobGraphFromInserts(inserts);
+            // Per-job mode needs the savepoint restore path set when recovering from a savepoint.
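The per-job path below attaches the savepoint path to the `JobGraph` before submission; the `true` argument (`allowNonRestoredState`) lets the job start even if parts of the savepoint state no longer map to an operator. The Flink call in isolation (the path is hypothetical):

```java
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;

public class SavepointRestoreSketch {

    // Mirrors what JobManager does when config.getSavePointPath() is set.
    static void attachRestorePath(JobGraph jobGraph) {
        jobGraph.setSavepointRestoreSettings(
                SavepointRestoreSettings.forPath("hdfs:///flink/savepoints/savepoint-0a1b2c", true));
    }
}
```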
+            if (Asserts.isNotNullString(config.getSavePointPath())) {
+                jobGraph.setSavepointRestoreSettings(SavepointRestoreSettings.forPath(config.getSavePointPath(), true));
+            }
+            // Per-job mode submits the job graph.
+            gatewayResult = Gateway.build(config.getGatewayConfig()).submitJobGraph(jobGraph);
+        }
+        return gatewayResult;
+    }
+
+    private String formatAddress(String webURL) {
+        if (Asserts.isNotNullString(webURL)) {
+            return webURL.replaceAll("http://", "");
+        } else {
+            return "";
+        }
+    }
+
+    public IResult executeDDL(String statement) {
+        String[] statements = SqlUtil.getStatements(statement, sqlSeparator);
+        try {
+            IResult result = null;
+            for (String item : statements) {
+                String newStatement = executor.pretreatStatement(item);
+                if (newStatement.trim().isEmpty()) {
+                    continue;
+                }
+                SqlType operationType = Operations.getOperationType(newStatement);
+                if (SqlType.INSERT == operationType || SqlType.SELECT == operationType) {
+                    continue;
+                }
+                LocalDateTime startTime = LocalDateTime.now();
+                TableResult tableResult = executor.executeSql(newStatement);
+                result = ResultBuilder.build(operationType, config.getMaxRowNum(), false, false, executor.getTimeZone())
+                        .getResult(tableResult);
+                result.setStartTime(startTime);
+            }
+            return result;
+        } catch (Exception e) {
+            logger.error(LogUtil.getError(e));
+        }
+        return new ErrorResult();
+    }
+
+    public static SelectResult getJobData(String jobId) {
+        return ResultPool.get(jobId);
+    }
+
+    public static SessionInfo createSession(String session, SessionConfig sessionConfig, String createUser) {
+        if (SessionPool.exist(session)) {
+            return SessionPool.getInfo(session);
+        }
+        Executor sessionExecutor = null;
+        if (sessionConfig.isUseRemote()) {
+            sessionExecutor = Executor.buildRemoteExecutor(EnvironmentSetting.build(sessionConfig.getAddress()),
+                    ExecutorSetting.DEFAULT);
+        } else {
+            sessionExecutor = Executor.buildLocalExecutor(sessionConfig.getExecutorSetting());
+        }
+        ExecutorEntity executorEntity = new ExecutorEntity(session, sessionConfig, createUser, LocalDateTime.now(),
+                sessionExecutor);
+        SessionPool.push(executorEntity);
+        return SessionInfo.build(executorEntity);
+    }
+
+    public static List<SessionInfo> listSession(String createUser) {
+        return SessionPool.filter(createUser);
+    }
+
+    public ExplainResult explainSql(String statement) {
+        return Explainer.build(executor, useStatementSet, sqlSeparator)
+                .initialize(this, config, statement).explainSql(statement);
+    }
+
+    public ObjectNode getStreamGraph(String statement) {
+        return Explainer.build(executor, useStatementSet, sqlSeparator).initialize(this, config, statement).getStreamGraph(statement);
+    }
+
+    public String getJobPlanJson(String statement) {
+        return Explainer.build(executor, useStatementSet, sqlSeparator).initialize(this, config, statement).getJobPlanInfo(statement).getJsonPlan();
+    }
+
+    public boolean cancel(String jobId) {
+        if (useGateway && !useRestAPI) {
+            config.getGatewayConfig().setFlinkConfig(FlinkConfig.build(jobId, ActionType.CANCEL.getValue(),
+                    null, null));
+            Gateway.build(config.getGatewayConfig()).savepointJob();
+            return true;
+        } else {
+            try {
+                return FlinkAPI.build(config.getAddress()).stop(jobId);
+            } catch (Exception e) {
+                logger.error("Cluster not found while stopping the job: " + e);
+            }
+            return false;
+        }
+    }
+
+    public SavePointResult savepoint(String jobId, String savePointType, String savePoint) {
+        if (useGateway && !useRestAPI) {
+            config.getGatewayConfig().setFlinkConfig(FlinkConfig.build(jobId, ActionType.SAVEPOINT.getValue(),
+                    savePointType, null));
+            return
Gateway.build(config.getGatewayConfig()).savepointJob(savePoint); + } else { + return FlinkAPI.build(config.getAddress()).savepoints(jobId, savePointType); + } + } + + public JobResult executeJar() { + ProcessEntity process = ProcessContextHolder.getProcess(); + Job job = Job.init(runMode, config, executorSetting, executor, null, useGateway); + job.setNodeRecordId(process.getNodeRecordId()); + JobContextHolder.setJob(job); + ready(); + try { + GatewayResult gatewayResult = Gateway.build(config.getGatewayConfig()).submitJar(); + job.setResult(InsertResult.success(gatewayResult.getAppId())); + job.setJobId(gatewayResult.getAppId()); + job.setJids(gatewayResult.getJids()); + job.setJobManagerAddress(formatAddress(gatewayResult.getWebURL())); + job.setEndTime(LocalDateTime.now()); + job.setStatus(Job.JobStatus.SUCCESS); + success(); + } catch (Exception e) { + String error = LogUtil.getError( + "Exception in executing Jar:\n" + config.getGatewayConfig().getAppConfig().getUserJarPath(), e); + job.setEndTime(LocalDateTime.now()); + job.setStatus(Job.JobStatus.FAILED); + job.setError(error); + failed(); + process.error(error); + } finally { + close(); + } + return job.getJobResult(); + } + + public static TestResult testGateway(GatewayConfig gatewayConfig) { + return Gateway.build(gatewayConfig).test(); + } + + public String exportSql(String sql) { + String statement = executor.pretreatStatement(sql); + StringBuilder sb = new StringBuilder(); + if (Asserts.isNotNullString(config.getJobName())) { + sb.append("set " + PipelineOptions.NAME.key() + " = " + config.getJobName() + ";\r\n"); + } + if (Asserts.isNotNull(config.getParallelism())) { + sb.append("set " + CoreOptions.DEFAULT_PARALLELISM.key() + " = " + config.getParallelism() + ";\r\n"); + } + if (Asserts.isNotNull(config.getCheckpoint())) { + sb.append("set " + ExecutionCheckpointingOptions.CHECKPOINTING_INTERVAL.key() + " = " + + config.getCheckpoint() + ";\r\n"); + } + if (Asserts.isNotNullString(config.getSavePointPath())) { + sb.append("set " + SavepointConfigOptions.SAVEPOINT_PATH + " = " + config.getSavePointPath() + ";\r\n"); + } + if (Asserts.isNotNull(config.getGatewayConfig()) + && Asserts.isNotNull(config.getGatewayConfig().getFlinkConfig().getConfiguration())) { + for (Map.Entry entry : config.getGatewayConfig().getFlinkConfig().getConfiguration() + .entrySet()) { + sb.append("set " + entry.getKey() + " = " + entry.getValue() + ";\r\n"); + } + } + + switch (GatewayType.get(config.getType())) { + case YARN_PER_JOB: + case YARN_APPLICATION: + sb.append("set " + DeploymentOptions.TARGET.key() + " = " + + GatewayType.get(config.getType()).getLongValue() + ";\r\n"); + if (Asserts.isNotNull(config.getGatewayConfig())) { + sb.append("set " + YarnConfigOptions.PROVIDED_LIB_DIRS.key() + " = " + + Collections.singletonList(config.getGatewayConfig().getClusterConfig().getFlinkLibPath()) + + ";\r\n"); + } + if (Asserts.isNotNull(config.getGatewayConfig()) + && Asserts.isNotNullString(config.getGatewayConfig().getFlinkConfig().getJobName())) { + sb.append("set " + YarnConfigOptions.APPLICATION_NAME.key() + " = " + + config.getGatewayConfig().getFlinkConfig().getJobName() + ";\r\n"); + } + break; + default: + } + sb.append(statement); + return sb.toString(); + } + + public Executor getExecutor() { + return executor; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobParam.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobParam.java new 
file mode 100644 index 0000000..f09fa44 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobParam.java @@ -0,0 +1,113 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + +import net.srt.flink.function.data.model.UDF; + +import java.util.ArrayList; +import java.util.List; + +/** + * JobParam + * + * @author zrx + * @since 2021/11/16 + */ +public class JobParam { + + private List<String> statements; + private List<StatementParam> ddl; + private List<StatementParam> trans; + private List<StatementParam> execute; + private List<UDF> udfList; + + public JobParam(List<StatementParam> ddl, List<StatementParam> trans) { + this.ddl = ddl; + this.trans = trans; + } + + public JobParam( + List<String> statements, + List<StatementParam> ddl, + List<StatementParam> trans, + List<StatementParam> execute) { + this.statements = statements; + this.ddl = ddl; + this.trans = trans; + this.execute = execute; + } + + public JobParam( + List<String> statements, + List<StatementParam> ddl, + List<StatementParam> trans, + List<StatementParam> execute, + List<UDF> udfList) { + this.statements = statements; + this.ddl = ddl; + this.trans = trans; + this.execute = execute; + this.udfList = udfList; + } + + public List<String> getStatements() { + return statements; + } + + public void setStatements(List<String> statements) { + this.statements = statements; + } + + public List<StatementParam> getDdl() { + return ddl; + } + + public void setDdl(List<StatementParam> ddl) { + this.ddl = ddl; + } + + public List<StatementParam> getTrans() { + return trans; + } + + public List<String> getTransStatement() { + List<String> statementList = new ArrayList<>(); + for (StatementParam statementParam : trans) { + statementList.add(statementParam.getValue()); + } + return statementList; + } + + public void setTrans(List<StatementParam> trans) { + this.trans = trans; + } + + public List<StatementParam> getExecute() { + return execute; + } + + public void setExecute(List<StatementParam> execute) { + this.execute = execute; + } + + public List<UDF> getUdfList() { + return udfList; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobResult.java new file mode 100644 index 0000000..7a3c7d4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/JobResult.java @@ -0,0 +1,89 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + +import com.fasterxml.jackson.annotation.JsonFormat; +import com.fasterxml.jackson.databind.annotation.JsonDeserialize; +import com.fasterxml.jackson.databind.annotation.JsonSerialize; +import com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer; +import com.fasterxml.jackson.datatype.jsr310.ser.LocalDateTimeSerializer; +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.result.IResult; + +import java.time.LocalDateTime; + +/** + * JobResult + * + * @author zrx + * @since 2021/6/29 23:56 + */ +@Getter +@Setter +public class JobResult { + private Integer id; + private JobConfig jobConfig; + private String jobManagerAddress; + private Job.JobStatus status; + private boolean success; + private String statement; + private String jobId; + private Integer jobInstanceId; + private String error; + private IResult result; + @JsonDeserialize(using = LocalDateTimeDeserializer.class) + @JsonSerialize(using = LocalDateTimeSerializer.class) + @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss") + private LocalDateTime startTime; + @JsonDeserialize(using = LocalDateTimeDeserializer.class) + @JsonSerialize(using = LocalDateTimeSerializer.class) + @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss") + private LocalDateTime endTime; + + private String log; + + public JobResult() { + } + + public JobResult(Integer id, Integer jobInstanceId, JobConfig jobConfig, String jobManagerAddress, Job.JobStatus status, + String statement, String jobId, String error, IResult result, LocalDateTime startTime, LocalDateTime endTime) { + this.id = id; + this.jobInstanceId = jobInstanceId; + this.jobConfig = jobConfig; + this.jobManagerAddress = jobManagerAddress; + this.status = status; + this.success = status.equals(Job.JobStatus.SUCCESS); + this.statement = statement; + this.jobId = jobId; + this.error = error; + this.result = result; + this.startTime = startTime; + this.endTime = endTime; + } + + public void setStartTimeNow() { + this.setStartTime(LocalDateTime.now()); + } + + public void setEndTimeNow() { + this.setEndTime(LocalDateTime.now()); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/RunTime.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/RunTime.java new file mode 100644 index 0000000..59dcd9a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/RunTime.java @@ -0,0 +1,39 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + +/** + * RunTime + * + * @author zrx + * @since 2021/6/27 18:06 + */ +public abstract class RunTime { + + abstract boolean init(); + + abstract boolean ready(); + + abstract boolean success(); + + abstract boolean failed(); + + abstract boolean close(); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/StatementParam.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/StatementParam.java new file mode 100644 index 0000000..b69f3d0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/job/StatementParam.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.job; + + +import net.srt.flink.executor.parser.SqlType; + +/** + * StatementParam + * + * @author zrx + * @since 2021/11/16 + */ +public class StatementParam { + private String value; + private SqlType type; + + public StatementParam(String value, SqlType type) { + this.value = value; + this.type = type; + } + + public String getValue() { + return value; + } + + public void setValue(String value) { + this.value = value; + } + + public SqlType getType() { + return type; + } + + public void setType(SqlType type) { + this.type = type; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/plus/SqlResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/plus/SqlResult.java new file mode 100644 index 0000000..c6de83e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/plus/SqlResult.java @@ -0,0 +1,69 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.plus; + +import org.apache.flink.table.api.TableResult; + +/** + * SqlResult + * + * @author zrx + * @since 2021/6/22 + **/ +public class SqlResult { + private TableResult tableResult; + private boolean isSuccess = true; + private String errorMsg; + + public static final SqlResult NULL = new SqlResult(false, "No valid SQL was detected"); + + public SqlResult(TableResult tableResult) { + this.tableResult = tableResult; + } + + public SqlResult(boolean isSuccess, String errorMsg) { + this.isSuccess = isSuccess; + this.errorMsg = errorMsg; + } + + public TableResult getTableResult() { + return tableResult; + } + + public void setTableResult(TableResult tableResult) { + this.tableResult = tableResult; + } + + public boolean isSuccess() { + return isSuccess; + } + + public void setSuccess(boolean success) { + isSuccess = success; + } + + public String getErrorMsg() { + return errorMsg; + } + + public void setErrorMsg(String errorMsg) { + this.errorMsg = errorMsg; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/DDLResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/DDLResult.java new file mode 100644 index 0000000..556b371 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/DDLResult.java @@ -0,0 +1,63 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.core.result; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.result.AbstractResult; +import net.srt.flink.common.result.IResult; + +import java.time.LocalDateTime; +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * DDLResult + * + * @author zrx + * @since 2021/6/29 22:06 + */ +@Setter +@Getter +public class DDLResult extends AbstractResult implements IResult { + + private List<Map<String, Object>> rowData; + private Integer total; + private Set<String> columns; + + public DDLResult(boolean success) { + this.success = success; + this.endTime = LocalDateTime.now(); + } + + public DDLResult(List<Map<String, Object>> rowData, Integer total, Set<String> columns) { + this.rowData = rowData; + this.total = total; + this.columns = columns; + this.success = true; + this.endTime = LocalDateTime.now(); + } + + @Override + public String getJobId() { + return null; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/DDLResultBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/DDLResultBuilder.java new file mode 100644 index 0000000..f8f6cfa --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/DDLResultBuilder.java @@ -0,0 +1,36 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +import net.srt.flink.common.result.IResult; +import org.apache.flink.table.api.TableResult; + +/** + * DDLResultBuilder + * + * @author zrx + * @since 2021/6/29 22:43 + */ +public class DDLResultBuilder implements ResultBuilder { + @Override + public IResult getResult(TableResult tableResult) { + return new DDLResult(true); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ErrorResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ErrorResult.java new file mode 100644 index 0000000..5da417a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ErrorResult.java @@ -0,0 +1,44 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +import net.srt.flink.common.result.AbstractResult; +import net.srt.flink.common.result.IResult; + +import java.time.LocalDateTime; + +/** + * ErrorResult + * + * @author zrx + * @since 2021/6/29 22:57 + */ +public class ErrorResult extends AbstractResult implements IResult { + + public ErrorResult() { + this.success = false; + this.endTime = LocalDateTime.now(); + } + + @Override + public String getJobId() { + return null; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/InsertResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/InsertResult.java new file mode 100644 index 0000000..ebf8662 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/InsertResult.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.result.AbstractResult; +import net.srt.flink.common.result.IResult; + +import java.time.LocalDateTime; + +/** + * InsertResult + * + * @author zrx + * @since 2021/5/25 19:08 + **/ +@Getter +@Setter +public class InsertResult extends AbstractResult implements IResult { + + private String jobID; + + public InsertResult(String jobID, boolean success) { + this.jobID = jobID; + this.success = success; + this.endTime = LocalDateTime.now(); + } + + public static InsertResult success(String jobID) { + return new InsertResult(jobID, true); + } + + @Override + public String getJobId() { + return jobID; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/InsertResultBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/InsertResultBuilder.java new file mode 100644 index 0000000..ec30c66 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/InsertResultBuilder.java @@ -0,0 +1,42 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +import net.srt.flink.common.result.IResult; +import org.apache.flink.table.api.TableResult; + +/** + * InsertBuilder + * + * @author zrx + * @since 2021/6/29 22:23 + */ +public class InsertResultBuilder implements ResultBuilder { + + @Override + public IResult getResult(TableResult tableResult) { + if (tableResult.getJobClient().isPresent()) { + String jobId = tableResult.getJobClient().get().getJobID().toHexString(); + return new InsertResult(jobId, true); + } else { + return new InsertResult(null, false); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/JobSubmitResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/JobSubmitResult.java new file mode 100644 index 0000000..12a4e7c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/JobSubmitResult.java @@ -0,0 +1,29 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +/** + * JobSubmitRecord + * + * @author zrx + * @since 2021/5/25 15:32 + **/ +public class JobSubmitResult { +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultBuilder.java new file mode 100644 index 0000000..e339255 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultBuilder.java @@ -0,0 +1,50 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +import net.srt.flink.common.result.IResult; +import net.srt.flink.executor.parser.SqlType; +import org.apache.flink.table.api.TableResult; + +/** + * ResultBuilder + * + * @author zrx + * @since 2021/5/25 15:59 + **/ +public interface ResultBuilder { + + static ResultBuilder build(SqlType operationType, Integer maxRowNum, boolean isChangeLog, boolean isAutoCancel, String timeZone) { + switch (operationType) { + case SELECT: + return new SelectResultBuilder(maxRowNum, isChangeLog, isAutoCancel, timeZone); + case SHOW: + case DESC: + case DESCRIBE: + return new ShowResultBuilder(); + case INSERT: + return new InsertResultBuilder(); + default: + return new DDLResultBuilder(); + } + } + + IResult getResult(TableResult tableResult); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultPool.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultPool.java new file mode 100644 index 0000000..1dea538 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultPool.java @@ -0,0 +1,66 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.core.result; + +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +/** + * ResultPool + * + * @author zrx + * @since 2021/7/1 22:20 + */ +public final class ResultPool { + + private ResultPool() { + } + + private static final Map<String, SelectResult> results = new ConcurrentHashMap<>(); + + public static boolean containsKey(String key) { + return results.containsKey(key); + } + + public static void put(SelectResult result) { + results.put(result.getJobId(), result); + } + + public static SelectResult get(String key) { + return results.getOrDefault(key, SelectResult.buildDestruction(key)); + } + + public static SelectResult realGet(String key) { + return results.get(key); + } + + public static boolean remove(String key) { + if (results.containsKey(key)) { + results.remove(key); + return true; + } + return false; + } + + public static void clear() { + results.clear(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultRunnable.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultRunnable.java new file mode 100644 index 0000000..9b30f9b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ResultRunnable.java @@ -0,0 +1,159 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.core.result; + +import com.google.common.collect.Streams; +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.client.utils.FlinkUtil; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.executor.constant.FlinkConstant; +import org.apache.flink.core.execution.JobClient; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.types.Row; +import org.apache.flink.types.RowKind; + +import java.time.Instant; +import java.time.ZoneId; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Map; + +/** + * ResultRunnable + * + * @author zrx + * @since 2021/7/1 22:50 + */ +@Slf4j +public class ResultRunnable implements Runnable { + + private static final String nullColumn = ""; + private final TableResult tableResult; + private final Integer maxRowNum; + private final boolean isChangeLog; + private final boolean isAutoCancel; + private final String timeZone; + + public ResultRunnable(TableResult tableResult, Integer maxRowNum, boolean isChangeLog, boolean isAutoCancel, + String timeZone) { + this.tableResult = tableResult; + this.maxRowNum = maxRowNum; + this.isChangeLog = isChangeLog; + this.isAutoCancel = isAutoCancel; + this.timeZone = timeZone; + } + + @Override + public void run() { + try { + getResult(); + } catch (Exception e) { + // Nothing to do + } + } + + public void syncRun() { + try { + getResult(); + } catch (Exception e) { + // Nothing to do + } + } + + private void getResult() { + tableResult.getJobClient().ifPresent(jobClient -> { + String jobId = jobClient.getJobID().toHexString(); + if (!ResultPool.containsKey(jobId)) { + ResultPool.put(new SelectResult(jobId, new ArrayList<>(), new LinkedHashSet<>())); + } + + SelectResult selectResult = ResultPool.get(jobId); + try { + // zrx: sleep 5s before fetching to avoid missing data; e.g. outside a statement set, when an INSERT is immediately followed by a SELECT, the SELECT may not see any data yet + //Thread.sleep(5000); + if (isChangeLog) { + catchChangLog(selectResult); + } else { + catchData(selectResult); + } + selectResult.setEnd(true); + } catch (Exception e) { + selectResult.setSuccess(false); + selectResult.setError(LogUtil.getError(e)); + selectResult.setEnd(true); + log.error(e.getMessage(), e); + } + }); + } + + private void catchChangLog(SelectResult selectResult) { + List<Map<String, Object>> rows = selectResult.getRowData(); + List<String> columns = FlinkUtil.catchColumn(tableResult); + + columns.add(0, FlinkConstant.OP); + selectResult.setColumns(new LinkedHashSet<>(columns)); + Streams.stream(tableResult.collect()).limit(maxRowNum).forEach(row -> { + Map<String, Object> map = getFieldMap(columns.subList(1, columns.size()), row); + map.put(FlinkConstant.OP, row.getKind().name() + ":" + row.getKind().shortString()); + rows.add(map); + }); + + if (isAutoCancel) { + tableResult.getJobClient().ifPresent(JobClient::cancel); + } + } + + private void catchData(SelectResult selectResult) { + List<Map<String, Object>> rows = selectResult.getRowData(); + List<String> columns = FlinkUtil.catchColumn(tableResult); + + selectResult.setColumns(new LinkedHashSet<>(columns)); + Streams.stream(tableResult.collect()).limit(maxRowNum).forEach(row -> { + Map<String, Object> map = getFieldMap(columns, row); + if (RowKind.UPDATE_BEFORE == row.getKind() || RowKind.DELETE == row.getKind()) { + rows.remove(map); + } else { + rows.add(map); + } + }); + + if (isAutoCancel) { + tableResult.getJobClient().ifPresent(JobClient::cancel); + } + } + + private Map<String, Object> getFieldMap(List<String> columns, Row row) { + Map<String, Object> map = new LinkedHashMap<>(); + for (int i = 0; i < row.getArity(); ++i) { + Object
field = row.getField(i); + String column = columns.get(i); + if (field == null) { + map.put(column, nullColumn); + } else if (field instanceof Instant) { + map.put(column, ((Instant) field).atZone(ZoneId.of(timeZone)).toLocalDateTime().toString()); + } else { + map.put(column, field); + } + } + return map; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/RunResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/RunResult.java new file mode 100644 index 0000000..961aa36 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/RunResult.java @@ -0,0 +1,163 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.core.result; + +import net.srt.flink.common.result.IResult; +import net.srt.flink.executor.executor.ExecutorSetting; + +import java.time.LocalDateTime; + +/** + * RunResult + * + * @author zrx + * @since 2021/5/25 16:46 + **/ +public class RunResult { + private String sessionId; + private String jobId; + private String jobName; + private String statement; + private String flinkHost; + private Integer flinkPort; + private boolean success; + private long time; + private LocalDateTime finishDate; + private String msg; + private String error; + private IResult result; + private ExecutorSetting setting; + + public RunResult() { + } + + public RunResult(String sessionId, String statement, String flinkHost, Integer flinkPort, ExecutorSetting setting, String jobName) { + this.sessionId = sessionId; + this.statement = statement; + this.flinkHost = flinkHost; + this.flinkPort = flinkPort; + this.setting = setting; + this.jobName = jobName; + } + + public String getJobName() { + return jobName; + } + + public void setJobName(String jobName) { + this.jobName = jobName; + } + + public String getJobId() { + return jobId; + } + + public void setJobId(String jobId) { + this.jobId = jobId; + } + + public ExecutorSetting getSetting() { + return setting; + } + + public void setSetting(ExecutorSetting setting) { + this.setting = setting; + } + + public String getSessionId() { + return sessionId; + } + + public void setSessionId(String sessionId) { + this.sessionId = sessionId; + } + + public String getStatement() { + return statement; + } + + public void setStatement(String statement) { + this.statement = statement; + } + + public boolean isSuccess() { + return success; + } + + public void setSuccess(boolean success) { + this.success = success; + } + + public String getError() { + return error; + } + + public void setError(String error) { + this.error = error; + } + + public IResult getResult() { + return result; + } + + public void setResult(IResult result) { + 
this.result = result; + } + + public String getFlinkHost() { + return flinkHost; + } + + public void setFlinkHost(String flinkHost) { + this.flinkHost = flinkHost; + } + + public long getTime() { + return time; + } + + public void setTime(long time) { + this.time = time; + } + + public LocalDateTime getFinishDate() { + return finishDate; + } + + public void setFinishDate(LocalDateTime finishDate) { + this.finishDate = finishDate; + } + + public String getMsg() { + return msg; + } + + public void setMsg(String msg) { + this.msg = msg; + } + + public Integer getFlinkPort() { + return flinkPort; + } + + public void setFlinkPort(Integer flinkPort) { + this.flinkPort = flinkPort; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResult.java new file mode 100644 index 0000000..00b7386 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResult.java @@ -0,0 +1,103 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
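For orientation, a minimal sketch of how a `ResultRunnable` is typically wired to a Flink `TableResult`. This is not part of the commit; the local table environment and the `score` table are illustrative assumptions:

```java
import net.srt.flink.core.result.ResultPool;
import net.srt.flink.core.result.ResultRunnable;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableResult;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class ResultRunnableSketch {
    public static void main(String[] args) {
        // Assumption: a `score` table has been registered elsewhere.
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(
                StreamExecutionEnvironment.getExecutionEnvironment());
        TableResult tableResult = tableEnv.executeSql("SELECT sid, cls, score FROM score");
        // maxRowNum=100, isChangeLog=false, isAutoCancel=true
        new Thread(new ResultRunnable(tableResult, 100, false, true, "Asia/Shanghai")).start();
        // The buffered rows can later be read back from the ResultPool by job id.
        tableResult.getJobClient().ifPresent(client ->
                System.out.println(ResultPool.get(client.getJobID().toHexString()).getRowData()));
    }
}
```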
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResult.java
new file mode 100644
index 0000000..00b7386
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResult.java
@@ -0,0 +1,103 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.result;
+
+import lombok.AllArgsConstructor;
+import lombok.Getter;
+import lombok.NoArgsConstructor;
+import lombok.Setter;
+import net.srt.flink.common.result.AbstractResult;
+import net.srt.flink.common.result.IResult;
+
+import java.time.LocalDateTime;
+import java.util.ArrayList;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * SelectResult
+ *
+ * @author zrx
+ * @since 2021/5/25 16:01
+ **/
+@Setter
+@Getter
+@NoArgsConstructor
+@AllArgsConstructor
+public class SelectResult extends AbstractResult implements IResult {
+
+	private String jobId;
+	private List<Map<String, Object>> rowData;
+	private Integer total;
+	private Integer currentCount;
+	private LinkedHashSet<String> columns;
+	private boolean isDestroyed;
+	private boolean end = false;
+
+	public SelectResult(List<Map<String, Object>> rowData, Integer total, Integer currentCount, LinkedHashSet<String> columns,
+						String jobId, boolean success) {
+		this.rowData = rowData;
+		this.total = total;
+		this.currentCount = currentCount;
+		this.columns = columns;
+		this.jobId = jobId;
+		this.success = success;
+		//this.endTime = LocalDateTime.now();
+		this.isDestroyed = false;
+	}
+
+	public SelectResult(String jobId, List<Map<String, Object>> rowData, LinkedHashSet<String> columns) {
+		this.jobId = jobId;
+		this.rowData = rowData;
+		this.total = rowData.size();
+		this.columns = columns;
+		this.success = true;
+		this.isDestroyed = false;
+	}
+
+	public SelectResult(String jobId, boolean isDestroyed, boolean success) {
+		this.jobId = jobId;
+		this.isDestroyed = isDestroyed;
+		this.success = success;
+		this.endTime = LocalDateTime.now();
+	}
+
+	@Override
+	public String getJobId() {
+		return jobId;
+	}
+
+	public static SelectResult buildDestruction(String jobID) {
+		return new SelectResult(jobID, true, false);
+	}
+
+	public static SelectResult buildSuccess(String jobID) {
+		SelectResult selectResult = new SelectResult(jobID, false, true);
+		selectResult.setRowData(new ArrayList<>());
+		selectResult.setColumns(new LinkedHashSet<>());
+		return selectResult;
+	}
+
+	public static SelectResult buildFailed() {
+		return new SelectResult(null, false, false);
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResultBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResultBuilder.java
new file mode 100644
index 0000000..1971bd6
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SelectResultBuilder.java
@@ -0,0 +1,66 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.result;
+
+import net.srt.flink.common.result.IResult;
+import org.apache.flink.table.api.TableResult;
+
+import java.util.ArrayList;
+import java.util.LinkedHashSet;
+
+/**
+ * SelectBuilder
+ *
+ * @author zrx
+ * @since 2021/5/25 16:03
+ **/
+public class SelectResultBuilder implements ResultBuilder {
+
+	private final Integer maxRowNum;
+	private final boolean isChangeLog;
+	private final boolean isAutoCancel;
+	private final String timeZone;
+
+	public SelectResultBuilder(Integer maxRowNum, boolean isChangeLog, boolean isAutoCancel, String timeZone) {
+		this.maxRowNum = maxRowNum;
+		this.isChangeLog = isChangeLog;
+		this.isAutoCancel = isAutoCancel;
+		this.timeZone = timeZone;
+	}
+
+	@Override
+	public IResult getResult(TableResult tableResult) {
+		// clear previous results
+		//ResultPool.clear();
+		if (tableResult.getJobClient().isPresent()) {
+			String jobId = tableResult.getJobClient().get().getJobID().toHexString();
+			if (!ResultPool.containsKey(jobId)) {
+				ResultPool.put(new SelectResult(jobId, new ArrayList<>(), new LinkedHashSet<>()));
+			}
+			ResultRunnable runnable = new ResultRunnable(tableResult, maxRowNum, isChangeLog, isAutoCancel, timeZone);
+			Thread thread = new Thread(runnable, jobId);
+			thread.start();
+			return SelectResult.buildSuccess(jobId);
+		} else {
+			return SelectResult.buildFailed();
+		}
+	}
+
+}
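A minimal usage sketch for the builder above (not part of the commit; `tableResult` is assumed to come from a running job):

```java
import net.srt.flink.common.result.IResult;
import net.srt.flink.core.result.SelectResultBuilder;
import org.apache.flink.table.api.TableResult;

public class SelectResultBuilderSketch {
    // Buffer at most 500 changelog rows and cancel the job once they are collected.
    public static IResult collect(TableResult tableResult) {
        SelectResultBuilder builder = new SelectResultBuilder(500, true, true, "UTC");
        // Spawns a ResultRunnable thread named after the job id and returns immediately.
        return builder.getResult(tableResult);
    }
}
```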
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ShowResultBuilder.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ShowResultBuilder.java
new file mode 100644
index 0000000..41a842b
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/ShowResultBuilder.java
@@ -0,0 +1,69 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.result;
+
+import net.srt.flink.client.utils.FlinkUtil;
+import net.srt.flink.common.result.IResult;
+import org.apache.flink.table.api.TableResult;
+import org.apache.flink.types.Row;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * ShowResultBuilder
+ *
+ * @author zrx
+ * @since 2021/7/1 23:57
+ */
+public class ShowResultBuilder implements ResultBuilder {
+
+	private String nullColumn = "";
+
+	public ShowResultBuilder() {
+	}
+
+	@Override
+	public IResult getResult(TableResult tableResult) {
+		List<String> columns = FlinkUtil.catchColumn(tableResult);
+		Set<String> column = new LinkedHashSet<>(columns);
+		List<Map<String, Object>> rows = new ArrayList<>();
+		Iterator<Row> it = tableResult.collect();
+		while (it.hasNext()) {
+			Map<String, Object> map = new LinkedHashMap<>();
+			Row row = it.next();
+			for (int i = 0; i < row.getArity(); ++i) {
+				Object field = row.getField(i);
+				if (field == null) {
+					map.put(columns.get(i), nullColumn);
+				} else {
+					map.put(columns.get(i), field.toString());
+				}
+			}
+			rows.add(map);
+		}
+		return new DDLResult(rows, rows.size(), column);
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SubmitResult.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SubmitResult.java
new file mode 100644
index 0000000..f2f71ec
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/result/SubmitResult.java
@@ -0,0 +1,152 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.result;
+
+import net.srt.flink.common.result.IResult;
+
+import java.time.LocalDateTime;
+import java.util.List;
+
+/**
+ * SubmitResult
+ *
+ * @author zrx
+ * @since 2021/5/25 19:04
+ **/
+public class SubmitResult {
+	private String sessionId;
+	private List<String> statements;
+	private String flinkHost;
+	private String jobId;
+	private String jobName;
+	private boolean success;
+	private long time;
+	private LocalDateTime finishDate;
+	private String msg;
+	private String error;
+	private IResult result;
+
+	public SubmitResult() {
+	}
+
+	public static SubmitResult error(String error) {
+		return new SubmitResult(false, error);
+	}
+
+	public SubmitResult(boolean success, String error) {
+		this.success = success;
+		this.error = error;
+	}
+
+	public SubmitResult(String sessionId, List<String> statements, String flinkHost, String jobName) {
+		this.sessionId = sessionId;
+		this.statements = statements;
+		this.flinkHost = flinkHost;
+		this.jobName = jobName;
+	}
+
+	public String getSessionId() {
+		return sessionId;
+	}
+
+	public void setSessionId(String sessionId) {
+		this.sessionId = sessionId;
+	}
+
+	public List<String> getStatements() {
+		return statements;
+	}
+
+	public void setStatements(List<String> statements) {
+		this.statements = statements;
+	}
+
+	public String getFlinkHost() {
+		return flinkHost;
+	}
+
+	public void setFlinkHost(String flinkHost) {
+		this.flinkHost = flinkHost;
+	}
+
+	public boolean isSuccess() {
+		return success;
+	}
+
+	public void setSuccess(boolean success) {
+		this.success = success;
+	}
+
+	public long getTime() {
+		return time;
+	}
+
+	public void setTime(long time) {
+		this.time = time;
+	}
+
+	public LocalDateTime getFinishDate() {
+		return finishDate;
+	}
+
+	public void setFinishDate(LocalDateTime finishDate) {
+		this.finishDate = finishDate;
+	}
+
+	public String getMsg() {
+		return msg;
+	}
+
+	public void setMsg(String msg) {
+		this.msg = msg;
+	}
+
+	public String getError() {
+		return error;
+	}
+
+	public void setError(String error) {
+		this.error = error;
+	}
+
+	public IResult getResult() {
+		return result;
+	}
+
+	public void setResult(IResult result) {
+		this.result = result;
+	}
+
+	public String getJobId() {
+		return jobId;
+	}
+
+	public void setJobId(String jobId) {
+		this.jobId = jobId;
+	}
+
+	public String getJobName() {
+		return jobName;
+	}
+
+	public void setJobName(String jobName) {
+		this.jobName = jobName;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/ExecutorEntity.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/ExecutorEntity.java
new file mode 100644
index 0000000..dc05d78
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/ExecutorEntity.java
@@ -0,0 +1,55 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.session;
+
+import lombok.Getter;
+import lombok.Setter;
+import net.srt.flink.executor.executor.Executor;
+
+import java.time.LocalDateTime;
+
+/**
+ * FlinkEntity
+ *
+ * @author zrx
+ * @since 2021/5/25 14:45
+ **/
+@Setter
+@Getter
+public class ExecutorEntity {
+	private String sessionId;
+	private SessionConfig sessionConfig;
+	private String createUser;
+	private LocalDateTime createTime;
+	private Executor executor;
+
+	public ExecutorEntity(String sessionId, Executor executor) {
+		this.sessionId = sessionId;
+		this.executor = executor;
+	}
+
+	public ExecutorEntity(String sessionId, SessionConfig sessionConfig, String createUser, LocalDateTime createTime, Executor executor) {
+		this.sessionId = sessionId;
+		this.sessionConfig = sessionConfig;
+		this.createUser = createUser;
+		this.createTime = createTime;
+		this.executor = executor;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionConfig.java
new file mode 100644
index 0000000..7bc4bb4
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionConfig.java
@@ -0,0 +1,62 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.session;
+
+import lombok.Getter;
+import lombok.Setter;
+import net.srt.flink.executor.executor.ExecutorSetting;
+
+/**
+ * SessionConfig
+ *
+ * @author zrx
+ * @since 2021/7/6 21:59
+ */
+@Getter
+@Setter
+public class SessionConfig {
+	private SessionType type;
+	private boolean useRemote;
+	private Integer clusterId;
+	private String clusterName;
+	private String address;
+
+	public enum SessionType {
+		PUBLIC,
+		PRIVATE
+	}
+
+	public SessionConfig(SessionType type, boolean useRemote, Integer clusterId, String clusterName, String address) {
+		this.type = type;
+		this.useRemote = useRemote;
+		this.clusterId = clusterId;
+		this.clusterName = clusterName;
+		this.address = address;
+	}
+
+	public static SessionConfig build(String type, boolean useRemote, Integer clusterId, String clusterName, String address) {
+		return new SessionConfig(SessionType.valueOf(type), useRemote, clusterId, clusterName, address);
+	}
+
+	public ExecutorSetting getExecutorSetting() {
+		return new ExecutorSetting(true);
+	}
+
+}
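A small sketch of building a session config (not part of the commit; the cluster id, name, and address are made-up values):

```java
import net.srt.flink.core.session.SessionConfig;
import net.srt.flink.executor.executor.ExecutorSetting;

public class SessionConfigSketch {
    public static void main(String[] args) {
        // "PUBLIC" must match a SessionConfig.SessionType constant.
        SessionConfig config = SessionConfig.build("PUBLIC", true, 1, "test-cluster", "127.0.0.1:8081");
        // getExecutorSetting() always returns new ExecutorSetting(true).
        ExecutorSetting setting = config.getExecutorSetting();
        System.out.println(config.getClusterName() + " -> " + config.getAddress());
    }
}
```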
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionInfo.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionInfo.java
new file mode 100644
index 0000000..0308f5d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionInfo.java
@@ -0,0 +1,52 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.session;
+
+import lombok.Getter;
+import lombok.Setter;
+
+import java.time.LocalDateTime;
+
+/**
+ * SessionInfo
+ *
+ * @author zrx
+ * @since 2021/7/6 22:22
+ */
+@Setter
+@Getter
+public class SessionInfo {
+	private String session;
+	private SessionConfig sessionConfig;
+	private String createUser;
+	private LocalDateTime createTime;
+
+	public SessionInfo(String session, SessionConfig sessionConfig, String createUser, LocalDateTime createTime) {
+		this.session = session;
+		this.sessionConfig = sessionConfig;
+		this.createUser = createUser;
+		this.createTime = createTime;
+	}
+
+	public static SessionInfo build(ExecutorEntity executorEntity) {
+		return new SessionInfo(executorEntity.getSessionId(), executorEntity.getSessionConfig(), executorEntity.getCreateUser(), executorEntity.getCreateTime());
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionPool.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionPool.java
new file mode 100644
index 0000000..c524fb3
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/session/SessionPool.java
@@ -0,0 +1,104 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.session;
+
+
+import net.srt.flink.executor.constant.FlinkConstant;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Vector;
+
+/**
+ * SessionPool
+ *
+ * @author zrx
+ * @since 2021/5/25 14:32
+ **/
+public class SessionPool {
+
+	private static volatile List<ExecutorEntity> executorList = new Vector<>(FlinkConstant.DEFAULT_SESSION_COUNT);
+
+	public static boolean exist(String sessionId) {
+		for (ExecutorEntity executorEntity : executorList) {
+			if (executorEntity.getSessionId().equals(sessionId)) {
+				return true;
+			}
+		}
+		return false;
+	}
+
+	public static Integer push(ExecutorEntity executorEntity) {
+		if (executorList.size() >= FlinkConstant.DEFAULT_SESSION_COUNT * FlinkConstant.DEFAULT_FACTOR) {
+			executorList.remove(0);
+		} else if (executorList.size() >= FlinkConstant.DEFAULT_SESSION_COUNT) {
+			executorList.clear();
+		}
+		executorList.add(executorEntity);
+		return executorList.size();
+	}
+
+	public static Integer remove(String sessionId) {
+		int count = executorList.size();
+		for (int i = 0; i < executorList.size(); i++) {
+			if (sessionId.equals(executorList.get(i).getSessionId())) {
+				executorList.remove(i);
+				break;
+			}
+		}
+		return count - executorList.size();
+	}
+
+	public static ExecutorEntity get(String sessionId) {
+		for (ExecutorEntity executorEntity : executorList) {
+			if (executorEntity.getSessionId().equals(sessionId)) {
+				return executorEntity;
+			}
+		}
+		return null;
+	}
+
+	public static List<ExecutorEntity> list() {
+		return executorList;
+	}
+
+	public static List<SessionInfo> filter(String createUser) {
+		List<SessionInfo> sessionInfos = new ArrayList<>();
+		for (ExecutorEntity item : executorList) {
+			if (item.getSessionConfig().getType() == SessionConfig.SessionType.PUBLIC) {
+				sessionInfos.add(SessionInfo.build(item));
+			} else {
+				if (createUser != null && createUser.equals(item.getCreateUser())) {
+					sessionInfos.add(SessionInfo.build(item));
+				}
+			}
+		}
+		return sessionInfos;
+	}
+
+	public static SessionInfo getInfo(String sessionId) {
+		ExecutorEntity executorEntity = get(sessionId);
+		if (executorEntity != null) {
+			return SessionInfo.build(executorEntity);
+		} else {
+			return null;
+		}
+	}
+}
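A usage sketch for the session pool (not part of the commit; the session id and user are made up, and `null` stands in for a real `Executor`):

```java
import java.time.LocalDateTime;
import net.srt.flink.core.session.ExecutorEntity;
import net.srt.flink.core.session.SessionConfig;
import net.srt.flink.core.session.SessionInfo;
import net.srt.flink.core.session.SessionPool;

public class SessionPoolSketch {
    public static void main(String[] args) {
        SessionConfig cfg = SessionConfig.build("PUBLIC", false, null, null, "localhost:8081");
        // Executor construction is elided here; a real caller passes a configured Executor.
        SessionPool.push(new ExecutorEntity("admin:session-1", cfg, "admin", LocalDateTime.now(), null));
        System.out.println(SessionPool.exist("admin:session-1")); // true
        // filter(...) returns PUBLIC sessions plus the caller's own PRIVATE ones.
        for (SessionInfo info : SessionPool.filter("admin")) {
            System.out.println(info.getSession());
        }
        SessionPool.remove("admin:session-1");
    }
}
```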
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/utils/MapParseUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/utils/MapParseUtils.java
new file mode 100644
index 0000000..fc4d425
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/java/net/srt/flink/core/utils/MapParseUtils.java
@@ -0,0 +1,414 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.core.utils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Deque;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Stack;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+/**
+ * MapParseUtils
+ *
+ * @author zrx
+ * @since 2021/6/22
+ **/
+public class MapParseUtils {
+
+	/**
+	 * Whether the bracketed string is nested
+	 *
+	 * @param inStr
+	 * @return
+	 */
+	public static Boolean getStrIsNest(String inStr) {
+		if (inStr == null || inStr.isEmpty()) {
+			return false;
+		}
+		Deque<Integer> stack = new LinkedList<>();
+		for (int i = 0; i < inStr.length(); i++) {
+			if (inStr.charAt(i) == '[') {
+				stack.push(i);
+			}
+			if (inStr.charAt(i) == ']') {
+				stack.pop();
+				if (stack.size() != 0) {
+					return true;
+				}
+			}
+		}
+		return false;
+	}
+
+	/**
+	 * Get the index pairs of the outermost nesting level, e.g. for
+	 * table=[[default_catalog, default_database, score, project=[sid, cls, score]]], fields=[sid, cls, score]
+	 * the result is the List [x, y, z, n] of the outermost '[' and ']' positions
+	 *
+	 * @param inStr
+	 * @return
+	 */
+	public static List<Integer> getNestList(String inStr) {
+		Stack<Integer> nestIndexList = new Stack<>();
+		if (inStr == null || inStr.isEmpty()) {
+			return nestIndexList;
+		}
+		Deque<Integer> stack = new LinkedList<>();
+		for (int i = 0; i < inStr.length(); i++) {
+			if (inStr.charAt(i) == '[') {
+				if (stack.isEmpty()) {
+					nestIndexList.add(i);
+				}
+				stack.push(i);
+			}
+			if (inStr.charAt(i) == ']') {
+				stack.pop();
+				if (stack.size() == 0) {
+					nestIndexList.add(i);
+				}
+			}
+		}
+		return nestIndexList;
+	}
+
+	/**
+	 * Get the indexes of the outermost parentheses, e.g. for
+	 * table=[((f.SERIAL_NO || f.PRESC_NO) || f.ITEM_NO) AS EXPR$0, ((f.DATE || f.TIME) || f.ITEM_NO) AS EXPR$2]
+	 * the result is the List [x, y, z, n] of the outermost '(' and ')' positions
+	 *
+	 * @param inStr
+	 * @return
+	 */
+	public static List<Integer> getBracketsList(String inStr) {
+		Stack<Integer> nestIndexList = new Stack<>();
+		if (inStr == null || inStr.isEmpty()) {
+			return nestIndexList;
+		}
+		Deque<Integer> stack = new LinkedList<>();
+		for (int i = 0; i < inStr.length(); i++) {
+			if (inStr.charAt(i) == '(') {
+				if (stack.isEmpty()) {
+					nestIndexList.add(i);
+				}
+				stack.push(i);
+			}
+			if (inStr.charAt(i) == ')') {
+				stack.pop();
+				if (stack.size() == 0) {
+					nestIndexList.add(i);
+				}
+			}
+		}
+		return nestIndexList;
+	}
+
+	public static List<String> getSelectList(String inStr) {
+		List<String> selects = new ArrayList<>();
+		if (inStr == null || inStr.isEmpty()) {
+			return selects;
+		}
+		int startIndex = -1;
+		// lineage only needs select or field
+		if (inStr.contains("select=[")) {
+			startIndex = inStr.indexOf("select=[") + 8;
+		} else if (inStr.contains("field=[")) {
+			startIndex = inStr.indexOf("field=[") + 7;
+		}
+		if (startIndex < 0) {
+			return selects;
+		}
+		Deque<Integer> stack = new LinkedList<>();
+		for (int i = startIndex; i < inStr.length(); i++) {
+			if (inStr.charAt(i) == ']' && stack.size() == 0) {
+				selects.add(inStr.substring(startIndex, i));
+				return selects;
+			}
+			if (inStr.charAt(i) == ',' && stack.size() == 0) {
+				selects.add(inStr.substring(startIndex, i));
+				startIndex = i + 1;
+			}
+			if (inStr.charAt(i) == '(') {
+				stack.push(i);
+			}
+			if (inStr.charAt(i) == ')') {
+				stack.pop();
+			}
+		}
+		if (startIndex < inStr.length()) {
+			selects.add(inStr.substring(startIndex, inStr.length() - 1));
+		}
+		return selects;
+	}
+
+	private static Map<String, List<String>> getKeyAndValues(String inStr) {
+		Map<String, List<String>> map = new HashMap<>();
+		if (inStr == null || inStr.isEmpty()) {
+			return map;
+		}
+		Deque<Integer> stack = new LinkedList<>();
+		int startIndex = 0;
+		String key = null;
+		for (int i = 0; i < inStr.length(); i++) {
+			char currentChar = inStr.charAt(i);
+			if (stack.size() == 0 && currentChar == '[') {
+				key = inStr.substring(startIndex, i - 1).trim();
+				map.put(key, new ArrayList<>());
+				startIndex = i + 1;
+				continue;
+			}
+			if (stack.size() == 0 && currentChar == ']') {
+				map.get(key).add(inStr.substring(startIndex, i).trim());
+				startIndex = i + 2;
+				key = null;
+				continue;
+			}
+			if (key != null && stack.size() == 0 && currentChar == ',') {
+				map.get(key).add(inStr.substring(startIndex, i).trim());
+				startIndex = i + 1;
+				continue;
+			}
+			if (currentChar == '(') {
+				stack.push(i);
+				continue;
+			}
+			if (currentChar == ')') {
+				stack.pop();
+			}
+		}
+		return map;
+	}
+
+	public static boolean hasField(String fragement, String field) {
+		if (field.startsWith("$")) {
+			field = field.substring(1, field.length());
+		}
+		String sign = "([^a-zA-Z0-9_])";
+		Pattern p = Pattern.compile(sign + field + sign);
+		Matcher m = p.matcher(" " + fragement + " ");
+		while (m.find()) {
+			return true;
+		}
+		return false;
+	}
+
+	public static String replaceField(String operation, String field, String fragement) {
+		String newOperation = operation;
+		String sign = "([^a-zA-Z0-9_])";
+		Pattern p = Pattern.compile(sign + field + sign);
+		Matcher m = p.matcher(operation);
+		while (m.find()) {
+			newOperation = newOperation.substring(0, m.start(1) + 1) + fragement + newOperation.substring(m.end(1) + 1, newOperation.length());
+		}
+		return newOperation;
+	}
+
+	/**
+	 * Parse into a map
+	 *
+	 * @param inStr
+	 * @return
+	 */
+	public static Map<String, Object> parse(String inStr, String... blackKeys) {
+		if (getStrIsNest(inStr)) {
+			return parseForNest(inStr, blackKeys);
+		} else {
+			return parseForNotNest(inStr);
+		}
+	}
+
+	/**
+	 * Nested parsing
+	 *
+	 * @param inStr
+	 * @return
+	 */
+	public static Map<String, Object> parseForNest(String inStr, String... blackKeys) {
+		Map<String, Object> map = new HashMap<>();
+		List<Integer> nestList = getNestList(inStr);
+		int num = nestList.size() / 2;
+		for (int i = 0; i < num; i++) {
+			if (i == 0) {
+				String substring = inStr.substring(0, nestList.get(i + 1) + 1);
+				String key = getMapKey(substring);
+				boolean isNext = true;
+				for (int j = 0; j < blackKeys.length; j++) {
+					if (key.equals(blackKeys[j])) {
+						isNext = false;
+					}
+				}
+				if (isNext) {
+					if (getStrIsNest(substring)) {
+						map.put(key, getMapListNest(substring));
+					} else {
+						map.put(key, getMapList(substring));
+					}
+				} else {
+					map.put(key, getTextValue(substring));
+				}
+			} else {
+				String substring = inStr.substring(nestList.get(2 * i - 1) + 2, nestList.get(2 * i + 1) + 1);
+				String key = getMapKey(substring);
+				boolean isNext = true;
+				for (int j = 0; j < blackKeys.length; j++) {
+					if (key.equals(blackKeys[j])) {
+						isNext = false;
+					}
+				}
+				if (isNext) {
+					if (getStrIsNest(substring)) {
+						map.put(key, getMapListNest(substring));
+					} else {
+						map.put(key, getMapList(substring));
+					}
+				} else {
+					map.put(key, getTextValue(substring));
+				}
+			}
+		}
+		return map;
+	}
+
+	/**
+	 * @return java.util.Map
+	 * @author lewnn
+	 * @operate
+	 * @date 2021/8/20 15:03
+	 */
+	public static Map<String, List<String>> parseForSelect(String inStr) {
+		return getKeyAndValues(inStr);
+	}
+
+	/**
+	 * Non-nested parsing
+	 *
+	 * @param inStr
+	 * @return
+	 */
+	public static Map<String, Object> parseForNotNest(String inStr) {
+		String[] split = inStr.split("], ");
+		Map<String, Object> map = new HashMap<>();
+		for (int i = 0; i < split.length; i++) {
+			if (i == split.length - 1) {
+				map.put(getMapKey(split[i]), getMapList(split[i]));
+			} else {
+				map.put(getMapKey(split[i] + "]"), getMapList(split[i] + "]"));
+			}
+		}
+		return map;
+	}
+
+	/**
+	 * Get the key; e.g. in where=[(sid = sid0)] everything before =[ is the key
+	 *
+	 * @param splitStr
+	 * @return
+	 */
+	public static String getMapKey(String splitStr) {
+		if (splitStr == null || splitStr.indexOf("=[") == -1) {
+			return "";
+		}
+		return splitStr.substring(0, splitStr.indexOf("=[")).replace(" ", "");
+	}
+
+	public static String getMapKeyOnlySelectOrField(String splitStr) {
+		if (splitStr == null || splitStr.indexOf("=[") == -1) {
+			return "";
+		}
+		if (splitStr.contains("select=[")) {
+			return "select";
+		} else if (splitStr.contains("field=[")) {
+			return "field";
+		}
+		return "";
+	}
+
+	/**
+	 * Get the list value of a key; e.g. in where=[(sid = sid0)] the content inside [] is the list content
+	 *
+	 * @param splitStr
+	 * @return
+	 */
+	public static List<String> getMapList(String splitStr) {
+		if (splitStr == null || splitStr.indexOf("[") == -1 || splitStr.indexOf("]") == -1) {
+			return new ArrayList<>();
+		}
+		return Arrays.stream(splitStr.substring(splitStr.indexOf("[") + 1, splitStr.lastIndexOf("]")).split(", ")).collect(Collectors.toList());
+	}
+
+	/**
+	 * Get the nested list value of a key; e.g. in table=[[default_catalog, default_database, score, project=[sid, cls, score]]]
+	 * the content inside [] is the list content
+	 *
+	 * @param splitStr
+	 * @return
+	 */
+	public static List<Object> getMapListNest(String splitStr) {
+		List<Object> list = new ArrayList<>();
+		if (splitStr == null || splitStr.indexOf("[") == -1 || splitStr.indexOf("]") == -1) {
+			return new ArrayList<>();
+		}
+		String substring = splitStr.substring(splitStr.indexOf("[") + 1, splitStr.lastIndexOf("]")).trim();
+		// sample: [default_catalog, default_database, score, project=[sid, cls, score]]
+		if (substring.startsWith("[")) {
+			// still a list
+			list.add(getMapListNest(substring));
+		} else {
+			// elements rather than a list, e.g. default_catalog, default_database, score, project=[sid, cls, score], course=[en, ds, as]
+			// nesting means [] may still occur inside
+			List<Integer> nestList = getNestList(substring);
+			int num = nestList.size() / 2;
+			String[] str = new String[num];
+			for (int i = 0; i < num; i++) {
+				str[i] = substring.substring(nestList.get(2 * i), nestList.get(2 * i + 1) + 1);
+			}
+			// replace in reverse order to mask the nested list contents
+			for (int i = num - 1; i >= 0; i--) {
+				substring = substring.substring(0, nestList.get(2 * i)) + "_str" + i + "_" + substring.substring(nestList.get(2 * i + 1) + 1);
+			}
+			// after masking: default_catalog, default_database, score, project=_str0_, course=_str1_
+			// _str0_ = [sid, cls, score]
+			// _str1_ = [en, ds, as]
+			String[] split = substring.split(", ");
+			int index = 0;
+			for (String s : split) {
+				if (s.startsWith("[")) {
+					list.add(getMapListNest(splitStr));
+				} else if (s.indexOf("_str") != -1) {
+					// project=_str0_ -> restore the masked list: project=[sid, cls, score]
+					list.add(parseForNest(s.replace("_str" + index + "_", str[index])));
+					index++;
+				} else {
+					list.add(s);
+				}
+			}
+		}
+		return list;
+	}
+
+	private static String getTextValue(String splitStr) {
+		return splitStr.substring(splitStr.indexOf("[") + 1, splitStr.lastIndexOf("]"));
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/resources/META-INF/services/net.srt.flink.core.job.JobHandler b/srt-cloud-framework/srt-cloud-flink/flink-core-all/src/main/resources/META-INF/services/net.srt.flink.core.job.JobHandler
new file mode 100644
index 0000000..e69de29
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-daemon/pom.xml
new file mode 100644
index 0000000..7dc55b2
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/pom.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>srt-cloud-flink</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-daemon</artifactId>
+
+    <dependencies>
+
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+
+    </dependencies>
+</project>
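To make the parsing above concrete, here is a sketch that feeds `MapParseUtils` a fragment in the shape its Javadoc describes (not part of the commit; the plan strings are illustrative):

```java
import net.srt.flink.core.utils.MapParseUtils;

public class MapParseUtilsSketch {
    public static void main(String[] args) {
        // A fragment shaped like Flink's plan/explain output, as in the Javadoc examples.
        String plan = "table=[[default_catalog, default_database, score, project=[sid, cls, score]]], fields=[sid, cls, score]";
        // parse(...) detects the nesting and dispatches to parseForNest(...).
        System.out.println(MapParseUtils.parse(plan));
        // getSelectList(...) splits a select=[...] clause on top-level commas only.
        System.out.println(MapParseUtils.getSelectList("select=[sid, cls, CONCAT(sid, cls) AS key0]"));
    }
}
```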
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/constant/FlinkTaskConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/constant/FlinkTaskConstant.java
new file mode 100644
index 0000000..bf0317d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/constant/FlinkTaskConstant.java
@@ -0,0 +1,38 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.constant;
+
+public interface FlinkTaskConstant {
+
+	/**
+	 * Pause interval between checks (ms)
+	 */
+	int TIME_SLEEP = 1000;
+
+	/**
+	 * Polling interval of the launcher thread for logs, used to size workers etc. (ms)
+	 */
+	int MAX_POLLING_GAP = 1000;
+	/**
+	 * Minimum polling interval (ms)
+	 */
+	int MIN_POLLING_GAP = 50;
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/entity/TaskQueue.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/entity/TaskQueue.java
new file mode 100644
index 0000000..4f911d2
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/entity/TaskQueue.java
@@ -0,0 +1,57 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.entity;
+
+import java.util.LinkedList;
+
+public class TaskQueue<T> {
+
+	private final LinkedList<T> tasks = new LinkedList<>();
+
+	private final Object lock = new Object();
+
+	public void enqueue(T task) {
+		synchronized (lock) {
+			lock.notifyAll();
+			tasks.addLast(task);
+		}
+	}
+
+	public T dequeue() {
+		synchronized (lock) {
+			while (tasks.isEmpty()) {
+				try {
+					lock.wait();
+				} catch (InterruptedException e) {
+					e.printStackTrace();
+				}
+			}
+
+			T task = tasks.removeFirst();
+			return task;
+		}
+	}
+
+	public int getTaskSize() {
+		synchronized (lock) {
+			return tasks.size();
+		}
+	}
+}
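The queue above is a simple blocking queue over wait/notify. A minimal producer/consumer sketch (not part of the commit):

```java
import net.srt.flink.daemon.entity.TaskQueue;

public class TaskQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        TaskQueue<String> queue = new TaskQueue<>();
        // dequeue() blocks on the internal lock until something is enqueued.
        Thread consumer = new Thread(() -> System.out.println("got: " + queue.dequeue()));
        consumer.start();
        queue.enqueue("job-1");
        consumer.join();
    }
}
```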
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/entity/TaskWorker.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/entity/TaskWorker.java
new file mode 100644
index 0000000..d5e5b3a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/entity/TaskWorker.java
@@ -0,0 +1,56 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.entity;
+
+import net.srt.flink.daemon.task.DaemonTask;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TaskWorker implements Runnable {
+	private static final Logger log = LoggerFactory.getLogger(TaskWorker.class);
+
+	private volatile boolean running = true;
+
+	private TaskQueue<DaemonTask> queue;
+
+	public TaskWorker(TaskQueue<DaemonTask> queue) {
+		this.queue = queue;
+	}
+
+	@Override
+	public void run() {
+		//log.info("TaskWorker run");
+		while (running) {
+			DaemonTask daemonTask = queue.dequeue();
+			if (daemonTask != null) {
+				try {
+					daemonTask.dealTask();
+				} catch (Exception e) {
+					e.printStackTrace();
+				}
+			}
+		}
+	}
+
+	public void shutdown() {
+		//log.info(Thread.currentThread().getName() + "TaskWorker shutdown");
+		running = false;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/exception/DaemonTaskException.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/exception/DaemonTaskException.java
new file mode 100644
index 0000000..6577d13
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/exception/DaemonTaskException.java
@@ -0,0 +1,31 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.exception;
+
+public class DaemonTaskException extends RuntimeException {
+
+	public DaemonTaskException(String message, Throwable cause) {
+		super(message, cause);
+	}
+
+	public DaemonTaskException(String message) {
+		super(message);
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/pool/DefaultThreadPool.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/pool/DefaultThreadPool.java
new file mode 100644
index 0000000..e7a5671
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/pool/DefaultThreadPool.java
@@ -0,0 +1,136 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.pool;
+
+import net.srt.flink.daemon.entity.TaskQueue;
+import net.srt.flink.daemon.entity.TaskWorker;
+import net.srt.flink.daemon.task.DaemonTask;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * @author lcg
+ * @operate
+ * @return
+ */
+public class DefaultThreadPool implements ThreadPool {
+	private static final int MAX_WORKER_NUM = 10;
+	private static final int DEFAULT_WORKER_NUM = 5;
+	private static final int MIN_WORKER_NUM = 1;
+
+	private final List<TaskWorker> workers = Collections.synchronizedList(new ArrayList<>());
+
+	private final Object lock = new Object();
+
+	private volatile AtomicInteger workerNum = new AtomicInteger(0);
+
+	private final TaskQueue<DaemonTask> queue = new TaskQueue<>();
+
+	private static DefaultThreadPool defaultThreadPool;
+
+	private DefaultThreadPool() {
+		addWorkers(DEFAULT_WORKER_NUM);
+	}
+
+	public static DefaultThreadPool getInstance() {
+		if (defaultThreadPool == null) {
+			synchronized (DefaultThreadPool.class) {
+				if (defaultThreadPool == null) {
+					defaultThreadPool = new DefaultThreadPool();
+				}
+			}
+		}
+		return defaultThreadPool;
+	}
+
+	@Override
+	public void execute(DaemonTask daemonTask) {
+		if (daemonTask != null) {
+			queue.enqueue(daemonTask);
+		}
+	}
+
+	@Override
+	public void addWorkers(int num) {
+		synchronized (lock) {
+			if (num + this.workerNum.get() > MAX_WORKER_NUM) {
+				num = MAX_WORKER_NUM - this.workerNum.get();
+				if (num <= 0) {
+					return;
+				}
+			}
+			for (int i = 0; i < num; i++) {
+				TaskWorker worker = new TaskWorker(queue);
+				workers.add(worker);
+				Thread thread = new Thread(worker, "ThreadPool-Worker-" + workerNum.incrementAndGet());
+				thread.start();
+			}
+		}
+
+	}
+
+	@Override
+	public void removeWorker(int num) {
+
+		synchronized (lock) {
+			if (num >= this.workerNum.get()) {
+				num = this.workerNum.get() - MIN_WORKER_NUM;
+				if (num <= 0) {
+					return;
+				}
+			}
+			int count = num - 1;
+			while (count >= 0) {
+				TaskWorker worker = workers.get(count);
+				if (workers.remove(worker)) {
+					worker.shutdown();
+					count--;
+				}
+			}
+			// shrink the worker count
+			workerNum.getAndAdd(-num);
+		}
+
+	}
+
+	@Override
+	public void shutdown() {
+		synchronized (lock) {
+			for (TaskWorker worker : workers) {
+				worker.shutdown();
+			}
+			workers.clear();
+		}
+	}
+
+	@Override
+	public int getTaskSize() {
+		return queue.getTaskSize();
+	}
+
+	public int getWorkCount() {
+		synchronized (lock) {
+			return this.workerNum.get();
+		}
+	}
+}
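A usage sketch for the singleton pool (not part of the commit; it assumes a handler for the made-up type name "jobInstance" has been registered via SPI, otherwise DaemonTask.build throws a DaemonTaskException):

```java
import net.srt.flink.daemon.pool.DefaultThreadPool;
import net.srt.flink.daemon.task.DaemonTask;
import net.srt.flink.daemon.task.DaemonTaskConfig;

public class ThreadPoolSketch {
    public static void main(String[] args) {
        // getInstance() lazily starts the pool with 5 workers.
        DefaultThreadPool pool = DefaultThreadPool.getInstance();
        // "jobInstance" is a hypothetical task type used only for illustration.
        DaemonTask task = DaemonTask.build(DaemonTaskConfig.build("jobInstance", 1));
        pool.execute(task);
        System.out.println("queued tasks: " + pool.getTaskSize());
    }
}
```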
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/pool/ThreadPool.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/pool/ThreadPool.java
new file mode 100644
index 0000000..10ab744
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/pool/ThreadPool.java
@@ -0,0 +1,45 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.pool;
+
+
+import net.srt.flink.daemon.task.DaemonTask;
+
+/**
+ * @author lcg
+ * @operate
+ * @return
+ */
+public interface ThreadPool {
+
+	// execute a task
+	void execute(DaemonTask daemonTask);
+
+	// shut down the pool
+	void shutdown();
+
+	// increase the number of workers
+	void addWorkers(int num);
+
+	// decrease the number of workers
+	void removeWorker(int num);
+
+	int getTaskSize();
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonFactory.java
new file mode 100644
index 0000000..26cb60f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonFactory.java
@@ -0,0 +1,59 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.task;
+
+
+import net.srt.flink.daemon.constant.FlinkTaskConstant;
+import net.srt.flink.daemon.pool.DefaultThreadPool;
+
+import java.util.List;
+
+public class DaemonFactory {
+
+	public static void start(List<DaemonTaskConfig> configList) {
+		Thread thread = new Thread(() -> {
+			DefaultThreadPool defaultThreadPool = DefaultThreadPool.getInstance();
+			for (DaemonTaskConfig config : configList) {
+				DaemonTask daemonTask = DaemonTask.build(config);
+				defaultThreadPool.execute(daemonTask);
+			}
+			while (true) {
+				int taskSize = defaultThreadPool.getTaskSize();
+				try {
+					Thread.sleep(Math.max(FlinkTaskConstant.MAX_POLLING_GAP / (taskSize + 1), FlinkTaskConstant.MIN_POLLING_GAP));
+				} catch (InterruptedException e) {
+					e.printStackTrace();
+				}
+
+				int num = taskSize / 100 + 1;
+				if (defaultThreadPool.getWorkCount() < num) {
+					defaultThreadPool.addWorkers(num - defaultThreadPool.getWorkCount());
+				} else if (defaultThreadPool.getWorkCount() > num) {
+					defaultThreadPool.removeWorker(defaultThreadPool.getWorkCount() - num);
+				}
+			}
+		});
+		thread.start();
+	}
+
+	public static void addTask(DaemonTaskConfig config) {
+		DefaultThreadPool.getInstance().execute(DaemonTask.build(config));
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonTask.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonTask.java
new file mode 100644
index 0000000..b3c90fe
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonTask.java
@@ -0,0 +1,59 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.task;
+
+
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.daemon.exception.DaemonTaskException;
+
+import java.util.Optional;
+import java.util.ServiceLoader;
+
+public interface DaemonTask {
+
+	static Optional<DaemonTask> get(DaemonTaskConfig config) {
+		Asserts.checkNotNull(config, "Daemon task config must not be null");
+		ServiceLoader<DaemonTask> daemonTasks = ServiceLoader.load(DaemonTask.class);
+		for (DaemonTask daemonTask : daemonTasks) {
+			if (daemonTask.canHandle(config.getType())) {
+				return Optional.of(daemonTask.setConfig(config));
+			}
+		}
+		return Optional.empty();
+	}
+
+	static DaemonTask build(DaemonTaskConfig config) {
+		Optional<DaemonTask> optionalDaemonTask = DaemonTask.get(config);
+		if (!optionalDaemonTask.isPresent()) {
+			throw new DaemonTaskException("Unsupported daemon task type [" + config.getType() + "]");
+		}
+		return optionalDaemonTask.get();
+	}
+
+	DaemonTask setConfig(DaemonTaskConfig config);
+
+	default boolean canHandle(String type) {
+		return Asserts.isEqualsIgnoreCase(getType(), type);
+	}
+
+	String getType();
+
+	void dealTask();
+}
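Since `DaemonTask` implementations are discovered via `ServiceLoader`, a minimal hypothetical implementation looks like this (not part of the commit; per the standard Java SPI convention it must also be listed in `META-INF/services/net.srt.flink.daemon.task.DaemonTask`):

```java
package net.srt.flink.daemon.task;

// A minimal DaemonTask implementation for illustration only.
public class PrintDaemonTask implements DaemonTask {

    public static final String TYPE = "print";

    private DaemonTaskConfig config;

    @Override
    public DaemonTask setConfig(DaemonTaskConfig config) {
        this.config = config;
        return this;
    }

    @Override
    public String getType() {
        return TYPE;
    }

    @Override
    public void dealTask() {
        // Invoked by a TaskWorker each time this task is dequeued.
        System.out.println("handling task id=" + config.getId());
    }
}
```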
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonTaskConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonTaskConfig.java
new file mode 100644
index 0000000..455e828
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-daemon/src/main/java/net/srt/flink/daemon/task/DaemonTaskConfig.java
@@ -0,0 +1,54 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.daemon.task;
+
+public class DaemonTaskConfig {
+
+	private String type;
+	private Integer id;
+
+	public DaemonTaskConfig() {
+	}
+
+	public DaemonTaskConfig(String type, Integer id) {
+		this.type = type;
+		this.id = id;
+	}
+
+	public static DaemonTaskConfig build(String type, Integer id) {
+		return new DaemonTaskConfig(type, id);
+	}
+
+	public String getType() {
+		return type;
+	}
+
+	public void setType(String type) {
+		this.type = type;
+	}
+
+	public Integer getId() {
+		return id;
+	}
+
+	public void setId(Integer id) {
+		this.id = id;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-executor/pom.xml
new file mode 100644
index 0000000..4b28ff6
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/pom.xml
@@ -0,0 +1,112 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>srt-cloud-flink</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-executor</artifactId>
+
+    <properties>
+        <hadoop.version>3.3.2</hadoop.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-metadata-base</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+            <version>${slf4j.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-metadata-mysql</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-common</artifactId>
+            <version>${hadoop.version}</version>
+            <scope>provided</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>com.google.guava</groupId>
+                    <artifactId>guava</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>javax.servlet</groupId>
+                    <artifactId>servlet-api</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+    </dependencies>
+
+    <profiles>
+        <profile>
+            <id>flink-1.16</id>
+            <dependencies>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-client-1.16</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-1.16</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+                <dependency>
+                    <groupId>com.fasterxml.jackson.datatype</groupId>
+                    <artifactId>jackson-datatype-jsr310</artifactId>
+                    <scope>provided</scope>
+                </dependency>
+            </dependencies>
+        </profile>
+        <profile>
+            <id>flink-1.14</id>
+            <dependencies>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-client-1.14</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-1.14</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+                <dependency>
+                    <groupId>com.fasterxml.jackson.datatype</groupId>
+                    <artifactId>jackson-datatype-jsr310</artifactId>
+                    <scope>provided</scope>
+                </dependency>
+            </dependencies>
+        </profile>
+    </profiles>
+
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/constant/FlinkConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/constant/FlinkConstant.java
new file mode 100644
index 0000000..9767c51
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/constant/FlinkConstant.java
@@ -0,0 +1,50 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.constant;
+
+/**
+ * FlinkConstant
+ *
+ * @author zrx
+ * @since 2021/5/25 14:39
+ **/
+public interface FlinkConstant {
+
+	/**
+	 * Default Flink REST port
+	 */
+	Integer FLINK_REST_DEFAULT_PORT = 8081;
+	/**
+	 * Default number of Flink sessions
+	 */
+	Integer DEFAULT_SESSION_COUNT = 256;
+	/**
+	 * Flink session load factor
+	 */
+	Double DEFAULT_FACTOR = 0.75;
+	/**
+	 * Local mode host
+	 */
+	String LOCAL_HOST = "localhost:8081";
+	/**
+	 * changelog op column
+	 */
+	String OP = "op";
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/constant/FlinkSQLConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/constant/FlinkSQLConstant.java
new file mode 100644
index 0000000..ae711c7
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/constant/FlinkSQLConstant.java
@@ -0,0 +1,62 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.constant;
+
+/**
+ * FlinkSQLConstant
+ *
+ * @author zrx
+ * @since 2021/5/25 15:51
+ **/
+public interface FlinkSQLConstant {
+
+	/**
+	 * Statement separator
+	 */
+	String SEPARATOR = ";\n";
+	/**
+	 * DDL type
+	 */
+	String DDL = "DDL";
+	/**
+	 * DML type
+	 */
+	String DML = "DML";
+	/**
+	 * DATASTREAM type
+	 */
+	String DATASTREAM = "DATASTREAM";
+	/**
+	 * Fragments assignment token
+	 */
+	String FRAGMENTS = ":=";
+
+	/**
+	 * Built-in date variable key
+	 */
+	String INNER_DATETIME_KEY = "_CURRENT_DATE_";
+
+	/**
+	 * Built-in date variable format;
+	 * must not be changed once fixed
+	 */
+	String INNER_DATETIME_FORMAT = "yyyyMMdd";
+
+}
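A small sketch of why the separator is `";\n"` rather than a bare `';'` (not part of the commit; the SQL script is illustrative):

```java
import net.srt.flink.executor.constant.FlinkSQLConstant;

public class StatementSplitSketch {
    public static void main(String[] args) {
        String script = "CREATE TABLE t1 (id INT);\nINSERT INTO t1 VALUES (1)";
        // Splitting on ";\n" keeps semicolons that occur inside a single statement intact.
        for (String statement : script.split(FlinkSQLConstant.SEPARATOR)) {
            System.out.println(statement.trim());
        }
    }
}
```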
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.exception; + +/** + * FlinkException + * + * @author zrx + * @since 2021/10/22 11:13 + **/ +public class FlinkException extends RuntimeException { + + public FlinkException(String message, Throwable cause) { + super(message, cause); + } + + public FlinkException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/AppBatchExecutor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/AppBatchExecutor.java new file mode 100644 index 0000000..7673184 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/AppBatchExecutor.java @@ -0,0 +1,51 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.executor; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.executor.CustomTableEnvironmentImpl; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +/** + * AppBatchExecutor + * + * @author zrx + * @since 2022/2/7 22:14 + */ +public class AppBatchExecutor extends Executor { + + public AppBatchExecutor(ExecutorSetting executorSetting) { + this.executorSetting = executorSetting; + if (Asserts.isNotNull(executorSetting.getConfig())) { + Configuration configuration = Configuration.fromMap(executorSetting.getConfig()); + this.environment = StreamExecutionEnvironment.getExecutionEnvironment(configuration); + } else { + this.environment = StreamExecutionEnvironment.createLocalEnvironment(); + } + init(); + } + + @Override + CustomTableEnvironment createCustomTableEnvironment() { + return CustomTableEnvironmentImpl.createBatch(environment); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/AppStreamExecutor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/AppStreamExecutor.java new file mode 100644 index 0000000..9b432fd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/AppStreamExecutor.java @@ -0,0 +1,51 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.executor; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.executor.CustomTableEnvironmentImpl; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +/** + * AppStreamExecutor + * + * @author zrx + * @since 2021/11/18 + */ +public class AppStreamExecutor extends Executor { + + public AppStreamExecutor(ExecutorSetting executorSetting) { + this.executorSetting = executorSetting; + if (Asserts.isNotNull(executorSetting.getConfig())) { + Configuration configuration = Configuration.fromMap(executorSetting.getConfig()); + this.environment = StreamExecutionEnvironment.getExecutionEnvironment(configuration); + } else { + this.environment = StreamExecutionEnvironment.getExecutionEnvironment(); + } + init(); + } + + @Override + CustomTableEnvironment createCustomTableEnvironment() { + return CustomTableEnvironmentImpl.create(environment); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/EnvironmentSetting.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/EnvironmentSetting.java new file mode 100644 index 0000000..e224320 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/EnvironmentSetting.java @@ -0,0 +1,80 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
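`AppStreamExecutor` and `AppBatchExecutor` differ only in which table environment they create; `Executor.buildAppStreamExecutor` (shown further below) picks between them via `ExecutorSetting.isUseBatchModel()`. A minimal sketch, assuming only the constructors shown in this commit:

```java
import net.srt.flink.executor.executor.Executor;
import net.srt.flink.executor.executor.ExecutorSetting;

public class AppExecutorDemo {
    public static void main(String[] args) {
        // useBatchModel defaults to false here -> AppStreamExecutor is chosen.
        ExecutorSetting setting = new ExecutorSetting(null, 1, true, null, "demo-app-job");
        Executor executor = Executor.buildAppStreamExecutor(setting);
        System.out.println(executor.getExecutorSetting().getJobName());
    }
}
```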
+ *
+ */
+
+package net.srt.flink.executor.executor;
+
+import lombok.Getter;
+import lombok.Setter;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.constant.NetConstant;
+import net.srt.flink.executor.constant.FlinkConstant;
+
+/**
+ * EnvironmentSetting
+ *
+ * @author zrx
+ * @since 2021/5/25 13:45
+ **/
+@Getter
+@Setter
+public class EnvironmentSetting {
+
+    private String host;
+    private Integer port;
+    private String[] jarFiles;
+
+    private boolean useRemote;
+
+    public static final EnvironmentSetting LOCAL = new EnvironmentSetting(false);
+
+    public EnvironmentSetting(boolean useRemote) {
+        this.useRemote = useRemote;
+    }
+
+    public EnvironmentSetting(String... jarFiles) {
+        this.useRemote = false;
+        this.jarFiles = jarFiles;
+    }
+
+    public EnvironmentSetting(String host, Integer port, String... jarFiles) {
+        this.host = host;
+        this.port = port;
+        this.useRemote = true;
+        this.jarFiles = jarFiles;
+    }
+
+    public static EnvironmentSetting build(String address, String... jarFiles) {
+        Asserts.checkNull(address, "The Flink address cannot be empty");
+        String[] strs = address.split(NetConstant.COLON);
+        if (strs.length >= 2) {
+            return new EnvironmentSetting(strs[0], Integer.parseInt(strs[1]), jarFiles);
+        } else {
+            return new EnvironmentSetting(jarFiles);
+        }
+    }
+
+    public String getAddress() {
+        if (Asserts.isAllNotNull(host, port)) {
+            return host + NetConstant.COLON + port;
+        } else {
+            return "";
+        }
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/Executor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/Executor.java
new file mode 100644
index 0000000..bd91670
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/Executor.java
@@ -0,0 +1,494 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
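`EnvironmentSetting.build` splits a `host:port` address on the colon and falls back to local mode when no port is present. A minimal sketch of both paths (the IP is a placeholder):

```java
import net.srt.flink.executor.executor.EnvironmentSetting;

public class EnvironmentSettingDemo {
    public static void main(String[] args) {
        // "host:port" -> remote mode, and the address can be read back out.
        EnvironmentSetting remote = EnvironmentSetting.build("192.168.1.10:8081");
        System.out.println(remote.isUseRemote() + " " + remote.getAddress()); // true 192.168.1.10:8081

        // No port -> local mode, empty address.
        EnvironmentSetting local = EnvironmentSetting.build("localhost");
        System.out.println(local.isUseRemote() + " " + local.getAddress()); // false ""
    }
}
```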
+ * + */ + +package net.srt.flink.executor.executor; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ObjectNode; +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.base.model.LineageRel; +import net.srt.flink.client.executor.CustomTableResultImpl; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.context.DinkyClassLoaderContextHolder; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.result.SqlExplainResult; +import net.srt.flink.executor.interceptor.FlinkInterceptor; +import net.srt.flink.executor.interceptor.FlinkInterceptorResult; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.api.common.ExecutionConfig; +import org.apache.flink.api.common.JobExecutionResult; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.core.execution.JobClient; +import org.apache.flink.python.PythonOptions; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.runtime.jobgraph.jsonplan.JsonPlanGenerator; +import org.apache.flink.runtime.rest.messages.JobPlanInfo; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.streaming.api.graph.JSONGenerator; +import org.apache.flink.streaming.api.graph.StreamGraph; +import org.apache.flink.table.api.ExplainDetail; +import org.apache.flink.table.api.StatementSet; +import org.apache.flink.table.api.TableConfig; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.table.catalog.CatalogManager; +import org.apache.hadoop.security.UserGroupInformation; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.net.URL; +import java.net.URLClassLoader; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * Executor + * + * @author wenmo + * @since 2021/11/17 + **/ +@Slf4j +public abstract class Executor { + + private static final Logger logger = LoggerFactory.getLogger(Executor.class); + + protected StreamExecutionEnvironment environment; + protected CustomTableEnvironment stEnvironment; + protected EnvironmentSetting environmentSetting; + protected ExecutorSetting executorSetting; + protected Map setConfig = new HashMap<>(); + + protected SqlManager sqlManager = new SqlManager(); + protected boolean useSqlFragment = true; + + public static Executor build() { + return new LocalStreamExecutor(ExecutorSetting.DEFAULT); + } + + public static Executor build(EnvironmentSetting environmentSetting, ExecutorSetting executorSetting) { + if (environmentSetting.isUseRemote()) { + return buildRemoteExecutor(environmentSetting, executorSetting); + } else { + return buildLocalExecutor(executorSetting); + } + } + + public static Executor buildLocalExecutor(ExecutorSetting executorSetting) { + if (executorSetting.isUseBatchModel()) { + return new LocalBatchExecutor(executorSetting); + } else { + return new LocalStreamExecutor(executorSetting); + } + } + + public static Executor buildAppStreamExecutor(ExecutorSetting executorSetting) { + if (executorSetting.isUseBatchModel()) { + return new 
AppBatchExecutor(executorSetting); + } else { + return new AppStreamExecutor(executorSetting); + } + } + + public static Executor buildRemoteExecutor(EnvironmentSetting environmentSetting, ExecutorSetting executorSetting) { + environmentSetting.setUseRemote(true); + if (executorSetting.isUseBatchModel()) { + return new RemoteBatchExecutor(environmentSetting, executorSetting); + } else { + return new RemoteStreamExecutor(environmentSetting, executorSetting); + } + } + + public SqlManager getSqlManager() { + return sqlManager; + } + + public boolean isUseSqlFragment() { + return useSqlFragment; + } + + public ExecutionConfig getExecutionConfig() { + return environment.getConfig(); + } + + public StreamExecutionEnvironment getStreamExecutionEnvironment() { + return environment; + } + + public CustomTableEnvironment getCustomTableEnvironment() { + return stEnvironment; + } + + public ExecutorSetting getExecutorSetting() { + return executorSetting; + } + + public EnvironmentSetting getEnvironmentSetting() { + return environmentSetting; + } + + public Map getSetConfig() { + return setConfig; + } + + public void setSetConfig(Map setConfig) { + this.setConfig = setConfig; + } + + public TableConfig getTableConfig() { + return stEnvironment.getConfig(); + } + + public String getTimeZone() { + return getTableConfig().getLocalTimeZone().getId(); + } + + protected void init() { + initEnvironment(); + initStreamExecutionEnvironment(); + } + + public void update(ExecutorSetting executorSetting) { + updateEnvironment(executorSetting); + updateStreamExecutionEnvironment(executorSetting); + } + + public void initEnvironment() { + updateEnvironment(executorSetting); + } + + public void updateEnvironment(ExecutorSetting executorSetting) { + if (executorSetting.isValidParallelism()) { + environment.setParallelism(executorSetting.getParallelism()); + } + + if (executorSetting.getConfig() != null) { + Configuration configuration = Configuration.fromMap(executorSetting.getConfig()); + environment.getConfig().configure(configuration, null); + } + } + + abstract CustomTableEnvironment createCustomTableEnvironment(); + + private void initStreamExecutionEnvironment() { + updateStreamExecutionEnvironment(executorSetting); + } + + private void updateStreamExecutionEnvironment(ExecutorSetting executorSetting) { + useSqlFragment = executorSetting.isUseSqlFragment(); + + CustomTableEnvironment newestEnvironment = createCustomTableEnvironment(); + if (stEnvironment != null) { + for (String catalog : stEnvironment.listCatalogs()) { + stEnvironment.getCatalog(catalog).ifPresent(t -> { + newestEnvironment.getCatalogManager().unregisterCatalog(catalog, true); + newestEnvironment.registerCatalog(catalog, t); + }); + } + } + stEnvironment = newestEnvironment; + + final Configuration configuration = stEnvironment.getConfig().getConfiguration(); + if (executorSetting.isValidJobName()) { + configuration.setString(PipelineOptions.NAME.key(), executorSetting.getJobName()); + } + + setConfig.put(PipelineOptions.NAME.key(), executorSetting.getJobName()); + if (executorSetting.getConfig() != null) { + for (Map.Entry entry : executorSetting.getConfig().entrySet()) { + configuration.setString(entry.getKey(), entry.getValue()); + } + } + } + + public String pretreatStatement(String statement) { + //zrx ProjectSystemConfiguration + ProcessEntity process = ProcessContextHolder.getProcess(); + return FlinkInterceptor.pretreatStatement(this, statement, ProjectSystemConfiguration.getByProjectId(process.getProjectId()).getSqlSeparator()); + } + 
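Taken together, the factory methods above let callers stay agnostic of the concrete executor class. A minimal sketch of the local path; note the two-argument `executeSql` is used because the single-argument form pulls its separator from the process context. The SQL itself is illustrative only:

```java
import net.srt.flink.executor.executor.Executor;
import net.srt.flink.executor.executor.ExecutorSetting;
import org.apache.flink.table.api.TableResult;

public class LocalExecutorDemo {
    public static void main(String[] args) {
        // ExecutorSetting.DEFAULT -> LocalStreamExecutor under the hood.
        Executor executor = Executor.buildLocalExecutor(ExecutorSetting.DEFAULT);
        TableResult result = executor.executeSql(
                "CREATE TABLE demo (id INT) WITH ('connector' = 'datagen')", ";\n");
        result.print();
    }
}
```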
+    public String pretreatStatement(String statement, String sqlSeparator) {
+        return FlinkInterceptor.pretreatStatement(this, statement, sqlSeparator);
+    }
+
+    private FlinkInterceptorResult pretreatExecute(String statement) {
+        return FlinkInterceptor.build(this, statement);
+    }
+
+    public JobExecutionResult execute(String jobName) throws Exception {
+        return environment.execute(jobName);
+    }
+
+    public JobClient executeAsync(String jobName) throws Exception {
+        return environment.executeAsync(jobName);
+    }
+
+    public TableResult executeSql(String statement) {
+        statement = pretreatStatement(statement);
+        return commonExecute(statement);
+    }
+
+    public TableResult executeSql(String statement, String sqlSeparator) {
+        statement = pretreatStatement(statement, sqlSeparator);
+        return commonExecute(statement);
+    }
+
+    private TableResult commonExecute(String statement) {
+        FlinkInterceptorResult flinkInterceptorResult = pretreatExecute(statement);
+        if (Asserts.isNotNull(flinkInterceptorResult.getTableResult())) {
+            return flinkInterceptorResult.getTableResult();
+        }
+        if (!flinkInterceptorResult.isNoExecute()) {
+            this.loginFromKeytabIfNeed();
+            return stEnvironment.executeSql(statement);
+        } else {
+            return CustomTableResultImpl.TABLE_RESULT_OK;
+        }
+    }
+
+    private void reset() {
+        try {
+            if (UserGroupInformation.isLoginKeytabBased()) {
+                Method reset = UserGroupInformation.class.getDeclaredMethod("reset");
+                reset.invoke(UserGroupInformation.class);
+                log.info("Reset kerberos authentication...");
+            }
+        } catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException e) {
+            throw new RuntimeException(e);
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    private void loginFromKeytabIfNeed() {
+        setConfig.forEach((k, v) -> log.debug("setConfig key: [{}], value: [{}]", k, v));
+        String krb5ConfPath = (String) setConfig.getOrDefault("java.security.krb5.conf", "");
+        String keytabPath = (String) setConfig.getOrDefault("security.kerberos.login.keytab", "");
+        String principal = (String) setConfig.getOrDefault("security.kerberos.login.principal", "");
+
+        if (Asserts.isAllNullString(krb5ConfPath, keytabPath, principal)) {
+            log.info("Simple authentication mode");
+            return;
+        }
+        log.info("Kerberos authentication mode");
+        if (Asserts.isNullString(krb5ConfPath)) {
+            log.error("Parameter [java.security.krb5.conf] is null or empty.");
+            return;
+        }
+
+        if (Asserts.isNullString(keytabPath)) {
+            log.error("Parameter [security.kerberos.login.keytab] is null or empty.");
+            return;
+        }
+
+        if (Asserts.isNullString(principal)) {
+            log.error("Parameter [security.kerberos.login.principal] is null or empty.");
+            return;
+        }
+
+        this.reset();
+
+        System.setProperty("java.security.krb5.conf", krb5ConfPath);
+        org.apache.hadoop.conf.Configuration config = new org.apache.hadoop.conf.Configuration();
+        config.set("hadoop.security.authentication", "Kerberos");
+        config.setBoolean("hadoop.security.authorization", true);
+        UserGroupInformation.setConfiguration(config);
+        try {
+            UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
+            log.info("Kerberos [{}] authentication success.", UserGroupInformation.getLoginUser().getUserName());
+        } catch (IOException e) {
+            log.error("Kerberos authentication failed.", e);
+        }
+    }
+
+    /**
+     * init udf
+     *
+     * @param udfFilePath paths of the UDF jar files
+     */
+    public void initUDF(String... udfFilePath) {
+        DinkyClassLoaderContextHolder.get().addURL(udfFilePath);
+    }
+
+    public void initPyUDF(String executable, String... udfPyFilePath) {
+        if (udfPyFilePath == null || udfPyFilePath.length == 0) {
+            return;
+        }
+        Map<String, String> config = executorSetting.getConfig();
+        if (Asserts.isNotNull(config)) {
+            config.put(PythonOptions.PYTHON_FILES.key(), String.join(",", udfPyFilePath));
+            config.put(PythonOptions.PYTHON_CLIENT_EXECUTABLE.key(), executable);
+        }
+        update(executorSetting);
+    }
+
+    private static void loadJar(final URL jarUrl) {
+        // Obtain the addURL method from the URLClassLoader class
+        Method method = null;
+        try {
+            method = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
+        } catch (NoSuchMethodException | SecurityException e) {
+            logger.error(e.getMessage());
+        }
+        if (method == null) {
+            return;
+        }
+
+        // Remember the original accessibility of the method
+        boolean accessible = method.isAccessible();
+        try {
+            // Make the method accessible if it is not
+            if (!accessible) {
+                method.setAccessible(true);
+            }
+            // Obtain the system class loader
+            URLClassLoader classLoader = (URLClassLoader) ClassLoader.getSystemClassLoader();
+            // Add the jar path to the system URL path
+            method.invoke(classLoader, jarUrl);
+        } catch (Exception e) {
+            logger.error(e.getMessage());
+        } finally {
+            method.setAccessible(accessible);
+        }
+    }
+
+    public String explainSql(String statement, ExplainDetail... extraDetails) {
+        statement = pretreatStatement(statement);
+        if (pretreatExecute(statement).isNoExecute()) {
+            return "";
+        }
+
+        return stEnvironment.explainSql(statement, extraDetails);
+    }
+
+    public SqlExplainResult explainSqlRecord(String statement, ExplainDetail... extraDetails) {
+        statement = pretreatStatement(statement);
+        if (Asserts.isNotNullString(statement) && !pretreatExecute(statement).isNoExecute()) {
+            return stEnvironment.explainSqlRecord(statement, extraDetails);
+        }
+
+        return null;
+    }
+
+    public ObjectNode getStreamGraph(String statement) {
+        statement = pretreatStatement(statement);
+        if (pretreatExecute(statement).isNoExecute()) {
+            return null;
+        }
+
+        return stEnvironment.getStreamGraph(statement);
+    }
+
+    public ObjectNode getStreamGraph(List<String> statements) {
+        StreamGraph streamGraph = stEnvironment.getStreamGraphFromInserts(statements);
+        return getStreamGraphJsonNode(streamGraph);
+    }
+
+    private ObjectNode getStreamGraphJsonNode(StreamGraph streamGraph) {
+        JSONGenerator jsonGenerator = new JSONGenerator(streamGraph);
+        String json = jsonGenerator.getJSON();
+        ObjectMapper mapper = new ObjectMapper();
+        ObjectNode objectNode = mapper.createObjectNode();
+        try {
+            objectNode = (ObjectNode) mapper.readTree(json);
+        } catch (JsonProcessingException e) {
+            logger.error(e.getMessage(), e);
+        }
+
+        return objectNode;
+    }
+
+    public StreamGraph getStreamGraph() {
+        return environment.getStreamGraph();
+    }
+
+    public ObjectNode getStreamGraphFromDataStream(List<String> statements) {
+        for (String statement : statements) {
+            executeSql(statement);
+        }
+
+        StreamGraph streamGraph = getStreamGraph();
+        return getStreamGraphJsonNode(streamGraph);
+    }
+
+    public JobPlanInfo getJobPlanInfo(List<String> statements) {
+        return stEnvironment.getJobPlanInfo(statements);
+    }
+
+    public JobPlanInfo getJobPlanInfoFromDataStream(List<String> statements) {
+        for (String statement : statements) {
+            executeSql(statement);
+        }
+        StreamGraph streamGraph = getStreamGraph();
+        return new JobPlanInfo(JsonPlanGenerator.generatePlan(streamGraph.getJobGraph()));
+    }
+
+    public CatalogManager getCatalogManager() {
+        return stEnvironment.getCatalogManager();
+    }
+
+    public JobGraph getJobGraphFromInserts(List<String> statements) {
+        return stEnvironment.getJobGraphFromInserts(statements);
+    }
+
+    public StatementSet createStatementSet() {
+        return stEnvironment.createStatementSet();
+    }
+
+    public TableResult executeStatementSet(List<String> statements) {
+        StatementSet statementSet = stEnvironment.createStatementSet();
+        for (String item : statements) {
+            statementSet.addInsertSql(item);
+        }
+        return statementSet.execute();
+    }
+
+    public String explainStatementSet(List<String> statements) {
+        StatementSet statementSet = stEnvironment.createStatementSet();
+        for (String item : statements) {
+            statementSet.addInsertSql(item);
+        }
+        return statementSet.explain();
+    }
+
+    public void submitSql(String statements) {
+        executeSql(statements);
+    }
+
+    public void submitSql(String statements, String sqlSeparator) {
+        executeSql(statements, sqlSeparator);
+    }
+
+    public void submitStatementSet(List<String> statements) {
+        executeStatementSet(statements);
+    }
+
+    public boolean parseAndLoadConfiguration(String statement) {
+        return stEnvironment.parseAndLoadConfiguration(statement, environment, setConfig);
+    }
+
+    public List<LineageRel> getLineage(String statement) {
+        return stEnvironment.getLineage(statement);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/ExecutorSetting.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/ExecutorSetting.java
new file mode 100644
index 0000000..481a58f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/ExecutorSetting.java
@@ -0,0 +1,150 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
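`executeStatementSet` above batches multiple INSERTs into a single Flink job via a `StatementSet`. A short sketch; the source and sink tables are assumed to already exist in the session:

```java
import java.util.Arrays;
import java.util.List;

import net.srt.flink.executor.executor.Executor;
import net.srt.flink.executor.executor.ExecutorSetting;

public class StatementSetDemo {
    public static void main(String[] args) {
        Executor executor = Executor.buildLocalExecutor(ExecutorSetting.DEFAULT);
        // Both inserts are compiled and submitted as one job.
        List<String> inserts = Arrays.asList(
                "INSERT INTO sink_a SELECT * FROM source_a",
                "INSERT INTO sink_b SELECT * FROM source_b");
        System.out.println(executor.explainStatementSet(inserts));
        executor.submitStatementSet(inserts);
    }
}
```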
+ * + */ + +package net.srt.flink.executor.executor; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.assertion.Asserts; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * ExecutorSetting + * + * @author zrx + * @since 2021/5/25 13:43 + **/ +@Setter +@Getter +public class ExecutorSetting { + + private static final Logger log = LoggerFactory.getLogger(ExecutorSetting.class); + public static final ExecutorSetting DEFAULT = new ExecutorSetting(0, 1, true); + + public static final String CHECKPOINT_CONST = "checkpoint"; + public static final String PARALLELISM_CONST = "parallelism"; + public static final String USE_SQL_FRAGMENT = "useSqlFragment"; + public static final String USE_STATEMENT_SET = "useStatementSet"; + public static final String USE_BATCH_MODEL = "useBatchModel"; + public static final String SAVE_POINT_PATH = "savePointPath"; + public static final String JOB_NAME = "jobName"; + public static final String CONFIG_CONST = "config"; + + private static final ObjectMapper mapper = new ObjectMapper(); + private boolean useBatchModel; + private Integer checkpoint; + private Integer parallelism; + private boolean useSqlFragment; + private boolean useStatementSet; + private String savePointPath; + private String jobName; + private Map config; + + public ExecutorSetting(boolean useSqlFragment) { + this(null, useSqlFragment); + } + + public ExecutorSetting(Integer checkpoint) { + this(checkpoint, false); + } + + public ExecutorSetting(Integer checkpoint, boolean useSqlFragment) { + this(checkpoint, null, useSqlFragment, null, null); + } + + public ExecutorSetting(Integer checkpoint, Integer parallelism, boolean useSqlFragment) { + this(checkpoint, parallelism, useSqlFragment, null, null); + } + + public ExecutorSetting(Integer checkpoint, Integer parallelism, boolean useSqlFragment, String savePointPath, String jobName) { + this(checkpoint, parallelism, useSqlFragment, savePointPath, jobName, null); + } + + public ExecutorSetting(Integer checkpoint, Integer parallelism, boolean useSqlFragment, String savePointPath) { + this(checkpoint, parallelism, useSqlFragment, savePointPath, null, null); + } + + public ExecutorSetting(Integer checkpoint, Integer parallelism, boolean useSqlFragment, String savePointPath, String jobName, Map config) { + this(checkpoint, parallelism, useSqlFragment, false, false, savePointPath, jobName, config); + } + + public ExecutorSetting(Integer checkpoint, Integer parallelism, boolean useSqlFragment, boolean useStatementSet, boolean useBatchModel, String savePointPath, String jobName, + Map config) { + this.checkpoint = checkpoint; + this.parallelism = parallelism; + this.useSqlFragment = useSqlFragment; + this.useStatementSet = useStatementSet; + this.useBatchModel = useBatchModel; + this.savePointPath = savePointPath; + this.jobName = jobName; + this.config = config; + } + + public static ExecutorSetting build(Integer checkpoint, Integer parallelism, boolean useSqlFragment, boolean useStatementSet, boolean useBatchModel, String savePointPath, String jobName, + String configJson) { + List> configList = new ArrayList<>(); + if (Asserts.isNotNullString(configJson)) { + try { + configList = mapper.readValue(configJson, ArrayList.class); + } catch (JsonProcessingException e) { + log.error(e.getMessage()); + } + 
} + + Map config = new HashMap<>(); + for (Map item : configList) { + if (Asserts.isNotNull(item) + && Asserts.isAllNotNullString(item.get("key"), item.get("value"))) { + config.put(item.get("key"), item.get("value")); + } + } + return new ExecutorSetting(checkpoint, parallelism, useSqlFragment, useStatementSet, useBatchModel, + savePointPath, jobName, config); + } + + public static ExecutorSetting build(Map settingMap) { + Integer checkpoint = Integer.valueOf(settingMap.get(CHECKPOINT_CONST)); + Integer parallelism = Integer.valueOf(settingMap.get(PARALLELISM_CONST)); + + return build(checkpoint, parallelism, "1".equals(settingMap.get(USE_SQL_FRAGMENT)), "1".equals(settingMap.get(USE_STATEMENT_SET)), "1".equals(settingMap.get(USE_BATCH_MODEL)), + settingMap.get(SAVE_POINT_PATH), settingMap.get(JOB_NAME), settingMap.get(CONFIG_CONST)); + } + + public boolean isValidParallelism() { + return this.getParallelism() != null && this.getParallelism() > 0; + } + + public boolean isValidJobName() { + return this.getJobName() != null && !"".equals(this.getJobName()); + } + + @Override + public String toString() { + return String.format("ExecutorSetting{checkpoint=%d, parallelism=%d, useSqlFragment=%s, useStatementSet=%s, savePointPath='%s', jobName='%s', config=%s}", checkpoint, parallelism, + useSqlFragment, useStatementSet, savePointPath, jobName, config); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/LocalBatchExecutor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/LocalBatchExecutor.java new file mode 100644 index 0000000..6f65dd5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/LocalBatchExecutor.java @@ -0,0 +1,56 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
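`ExecutorSetting.build(Map)` reads its boolean flags as the literal string `"1"` and parses `config` from a JSON array of key/value objects. A sketch of a settings map that exercises both (all values are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

import net.srt.flink.executor.executor.ExecutorSetting;

public class ExecutorSettingDemo {
    public static void main(String[] args) {
        Map<String, String> settingMap = new HashMap<>();
        settingMap.put(ExecutorSetting.CHECKPOINT_CONST, "10000");
        settingMap.put(ExecutorSetting.PARALLELISM_CONST, "2");
        settingMap.put(ExecutorSetting.USE_SQL_FRAGMENT, "1"); // "1" means enabled
        settingMap.put(ExecutorSetting.USE_STATEMENT_SET, "0");
        settingMap.put(ExecutorSetting.USE_BATCH_MODEL, "0");
        settingMap.put(ExecutorSetting.JOB_NAME, "demo-job");
        // config is a JSON array of {"key": ..., "value": ...} entries.
        settingMap.put(ExecutorSetting.CONFIG_CONST,
                "[{\"key\":\"pipeline.name\",\"value\":\"demo-job\"}]");

        ExecutorSetting setting = ExecutorSetting.build(settingMap);
        System.out.println(setting);
    }
}
```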
+ * + */ + +package net.srt.flink.executor.executor; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.executor.CustomTableEnvironmentImpl; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.RestOptions; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +/** + * LocalBatchExecutor + * + * @author zrx + * @since 2022/2/4 0:04 + */ +public class LocalBatchExecutor extends Executor { + + public LocalBatchExecutor(ExecutorSetting executorSetting) { + this.executorSetting = executorSetting; + if (Asserts.isNotNull(executorSetting.getConfig())) { + Configuration configuration = Configuration.fromMap(executorSetting.getConfig()); + if (configuration.contains(RestOptions.PORT)) { + this.environment = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(configuration); + } else { + this.environment = StreamExecutionEnvironment.createLocalEnvironment(configuration); + } + } else { + this.environment = StreamExecutionEnvironment.createLocalEnvironment(); + } + init(); + } + + @Override + CustomTableEnvironment createCustomTableEnvironment() { + return CustomTableEnvironmentImpl.createBatch(environment); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/LocalStreamExecutor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/LocalStreamExecutor.java new file mode 100644 index 0000000..e4ba476 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/LocalStreamExecutor.java @@ -0,0 +1,56 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
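`LocalBatchExecutor` (and `LocalStreamExecutor` below) only starts the local Flink web UI when `rest.port` is present in the config. A minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

import net.srt.flink.executor.executor.Executor;
import net.srt.flink.executor.executor.ExecutorSetting;
import org.apache.flink.configuration.RestOptions;

public class LocalWebUiDemo {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        // Presence of rest.port switches to createLocalEnvironmentWithWebUI.
        config.put(RestOptions.PORT.key(), "8081");

        // useBatchModel = true -> LocalBatchExecutor.
        ExecutorSetting setting = new ExecutorSetting(
                null, 1, false, false, true, null, "local-batch-demo", config);
        Executor executor = Executor.buildLocalExecutor(setting);
        System.out.println(executor.getExecutorSetting().getJobName());
    }
}
```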
+ *
+ */
+
+package net.srt.flink.executor.executor;
+
+import net.srt.flink.client.base.executor.CustomTableEnvironment;
+import net.srt.flink.client.executor.CustomTableEnvironmentImpl;
+import net.srt.flink.common.assertion.Asserts;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+/**
+ * LocalStreamExecutor
+ *
+ * @author zrx
+ * @since 2021/5/25 13:48
+ **/
+public class LocalStreamExecutor extends Executor {
+
+    public LocalStreamExecutor(ExecutorSetting executorSetting) {
+        this.executorSetting = executorSetting;
+        if (Asserts.isNotNull(executorSetting.getConfig())) {
+            Configuration configuration = Configuration.fromMap(executorSetting.getConfig());
+            if (configuration.contains(RestOptions.PORT)) {
+                this.environment = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(configuration);
+            } else {
+                this.environment = StreamExecutionEnvironment.createLocalEnvironment(configuration);
+            }
+        } else {
+            this.environment = StreamExecutionEnvironment.createLocalEnvironment();
+        }
+        init();
+    }
+
+    @Override
+    CustomTableEnvironment createCustomTableEnvironment() {
+        return CustomTableEnvironmentImpl.create(environment);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/RemoteBatchExecutor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/RemoteBatchExecutor.java
new file mode 100644
index 0000000..6fa8327
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/RemoteBatchExecutor.java
@@ -0,0 +1,61 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.executor.executor; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.executor.CustomTableEnvironmentImpl; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +/** + * RemoteBatchExecutor + * + * @author zrx + * @since 2022/2/7 22:10 + */ +public class RemoteBatchExecutor extends Executor { + + public RemoteBatchExecutor(EnvironmentSetting environmentSetting, ExecutorSetting executorSetting) { + this.environmentSetting = environmentSetting; + this.executorSetting = executorSetting; + if (Asserts.isNotNull(executorSetting.getConfig())) { + Configuration configuration = Configuration.fromMap(executorSetting.getConfig()); + this.environment = + StreamExecutionEnvironment.createRemoteEnvironment( + environmentSetting.getHost(), + environmentSetting.getPort(), + configuration, + environmentSetting.getJarFiles()); + } else { + this.environment = + StreamExecutionEnvironment.createRemoteEnvironment( + environmentSetting.getHost(), + environmentSetting.getPort(), + environmentSetting.getJarFiles()); + } + init(); + } + + @Override + CustomTableEnvironment createCustomTableEnvironment() { + return CustomTableEnvironmentImpl.createBatch(environment); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/RemoteStreamExecutor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/RemoteStreamExecutor.java new file mode 100644 index 0000000..b544155 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/RemoteStreamExecutor.java @@ -0,0 +1,54 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.executor; + +import net.srt.flink.client.base.executor.CustomTableEnvironment; +import net.srt.flink.client.executor.CustomTableEnvironmentImpl; +import net.srt.flink.common.assertion.Asserts; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.streaming.api.environment.RemoteStreamEnvironment; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; + +/** + * RemoteStreamExecutor + * + * @author zrx + * @since 2021/5/25 14:05 + **/ +public class RemoteStreamExecutor extends Executor { + + public RemoteStreamExecutor(EnvironmentSetting environmentSetting, ExecutorSetting executorSetting) { + this.environmentSetting = environmentSetting; + this.executorSetting = executorSetting; + Configuration configuration = + Asserts.isNotNull(executorSetting.getConfig()) + ? 
Configuration.fromMap(executorSetting.getConfig()) + : null; + this.environment = + new RemoteStreamEnvironment( + environmentSetting.getHost(), environmentSetting.getPort(), configuration); + init(); + } + + @Override + CustomTableEnvironment createCustomTableEnvironment() { + return CustomTableEnvironmentImpl.create(environment); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/SqlManager.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/SqlManager.java new file mode 100644 index 0000000..58567de --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/executor/SqlManager.java @@ -0,0 +1,298 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.executor; + +import net.srt.flink.client.executor.CustomTableEnvironmentImpl; +import net.srt.flink.client.executor.CustomTableResultImpl; +import net.srt.flink.client.executor.TableSchemaField; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.SystemConfiguration; +import net.srt.flink.executor.constant.FlinkSQLConstant; +import org.apache.flink.table.api.DataTypes; +import org.apache.flink.table.api.ExpressionParserException; +import org.apache.flink.table.api.Table; +import org.apache.flink.table.api.TableResult; +import org.apache.flink.table.catalog.exceptions.CatalogException; +import org.apache.flink.types.Row; +import org.apache.flink.util.StringUtils; + +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Calendar; +import java.util.Collections; +import java.util.Date; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import static java.lang.String.format; +import static org.apache.flink.util.Preconditions.checkArgument; +import static org.apache.flink.util.Preconditions.checkNotNull; + +/** + * Flink Sql Fragment Manager + * + * @author zrx + * @since 2021/6/7 22:06 + **/ +public final class SqlManager { + + public static final String FRAGMENT = "fragment"; + static final String SHOW_FRAGMENTS = "SHOW FRAGMENTS"; + private final Map sqlFragments; + + public SqlManager() { + sqlFragments = new HashMap<>(); + } + + /** + * Get names of sql fragments loaded. + * + * @return a list of names of sql fragments loaded + */ + public List listSqlFragments() { + return new ArrayList<>(sqlFragments.keySet()); + } + + /** + * Registers a fragment of sql under the given name. The sql fragment name must be unique. 
+ * + * @param sqlFragmentName name under which to register the given sql fragment + * @param sqlFragment a fragment of sql to register + * @throws CatalogException if the registration of the sql fragment under the given name failed. + * But at the moment, with CatalogException, not SqlException + */ + public void registerSqlFragment(String sqlFragmentName, String sqlFragment) { + checkArgument(!StringUtils.isNullOrWhitespaceOnly(sqlFragmentName), + "sql fragment name cannot be null or empty."); + checkNotNull(sqlFragment, "sql fragment cannot be null"); + + if (sqlFragments.containsKey(sqlFragmentName)) { + throw new CatalogException(format("The fragment of sql %s already exists.", sqlFragmentName)); + } + //zrx trim + sqlFragments.put(sqlFragmentName.trim(), sqlFragment); + } + + /** + * Registers a fragment map of sql under the given name. The sql fragment name must be unique. + * + * @param sqlFragmentMap a fragment map of sql to register + * @throws CatalogException if the registration of the sql fragment under the given name failed. + * But at the moment, with CatalogException, not SqlException + */ + public void registerSqlFragment(Map sqlFragmentMap) { + if (Asserts.isNotNull(sqlFragmentMap)) { + sqlFragments.putAll(sqlFragmentMap); + } + } + + /** + * Unregisters a fragment of sql under the given name. The sql fragment name must be existed. + * + * @param sqlFragmentName name under which to unregister the given sql fragment. + * @param ignoreIfNotExists If false exception will be thrown if the fragment of sql to be + * altered does not exist. + * @throws CatalogException if the unregistration of the sql fragment under the given name + * failed. But at the moment, with CatalogException, not SqlException + */ + public void unregisterSqlFragment(String sqlFragmentName, boolean ignoreIfNotExists) { + checkArgument(!StringUtils.isNullOrWhitespaceOnly(sqlFragmentName), + "sql fragmentName name cannot be null or empty."); + + if (sqlFragments.containsKey(sqlFragmentName)) { + sqlFragments.remove(sqlFragmentName); + } else if (!ignoreIfNotExists) { + throw new CatalogException(format("The fragment of sql %s does not exist.", sqlFragmentName)); + } + } + + /** + * Get a fragment of sql under the given name. The sql fragment name must be existed. + * + * @param sqlFragmentName name under which to unregister the given sql fragment. + * @throws CatalogException if the unregistration of the sql fragment under the given name + * failed. 
But at the moment, with CatalogException, not SqlException + */ + public String getSqlFragment(String sqlFragmentName) { + checkArgument(!StringUtils.isNullOrWhitespaceOnly(sqlFragmentName), + "sql fragmentName name cannot be null or empty."); + // zrx trim + sqlFragmentName = sqlFragmentName.trim(); + if (sqlFragments.containsKey(sqlFragmentName)) { + return sqlFragments.get(sqlFragmentName); + } + + if (isInnerDateVar(sqlFragmentName)) { + return parseDateVar(sqlFragmentName); + } + + throw new CatalogException(format("The fragment of sql %s does not exist.", sqlFragmentName)); + } + + public TableResult getSqlFragmentResult(String sqlFragmentName) { + if (Asserts.isNullString(sqlFragmentName)) { + return CustomTableResultImpl.buildTableResult( + Collections.singletonList(new TableSchemaField(FRAGMENT, DataTypes.STRING())), new ArrayList<>()); + } + + String sqlFragment = getSqlFragment(sqlFragmentName); + return CustomTableResultImpl.buildTableResult( + Collections.singletonList(new TableSchemaField(FRAGMENT, DataTypes.STRING())), + Collections.singletonList(Row.of(sqlFragment))); + } + + /** + * Get a fragment of sql under the given name. The sql fragment name must be existed. + * + * @throws CatalogException if the unregistration of the sql fragment under the given name + * failed. But at the moment, with CatalogException, not SqlException + */ + public Map getSqlFragment() { + return sqlFragments; + } + + public TableResult getSqlFragments() { + List rows = new ArrayList<>(); + for (String key : sqlFragments.keySet()) { + rows.add(Row.of(key)); + } + return CustomTableResultImpl.buildTableResult( + Collections.singletonList(new TableSchemaField("fragmentName", DataTypes.STRING())), rows); + } + + public Iterator getSqlFragmentsIterator() { + return sqlFragments.entrySet().iterator(); + } + + public Table getSqlFragmentsTable(CustomTableEnvironmentImpl environment) { + List keys = new ArrayList<>(sqlFragments.keySet()); + return environment.fromValues(keys); + } + + public boolean checkShowFragments(String sql) { + return SHOW_FRAGMENTS.equals(sql.trim().toUpperCase()); + } + + /** + * Parse some variables under the given sql. + * + * @param statement A sql will be parsed. + * @throws ExpressionParserException if the name of the variable under the given sql failed. + */ + public String parseVariable(String statement, String sqlSeparator) { + if (Asserts.isNullString(statement)) { + return statement; + } + //zrx + String[] values = statement.split(sqlSeparator); + StringBuilder sb = new StringBuilder(); + for (String assignment : values) { + String[] splits = assignment.split(FlinkSQLConstant.FRAGMENTS, 2); + if (splits.length == 2) { + if (splits[0].trim().isEmpty()) { + throw new ExpressionParserException("Illegal variable name."); + } + this.registerSqlFragment(splits[0], replaceVariable(splits[1])); + } else if (splits.length == 1) { + // string not contains FlinkSQLConstant.FRAGMENTS + sb.append(replaceVariable(assignment)); + } else { + throw new ExpressionParserException("Illegal variable definition."); + } + } + return sb.toString(); + } + + /** + * Replace some variables under the given sql. + * + * @param statement A sql will be replaced. 
+     */
+    private String replaceVariable(String statement) {
+        Pattern p = Pattern.compile("\\$\\{(.+?)}");
+        Matcher m = p.matcher(statement);
+        StringBuffer sb = new StringBuffer();
+        while (m.find()) {
+            String key = m.group(1);
+            String value = this.getSqlFragment(key);
+            m.appendReplacement(sb, "");
+
+            // handle the built-in date variables
+            if (value == null && isInnerDateVar(key)) {
+                value = parseDateVar(key);
+            }
+
+            sb.append(value == null ? "" : value);
+        }
+        m.appendTail(sb);
+        return sb.toString();
+    }
+
+    /**
+     * Verify whether the given key is a built-in date variable.
+     */
+    private boolean isInnerDateVar(String key) {
+        return key.startsWith(FlinkSQLConstant.INNER_DATETIME_KEY);
+    }
+
+    /**
+     * Parse a built-in date variable, applying the day offset after "+" or "-".
+     */
+    private String parseDateVar(String key) {
+        int days = 0;
+        try {
+            if (key.contains("+")) {
+                int s = key.indexOf("+") + 1;
+                String num = key.substring(s).trim();
+                days = Integer.parseInt(num);
+            } else if (key.contains("-")) {
+                int s = key.indexOf("-") + 1;
+                String num = key.substring(s).trim();
+                days = Integer.parseInt(num) * -1;
+            }
+        } catch (Exception e) {
+            e.printStackTrace();
+            return null;
+        }
+
+        SimpleDateFormat dtf = new SimpleDateFormat(FlinkSQLConstant.INNER_DATETIME_FORMAT);
+        Date endDate = new Date();
+        Calendar calendar = Calendar.getInstance();
+        calendar.setTime(endDate);
+        calendar.add(Calendar.DAY_OF_YEAR, days);
+        Date startDate = calendar.getTime();
+
+        return dtf.format(startDate);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/interceptor/FlinkInterceptor.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/interceptor/FlinkInterceptor.java
new file mode 100644
index 0000000..e6d810e
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/interceptor/FlinkInterceptor.java
@@ -0,0 +1,58 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
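`SqlManager` is what makes the `:=` fragment syntax from `FlinkSQLConstant` work: `parseVariable` registers each `name := value` pair and `replaceVariable` expands `${name}` references, with `_CURRENT_DATE_` handled as a built-in (e.g. `${_CURRENT_DATE_ - 1}` resolves to yesterday in `yyyyMMdd`). A minimal sketch, using the `";\n"` separator defined above:

```java
import net.srt.flink.executor.executor.SqlManager;

public class SqlFragmentDemo {
    public static void main(String[] args) {
        SqlManager sqlManager = new SqlManager();
        String script = "tbl := ods_user;\n"
                + "SELECT * FROM ${tbl} WHERE dt = '${_CURRENT_DATE_ - 1}'";
        // The fragment definition is consumed; ${...} references are expanded.
        System.out.println(sqlManager.parseVariable(script, ";\n"));
    }
}
```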
+ * + */ + +package net.srt.flink.executor.interceptor; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.utils.SqlUtil; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.trans.Operation; +import net.srt.flink.executor.trans.Operations; +import org.apache.flink.table.api.TableResult; + +/** + * FlinkInterceptor + * + * @author zrx + * @since 2021/6/11 22:17 + */ +public class FlinkInterceptor { + private FlinkInterceptor() { + } + + public static String pretreatStatement(Executor executor, String statement, String sqlSeparator) { + statement = SqlUtil.removeNote(statement); + if (executor.isUseSqlFragment()) { + statement = executor.getSqlManager().parseVariable(statement, sqlSeparator); + } + return statement.trim(); + } + + // return false to continue with executeSql + public static FlinkInterceptorResult build(Executor executor, String statement) { + boolean noExecute = false; + TableResult tableResult = null; + Operation operation = Operations.buildOperation(statement); + if (Asserts.isNotNull(operation)) { + tableResult = operation.build(executor); + noExecute = operation.noExecute(); + } + return FlinkInterceptorResult.build(noExecute, tableResult); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/interceptor/FlinkInterceptorResult.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/interceptor/FlinkInterceptorResult.java new file mode 100644 index 0000000..1698fb4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/interceptor/FlinkInterceptorResult.java @@ -0,0 +1,66 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
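`FlinkInterceptor.pretreatStatement` strips comments and expands SQL fragments before a statement reaches `executeSql`, while `build` lets custom `Operation`s short-circuit execution. A sketch of the pretreatment step only, assuming `SqlUtil.removeNote` strips `--` comment lines:

```java
import net.srt.flink.executor.executor.Executor;
import net.srt.flink.executor.executor.ExecutorSetting;
import net.srt.flink.executor.interceptor.FlinkInterceptor;

public class InterceptorDemo {
    public static void main(String[] args) {
        // DEFAULT enables SQL fragments, so ${...} variables get expanded here.
        Executor executor = Executor.buildLocalExecutor(ExecutorSetting.DEFAULT);
        String statement = "-- a comment line\ntbl := ods_user;\nSELECT * FROM ${tbl}";
        System.out.println(FlinkInterceptor.pretreatStatement(executor, statement, ";\n"));
    }
}
```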
+ * + */ + +package net.srt.flink.executor.interceptor; + +import org.apache.flink.table.api.TableResult; + +/** + * FlinkInterceptorResult + * + * @author zrx + * @since 2022/2/17 16:36 + **/ +public class FlinkInterceptorResult { + + private boolean noExecute; + private TableResult tableResult; + + public FlinkInterceptorResult() { + } + + public FlinkInterceptorResult(boolean noExecute, TableResult tableResult) { + this.noExecute = noExecute; + this.tableResult = tableResult; + } + + public boolean isNoExecute() { + return noExecute; + } + + public void setNoExecute(boolean noExecute) { + this.noExecute = noExecute; + } + + public TableResult getTableResult() { + return tableResult; + } + + public void setTableResult(TableResult tableResult) { + this.tableResult = tableResult; + } + + public static FlinkInterceptorResult buildResult(TableResult tableResult) { + return new FlinkInterceptorResult(false, tableResult); + } + + public static FlinkInterceptorResult build(boolean noExecute, TableResult tableResult) { + return new FlinkInterceptorResult(noExecute, tableResult); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/AddJarSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/AddJarSqlParser.java new file mode 100644 index 0000000..741e91c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/AddJarSqlParser.java @@ -0,0 +1,70 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+import cn.hutool.core.io.FileUtil;
+import cn.hutool.core.util.ReUtil;
+import cn.hutool.core.util.StrUtil;
+
+import java.io.File;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+/**
+ * @author ZackYoung
+ * @since 0.7.0
+ */
+public class AddJarSqlParser {
+
+    private static final String ADD_JAR = "(add\\s+jar)\\s+'(.*.jar)'";
+    private static final Pattern ADD_JAR_PATTERN =
+            Pattern.compile(ADD_JAR, Pattern.CASE_INSENSITIVE);
+
+    protected static List<String> patternStatements(String[] statements) {
+        return Stream.of(statements)
+                .filter(s -> ReUtil.isMatch(ADD_JAR_PATTERN, s))
+                .map(x -> ReUtil.findAllGroup0(ADD_JAR_PATTERN, x).get(0))
+                .collect(Collectors.toList());
+    }
+
+    public static Set<File> getAllFilePath(String[] statements) {
+        Set<File> fileSet = new HashSet<>();
+        patternStatements(statements).stream()
+                .map(x -> ReUtil.findAll(ADD_JAR_PATTERN, x, 2).get(0))
+                .distinct()
+                .forEach(
+                        path -> {
+                            if (!FileUtil.exist(path)) {
+                                throw new RuntimeException(
+                                        StrUtil.format("file : {} not exists!", path));
+                            }
+                            fileSet.add(FileUtil.file(path));
+                        });
+        return fileSet;
+    }
+
+    public static Set<File> getAllFilePath(String statements) {
+        return getAllFilePath(new String[] {statements});
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/BaseSingleSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/BaseSingleSqlParser.java
new file mode 100644
index 0000000..ff9c399
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/BaseSingleSqlParser.java
@@ -0,0 +1,84 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
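A quick usage sketch for the parser above. The jar path is invented for the example; `getAllFilePath` throws if the file does not exist on disk:

```java
import java.io.File;
import java.util.Set;
import net.srt.flink.executor.parser.AddJarSqlParser;

public class AddJarParseSketch {
    public static void main(String[] args) {
        // Extracts the quoted path from an ADD JAR statement; the file must exist.
        Set<File> jars = AddJarSqlParser.getAllFilePath("ADD JAR '/opt/flink/udf/demo-udf.jar'");
        jars.forEach(jar -> System.out.println(jar.getAbsolutePath()));
    }
}
```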
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * BaseSingleSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:43
+ */
+public abstract class BaseSingleSqlParser {
+
+    // the original SQL statement
+    protected String originalSql;
+
+    // the segments the SQL statement is split into
+    protected List<SqlSegment> segments;
+
+    /**
+     * Constructor: takes the original SQL statement and splits it.
+     **/
+    public BaseSingleSqlParser(String originalSql) {
+        this.originalSql = originalSql;
+        segments = new ArrayList<>();
+        initializeSegments();
+    }
+
+    /**
+     * Initialize the segments; every subclass must implement this.
+     **/
+    protected abstract void initializeSegments();
+
+    /**
+     * Split originalSql into individual segments.
+     **/
+    protected Map<String, List<String>> splitSql2Segment() {
+        Map<String, List<String>> map = new HashMap<>();
+        for (SqlSegment sqlSegment : segments) {
+            sqlSegment.parse(originalSql);
+            if (Asserts.isNotNullString(sqlSegment.getStart())) {
+                map.put(sqlSegment.getType().toUpperCase(), sqlSegment.getBodyPieces());
+            }
+        }
+        return map;
+    }
+
+    /**
+     * Return the fully parsed SQL statement.
+     **/
+    public String getParsedSql() {
+        StringBuilder sb = new StringBuilder();
+        for (SqlSegment sqlSegment : segments) {
+            sb.append(sqlSegment.getParsedSqlSegment()).append("\n");
+        }
+        return sb.toString().replaceAll("\n+", "\n");
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/CreateAggTableSelectSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/CreateAggTableSelectSqlParser.java
new file mode 100644
index 0000000..7e09eec
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/CreateAggTableSelectSqlParser.java
@@ -0,0 +1,44 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * CreateAggTableSelectSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:56
+ */
+public class CreateAggTableSelectSqlParser extends BaseSingleSqlParser {
+
+    public CreateAggTableSelectSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("(create\\s+aggtable)(.+)(as\\s+select)", "[,]"));
+        segments.add(new SqlSegment("(select)(.+)(from)", "[,]"));
+        segments.add(new SqlSegment("(from)(.+?)( where | on | having | group\\s+by | order\\s+by | agg\\s+by | ENDOFSQL)", "(,|\\s+left\\s+join\\s+|\\s+right\\s+join\\s+|\\s+inner\\s+join\\s+)"));
+        segments.add(new SqlSegment("(where|on|having)(.+?)( group\\s+by | order\\s+by | agg\\s+by | ENDOFSQL)", "(and|or)"));
+        segments.add(new SqlSegment("(group\\s+by)(.+?)( order\\s+by | agg\\s+by | ENDOFSQL)", "[,]"));
+        segments.add(new SqlSegment("(order\\s+by)(.+?)( agg\\s+by | ENDOFSQL)", "[,]"));
+        segments.add(new SqlSegment("(agg\\s+by)(.+?)( ENDOFSQL)", "[,]"));
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/CreateCDCSourceSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/CreateCDCSourceSqlParser.java
new file mode 100644
index 0000000..92d5d1f
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/CreateCDCSourceSqlParser.java
@@ -0,0 +1,39 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * CreateCDCSourceSqlParser
+ *
+ * @author zrx
+ * @since 2022/1/29 23:39
+ */
+public class CreateCDCSourceSqlParser extends BaseSingleSqlParser {
+
+    public CreateCDCSourceSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("CDCSOURCE", "(execute\\s+cdcsource\\s+)(.+)(\\s+with\\s+\\()", "[,]"));
+        segments.add(new SqlSegment("WITH", "(with\\s+\\()(.+)(\\))", "',"));
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/DeleteSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/DeleteSqlParser.java
new file mode 100644
index 0000000..ba27c78
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/DeleteSqlParser.java
@@ -0,0 +1,40 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * DeleteSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:51
+ */
+public class DeleteSqlParser extends BaseSingleSqlParser {
+
+    public DeleteSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("(delete\\s+from)(.+)( where | ENDOFSQL)", "[,]"));
+        segments.add(new SqlSegment("(where)(.+)( ENDOFSQL)", "(and|or)"));
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/InsertSelectSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/InsertSelectSqlParser.java
new file mode 100644
index 0000000..2814d28
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/InsertSelectSqlParser.java
@@ -0,0 +1,43 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * InsertSelectSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:53
+ */
+public class InsertSelectSqlParser extends BaseSingleSqlParser {
+
+    public InsertSelectSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("(insert\\s+into)(.+)( select )", "[,]"));
+        segments.add(new SqlSegment("(select)(.+)(from)", "[,]"));
+        segments.add(new SqlSegment("(from)(.+?)( where | on | having | group\\s+by | order\\s+by | ENDOFSQL)", "(,|\\s+left\\s+join\\s+|\\s+right\\s+join\\s+|\\s+inner\\s+join\\s+)"));
+        segments.add(new SqlSegment("(where|on|having)(.+?)( group\\s+by | order\\s+by | ENDOFSQL)", "(and|or)"));
+        segments.add(new SqlSegment("(group\\s+by)(.+?)( order\\s+by| ENDOFSQL)", "[,]"));
+        segments.add(new SqlSegment("(order\\s+by)(.+?)( ENDOFSQL)", "[,]"));
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/InsertSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/InsertSqlParser.java
new file mode 100644
index 0000000..b345bd4
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/InsertSqlParser.java
@@ -0,0 +1,48 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * InsertSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:54
+ */
+public class InsertSqlParser extends BaseSingleSqlParser {
+
+    public InsertSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("(insert\\s+into)(.+?)([(])", "[,]"));
+        segments.add(new SqlSegment("([(])(.+?)([)]\\s+values\\s+[(])", "[,]"));
+        segments.add(new SqlSegment("([)]\\s+values\\s+[(])(.+)([)]\\s+ENDOFSQL)", "[,]"));
+    }
+
+    @Override
+    public String getParsedSql() {
+        String retval = super.getParsedSql();
+        retval = retval + ")";
+        return retval;
+    }
+}
+
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SelectSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SelectSqlParser.java
new file mode 100644
index 0000000..c7373d3
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SelectSqlParser.java
@@ -0,0 +1,44 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * SelectSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:53
+ */
+public class SelectSqlParser extends BaseSingleSqlParser {
+
+    public SelectSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("(select)(.+)(from)", "[,]"));
+        segments.add(new SqlSegment("(from)(.+?)(where |group\\s+by|having|order\\s+by | ENDOFSQL)", "(,|\\s+left\\s+join\\s+|\\s+right\\s+join\\s+|\\s+inner\\s+join\\s+)"));
+        segments.add(new SqlSegment("(where)(.+?)(group\\s+by |having| order\\s+by | ENDOFSQL)", "(and|or)"));
+        segments.add(new SqlSegment("(group\\s+by)(.+?)(having|order\\s+by| ENDOFSQL)", "[,]"));
+        segments.add(new SqlSegment("(having)(.+?)(order\\s+by| ENDOFSQL)", "(and|or)"));
+        segments.add(new SqlSegment("(order\\s+by)(.+)( ENDOFSQL)", "[,]"));
+    }
+}
+
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SetSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SetSqlParser.java
new file mode 100644
index 0000000..84ec952
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SetSqlParser.java
@@ -0,0 +1,40 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * SetSqlParser
+ *
+ * @author zrx
+ * @since 2021/10/21 18:41
+ **/
+public class SetSqlParser extends BaseSingleSqlParser {
+
+    public SetSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        //SET(\s+(\S+)\s*=(.*))?
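+        // e.g. "SET execution.checkpointing.interval = 10000" is split by the two
+        // segments below into the option key and the assigned value.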
+ segments.add(new SqlSegment("(set)\\s+(.+)(\\s*=)", "[.]")); + segments.add(new SqlSegment("(=)\\s*(.*)( ENDOFSQL)", ",")); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/ShowFragmentParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/ShowFragmentParser.java new file mode 100644 index 0000000..1e4d45b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/ShowFragmentParser.java @@ -0,0 +1,39 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.parser; + +/** + * ShowFragmentsParser + * + * @author zrx + * @since 2022/2/17 16:19 + **/ +public class ShowFragmentParser extends BaseSingleSqlParser { + + public ShowFragmentParser(String originalSql) { + super(originalSql); + } + + @Override + protected void initializeSegments() { + //SHOW FRAGMENT (.+) + segments.add(new SqlSegment("FRAGMENT", "(show\\s+fragment)\\s+(.*)( ENDOFSQL)", ",")); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SingleSqlParserFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SingleSqlParserFactory.java new file mode 100644 index 0000000..c6fc60d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SingleSqlParserFactory.java @@ -0,0 +1,78 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * SingleSqlParserFactory
+ *
+ * @author zrx
+ * @since 2021/6/14 16:49
+ */
+public class SingleSqlParserFactory {
+
+    public static Map<String, List<String>> generateParser(String sql) {
+        BaseSingleSqlParser tmp = null;
+        //sql = sql.replace("\n"," ").replaceAll("\\s{1,}", " ") +" ENDOFSQL";
+        sql = sql.replace("\r\n", " ").replace("\n", " ") + " ENDOFSQL";
+        if (contains(sql, "(insert\\s+into)(.+)(select)(.+)(from)(.+)")) {
+            tmp = new InsertSelectSqlParser(sql);
+        } else if (contains(sql, "(create\\s+aggtable)(.+)(as\\s+select)(.+)")) {
+            tmp = new CreateAggTableSelectSqlParser(sql);
+        } else if (contains(sql, "(execute\\s+cdcsource)")) {
+            tmp = new CreateCDCSourceSqlParser(sql);
+        } else if (contains(sql, "(select)(.+)(from)(.+)")) {
+            tmp = new SelectSqlParser(sql);
+        } else if (contains(sql, "(delete\\s+from)(.+)")) {
+            tmp = new DeleteSqlParser(sql);
+        } else if (contains(sql, "(update)(.+)(set)(.+)")) {
+            tmp = new UpdateSqlParser(sql);
+        } else if (contains(sql, "(insert\\s+into)(.+)(values)(.+)")) {
+            tmp = new InsertSqlParser(sql);
+        //} else if (contains(sql, "(create\\s+table)(.+)")) {
+        //} else if (contains(sql, "(create\\s+database)(.+)")) {
+        //} else if (contains(sql, "(show\\s+databases)")) {
+        //} else if (contains(sql, "(use)(.+)")) {
+        } else if (contains(sql, "(set)(.+)")) {
+            tmp = new SetSqlParser(sql);
+        } else if (contains(sql, "(show\\s+fragment)\\s+(.+)")) {
+            tmp = new ShowFragmentParser(sql);
+        }
+        if (tmp == null) {
+            // fail fast with a meaningful message instead of a NullPointerException
+            throw new IllegalStateException("no single sql parser matched: " + sql);
+        }
+        return tmp.splitSql2Segment();
+    }
+
+    /**
+     * Check whether the given regular expression matches part of the SQL text.
+     *
+     * @param sql    the SQL statement to parse
+     * @param regExp the regular expression to look for
+     * @return true if the expression is found
+     **/
+    private static boolean contains(String sql, String regExp) {
+        Pattern pattern = Pattern.compile(regExp, Pattern.CASE_INSENSITIVE);
+        Matcher matcher = pattern.matcher(sql);
+        return matcher.find();
+    }
+}
+
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SqlSegment.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SqlSegment.java
new file mode 100644
index 0000000..693151e
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SqlSegment.java
@@ -0,0 +1,204 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
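A sketch of what the factory hands back, here via the UPDATE branch above; the table and values are invented, and the printed shape follows from splitSql2Segment():

```java
import java.util.List;
import java.util.Map;
import net.srt.flink.executor.parser.SingleSqlParserFactory;

public class FactorySketch {
    public static void main(String[] args) {
        // The factory appends " ENDOFSQL", picks UpdateSqlParser here, and returns
        // a map keyed by segment type, e.g.
        // {UPDATE=[t_user], SET=[name = 'bob'], WHERE=[id = 1]}
        Map<String, List<String>> parts = SingleSqlParserFactory.generateParser(
                "update t_user set name = 'bob' where id = 1");
        System.out.println(parts);
    }
}
```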
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * SqlSegment
+ *
+ * @author zrx
+ * @since 2021/6/14 16:12
+ */
+public class SqlSegment {
+    // delimiter inserted between body pieces while splitting (historically named Crlf)
+    private static final String Crlf = "|";
+    @SuppressWarnings("unused")
+    private static final String FourSpace = "    ";
+    /**
+     * Segment type, in upper case.
+     **/
+    private String type;
+    /**
+     * Leading part of the segment.
+     **/
+    private String start;
+    /**
+     * Middle (body) part of the segment.
+     **/
+    private String body;
+    /**
+     * Trailing part of the segment.
+     **/
+    private String end;
+    /**
+     * Regular expression used to split the body part.
+     **/
+    private String bodySplitPattern;
+    /**
+     * Regular expression describing this segment.
+     **/
+    private String segmentRegExp;
+    /**
+     * Body pieces after splitting.
+     **/
+    private List<String> bodyPieces;
+
+    /**
+     * Constructor.
+     *
+     * @param segmentRegExp    regular expression describing this SQL segment
+     * @param bodySplitPattern regular expression used to split the body
+     **/
+    public SqlSegment(String segmentRegExp, String bodySplitPattern) {
+        this.type = "";
+        this.start = "";
+        this.body = "";
+        this.end = "";
+        this.segmentRegExp = segmentRegExp;
+        this.bodySplitPattern = bodySplitPattern;
+        this.bodyPieces = new ArrayList<>();
+    }
+
+    public SqlSegment(String type, String segmentRegExp, String bodySplitPattern) {
+        this.type = type;
+        this.start = "";
+        this.body = "";
+        this.end = "";
+        this.segmentRegExp = segmentRegExp;
+        this.bodySplitPattern = bodySplitPattern;
+        this.bodyPieces = new ArrayList<>();
+    }
+
+    /**
+     * Find the part of the SQL that matches segmentRegExp and assign it to the
+     * start, body and end properties.
+     **/
+    public void parse(String sql) {
+        Pattern pattern = Pattern.compile(segmentRegExp, Pattern.CASE_INSENSITIVE);
+        Matcher matcher = pattern.matcher(sql);
+        while (matcher.find()) {
+            start = matcher.group(1);
+            body = matcher.group(2);
+            end = matcher.group(3);
+            if (Asserts.isNullString(type)) {
+                type = start.replace("\n", " ").replaceAll("\\s{1,}", " ").toUpperCase();
+            }
+            parseBody();
+        }
+    }
+
+    /**
+     * Parse the body part.
+     **/
+    private void parseBody() {
+        List<String> ls = new ArrayList<>();
+        Pattern p = Pattern.compile(bodySplitPattern, Pattern.CASE_INSENSITIVE);
+        body = body.trim();
+        Matcher m = p.matcher(body);
+        StringBuffer sb = new StringBuffer();
+        boolean result = m.find();
+        while (result) {
+            m.appendReplacement(sb, Crlf);
+            result = m.find();
+        }
+        m.appendTail(sb);
+        //ls.add(start);
+        String[] arr = sb.toString().split("[|]");
+        int arrLength = arr.length;
+        for (int i = 0; i < arrLength; i++) {
+            ls.add(arr[i]);
+        }
+        bodyPieces = ls;
+    }
+
+    /**
+     * Return the parsed SQL segment.
+     **/
+    public String getParsedSqlSegment() {
+        StringBuffer sb = new StringBuffer();
+        sb.append(start).append(Crlf);
+        for (String piece : bodyPieces) {
+            sb.append(piece).append(Crlf);
+        }
+        return sb.toString();
+    }
+
+    public String getType() {
+        return type;
+    }
+
+    public void setType(String type) {
+        this.type = type;
+    }
+
+    public String getStart() {
+        return start;
+    }
+
+    public void setStart(String start) {
+        this.start = start;
+    }
+
+    public String getBody() {
+        return body;
+    }
+
+    public void setBody(String body) {
+        this.body = body;
+    }
+
+    public String getEnd() {
+        return end;
+    }
+
+    public void setEnd(String end) {
+        this.end = end;
+    }
+
+    public String getBodySplitPattern() {
+        return bodySplitPattern;
+    }
+
+    public void setBodySplitPattern(String bodySplitPattern) {
+        this.bodySplitPattern = bodySplitPattern;
+    }
+
+    public String getSegmentRegExp() {
+        return segmentRegExp;
+    }
+
+    public void setSegmentRegExp(String segmentRegExp) {
+        this.segmentRegExp = segmentRegExp;
+    }
+
+    public List<String> getBodyPieces() {
+        return bodyPieces;
+    }
+
+    public void setBodyPieces(List<String> bodyPieces) {
+        this.bodyPieces = bodyPieces;
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SqlType.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SqlType.java
new file mode 100644
index 0000000..d5130b0
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/SqlType.java
@@ -0,0 +1,70 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * SqlType
+ *
+ * @author zrx
+ * @since 2021/7/3 11:11
+ */
+public enum SqlType {
+    SELECT("SELECT"),
+    CREATE("CREATE"),
+    DROP("DROP"),
+    ALTER("ALTER"),
+    INSERT("INSERT"),
+    DESC("DESC"),
+    DESCRIBE("DESCRIBE"),
+    EXPLAIN("EXPLAIN"),
+    USE("USE"),
+    SHOW("SHOW"),
+    LOAD("LOAD"),
+    UNLOAD("UNLOAD"),
+    SET("SET"),
+    RESET("RESET"),
+    EXECUTE("EXECUTE"),
+    ADD("ADD"),
+    UNKNOWN("UNKNOWN"),
+    ;
+
+    private String type;
+
+    SqlType(String type) {
+        this.type = type;
+    }
+
+    public void setType(String type) {
+        this.type = type;
+    }
+
+    public String getType() {
+        return type;
+    }
+
+    public boolean equalsValue(String value) {
+        return type.equalsIgnoreCase(value);
+    }
+
+    public boolean isInsert() {
+        return type.equals("INSERT");
+    }
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/UpdateSqlParser.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/UpdateSqlParser.java
new file mode 100644
index 0000000..73fca11
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/parser/UpdateSqlParser.java
@@ -0,0 +1,42 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.parser;
+
+/**
+ * UpdateSqlParser
+ *
+ * @author zrx
+ * @since 2021/6/14 16:52
+ */
+public class UpdateSqlParser extends BaseSingleSqlParser {
+
+    public UpdateSqlParser(String originalSql) {
+        super(originalSql);
+    }
+
+    @Override
+    protected void initializeSegments() {
+        segments.add(new SqlSegment("(update)(.+)(set)", "[,]"));
+        segments.add(new SqlSegment("(set)(.+?)( where | ENDOFSQL)", "[,]"));
+        segments.add(new SqlSegment("(where)(.+)(ENDOFSQL)", "(and|or)"));
+    }
+
+}
+
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/AbstractOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/AbstractOperation.java
new file mode 100644
index 0000000..82a1f00
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/AbstractOperation.java
@@ -0,0 +1,69 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.trans;
+
+import net.srt.flink.client.executor.CustomTableEnvironmentImpl;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * AbstractOperation
+ *
+ * @author zrx
+ * @since 2021/6/14 18:18
+ */
+public class AbstractOperation {
+
+    protected static final Logger logger = LoggerFactory.getLogger(AbstractOperation.class);
+
+    protected String statement;
+
+    public AbstractOperation() {
+    }
+
+    public AbstractOperation(String statement) {
+        this.statement = statement;
+    }
+
+    public String getStatement() {
+        return statement;
+    }
+
+    public void setStatement(String statement) {
+        this.statement = statement;
+    }
+
+    public boolean checkFunctionExist(CustomTableEnvironmentImpl stEnvironment, String key) {
+        String[] udfs = stEnvironment.listUserDefinedFunctions();
+        List<String> udflist = Arrays.asList(udfs);
+        return udflist.contains(key.toLowerCase());
+    }
+
+    public boolean noExecute() {
+        return true;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/CreateOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/CreateOperation.java
new file mode 100644
index 0000000..c9faf8d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/CreateOperation.java
@@ -0,0 +1,30 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.trans;
+
+/**
+ *
+ * @author zrx
+ * @since 2021/6/13 19:34
+ */
+public interface CreateOperation extends Operation {
+
+    //void create(CustomTableEnvironmentImpl stEnvironment);
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/Operation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/Operation.java
new file mode 100644
index 0000000..e5c10ac
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/Operation.java
@@ -0,0 +1,40 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.trans;
+
+import net.srt.flink.executor.executor.Executor;
+import org.apache.flink.table.api.TableResult;
+
+/**
+ * Operation
+ *
+ * @author zrx
+ * @since 2021/6/13 19:24
+ */
+public interface Operation {
+
+    String getHandle();
+
+    Operation create(String statement);
+
+    TableResult build(Executor executor);
+
+    boolean noExecute();
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/Operations.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/Operations.java
new file mode 100644
index 0000000..0f04c86
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/Operations.java
@@ -0,0 +1,91 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.trans;
+
+import net.srt.flink.executor.parser.SqlType;
+import net.srt.flink.executor.trans.ddl.AddJarOperation;
+import net.srt.flink.executor.trans.ddl.CreateAggTableOperation;
+import net.srt.flink.executor.trans.ddl.CreateCDCSourceOperation;
+import net.srt.flink.executor.trans.ddl.SetOperation;
+import net.srt.flink.executor.trans.ddl.ShowFragmentOperation;
+import net.srt.flink.executor.trans.ddl.ShowFragmentsOperation;
+
+import java.util.Arrays;
+
+/**
+ * Operations
+ *
+ * @author zrx
+ * @since 2021/5/25 15:50
+ **/
+public class Operations {
+
+    private Operations() {
+    }
+
+    private static final Operation[] ALL_OPERATIONS = {
+            new AddJarOperation(),
+            new CreateAggTableOperation(),
+            new SetOperation(),
+            new CreateCDCSourceOperation(),
+            new ShowFragmentsOperation(),
+            new ShowFragmentOperation()
+    };
+
+    public static SqlType getSqlTypeFromStatements(String statement) {
+        String[] statements = statement.split(";");
+        SqlType sqlType = SqlType.UNKNOWN;
+        for (String item : statements) {
+            if (item.trim().isEmpty()) {
+                continue;
+            }
+            sqlType = Operations.getOperationType(item);
+            if (sqlType == SqlType.INSERT || sqlType == SqlType.SELECT) {
+                return sqlType;
+            }
+        }
+        return sqlType;
+    }
+
+    public static SqlType getOperationType(String sql) {
+        String sqlTrim = sql.replaceAll("[\\s\\t\\n\\r]", "").trim().toUpperCase();
+        SqlType type = SqlType.UNKNOWN;
+        for (SqlType sqlType : SqlType.values()) {
+            if (sqlTrim.startsWith(sqlType.getType())) {
+                type = sqlType;
+                break;
+            }
+        }
+        return type;
+    }
+
+    public static Operation buildOperation(String statement) {
+        String sql = statement.replace("\n", " ")
+                .replaceAll("\\s+", " ")
+                .trim()
+                .toUpperCase();
+
+        return Arrays.stream(ALL_OPERATIONS)
+                .filter(p -> sql.startsWith(p.getHandle()))
+                .findFirst()
+                .map(p -> p.create(statement))
+                .orElse(null);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/AddJarOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/AddJarOperation.java
new file mode 100644
index 0000000..01c27e5
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/AddJarOperation.java
@@ -0,0 +1,61 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
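A sketch of the two dispatch helpers above; the statements are invented, and the SetOperation outcome assumes its handle is the SET keyword as wired into ALL_OPERATIONS:

```java
import net.srt.flink.executor.parser.SqlType;
import net.srt.flink.executor.trans.Operation;
import net.srt.flink.executor.trans.Operations;

public class OperationsSketch {
    public static void main(String[] args) {
        // Keyword prefix match against ALL_OPERATIONS: "SET ..." picks SetOperation;
        // plain DML that no custom Operation handles comes back as null.
        Operation op = Operations.buildOperation("set pipeline.name = demo");
        System.out.println(op);

        // The first INSERT/SELECT wins when classifying a multi-statement script.
        SqlType type = Operations.getSqlTypeFromStatements(
                "set pipeline.name = demo; insert into t select * from s");
        System.out.println(type); // INSERT
    }
}
```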
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.trans.ddl;
+
+import net.srt.flink.common.context.JarPathContextHolder;
+import net.srt.flink.executor.executor.Executor;
+import net.srt.flink.executor.parser.AddJarSqlParser;
+import net.srt.flink.executor.trans.AbstractOperation;
+import net.srt.flink.executor.trans.Operation;
+import org.apache.flink.table.api.TableResult;
+
+/**
+ * @author zrx
+ * @since 0.7.0
+ */
+public class AddJarOperation extends AbstractOperation implements Operation {
+
+    private static final String KEY_WORD = "ADD JAR";
+
+    public AddJarOperation(String statement) {
+        super(statement);
+    }
+
+    public AddJarOperation() {}
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public Operation create(String statement) {
+        return new AddJarOperation(statement);
+    }
+
+    @Override
+    public TableResult build(Executor executor) {
+        return null;
+    }
+
+    public void init() {
+        AddJarSqlParser.getAllFilePath(statement).forEach(JarPathContextHolder::addOtherPlugins);
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/AggTable.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/AggTable.java
new file mode 100644
index 0000000..fdb42e0
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/AggTable.java
@@ -0,0 +1,123 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.executor.trans.ddl;
+
+import net.srt.flink.executor.parser.SingleSqlParserFactory;
+import org.apache.commons.lang3.StringUtils;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * AggTable
+ *
+ * @author zrx
+ * @since 2021/6/13 20:32
+ */
+public class AggTable {
+    private String statement;
+    private String name;
+    private String columns;
+    private String table;
+    private List<String> wheres;
+    private String groupBy;
+    private String aggBy;
+
+    public AggTable(String statement, String name, String columns, String table, List<String> wheres, String groupBy, String aggBy) {
+        this.statement = statement;
+        this.name = name;
+        this.columns = columns;
+        this.table = table;
+        this.wheres = wheres;
+        this.groupBy = groupBy;
+        this.aggBy = aggBy;
+    }
+
+    public static AggTable build(String statement) {
+        Map<String, List<String>> map = SingleSqlParserFactory.generateParser(statement);
+        return new AggTable(statement,
+                getString(map, "CREATE AGGTABLE"),
+                getString(map, "SELECT"),
+                getString(map, "FROM"),
+                map.get("WHERE"),
+                getString(map, "GROUP BY"),
+                getString(map, "AGG BY"));
+    }
+
+    private static String getString(Map<String, List<String>> map, String key) {
+        return StringUtils.join(map.get(key), ",");
+    }
+
+    public String getStatement() {
+        return statement;
+    }
+
+    public void setStatement(String statement) {
+        this.statement = statement;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public String getColumns() {
+        return columns;
+    }
+
+    public void setColumns(String columns) {
+        this.columns = columns;
+    }
+
+    public String getTable() {
+        return table;
+    }
+
+    public void setTable(String table) {
+        this.table = table;
+    }
+
+    public List<String> getWheres() {
+        return wheres;
+    }
+
+    public void setWheres(List<String> wheres) {
+        this.wheres = wheres;
+    }
+
+    public String getGroupBy() {
+        return groupBy;
+    }
+
+    public void setGroupBy(String groupBy) {
+        this.groupBy = groupBy;
+    }
+
+    public String getAggBy() {
+        return aggBy;
+    }
+
+    public void setAggBy(String aggBy) {
+        this.aggBy = aggBy;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CDCSource.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CDCSource.java
new file mode 100644
index 0000000..3d6121b
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CDCSource.java
@@ -0,0 +1,366 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
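For reference, a sketch of the statement shape AggTable.build parses. AGG BY is this project's extension rather than standard Flink SQL, and TOP2 stands in for any table aggregate function:

```java
import net.srt.flink.executor.trans.ddl.AggTable;

public class AggTableSketch {
    public static void main(String[] args) {
        // After parsing: name=top2_scores, columns=cls,score, table=scores,
        // groupBy=cls, aggBy=TOP2(score); there is no WHERE entry here.
        AggTable aggTable = AggTable.build(
                "CREATE AGGTABLE top2_scores AS SELECT cls,score FROM scores GROUP BY cls AGG BY TOP2(score)");
        System.out.println(aggTable.getName() + " <- " + aggTable.getTable());
    }
}
```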
+ *
+ */
+
+package net.srt.flink.executor.trans.ddl;
+
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.executor.parser.SingleSqlParserFactory;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * CDCSource
+ *
+ * @author zrx
+ * @since 2022/1/29 23:30
+ */
+public class CDCSource {
+    private String connector;
+    private String statement;
+    private String name;
+    private String hostname;
+    private Integer port;
+    private String username;
+    private String password;
+    private Integer checkpoint;
+    private Integer parallelism;
+    private String database;
+    private String schema;
+    private String table;
+    private String startupMode;
+    private Map<String, String> debezium;
+    private Map<String, String> split;
+    private Map<String, String> jdbc;
+    private Map<String, String> source;
+    private Map<String, String> sink;
+    private List<Map<String, String>> sinks;
+
+    public CDCSource(String connector, String statement, String name, String hostname, Integer port, String username, String password, Integer checkpoint, Integer parallelism, String startupMode,
+                     Map<String, String> split, Map<String, String> debezium, Map<String, String> source, Map<String, String> sink, Map<String, String> jdbc) {
+        this(connector, statement, name, hostname, port, username, password, checkpoint, parallelism, startupMode, split, debezium, source, sink, null, jdbc);
+    }
+
+    public CDCSource(String connector, String statement, String name, String hostname, Integer port, String username, String password, Integer checkpoint, Integer parallelism, String startupMode,
+                     Map<String, String> split, Map<String, String> debezium, Map<String, String> source, Map<String, String> sink, List<Map<String, String>> sinks, Map<String, String> jdbc) {
+        this.connector = connector;
+        this.statement = statement;
+        this.name = name;
+        this.hostname = hostname;
+        this.port = port;
+        this.username = username;
+        this.password = password;
+        this.checkpoint = checkpoint;
+        this.parallelism = parallelism;
+        this.startupMode = startupMode;
+        this.debezium = debezium;
+        this.split = split;
+        this.jdbc = jdbc;
+        this.source = source;
+        this.sink = sink;
+        this.sinks = sinks;
+    }
+
+    public static CDCSource build(String statement) {
+        Map<String, List<String>> map = SingleSqlParserFactory.generateParser(statement);
+        Map<String, String> config = getKeyValue(map.get("WITH"));
+        Map<String, String> debezium = new HashMap<>();
+        Map<String, String> split = new HashMap<>();
+        for (Map.Entry<String, String> entry : config.entrySet()) {
+            if (entry.getKey().startsWith("debezium.")) {
+                String key = entry.getKey();
+                key = key.replaceFirst("debezium.", "");
+                if (!debezium.containsKey(key)) {
+                    debezium.put(key, entry.getValue());
+                }
+            }
+        }
+        for (Map.Entry<String, String> entry : config.entrySet()) {
+            if (entry.getKey().startsWith("split.")) {
+                String key = entry.getKey();
+                key = key.replaceFirst("split.", "");
+                if (!split.containsKey(key)) {
+                    split.put(key, entry.getValue());
+                }
+            }
+        }
+        splitMapInit(split);
+        Map<String, String> source = new HashMap<>();
+        for (Map.Entry<String, String> entry : config.entrySet()) {
+            if (entry.getKey().startsWith("source.")) {
+                String key = entry.getKey();
+                key = key.replaceFirst("source.", "");
+                if (!source.containsKey(key)) {
+                    source.put(key, entry.getValue());
+                }
+            }
+        }
+        // jdbc parameters (jdbc.properties.*)
+        Map<String, String> jdbc = new HashMap<>();
+        for (Map.Entry<String, String> entry : config.entrySet()) {
+            if (entry.getKey().startsWith("jdbc.properties.")) {
+                String key = entry.getKey();
+                key = key.replaceFirst("jdbc.properties.", "");
+                if (!jdbc.containsKey(key)) {
+                    jdbc.put(key, entry.getValue());
+                }
+            }
+        }
+        Map<String, String> sink = new HashMap<>();
+        for (Map.Entry<String, String> entry : config.entrySet()) {
+            if (entry.getKey().startsWith("sink.")) {
+                String key = entry.getKey();
+                key = key.replaceFirst("sink.", "");
+                if (!sink.containsKey(key)) {
+                    sink.put(key, entry.getValue());
+                }
+            }
+        }
+        /*
+         * Multi-sink support: sink[N] entries are collected per index, starting from 0.
+         */
+        Map<String, Map<String, String>> sinks = new HashMap<>();
+        final Pattern p = Pattern.compile("sink\\[(?<index>.*)\\]");
+        for (Map.Entry<String, String> entry : config.entrySet()) {
+            if (entry.getKey().startsWith("sink[")) {
+                String key = entry.getKey();
+                Matcher matcher = p.matcher(key);
+                if (matcher.find()) {
+                    final String index = matcher.group("index");
+                    Map<String, String> sinkMap = sinks.get(index);
+                    if (sinkMap == null) {
+                        sinkMap = new HashMap<>();
+                        sinks.put(index, sinkMap);
+                    }
+                    key = key.replaceFirst("sink\\[" + index + "\\].", "");
+                    if (!sinkMap.containsKey(key)) {
+                        sinkMap.put(key, entry.getValue());
+                    }
+                }
+            }
+        }
+        final ArrayList<Map<String, String>> sinkList = new ArrayList<>(sinks.values());
+        if (sink.isEmpty() && sinkList.size() > 0) {
+            sink = sinkList.get(0);
+        }
+        CDCSource cdcSource = new CDCSource(
+                config.get("connector"),
+                statement,
+                map.get("CDCSOURCE").toString(),
+                config.get("hostname"),
+                Integer.valueOf(config.get("port")),
+                config.get("username"),
+                config.get("password"),
+                Integer.valueOf(config.get("checkpoint")),
+                Integer.valueOf(config.get("parallelism")),
+                config.get("scan.startup.mode"),
+                split,
+                debezium,
+                source,
+                sink,
+                sinkList,
+                jdbc
+        );
+        if (Asserts.isNotNullString(config.get("database-name"))) {
+            cdcSource.setDatabase(config.get("database-name"));
+        }
+        if (Asserts.isNotNullString(config.get("schema-name"))) {
+            cdcSource.setSchema(config.get("schema-name"));
+        }
+        if (Asserts.isNotNullString(config.get("table-name"))) {
+            cdcSource.setTable(config.get("table-name"));
+        }
+        return cdcSource;
+    }
+
+    private static void splitMapInit(Map<String, String> split) {
+        split.putIfAbsent("max_match_value", "100");
+        split.putIfAbsent("match_number_regex", "_[0-9]+");
+        split.putIfAbsent("match_way", "suffix");
+        split.putIfAbsent("enable", "false");
+    }
+
+    private static Map<String, String> getKeyValue(List<String> list) {
+        Map<String, String> map = new HashMap<>();
+        Pattern p = Pattern.compile("'(.*?)'\\s*=\\s*'(.*?)'");
+        for (int i = 0; i < list.size(); i++) {
+            Matcher m = p.matcher(list.get(i) + "'");
+            if (m.find()) {
+                map.put(m.group(1), m.group(2));
+            }
+        }
+        return map;
+    }
+
+    public String getConnector() {
+        return connector;
+    }
+
+    public void setConnector(String connector) {
+        this.connector = connector;
+    }
+
+    public String getStatement() {
+        return statement;
+    }
+
+    public void setStatement(String statement) {
+        this.statement = statement;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public String getHostname() {
+        return hostname;
+    }
+
+    public void setHostname(String hostname) {
+        this.hostname = hostname;
+    }
+
+    public Integer getPort() {
+        return port;
+    }
+
+    public void setPort(Integer port) {
+        this.port = port;
+    }
+
+    public String getUsername() {
+        return username;
+    }
+
+    public void setUsername(String username) {
+        this.username = username;
+    }
+
+    public String getPassword() {
+        return password;
+    }
+
+    public void setPassword(String password) {
+        this.password = password;
+    }
+
+    public Integer getCheckpoint() {
+        return checkpoint;
+    }
+
+    public void setCheckpoint(Integer checkpoint) {
+        this.checkpoint = checkpoint;
+    }
+
+    public Integer getParallelism() {
+        return parallelism;
+    }
+
+    public void setParallelism(Integer parallelism) {
+        this.parallelism = parallelism;
+    }
+
+    public String getDatabase() {
+        return database;
+    }
+
+    public void setDatabase(String database) {
+        this.database = database;
+    }
+
+    public String getSchema() {
+        return schema;
+    }
+
+    public void setSchema(String schema) {
+        this.schema = schema;
+    }
+
+    public String getTable() {
+        return table;
+    }
+
+    public void setTable(String table) {
+        this.table = table;
+    }
+
+    public Map<String, String> getSink() {
+        return sink;
+    }
+
+    public void setSink(Map<String, String> sink) {
+        this.sink = sink;
+    }
+
+    public String getStartupMode() {
+        return startupMode;
+    }
+
+    public void setStartupMode(String startupMode) {
+        this.startupMode = startupMode;
+    }
+
+    public Map<String, String> getDebezium() {
+        return debezium;
+    }
+
+    public void setDebezium(Map<String, String> debezium) {
+        this.debezium = debezium;
+    }
+
+    public Map<String, String> getSplit() {
+        return split;
+    }
+
+    public void setSplit(Map<String, String> split) {
+        this.split = split;
+    }
+
+    public void setSinks(List<Map<String, String>> sinks) {
+        this.sinks = sinks;
+    }
+
+    public Map<String, String> getSource() {
+        return source;
+    }
+
+    public void setSource(Map<String, String> source) {
+        this.source = source;
+    }
+
+    public Map<String, String> getJdbc() {
+        return jdbc;
+    }
+
+    public void setJdbc(Map<String, String> jdbc) {
+        this.jdbc = jdbc;
+    }
+
+    public List<Map<String, String>> getSinks() {
+        return sinks;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateAggTableOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateAggTableOperation.java
new file mode 100644
index 0000000..9606687
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateAggTableOperation.java
@@ -0,0 +1,75 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
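A sketch of the WITH-config CDCSource.build consumes, including the sink[N] multi-target form; the keys shown are illustrative and the full key set depends on the chosen CDC connector:

```java
import net.srt.flink.executor.trans.ddl.CDCSource;

public class CdcSourceSketch {
    public static void main(String[] args) {
        // 'sink[0]'/'sink[1]' entries are collected per index; with no plain
        // 'sink.*' keys present, sink falls back to the first collected entry.
        CDCSource cdc = CDCSource.build(
                "EXECUTE CDCSOURCE demo WITH ("
                        + " 'connector' = 'mysql-cdc',"
                        + " 'hostname' = '127.0.0.1',"
                        + " 'port' = '3306',"
                        + " 'username' = 'root',"
                        + " 'password' = 'root',"
                        + " 'checkpoint' = '3000',"
                        + " 'parallelism' = '1',"
                        + " 'database-name' = 'demo_db',"
                        + " 'sink[0].connector' = 'doris',"
                        + " 'sink[1].connector' = 'jdbc'"
                        + ")");
        System.out.println(cdc.getSinks().size()); // 2
    }
}
```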
+ *
+ */
+
+package net.srt.flink.executor.trans.ddl;
+
+import net.srt.flink.executor.executor.Executor;
+import net.srt.flink.executor.trans.AbstractOperation;
+import net.srt.flink.executor.trans.Operation;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableResult;
+
+import java.util.List;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/**
+ * CreateAggTableOperation
+ *
+ * @author zrx
+ * @since 2021/6/13 19:24
+ */
+public class CreateAggTableOperation extends AbstractOperation implements Operation {
+
+    private static final String KEY_WORD = "CREATE AGGTABLE";
+
+    public CreateAggTableOperation() {
+    }
+
+    public CreateAggTableOperation(String statement) {
+        super(statement);
+    }
+
+    @Override
+    public String getHandle() {
+        return KEY_WORD;
+    }
+
+    @Override
+    public Operation create(String statement) {
+        return new CreateAggTableOperation(statement);
+    }
+
+    @Override
+    public TableResult build(Executor executor) {
+        AggTable aggTable = AggTable.build(statement);
+        Table source = executor.getCustomTableEnvironment().sqlQuery("select * from " + aggTable.getTable());
+        List<String> wheres = aggTable.getWheres();
+        if (wheres != null && wheres.size() > 0) {
+            for (String s : wheres) {
+                source = source.filter($(s));
+            }
+        }
+        Table sink = source.groupBy($(aggTable.getGroupBy()))
+                .flatAggregate($(aggTable.getAggBy()))
+                .select($(aggTable.getColumns()));
+        executor.getCustomTableEnvironment().registerTable(aggTable.getName(), sink);
+        return null;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateCDCSourceOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateCDCSourceOperation.java
new file mode 100644
index 0000000..f9eb9cf
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateCDCSourceOperation.java
@@ -0,0 +1,217 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateCDCSourceOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateCDCSourceOperation.java new file mode 100644 index 0000000..f9eb9cf --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/CreateCDCSourceOperation.java @@ -0,0 +1,217 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.trans.ddl; +import net.srt.flink.client.base.model.FlinkCDCConfig; +import net.srt.flink.client.cdc.CDCBuilder; +import net.srt.flink.client.cdc.CDCBuilderFactory; +import net.srt.flink.client.cdc.SinkBuilder; +import net.srt.flink.client.cdc.SinkBuilderFactory; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.SplitUtil; +import net.srt.flink.common.utils.SqlUtil; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.trans.AbstractOperation; +import net.srt.flink.executor.trans.Operation; +import net.srt.flink.metadata.base.driver.Driver; +import net.srt.flink.metadata.base.driver.DriverConfig; +import org.apache.flink.streaming.api.datastream.DataStreamSource; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.table.api.TableResult; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; + + +/** + * CreateCDCSourceOperation + * + * @author zrx + * @since 2022/1/29 23:25 + */ +public class CreateCDCSourceOperation extends AbstractOperation implements Operation { + private static final String KEY_WORD = "EXECUTE CDCSOURCE"; + + public CreateCDCSourceOperation() { + } + + public CreateCDCSourceOperation(String statement) { + super(statement); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public Operation create(String statement) { + return new CreateCDCSourceOperation(statement); + } + + @Override + public TableResult build(Executor executor) { + logger.info("Start build CDCSOURCE Task..."); + CDCSource cdcSource = CDCSource.build(statement); + FlinkCDCConfig config = new FlinkCDCConfig(cdcSource.getConnector(), cdcSource.getHostname(), cdcSource.getPort(), cdcSource.getUsername() + , cdcSource.getPassword(), cdcSource.getCheckpoint(), cdcSource.getParallelism(), cdcSource.getDatabase(), cdcSource.getSchema() + , cdcSource.getTable(), cdcSource.getStartupMode(), cdcSource.getSplit(), cdcSource.getDebezium(), cdcSource.getSource(), cdcSource.getSink(), cdcSource.getJdbc()); + try { + CDCBuilder cdcBuilder = CDCBuilderFactory.buildCDCBuilder(config); + Map<String, Map<String, String>> allConfigMap = cdcBuilder.parseMetaDataConfigs(); + config.setSchemaFieldName(cdcBuilder.getSchemaFieldName()); + SinkBuilder sinkBuilder = SinkBuilderFactory.buildSinkBuilder(config); + List<Schema> schemaList = new ArrayList<>(); + final List<String> schemaNameList = cdcBuilder.getSchemaList(); + final List<String> tableRegList = cdcBuilder.getTableList(); + final List<String> schemaTableNameList = new ArrayList<>(); + if (SplitUtil.isEnabled(cdcSource.getSplit())) { + DriverConfig driverConfig = DriverConfig.build(cdcBuilder.parseMetaDataConfig()); + Driver driver = Driver.build(driverConfig); + + // pass the regular expressions straight through + schemaTableNameList.addAll(tableRegList.stream().map(x -> x.replaceFirst("\\\\.", ".")).collect(Collectors.toList())); + + Driver sinkDriver = checkAndCreateSinkSchema(config, schemaTableNameList.get(0)); + + Set<Table> tables = driver.getSplitTables(tableRegList, cdcSource.getSplit()); + + for (Table table : tables) { + String schemaName = table.getSchema(); + Schema schema = Schema.build(schemaName); + schema.setTables(Collections.singletonList(table)); + // all split (sharded) tables share the same structure, so the first schema.table name in the list is enough + String schemaTableName = table.getSchemaTableNameList().get(0); + // the physical table name + String tableName = schemaTableName.split("\\.")[1]; + table.setColumns(driver.listColumnsSortByPK(schemaName, tableName)); + schemaList.add(schema); + + if (null != sinkDriver) { + Table sinkTable = (Table) table.clone(); + sinkTable.setSchema(sinkBuilder.getSinkSchemaName(table)); + sinkTable.setName(sinkBuilder.getSinkTableName(table)); + checkAndCreateSinkTable(sinkDriver, sinkTable); + } + } + } else { + for (String schemaName : schemaNameList) { + Schema schema = Schema.build(schemaName); + if (!allConfigMap.containsKey(schemaName)) { + continue; + } + + Driver sinkDriver = checkAndCreateSinkSchema(config, schemaName); + + DriverConfig driverConfig = DriverConfig.build(allConfigMap.get(schemaName)); + Driver driver = Driver.build(driverConfig); + final List<Table> tables = driver.listTables(schemaName); + for (Table table : tables) { + if (!Asserts.isEquals(table.getType(), "VIEW")) { + if (Asserts.isNotNullCollection(tableRegList)) { + for (String tableReg : tableRegList) { + if (table.getSchemaTableName().matches(tableReg.trim()) && !schema.getTables().contains(Table.build(table.getName()))) { + table.setColumns(driver.listColumnsSortByPK(schemaName, table.getName())); + schema.getTables().add(table); + schemaTableNameList.add(table.getSchemaTableName()); + break; + } + } + } else { + table.setColumns(driver.listColumnsSortByPK(schemaName, table.getName())); + schemaTableNameList.add(table.getSchemaTableName()); + schema.getTables().add(table); + } + } + } + + if (null != sinkDriver) { + for (Table table : schema.getTables()) { + Table sinkTable = (Table) table.clone(); + sinkTable.setSchema(sinkBuilder.getSinkSchemaName(table)); + sinkTable.setName(sinkBuilder.getSinkTableName(table)); + checkAndCreateSinkTable(sinkDriver, sinkTable); + } + } + schemaList.add(schema); + } + } + + logger.info("A total of " + schemaTableNameList.size() + " tables were detected..."); + for (int i = 0; i < schemaTableNameList.size(); i++) { + logger.info((i + 1) + ": " + schemaTableNameList.get(i)); + } + config.setSchemaTableNameList(schemaTableNameList); + config.setSchemaList(schemaList); + StreamExecutionEnvironment streamExecutionEnvironment = executor.getStreamExecutionEnvironment(); + if (Asserts.isNotNull(config.getParallelism())) { + streamExecutionEnvironment.setParallelism(config.getParallelism()); + logger.info("Set parallelism: " + config.getParallelism()); + } + if (Asserts.isNotNull(config.getCheckpoint())) { + streamExecutionEnvironment.enableCheckpointing(config.getCheckpoint()); + logger.info("Set checkpoint: " + config.getCheckpoint()); + } + DataStreamSource<String> streamSource = cdcBuilder.build(streamExecutionEnvironment); + logger.info("Build " + config.getType() + " successful..."); + if (cdcSource.getSinks() == null || cdcSource.getSinks().size() == 0) { + sinkBuilder.build(cdcBuilder, streamExecutionEnvironment, executor.getCustomTableEnvironment(), streamSource); + } else { + for (Map<String, String> sink : cdcSource.getSinks()) { + config.setSink(sink); + sinkBuilder.build(cdcBuilder, streamExecutionEnvironment, executor.getCustomTableEnvironment(), streamSource); + } + } + logger.info("Build CDCSOURCE Task successful!"); + } catch (Exception e) { + logger.error(e.getMessage(), e); + } + return null; + } + + Driver checkAndCreateSinkSchema(FlinkCDCConfig config, String schemaName) throws Exception { + Map<String, String> sink = config.getSink(); + String autoCreate = sink.get("auto.create"); + if (!Asserts.isEqualsIgnoreCase(autoCreate, "true") || Asserts.isNullString(schemaName)) { + return null; + } + String url = sink.get("url"); + String schema = SqlUtil.replaceAllParam(sink.get("sink.db"), "schemaName", schemaName); + Driver driver = Driver.build(sink.get("connector"), url, sink.get("username"), sink.get("password")); + if (null != driver && !driver.existSchema(schema)) { + driver.createSchema(schema); + } + sink.put("sink.db", schema); + sink.put("url", url + "/" + schema); + return driver; + } + + void checkAndCreateSinkTable(Driver driver, Table table) throws Exception { + if (null != driver && !driver.existTable(table)) { + driver.generateCreateTable(table); + } + } +}
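The subtle part above is the regex handling: each entry of `tableRegList` is matched against the fully qualified `schema.table` name. A minimal, self-contained sketch of that check (database and pattern names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class TableRegexSketch {
    public static void main(String[] args) {
        // In EXECUTE CDCSOURCE the table option is a list of regular expressions
        // matched against "schema.table", exactly like the matches() call above.
        List<String> tableRegList = Arrays.asList("app\\.orders_\\d+", "app\\.users");
        List<String> candidates = Arrays.asList("app.orders_01", "app.orders_02", "app.users", "app.tmp");
        for (String schemaTableName : candidates) {
            boolean hit = tableRegList.stream().anyMatch(reg -> schemaTableName.matches(reg.trim()));
            System.out.println(schemaTableName + " -> " + (hit ? "captured" : "skipped"));
        }
    }
}
```

In the split-table branch, `replaceFirst("\\\\.", ".")` additionally turns the escaped dot of the pattern (`app\.orders_\d+`) back into a plain `app.orders_...` name before it is reported.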
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/SetOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/SetOperation.java new file mode 100644 index 0000000..3d071c0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/SetOperation.java @@ -0,0 +1,83 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.trans.ddl; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.parser.SingleSqlParserFactory; +import net.srt.flink.executor.trans.AbstractOperation; +import net.srt.flink.executor.trans.Operation; +import org.apache.commons.lang3.StringUtils; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.table.api.TableResult; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * SetOperation + * + * @author zrx + * @since 2021/10/21 19:56 + **/ +public class SetOperation extends AbstractOperation implements Operation { + + private static final String KEY_WORD = "SET"; + + public SetOperation() { + } + + public SetOperation(String statement) { + super(statement); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public Operation create(String statement) { + return new SetOperation(statement); + } + + @Override + public TableResult build(Executor executor) { + try { + if (null != Class.forName("org.apache.log4j.Logger")) { + executor.parseAndLoadConfiguration(statement); + return null; + } + } catch (ClassNotFoundException e) { + e.printStackTrace(); + } + Map<String, List<String>> map = SingleSqlParserFactory.generateParser(statement); + if (Asserts.isNotNullMap(map) && map.size() == 2) { + Map<String, String> confMap = new HashMap<>(); + confMap.put(StringUtils.join(map.get("SET"), "."), StringUtils.join(map.get("="), ",")); + Configuration configuration = Configuration.fromMap(confMap); + executor.getExecutionConfig().configure(configuration, null); + executor.getCustomTableEnvironment().getConfig().addConfiguration(configuration); + } + return null; + } +}
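A sketch of what the fallback branch does with a statement such as `SET pipeline.name = demo`, assuming the parser yields the two lists shown in the comment (the key parts joined with `.`, the value parts with `,`):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.PipelineOptions;

public class SetSketch {
    public static void main(String[] args) {
        // SingleSqlParserFactory.generateParser("SET pipeline.name = demo") is expected
        // to return {"SET": ["pipeline", "name"], "=": ["demo"]}; the operation joins
        // the parts and loads them into a flink Configuration.
        Map<String, String> confMap = new HashMap<>();
        confMap.put(String.join(".", "pipeline", "name"), String.join(",", "demo"));
        Configuration configuration = Configuration.fromMap(confMap);
        System.out.println(configuration.get(PipelineOptions.NAME)); // prints: demo
    }
}
```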
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/ShowFragmentOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/ShowFragmentOperation.java new file mode 100644 index 0000000..3cafd36 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/ShowFragmentOperation.java @@ -0,0 +1,69 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.executor.trans.ddl; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.parser.SingleSqlParserFactory; +import net.srt.flink.executor.trans.AbstractOperation; +import net.srt.flink.executor.trans.Operation; +import org.apache.commons.lang3.StringUtils; +import org.apache.flink.table.api.TableResult; + +import java.util.List; +import java.util.Map; + +/** + * ShowFragmentOperation + * + * @author zrx + * @since 2022/2/17 17:08 + **/ +public class ShowFragmentOperation extends AbstractOperation implements Operation { + private static final String KEY_WORD = "SHOW FRAGMENT "; + + public ShowFragmentOperation() { + } + + public ShowFragmentOperation(String statement) { + super(statement); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public Operation create(String statement) { + return new ShowFragmentOperation(statement); + } + + @Override + public TableResult build(Executor executor) { + Map<String, List<String>> map = SingleSqlParserFactory.generateParser(statement); + if (Asserts.isNotNullMap(map)) { + if (map.containsKey("FRAGMENT")) { + return executor.getSqlManager().getSqlFragmentResult(StringUtils.join(map.get("FRAGMENT"), "")); + } + } + return executor.getSqlManager().getSqlFragmentResult(null); + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/ShowFragmentsOperation.java b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/ShowFragmentsOperation.java new file mode 100644 index 0000000..a438861 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-executor/src/main/java/net/srt/flink/executor/trans/ddl/ShowFragmentsOperation.java @@ -0,0 +1,58 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.executor.trans.ddl; + +import net.srt.flink.executor.executor.Executor; +import net.srt.flink.executor.trans.AbstractOperation; +import net.srt.flink.executor.trans.Operation; +import org.apache.flink.table.api.TableResult; + +/** + * ShowFragmentsOperation + * + * @author zrx + * @since 2022/2/17 16:31 + **/ +public class ShowFragmentsOperation extends AbstractOperation implements Operation { + + private static final String KEY_WORD = "SHOW FRAGMENTS"; + + public ShowFragmentsOperation() { + } + + public ShowFragmentsOperation(String statement) { + super(statement); + } + + @Override + public String getHandle() { + return KEY_WORD; + } + + @Override + public Operation create(String statement) { + return new ShowFragmentsOperation(statement); + } + + @Override + public TableResult build(Executor executor) { + return executor.getSqlManager().getSqlFragments(); + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-function/pom.xml new file mode 100644 index 0000000..2d7cbec --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/pom.xml @@ -0,0 +1,102 @@ + <?xml version="1.0" encoding="UTF-8"?> + <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>srt-cloud-flink</artifactId> + <groupId>net.srt</groupId> + <version>2.0.0</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>flink-function</artifactId> + + <properties> + <scala.version>2.12.10</scala.version> + </properties> + + <dependencies> + <dependency> + <groupId>org.freemarker</groupId> + <artifactId>freemarker</artifactId> + </dependency> + <dependency> + <groupId>cn.hutool</groupId> + <artifactId>hutool-all</artifactId> + <version>${hutool.version}</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>net.srt</groupId> + <artifactId>flink-gateway</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>net.srt</groupId> + <artifactId>flink-process</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>org.codehaus.groovy</groupId> + <artifactId>groovy</artifactId> + <version>3.0.13</version> + </dependency> + <dependency> + <groupId>org.springframework</groupId> + <artifactId>spring-context</artifactId> + <version>5.3.23</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>org.scala-lang</groupId> + <artifactId>scala-compiler</artifactId> + <version>${scala.version}</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>javax.annotation</groupId> + <artifactId>javax.annotation-api</artifactId> + </dependency> + </dependencies> + + <profiles> + <profile> + <id>flink-1.16</id> + <dependencies> + <dependency> + <groupId>net.srt</groupId> + <artifactId>flink-client-1.16</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>net.srt</groupId> + <artifactId>flink-1.16</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + </dependencies> + </profile> + <profile> + <id>flink-1.14</id> + <dependencies> + <dependency> + <groupId>net.srt</groupId> + <artifactId>flink-client-1.14</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + <dependency> + <groupId>net.srt</groupId> + <artifactId>flink-1.14</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + </dependencies> + </profile> + </profiles> + </project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/FunctionFactory.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/FunctionFactory.java new file mode 100644 index 0000000..97318ce --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/FunctionFactory.java @@ -0,0 +1,51 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function; + +import net.srt.flink.function.compiler.FunctionCompiler; +import net.srt.flink.function.compiler.FunctionPackage; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.data.model.UDFPath; +import org.apache.flink.configuration.Configuration; + +import java.util.List; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +public class FunctionFactory { + + /** + * Compile & package the udfs for initialization + * + * @param udfClassList udf list + * @param missionId current mission id + * @return paths of the packaged artifacts + */ + public static UDFPath initUDF(List<UDF> udfClassList, Integer missionId, Configuration configuration) { + + // compile + FunctionCompiler.getCompiler(udfClassList, configuration, missionId); + + // package + return FunctionPackage.bale(udfClassList, missionId); + } +}
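A sketch of how a caller might drive this entry point, assuming a single Java UDF compiled under mission id 1. The class name and source string are hypothetical, and the compilers in this module log through `ProcessContextHolder`, so an active process context (and a JDK rather than a bare JRE) is assumed:

```java
import java.util.Collections;

import net.srt.flink.function.FunctionFactory;
import net.srt.flink.function.data.model.UDF;
import net.srt.flink.function.data.model.UDFPath;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.catalog.FunctionLanguage;

public class FunctionFactorySketch {
    public static void main(String[] args) {
        UDF udf = UDF.builder()
                .name("my_upper")
                .className("com.example.MyUpper")
                .functionLanguage(FunctionLanguage.JAVA)
                .code("package com.example;\n"
                        + "import org.apache.flink.table.functions.ScalarFunction;\n"
                        + "public class MyUpper extends ScalarFunction {\n"
                        + "    public String eval(String s) { return s == null ? null : s.toUpperCase(); }\n"
                        + "}\n")
                .build();
        // compile + package; the jar lands under tmp/udf/1/package/udf.jar (see PathConstant)
        UDFPath path = FunctionFactory.initUDF(Collections.singletonList(udf), 1, new Configuration());
        System.out.println(path.getJarPaths()[0]);
    }
}
```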
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/catalog/FunctionManager.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/catalog/FunctionManager.java new file mode 100644 index 0000000..a70fb54 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/catalog/FunctionManager.java @@ -0,0 +1,67 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.catalog; + +import net.srt.flink.function.constant.FlinkFunctionConstant; +import net.srt.flink.function.udf.GetKey; +import net.srt.flink.function.udtaf.RowsToMap; +import net.srt.flink.function.udtaf.Top2; + +import java.util.HashMap; +import java.util.Map; + +/** + * FunctionManager + * + * @author zrx + * @since 2021/6/14 21:19 + */ +@Deprecated +public class FunctionManager { + + private static Map<String, UDFunction> functions = new HashMap<String, UDFunction>() { + + { + put(FlinkFunctionConstant.GET_KEY, + new UDFunction(FlinkFunctionConstant.GET_KEY, + UDFunction.UDFunctionType.Scalar, + new GetKey())); + put(FlinkFunctionConstant.TO_MAP, + new UDFunction(FlinkFunctionConstant.TO_MAP, + UDFunction.UDFunctionType.TableAggregate, + new RowsToMap())); + put(FlinkFunctionConstant.TOP2, + new UDFunction(FlinkFunctionConstant.TOP2, + UDFunction.UDFunctionType.TableAggregate, + new Top2())); + } + }; + + public static Map<String, UDFunction> getUsedFunctions(String statement) { + Map<String, UDFunction> map = new HashMap<>(); + String sql = statement.toLowerCase(); + for (Map.Entry<String, UDFunction> entry : functions.entrySet()) { + if (sql.contains(entry.getKey().toLowerCase())) { + map.put(entry.getKey(), entry.getValue()); + } + } + return map; + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/catalog/UDFunction.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/catalog/UDFunction.java new file mode 100644 index 0000000..e8d5ffb --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/catalog/UDFunction.java @@ -0,0 +1,71 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function.catalog; + + +import org.apache.flink.table.functions.FunctionDefinition; + +/** + * UDFunction + * + * @author zrx + * @since 2021/6/14 22:14 + */ +@Deprecated +public class UDFunction { + + public enum UDFunctionType { + Scalar, Table, Aggregate, TableAggregate + } + + private String name; + private UDFunctionType type; + private FunctionDefinition function; + + public UDFunction(String name, UDFunctionType type, FunctionDefinition function) { + this.name = name; + this.type = type; + this.function = function; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public UDFunctionType getType() { + return type; + } + + public void setType(UDFunctionType type) { + this.type = type; + } + + public FunctionDefinition getFunction() { + return function; + } + + public void setFunction(FunctionDefinition function) { + this.function = function; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/CustomStringJavaCompiler.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/CustomStringJavaCompiler.java new file mode 100644 index 0000000..0f61107 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/CustomStringJavaCompiler.java @@ -0,0 +1,232 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.function.compiler; + +import cn.hutool.core.io.FileUtil; +import cn.hutool.core.util.StrUtil; + +import javax.tools.Diagnostic; +import javax.tools.DiagnosticCollector; +import javax.tools.FileObject; +import javax.tools.ForwardingJavaFileManager; +import javax.tools.JavaCompiler; +import javax.tools.JavaFileManager; +import javax.tools.JavaFileObject; +import javax.tools.SimpleJavaFileObject; +import javax.tools.StandardJavaFileManager; +import javax.tools.StandardLocation; +import javax.tools.ToolProvider; +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.IOException; +import java.io.OutputStream; +import java.net.URI; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +/** + * CustomStringJavaCompiler + * + * @author zrx + * @since 2021/12/28 22:46 + */ +public class CustomStringJavaCompiler { + + // fully qualified class name + private String fullClassName; + private String sourceCode; + // compiled byte code (key: fully qualified class name, value: byte code emitted by the compiler) + private Map<String, ByteJavaFileObject> javaFileObjectMap = new ConcurrentHashMap<>(); + // the system java compiler + private JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); + // diagnostics collected during compilation + private DiagnosticCollector<JavaFileObject> diagnosticsCollector = new DiagnosticCollector<>(); + // compilation time in ms + private long compilerTakeTime; + + public String getFullClassName() { + return fullClassName; + } + + public ByteJavaFileObject getJavaFileObjectMap(String name) { + return javaFileObjectMap.get(name); + } + + public CustomStringJavaCompiler(String sourceCode) { + this.sourceCode = sourceCode; + this.fullClassName = getFullClassName(sourceCode); + } + + /** + * Compile the string source code; on failure the messages are available from diagnosticsCollector + * + * @return true: compilation succeeded, false: compilation failed + */ + public boolean compiler() { + long startTime = System.currentTimeMillis(); + // wrap the standard file manager with our own implementation, overriding part of its behaviour + StandardJavaFileManager standardFileManager = compiler.getStandardFileManager(diagnosticsCollector, null, null); + JavaFileManager javaFileManager = new StringJavaFileManage(standardFileManager); + // build the source object + JavaFileObject javaFileObject = new StringJavaFileObject(fullClassName, sourceCode); + // obtain a compilation task + JavaCompiler.CompilationTask task = compiler.getTask(null, javaFileManager, diagnosticsCollector, null, null, + Arrays.asList(javaFileObject)); + boolean result = task.call(); + // record the compilation time after the task has actually run + compilerTakeTime = System.currentTimeMillis() - startTime; + return result; + } + + /** + * Compile the string source code into the given tmp path; on failure the messages are available from diagnosticsCollector + * + * @return true: compilation succeeded, false: compilation failed + */ + public boolean compilerToTmpPath(String tmpPath) { + long startTime = System.currentTimeMillis(); + File codeFile = + FileUtil.writeUtf8String(sourceCode, tmpPath + StrUtil.replace(fullClassName, ".", "/") + ".java"); + // wrap the standard file manager with our own implementation, overriding part of its behaviour + StandardJavaFileManager standardFileManager = compiler.getStandardFileManager(diagnosticsCollector, null, null); + try { + standardFileManager.setLocation(StandardLocation.CLASS_OUTPUT, + Collections.singletonList(new File(tmpPath))); + } catch (IOException e) { + throw new RuntimeException(e); + } + Iterable<? extends JavaFileObject> javaFileObject = + standardFileManager.getJavaFileObjectsFromFiles(Collections.singletonList(codeFile)); + // obtain a compilation task + JavaCompiler.CompilationTask task = + compiler.getTask(null, standardFileManager, diagnosticsCollector, null, null, javaFileObject); + boolean result = task.call(); + // record the compilation time after the task has actually run + compilerTakeTime = System.currentTimeMillis() - startTime; + return result; + } + + /** + * @return compiler messages (errors and warnings) + */ + public String getCompilerMessage() { + StringBuilder sb = new StringBuilder(); + List<Diagnostic<? extends JavaFileObject>> diagnostics = diagnosticsCollector.getDiagnostics(); + for (Diagnostic<? extends JavaFileObject> diagnostic : diagnostics) { + sb.append(diagnostic.toString()).append("\r\n"); + } + return sb.toString(); + } + + public long getCompilerTakeTime() { + return compilerTakeTime; + } + + /** + * Extract the fully qualified class name from the source + * + * @param sourceCode source code + * @return fully qualified class name + */ + public static String getFullClassName(String sourceCode) { + String className = ""; + Pattern pattern = Pattern.compile("package\\s+\\S+\\s*;"); + Matcher matcher = pattern.matcher(sourceCode); + if (matcher.find()) { + className = matcher.group().replaceFirst("package", "").replace(";", "").trim() + "."; + } + + pattern = Pattern.compile("class\\s+(\\S+)\\s+"); + matcher = pattern.matcher(sourceCode); + if (matcher.find()) { + className += matcher.group(1).trim(); + } + return className; + } + + /** + * A source file object backed by a string + */ + private class StringJavaFileObject extends SimpleJavaFileObject { + + // the source code waiting to be compiled + private String contents; + + // used when wrapping java source code into a StringJavaFileObject + public StringJavaFileObject(String className, String contents) { + super(URI.create("string:///" + className.replaceAll("\\.", "/") + Kind.SOURCE.extension), Kind.SOURCE); + this.contents = contents; + } + + // called by the compiler to read the string source + @Override + public CharSequence getCharContent(boolean ignoreEncodingErrors) throws IOException { + return contents; + } + + } + + /** + * A file object holding the byte code produced by compilation + */ + public class ByteJavaFileObject extends SimpleJavaFileObject { + + // the compiled byte code + private ByteArrayOutputStream outPutStream; + + public ByteJavaFileObject(String className, Kind kind) { + super(URI.create("string:///" + className.replaceAll("\\.", "/") + kind.extension), kind); + } + + // called by StringJavaFileManage when the compiler writes byte code (into outPutStream) + @Override + public OutputStream openOutputStream() { + outPutStream = new ByteArrayOutputStream(); + return outPutStream; + } + + // used later when a class loader loads the class + public byte[] getCompiledBytes() { + return outPutStream.toByteArray(); + } + } + + /** + * A JavaFileManager controlling where the compiled byte code goes + */ + private class StringJavaFileManage extends ForwardingJavaFileManager<JavaFileManager> { + + StringJavaFileManage(JavaFileManager fileManager) { + super(fileManager); + } + + // returns the output file object representing the given class of the given kind at the given location + @Override + public JavaFileObject getJavaFileForOutput(Location location, String className, JavaFileObject.Kind kind, + FileObject sibling) throws IOException { + ByteJavaFileObject javaFileObject = new ByteJavaFileObject(className, kind); + javaFileObjectMap.put(className, javaFileObject); + return javaFileObject; + } + } +}
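Taken together, the in-memory path is a small round trip; a sketch (class name and source string hypothetical; requires running on a JDK so that `ToolProvider.getSystemJavaCompiler()` is non-null):

```java
import net.srt.flink.function.compiler.CustomStringJavaCompiler;

public class CompilerSketch {
    public static void main(String[] args) {
        String src = "package demo; public class Hello { public String greet() { return \"hi\"; } }";
        CustomStringJavaCompiler compiler = new CustomStringJavaCompiler(src);
        if (compiler.compiler()) {
            // the byte code never touches disk; it is held per class name in javaFileObjectMap
            byte[] bytes = compiler.getJavaFileObjectMap("demo.Hello").getCompiledBytes();
            System.out.println("demo.Hello -> " + bytes.length + " bytes in "
                    + compiler.getCompilerTakeTime() + " ms");
        } else {
            System.err.println(compiler.getCompilerMessage());
        }
    }
}
```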
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/CustomStringScalaCompiler.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/CustomStringScalaCompiler.java new file mode 100644 index 0000000..4e3816e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/CustomStringScalaCompiler.java @@ -0,0 +1,53 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.compiler; + +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.function.constant.PathConstant; +import scala.runtime.AbstractFunction1; +import scala.runtime.BoxedUnit; +import scala.tools.nsc.GenericRunnerSettings; +import scala.tools.nsc.interpreter.IMain; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +@Slf4j +public class CustomStringScalaCompiler { + + private static class ErrorHandler extends AbstractFunction1<String, BoxedUnit> { + + @Override + public BoxedUnit apply(String msg) { + log.error("Interpreter error: {}", msg); + return BoxedUnit.UNIT; + } + } + + public static IMain getInterpreter(Integer missionId) { + + GenericRunnerSettings settings = new GenericRunnerSettings(new ErrorHandler()); + + settings.usejavacp().tryToSetFromPropertyValue("true"); + settings.Yreploutdir().tryToSetFromPropertyValue(PathConstant.getUdfCompilerJavaPath(missionId)); + return new IMain(settings); + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/FunctionCompiler.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/FunctionCompiler.java new file mode 100644 index 0000000..18f35cc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/FunctionCompiler.java @@ -0,0 +1,88 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function.compiler; + +import cn.hutool.core.lang.Singleton; +import cn.hutool.core.util.StrUtil; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.exception.UDFCompilerException; +import org.apache.flink.configuration.ReadableConfig; + +import java.util.List; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +public interface FunctionCompiler { + + /** + * Dynamically compile the function code online + * + * @param udf udf + * @param conf flink-conf + * @param missionId mission id + * @return whether compilation succeeded + */ + boolean compiler(UDF udf, ReadableConfig conf, Integer missionId); + + /** + * Compile a single udf + * @param udf udf instance + * @param conf flink-conf + * @param missionId mission id + * @return compilation status + */ + static boolean getCompiler(UDF udf, ReadableConfig conf, Integer missionId) { + Asserts.checkNull(udf, "udf is null"); + Asserts.checkNull(udf.getCode(), "udf code is null"); + boolean success; + switch (udf.getFunctionLanguage()) { + case JAVA: + success = Singleton.get(JavaCompiler.class).compiler(udf, conf, missionId); + break; + case SCALA: + success = Singleton.get(ScalaCompiler.class).compiler(udf, conf, missionId); + break; + case PYTHON: + success = Singleton.get(PythonFunction.class).compiler(udf, conf, missionId); + break; + default: + throw UDFCompilerException.notSupportedException(udf.getFunctionLanguage().name()); + } + return success; + } + + /** + * Compile a list of udf instances + * @param udfList udf instance list + * @param conf flink-conf + * @param missionId mission id + */ + static void getCompiler(List<UDF> udfList, ReadableConfig conf, Integer missionId) { + for (UDF udf : udfList) { + if (!getCompiler(udf, conf, missionId)) { + throw new UDFCompilerException(StrUtil.format("codeLanguage:{} , className:{} compilation failed", + udf.getFunctionLanguage(), udf.getClassName())); + } + } + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/FunctionPackage.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/FunctionPackage.java new file mode 100644 index 0000000..ad94470 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/FunctionPackage.java @@ -0,0 +1,70 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function.compiler; + + +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.data.model.UDFPath; + +import java.util.ArrayList; +import java.util.List; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +public interface FunctionPackage { + + /** + * Package the udfs + * + * @param udfList udf list + * @param missionId mission id + * @return absolute file paths + */ + String[] pack(List<UDF> udfList, Integer missionId); + + /** + * Package the udfs + * + * @param udfList udf list + * @param missionId mission id + * @return packaging result + */ + static UDFPath bale(List<UDF> udfList, Integer missionId) { + List<UDF> jvmList = new ArrayList<>(); + List<UDF> pythonList = new ArrayList<>(); + for (UDF udf : udfList) { + switch (udf.getFunctionLanguage()) { + default: + case JAVA: + case SCALA: + jvmList.add(udf); + break; + case PYTHON: + pythonList.add(udf); + } + } + return UDFPath.builder() + .jarPaths(new JVMPackage().pack(jvmList, missionId)) + .pyPaths(new PythonFunction().pack(pythonList, missionId)) + .build(); + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/JVMPackage.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/JVMPackage.java new file mode 100644 index 0000000..f1cdeb1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/JVMPackage.java @@ -0,0 +1,76 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function.compiler; + +import cn.hutool.core.collection.CollUtil; +import cn.hutool.core.io.FileUtil; +import cn.hutool.core.util.StrUtil; +import net.srt.flink.function.constant.PathConstant; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.util.ZipUtils; +import org.apache.flink.table.catalog.FunctionLanguage; + +import java.io.File; +import java.io.InputStream; +import java.nio.charset.Charset; +import java.util.List; +import java.util.stream.Collectors; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +public class JVMPackage implements FunctionPackage { + + @Override + public String[] pack(List<UDF> udfList, Integer missionId) { + if (CollUtil.isEmpty(udfList)) { + return new String[0]; + } + List<String> classNameList = udfList.stream() + .filter(udf -> udf.getFunctionLanguage() == FunctionLanguage.JAVA + || udf.getFunctionLanguage() == FunctionLanguage.SCALA) + .map(UDF::getClassName) + .collect(Collectors.toList()); + String[] clazzs = new String[classNameList.size()]; + InputStream[] fileInputStreams = new InputStream[classNameList.size()]; + if (CollUtil.isEmpty(classNameList)) { + return new String[0]; + } + + for (int i = 0; i < classNameList.size(); i++) { + String className = classNameList.get(i); + String classFile = StrUtil.replace(className, ".", "/") + ".class"; + String absoluteFilePath = PathConstant.getUdfCompilerJavaPath(missionId, classFile); + + clazzs[i] = classFile; + fileInputStreams[i] = FileUtil.getInputStream(absoluteFilePath); + } + + String jarPath = PathConstant.getUdfPackagePath(missionId) + PathConstant.UDF_JAR_NAME; + // package the compiled class files into a jar + File file = FileUtil.file(jarPath); + FileUtil.del(file); + try (ZipUtils zipWriter = new ZipUtils(file, Charset.defaultCharset())) { + zipWriter.add(clazzs, fileInputStreams); + } + return new String[]{jarPath}; + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/JavaCompiler.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/JavaCompiler.java new file mode 100644 index 0000000..a57ea36 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/JavaCompiler.java @@ -0,0 +1,65 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function.compiler; + +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.function.constant.PathConstant; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.configuration.ReadableConfig; + +/** + * Java compilation + * + * @author ZackYoung + * @since 0.6.8 + */ +@Slf4j +public class JavaCompiler implements FunctionCompiler { + + /** + * Dynamically compile the function code online + * + * @param udf udf + * @param conf flink-conf + * @param missionId mission id + * @return whether compilation succeeded + */ + @Override + public boolean compiler(UDF udf, ReadableConfig conf, Integer missionId) { + ProcessEntity process = ProcessContextHolder.getProcess(); + process.info("Compiling java code, class: " + udf.getClassName()); + + CustomStringJavaCompiler compiler = new CustomStringJavaCompiler(udf.getCode()); + boolean res = compiler.compilerToTmpPath(PathConstant.getUdfCompilerJavaPath(missionId)); + String className = compiler.getFullClassName(); + if (res) { + process.info("class compiled successfully: " + className); + process.info("compilerTakeTime: " + compiler.getCompilerTakeTime()); + return true; + } else { + log.error("class compilation failed: {}", className); + process.error("class compilation failed: " + className); + process.error(compiler.getCompilerMessage()); + return false; + } + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/PythonFunction.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/PythonFunction.java new file mode 100644 index 0000000..aaac8f5 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/PythonFunction.java @@ -0,0 +1,121 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package net.srt.flink.function.compiler; + +import cn.hutool.core.collection.CollUtil; +import cn.hutool.core.exceptions.ExceptionUtil; +import cn.hutool.core.io.FileUtil; +import cn.hutool.core.util.StrUtil; +import cn.hutool.core.util.ZipUtil; +import lombok.extern.slf4j.Slf4j; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.function.constant.PathConstant; +import net.srt.flink.function.data.model.Env; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.util.UDFUtil; +import net.srt.flink.function.util.ZipUtils; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.client.python.PythonFunctionFactory; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ReadableConfig; +import org.apache.flink.python.PythonOptions; +import org.apache.flink.table.catalog.FunctionLanguage; + +import java.io.File; +import java.io.InputStream; +import java.nio.charset.Charset; +import java.util.List; +import java.util.stream.Collectors; + +/** + * Python compilation + * + * @author ZackYoung + * @since 0.6.8 + */ +@Slf4j +public class PythonFunction implements FunctionCompiler, FunctionPackage { + + /** + * Dynamically compile the function code online + * + * @param udf udf + * @param conf flink-conf + * @param missionId mission id + * @return whether compilation succeeded + */ + @Override + public boolean compiler(UDF udf, ReadableConfig conf, Integer missionId) { + Asserts.checkNull(udf, "udf must not be null"); + ProcessEntity process = ProcessContextHolder.getProcess(); + + process.info("Compiling python code, class: " + udf.getClassName()); + File pyFile = FileUtil.writeUtf8String(udf.getCode(), + PathConstant.getUdfCompilerPythonPath(missionId, UDFUtil.getPyFileName(udf.getClassName()) + ".py")); + File zipFile = ZipUtil.zip(pyFile); + FileUtil.del(pyFile); + try { + Configuration configuration = new Configuration((Configuration) conf); + configuration.set(PythonOptions.PYTHON_FILES, zipFile.getAbsolutePath()); + configuration.set(PythonOptions.PYTHON_CLIENT_EXECUTABLE, Env.getPath()); + configuration.set(PythonOptions.PYTHON_EXECUTABLE, Env.getPath()); + + PythonFunctionFactory.getPythonFunction(udf.getClassName(), configuration, null); + process.info("Python udf compiled successfully; className: " + udf.getClassName()); + } catch (Exception e) { + process.error("Python udf compilation failed; className: " + udf.getClassName() + " . Cause: " + + ExceptionUtil.getRootCauseMessage(e)); + return false; + } + FileUtil.del(zipFile); + return true; + } + + @Override + public String[] pack(List<UDF> udfList, Integer missionId) { + if (CollUtil.isEmpty(udfList)) { + return new String[0]; + } + udfList = udfList.stream() + .filter(udf -> udf.getFunctionLanguage() == FunctionLanguage.PYTHON) + .collect(Collectors.toList()); + + if (CollUtil.isEmpty(udfList)) { + return new String[0]; + } + + InputStream[] inputStreams = udfList.stream().map(udf -> { + File file = FileUtil.writeUtf8String(udf.getCode(), PathConstant.getUdfCompilerPythonPath(missionId, + UDFUtil.getPyFileName(udf.getClassName()) + ".py")); + return FileUtil.getInputStream(file); + }).toArray(InputStream[]::new); + + String[] paths = + udfList.stream().map(x -> StrUtil.split(x.getClassName(), ".").get(0) + ".py").toArray(String[]::new); + String path = PathConstant.getUdfPackagePath(missionId, PathConstant.UDF_PYTHON_NAME); + File file = FileUtil.file(path); + FileUtil.del(file); + try (ZipUtils zipWriter = new ZipUtils(file, Charset.defaultCharset())) { + zipWriter.add(paths, inputStreams); + } + return new String[] {path}; + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/ScalaCompiler.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/ScalaCompiler.java new file mode 100644 index 0000000..37721dc --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/compiler/ScalaCompiler.java @@ -0,0 +1,49 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.compiler; + +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.configuration.ReadableConfig; + +/** + * Scala compilation + * + * @author ZackYoung + * @since 0.6.8 + */ +public class ScalaCompiler implements FunctionCompiler { + + @Override + public boolean compiler(UDF udf, ReadableConfig conf, Integer missionId) { + ProcessEntity process = ProcessContextHolder.getProcess(); + + String className = udf.getClassName(); + process.info("Compiling scala code, class: " + className); + if (CustomStringScalaCompiler.getInterpreter(missionId).compileString(udf.getCode())) { + process.info("scala class compiled successfully: " + className); + return true; + } else { + process.error("scala class compilation failed: " + className); + return false; + } + } +}
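The Scala path in one sketch (hypothetical mission id and class body; the interpreter writes its compiled classes under `PathConstant.getUdfCompilerJavaPath(missionId)`, as configured in `CustomStringScalaCompiler`):

```java
import net.srt.flink.function.compiler.CustomStringScalaCompiler;
import scala.tools.nsc.interpreter.IMain;

public class ScalaCompileSketch {
    public static void main(String[] args) {
        String code = "class Hello { def greet(): String = \"hi\" }";
        // compileString() returns true on success, logging errors through the ErrorHandler
        IMain interpreter = CustomStringScalaCompiler.getInterpreter(1);
        System.out.println(interpreter.compileString(code) ? "compiled" : "failed");
    }
}
```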
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/constant/FlinkFunctionConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/constant/FlinkFunctionConstant.java new file mode 100644 index 0000000..968de85 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/constant/FlinkFunctionConstant.java @@ -0,0 +1,36 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.constant; + +public interface FlinkFunctionConstant { + + /** + * TO_MAP function + */ + String TO_MAP = "to_map"; + /** + * GET_KEY function + */ + String GET_KEY = "get_key"; + /** + * TOP2 function + */ + String TOP2 = "top2"; +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/constant/PathConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/constant/PathConstant.java new file mode 100644 index 0000000..58c7956 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/constant/PathConstant.java @@ -0,0 +1,73 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.constant; + +import cn.hutool.core.util.StrUtil; + +import java.io.File; + +/** + * File path constants + * + * @author ZackYoung + * @since 0.6.8 + */ +public class PathConstant { + + /** base path, the directory dinky is deployed in */ + public static final String WORK_DIR = System.getProperty("user.dir"); + /** tmp path */ + public static final String TMP_PATH = WORK_DIR + File.separator + "tmp" + File.separator; + + /** udf path */ + public static final String UDF_PATH = TMP_PATH + "udf" + File.separator; + + public static final String COMPILER = "compiler"; + public static final String PACKAGE = "package"; + /** udf jar name rule */ + public static final String UDF_JAR_RULE = "udf-\\d+.jar"; + /** udf version rule */ + public static final String UDF_VERSION_RULE = "\\d+"; + /** temporary udf jar name */ + public static final String UDF_JAR_TMP_NAME = "udf-tmp.jar"; + + public static final String UDF_JAR_NAME = "udf.jar"; + public static final String DEP_MANIFEST = "dep_manifest.json"; + public static final String DEP_ZIP = "dep.zip"; + public static final String UDF_PYTHON_NAME = "python_udf.zip"; + /** temporary udf jar path */ + public static final String UDF_JAR_TMP_PATH = UDF_PATH + UDF_JAR_TMP_NAME; + + public static String getPath(Object... path) { + return StrUtil.join(File.separator, path) + File.separator; + } + + public static String getUdfCompilerJavaPath(Integer missionId, Object... path) { + return getPath(UDF_PATH, missionId, COMPILER, "java", path); + } + + public static String getUdfCompilerPythonPath(Integer missionId, Object... path) { + return getPath(UDF_PATH, missionId, COMPILER, "python", path); + } + + public static String getUdfPackagePath(Integer missionId, Object... path) { + return getPath(UDF_PATH, missionId, PACKAGE, path); + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/context/UdfPathContextHolder.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/context/UdfPathContextHolder.java new file mode 100644 index 0000000..d81b8df --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/context/UdfPathContextHolder.java @@ -0,0 +1,28 @@ +package net.srt.flink.function.context; + +import cn.hutool.core.collection.ConcurrentHashSet; + +import java.util.Set; + +/** + * @author ZackYoung + * @since 0.7.0 + */ +public class UdfPathContextHolder { + private static final ThreadLocal<Set<String>> UDF_PATH_CONTEXT = new ThreadLocal<>(); + + public static void add(String path) { + if (UDF_PATH_CONTEXT.get() == null) { + UDF_PATH_CONTEXT.set(new ConcurrentHashSet<>()); + } + UDF_PATH_CONTEXT.get().add(path); + } + + public static Set<String> get() { + return UDF_PATH_CONTEXT.get(); + } + + public static void clear() { + UDF_PATH_CONTEXT.remove(); + } +}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/data/model/Env.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/data/model/Env.java new file mode 100644 index 0000000..5d482a1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/data/model/Env.java @@ -0,0 +1,46 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.data.model; + +import org.springframework.beans.factory.annotation.Value; +import org.springframework.stereotype.Component; + +import javax.annotation.PostConstruct; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +@Component +public class Env { + + /*@Value("${dinky.python.path}")*/ + private String path; + private static String PATH; + + public static String getPath() { + return PATH; + } + + @PostConstruct + public void init() { + PATH = path == null ? "python" : path; + } +}
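For orientation, the path helpers compose like this. A sketch (the printed values depend on `user.dir` at runtime; the comments assume a Unix deploy directory of `/opt/srt`, and `getPath()` may emit a doubled separator after the `udf` segment since `UDF_PATH` already ends with one):

```java
import net.srt.flink.function.constant.PathConstant;

public class PathSketch {
    public static void main(String[] args) {
        // roughly /opt/srt/tmp/udf/7/compiler/java/ (where compiled .class files land)
        System.out.println(PathConstant.getUdfCompilerJavaPath(7));
        // roughly /opt/srt/tmp/udf/7/package/udf.jar (what JVMPackage.pack() returns)
        System.out.println(PathConstant.getUdfPackagePath(7) + PathConstant.UDF_JAR_NAME);
    }
}
```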
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.data.model; + +import lombok.Builder; +import lombok.Getter; +import lombok.Setter; +import org.apache.flink.table.catalog.FunctionLanguage; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +@Getter +@Setter +@Builder +public class UDF { + + /** + * 函数名 + */ + String name; + /** + * 类名 + */ + String className; + /** + * udf 代码语言 + */ + FunctionLanguage functionLanguage; + /** + * udf源代码 + */ + String code; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/data/model/UDFPath.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/data/model/UDFPath.java new file mode 100644 index 0000000..afadf04 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/data/model/UDFPath.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.data.model; + +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Getter; +import lombok.NoArgsConstructor; +import lombok.Setter; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +@Getter +@Setter +@Builder +@NoArgsConstructor +@AllArgsConstructor +public class UDFPath { + + String[] jarPaths; + String[] pyPaths; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/exception/UDFCompilerException.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/exception/UDFCompilerException.java new file mode 100644 index 0000000..30a0246 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/exception/UDFCompilerException.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.exception; + +import cn.hutool.core.util.StrUtil; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +public class UDFCompilerException extends RuntimeException { + + public UDFCompilerException() { + } + + public UDFCompilerException(String message) { + super(message); + } + + public static UDFCompilerException notSupportedException(String codeType) { + return new UDFCompilerException(StrUtil.format("未知的代码类型:{}", codeType)); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/pool/UdfCodePool.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/pool/UdfCodePool.java new file mode 100644 index 0000000..8c1a90b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/pool/UdfCodePool.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
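The UDF value object above carries Lombok's @Builder/@Getter/@Setter, so callers assemble instances fluently; UDFPath then groups the jar and Python artifacts produced from them. A hedged sketch of the builder usage, where the ToUpper class name and source string are purely illustrative:

```java
import net.srt.flink.function.data.model.UDF;
import org.apache.flink.table.catalog.FunctionLanguage;

public class UdfModelDemo {
    public static void main(String[] args) {
        UDF udf = UDF.builder()
                .name("to_upper")                       // function name used in SQL statements
                .className("com.example.udf.ToUpper")   // illustrative class name
                .functionLanguage(FunctionLanguage.JAVA)
                .code("/* Java source of ToUpper, elided */")
                .build();
        System.out.println(udf.getName() + " -> " + udf.getClassName());
    }
}
```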
+ *
+ */
+
+package net.srt.flink.function.pool;
+
+import cn.hutool.core.util.StrUtil;
+import net.srt.flink.function.data.model.UDF;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.Collectors;
+
+/**
+ * @author ZackYoung
+ * @since 0.7.0
+ */
+public class UdfCodePool {
+
+	/** UDF code pool: key -> class name, value -> UDF */
+	private static final Map<String, UDF> CODE_POOL = new ConcurrentHashMap<>();
+
+	public static void registerPool(List<UDF> udfList) {
+		CODE_POOL.clear();
+		CODE_POOL.putAll(udfList.stream().collect(Collectors.toMap(UDF::getClassName, udf -> udf)));
+	}
+
+	public static void addOrUpdate(UDF udf) {
+		CODE_POOL.put(udf.getClassName(), udf);
+	}
+
+	public static UDF getUDF(String className) {
+		UDF udf = CODE_POOL.get(className);
+		if (udf == null) {
+			throw new RuntimeException(StrUtil.format("class: {} does not exist!", className));
+		}
+		return udf;
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udf/GetKey.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udf/GetKey.java
new file mode 100644
index 0000000..ed7f715
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udf/GetKey.java
@@ -0,0 +1,64 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
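UdfCodePool is a process-wide registry keyed by class name: registerPool replaces the whole pool, addOrUpdate patches a single entry, and getUDF fails fast on unknown names. A usage sketch with the source literal elided:

```java
import java.util.Collections;
import net.srt.flink.function.data.model.UDF;
import net.srt.flink.function.pool.UdfCodePool;
import org.apache.flink.table.catalog.FunctionLanguage;

public class UdfCodePoolDemo {
    public static void main(String[] args) {
        UDF udf = UDF.builder()
                .name("get_key")
                .className("net.srt.flink.function.udf.GetKey") // ships in this module
                .functionLanguage(FunctionLanguage.JAVA)
                .code("/* source elided */")
                .build();

        UdfCodePool.registerPool(Collections.singletonList(udf)); // clears, then re-fills
        UdfCodePool.addOrUpdate(udf);                              // single-entry upsert

        // Unknown class names fail fast with a RuntimeException.
        System.out.println(UdfCodePool.getUDF("net.srt.flink.function.udf.GetKey").getName());
    }
}
```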
+ * + */ + +package net.srt.flink.function.udf; + +import org.apache.flink.table.functions.ScalarFunction; + +import java.util.Objects; + +public class GetKey extends ScalarFunction { + + public int eval(String map, String key, int defaultValue) { + if (map == null || !map.contains(key)) { + return defaultValue; + } + + String[] maps = extractProperties(map); + + for (String s : maps) { + String[] items = s.split("="); + if (items.length == 2 && Objects.equals(key, items[0])) { + return Integer.parseInt(items[1]); + } + } + return defaultValue; + } + + public String eval(String map, String key, String defaultValue) { + if (map == null || !map.contains(key)) { + return defaultValue; + } + + String[] maps = extractProperties(map); + + for (String s : maps) { + String[] items = s.split("="); + if (items.length == 2 && Objects.equals(key, items[0])) { + return items[1]; + } + } + return defaultValue; + } + + private String[] extractProperties(String map) { + map = map.replace("{", "").replace("}", ""); + return map.split(", "); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/RowsToMap.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/RowsToMap.java new file mode 100644 index 0000000..e5d1125 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/RowsToMap.java @@ -0,0 +1,138 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
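GetKey above parses the toString()-style rendering of a map ("{k1=v1, k2=v2}") rather than JSON, splitting entries on ", " and pairs on "=". A quick local check of that contract, with outputs following from the two eval overloads:

```java
import net.srt.flink.function.udf.GetKey;

public class GetKeyDemo {
    public static void main(String[] args) {
        GetKey getKey = new GetKey();
        System.out.println(getKey.eval("{a=1, b=2}", "b", 0));      // 2
        System.out.println(getKey.eval("{a=1, b=2}", "c", 42));     // 42 -> default, key missing
        System.out.println(getKey.eval("{a=x, b=y}", "a", "none")); // x
        System.out.println(getKey.eval(null, "a", "none"));         // none -> default, null input
    }
}
```

In SQL the function is meant to be registered under the name in FlinkFunctionConstant.GET_KEY, e.g. CREATE FUNCTION get_key AS 'net.srt.flink.function.udf.GetKey'.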
+ * + */ + +package net.srt.flink.function.udtaf; + +import org.apache.flink.table.api.DataTypes; +import org.apache.flink.table.catalog.DataTypeFactory; +import org.apache.flink.table.functions.TableAggregateFunction; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.inference.InputTypeStrategies; +import org.apache.flink.table.types.inference.TypeInference; +import org.apache.flink.util.Collector; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +/** + * RowsToMap + * + * @param Map key type + * @param Map value type + * @author zrx, lixiaoPing + * @since 2021/5/25 15:50 + **/ + +public class RowsToMap extends TableAggregateFunction, RowsToMap.MyAccum> { + + private static final long serialVersionUID = 42L; + + @Override + public TypeInference getTypeInference(DataTypeFactory typeFactory) { + return TypeInference.newBuilder() + .inputTypeStrategy(InputTypeStrategies.sequence( + InputTypeStrategies.ANY, + InputTypeStrategies.ANY)) + .accumulatorTypeStrategy(callContext -> { + List argumentDataTypes = callContext.getArgumentDataTypes(); + final DataType arg0DataType = argumentDataTypes.get(0); + final DataType arg1DataType = argumentDataTypes.get(1); + final DataType accDataType = DataTypes.STRUCTURED( + MyAccum.class, + DataTypes.FIELD("mapView", + DataTypes.MAP(arg0DataType, arg1DataType))); + return Optional.of(accDataType); + }) + .outputTypeStrategy(callContext -> { + List argumentDataTypes = callContext.getArgumentDataTypes(); + final DataType arg0DataType = argumentDataTypes.get(0); + final DataType arg1DataType = argumentDataTypes.get(1); + return Optional.of(DataTypes.MAP(arg0DataType, arg1DataType)); + }) + .build(); + } + + @Override + public MyAccum createAccumulator() { + return new MyAccum<>(); + } + + public void accumulate( + MyAccum acc, K cls, V v) { + if (v == null) { + return; + } + + acc.mapView.put(cls, v); + } + + /** + * Retracts the input values from the accumulator instance. The current design assumes the + * inputs are the values that have been previously accumulated. The method retract can be + * overloaded with different custom types and arguments. This function must be implemented for + * datastream bounded over aggregate. + * + * @param acc the accumulator which contains the current aggregated results + */ + public void retract(MyAccum acc, K cls, V v) { + if (v == null) { + return; + } + acc.mapView.remove(cls); + } + + /** + * Merges a group of accumulator instances into one accumulator instance. This function must be + * implemented for datastream session window grouping aggregate and bounded grouping aggregate. + * + * @param acc the accumulator which will keep the merged aggregate results. It should be + * noted that the accumulator may contain the previous aggregated results. + * Therefore user should not replace or clean this instance in the custom merge + * method. + * @param iterable an {@link Iterable} pointed to a group of accumulators that will be merged. 
+ */ + public void merge(MyAccum acc, Iterable> iterable) { + for (MyAccum otherAcc : iterable) { + for (Map.Entry entry : otherAcc.mapView.entrySet()) { + accumulate(acc, entry.getKey(), entry.getValue()); + } + } + } + + public void emitValue(MyAccum acc, Collector> out) { + out.collect(acc.mapView); + } + + public static class MyAccum { + + /** + * 不能 final + */ + public Map mapView; + + /** + * 不能删除,否则不能生成查询计划 + */ + public MyAccum() { + this.mapView = new HashMap<>(); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/Top2.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/Top2.java new file mode 100644 index 0000000..5357562 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/Top2.java @@ -0,0 +1,75 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.udtaf; + +import org.apache.flink.api.java.tuple.Tuple2; +import org.apache.flink.table.functions.TableAggregateFunction; +import org.apache.flink.util.Collector; + +/** + * 官网Demo Top2 + * + * @author zrx + * @since 2021/6/14 20:44 + */ + +public class Top2 extends TableAggregateFunction, Top2.Top2Accumulator> { + + public static class Top2Accumulator { + + public Integer first; + public Integer second; + } + + @Override + public Top2Accumulator createAccumulator() { + Top2Accumulator acc = new Top2Accumulator(); + acc.first = Integer.MIN_VALUE; + acc.second = Integer.MIN_VALUE; + return acc; + } + + public void accumulate(Top2Accumulator acc, Integer value) { + if (value > acc.first) { + acc.second = acc.first; + acc.first = value; + } else if (value > acc.second) { + acc.second = value; + } + } + + public void merge(Top2Accumulator acc, Iterable it) { + for (Top2Accumulator otherAcc : it) { + accumulate(acc, otherAcc.first); + accumulate(acc, otherAcc.second); + } + } + + public void emitValue(Top2Accumulator acc, Collector> out) { + // emit the value and rank + if (acc.first != Integer.MIN_VALUE) { + out.collect(Tuple2.of(acc.first, 1)); + } + if (acc.second != Integer.MIN_VALUE) { + out.collect(Tuple2.of(acc.second, 2)); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/Top2WithRetract.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/Top2WithRetract.java new file mode 100644 index 0000000..0adf8d3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/udtaf/Top2WithRetract.java @@ -0,0 +1,106 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license 
agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.function.udtaf; + +import org.apache.flink.api.java.tuple.Tuple2; +import org.apache.flink.table.functions.TableAggregateFunction; +import org.apache.flink.util.Collector; + +/** + * Top2WithRetract + * + * @author zrx + * @since 2021/12/17 18:55 + */ + +public class Top2WithRetract + extends + TableAggregateFunction, Top2WithRetract.Top2WithRetractAccumulator> { + + public static class Top2WithRetractAccumulator { + + public Integer first; + public Integer second; + public Integer oldFirst; + public Integer oldSecond; + } + + @Override + public Top2WithRetractAccumulator createAccumulator() { + Top2WithRetractAccumulator acc = new Top2WithRetractAccumulator(); + acc.first = Integer.MIN_VALUE; + acc.second = Integer.MIN_VALUE; + acc.oldFirst = Integer.MIN_VALUE; + acc.oldSecond = Integer.MIN_VALUE; + return acc; + } + + public void accumulate(Top2WithRetractAccumulator acc, Integer v) { + if (v > acc.first) { + acc.second = acc.first; + acc.first = v; + } else if (v > acc.second) { + acc.second = v; + } + } + + public void retract(Top2WithRetractAccumulator acc, Integer v) { + if (v.equals(acc.first)) { + acc.oldFirst = acc.first; + acc.oldSecond = acc.second; + acc.first = acc.second; + acc.second = Integer.MIN_VALUE; + } else if (v.equals(acc.second)) { + acc.oldSecond = acc.second; + acc.second = Integer.MIN_VALUE; + } + } + + public void emitValue(Top2WithRetractAccumulator acc, Collector> out) { + // emit the value and rank + if (acc.first != Integer.MIN_VALUE) { + out.collect(Tuple2.of(acc.first, 1)); + } + if (acc.second != Integer.MIN_VALUE) { + out.collect(Tuple2.of(acc.second, 2)); + } + } + + public void emitUpdateWithRetract( + Top2WithRetractAccumulator acc, + RetractableCollector> out) { + if (!acc.first.equals(acc.oldFirst)) { + // if there is an update, retract the old value then emit a new value + if (acc.oldFirst != Integer.MIN_VALUE) { + out.retract(Tuple2.of(acc.oldFirst, 1)); + } + out.collect(Tuple2.of(acc.first, 1)); + acc.oldFirst = acc.first; + } + if (!acc.second.equals(acc.oldSecond)) { + // if there is an update, retract the old value then emit a new value + if (acc.oldSecond != Integer.MIN_VALUE) { + out.retract(Tuple2.of(acc.oldSecond, 2)); + } + out.collect(Tuple2.of(acc.second, 2)); + acc.oldSecond = acc.second; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/FlinkUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/FlinkUtils.java new file mode 100644 index 0000000..153134d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/FlinkUtils.java @@ -0,0 +1,31 @@ +package net.srt.flink.function.util; + +import cn.hutool.core.convert.Convert; +import 
cn.hutool.core.util.StrUtil; +import org.apache.flink.runtime.util.EnvironmentInformation; + +/** + * @author ZackYoung + * @since 0.6.8 + */ +public class FlinkUtils { + public static String getFlinkVersion() { + return EnvironmentInformation.getVersion(); + } + + /** + * @param version flink version。如:1.14.6 + * @return flink 大版本,如 14 + */ + public static String getFlinkBigVersion(String version) { + return StrUtil.split(version, ".").get(1); + } + + /** + * + * @return 获取当前 flink 大版本 + */ + public static Integer getCurFlinkBigVersion() { + return Convert.toInt(getFlinkBigVersion(getFlinkVersion())); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/UDFUtil.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/UDFUtil.java new file mode 100644 index 0000000..40a8d9a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/UDFUtil.java @@ -0,0 +1,356 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
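The Top2 table aggregate a few files back mirrors the official Flink example and is registered under FlinkFunctionConstant.TOP2. A hedged wiring sketch, assuming a Flink 1.14-style Table API environment is on the classpath:

```java
import net.srt.flink.function.udtaf.Top2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;

public class Top2Demo {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Same name the platform uses via FlinkFunctionConstant.TOP2.
        tEnv.createTemporarySystemFunction("top2", Top2.class);

        Table scores = tEnv.fromValues(1, 5, 3, 9, 7).as("score");
        scores.flatAggregate(call("top2", $("score")).as("v", "rank"))
              .select($("v"), $("rank"))
              .execute()
              .print(); // emits the running top-2 values with their ranks
    }
}
```

Top2WithRetract adds emitUpdateWithRetract so that, in update streams, a displaced value is retracted before the new rank is emitted.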
+ * + */ + +package net.srt.flink.function.util; + +import cn.hutool.core.collection.CollUtil; +import cn.hutool.core.compress.ZipWriter; +import cn.hutool.core.convert.Convert; +import cn.hutool.core.io.FileUtil; +import cn.hutool.core.lang.Dict; +import cn.hutool.core.lang.Opt; +import cn.hutool.core.map.MapUtil; +import cn.hutool.core.util.ClassLoaderUtil; +import cn.hutool.core.util.ReUtil; +import cn.hutool.core.util.StrUtil; +import cn.hutool.crypto.digest.MD5; +import cn.hutool.extra.template.TemplateConfig; +import cn.hutool.extra.template.TemplateEngine; +import cn.hutool.extra.template.engine.freemarker.FreemarkerEngine; +import groovy.lang.GroovyClassLoader; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.config.Dialect; +import net.srt.flink.common.context.DinkyClassLoaderContextHolder; +import net.srt.flink.common.context.JarPathContextHolder; +import net.srt.flink.common.pool.ClassEntity; +import net.srt.flink.common.pool.ClassPool; +import net.srt.flink.function.FunctionFactory; +import net.srt.flink.function.compiler.CustomStringJavaCompiler; +import net.srt.flink.function.compiler.CustomStringScalaCompiler; +import net.srt.flink.function.constant.PathConstant; +import net.srt.flink.function.data.model.UDF; +import net.srt.flink.function.pool.UdfCodePool; +import net.srt.flink.gateway.GatewayType; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.table.catalog.FunctionLanguage; +import org.codehaus.groovy.control.CompilerConfiguration; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.InputStream; +import java.nio.charset.Charset; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.regex.Pattern; + +/** + * UDFUtil + * + * @author wenmo + * @since 2021/12/27 23:25 + */ +public class UDFUtil { + + public static final String FUNCTION_SQL_REGEX = + "create\\s+.*function\\s+(.*)\\s+as\\s+'(.*)'(\\s+language (.*))?"; + + public static final String SESSION = "SESSION"; + public static final String YARN = "YARN"; + public static final String APPLICATION = "APPLICATION"; + + /** 网关类型 map 快速获取 session 与 application 等类型,为了减少判断 */ + public static final Map> GATEWAY_TYPE_MAP = + MapUtil.builder( + SESSION, + Arrays.asList( + GatewayType.YARN_SESSION, + GatewayType.KUBERNETES_SESSION, + GatewayType.STANDALONE)) + .put( + YARN, + Arrays.asList(GatewayType.YARN_APPLICATION, GatewayType.YARN_PER_JOB)) + .put( + APPLICATION, + Arrays.asList( + GatewayType.YARN_APPLICATION, + GatewayType.KUBERNETES_APPLICATION)) + .build(); + + protected static final Logger log = LoggerFactory.getLogger(UDFUtil.class); + /** 存放 udf md5与版本对应的k,v值 */ + protected static final Map UDF_MD5_MAP = new HashMap<>(); + + private static final String FUNCTION_REGEX = "function (.*?)'(.*?)'"; + private static final String LANGUAGE_REGEX = "language (.*);"; + public static final String PYTHON_UDF_ATTR = "(\\S)\\s+=\\s+ud(?:f|tf|af|taf)"; + public static final String PYTHON_UDF_DEF = "@ud(?:f|tf|af|taf).*\\n+def\\s+(.*)\\(.*\\):"; + public static final String SCALA_UDF_CLASS = "class\\s+(\\w+)(\\s*\\(.*\\)){0,1}\\s+extends"; + public static final String SCALA_UDF_PACKAGE = "package\\s+(.*);"; + private static final TemplateEngine ENGINE = new FreemarkerEngine(new TemplateConfig()); + + /** + * 模板解析 + * + * @param dialect 方言 + * @param template 模板 + * @param className 类名 + * @return {@link String} + */ + public 
static String templateParse(String dialect, String template, String className) { + + List split = StrUtil.split(className, "."); + switch (Dialect.get(dialect)) { + case JAVA: + case SCALA: + String clazz = CollUtil.getLast(split); + String packageName = StrUtil.strip(className, clazz); + Dict data = + Dict.create() + .set("className", clazz) + .set( + "package", + Asserts.isNullString(packageName) + ? "" + : StrUtil.strip(packageName, ".")); + return ENGINE.getTemplate(template).render(data); + case PYTHON: + default: + String clazzName = split.get(0); + Dict data2 = + Dict.create() + .set("className", clazzName) + .set("attr", split.size() > 1 ? split.get(1) : null); + return ENGINE.getTemplate(template).render(data2); + } + } + + public static String[] initJavaUDF(List udf, GatewayType gatewayType, Integer missionId) { + return FunctionFactory.initUDF( + CollUtil.newArrayList( + CollUtil.filterNew( + udf, + x -> x.getFunctionLanguage() != FunctionLanguage.PYTHON)), + missionId, + null) + .getJarPaths(); + } + + public static String[] initPythonUDF( + List udf, + GatewayType gatewayType, + Integer missionId, + Configuration configuration) { + return FunctionFactory.initUDF( + CollUtil.newArrayList( + CollUtil.filterNew( + udf, + x -> x.getFunctionLanguage() == FunctionLanguage.PYTHON)), + missionId, + configuration) + .getPyPaths(); + } + + public static String getPyFileName(String className) { + Asserts.checkNullString(className, "类名不能为空"); + return StrUtil.split(className, ".").get(0); + } + + public static String getPyUDFAttr(String code) { + return Opt.ofBlankAble(ReUtil.getGroup1(UDFUtil.PYTHON_UDF_ATTR, code)) + .orElse(ReUtil.getGroup1(UDFUtil.PYTHON_UDF_DEF, code)); + } + + public static String getScalaFullClassName(String code) { + String packageName = ReUtil.getGroup1(UDFUtil.SCALA_UDF_PACKAGE, code); + String clazz = ReUtil.getGroup1(UDFUtil.SCALA_UDF_CLASS, code); + return String.join(".", Arrays.asList(packageName, clazz)); + } + + public static void initClassLoader(String name) { + ClassEntity classEntity = ClassPool.get(name); + ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader(); + CompilerConfiguration config = new CompilerConfiguration(); + config.setSourceEncoding("UTF-8"); + GroovyClassLoader groovyClassLoader = new GroovyClassLoader(contextClassLoader, config); + groovyClassLoader.setShouldRecompile(true); + groovyClassLoader.defineClass(classEntity.getName(), classEntity.getClassByte()); + Thread.currentThread().setContextClassLoader(groovyClassLoader); + } + + @Deprecated + public static Map> buildJar(List codeList) { + List successList = new ArrayList<>(); + List failedList = new ArrayList<>(); + String tmpPath = PathConstant.UDF_PATH; + String udfJarPath = PathConstant.UDF_JAR_TMP_PATH; + // 删除jar缓存 + FileUtil.del(udfJarPath); + codeList.forEach( + udf -> { + if (udf.getFunctionLanguage() == FunctionLanguage.JAVA) { + CustomStringJavaCompiler compiler = + new CustomStringJavaCompiler(udf.getCode()); + boolean res = compiler.compilerToTmpPath(tmpPath); + String className = compiler.getFullClassName(); + if (res) { + log.info("class编译成功:{}" + className); + log.info("compilerTakeTime:" + compiler.getCompilerTakeTime()); + ClassPool.push(ClassEntity.build(className, udf.getCode())); + successList.add(className); + } else { + log.warn("class编译失败:{}" + className); + log.warn(compiler.getCompilerMessage()); + failedList.add(className); + } + } else if (udf.getFunctionLanguage() == FunctionLanguage.SCALA) { + String className = 
udf.getClassName(); + if (CustomStringScalaCompiler.getInterpreter(null) + .compileString(udf.getCode())) { + log.info("scala class编译成功:{}" + className); + ClassPool.push(ClassEntity.build(className, udf.getCode())); + successList.add(className); + } else { + log.warn("scala class编译失败:{}" + className); + failedList.add(className); + } + } + }); + String[] clazzs = + successList.stream() + .map(className -> StrUtil.replace(className, ".", "/") + ".class") + .toArray(String[]::new); + InputStream[] fileInputStreams = + successList.stream() + .map(className -> tmpPath + StrUtil.replace(className, ".", "/") + ".class") + .map(FileUtil::getInputStream) + .toArray(InputStream[]::new); + // 编译好的文件打包jar + try (ZipWriter zipWriter = + new ZipWriter(FileUtil.file(udfJarPath), Charset.defaultCharset())) { + zipWriter.add(clazzs, fileInputStreams); + } + String md5 = md5sum(udfJarPath); + return MapUtil.builder("success", successList) + .put("failed", failedList) + .put("md5", Collections.singletonList(md5)) + .build(); + } + + /** + * 得到udf版本和构建jar + * + * @param codeList 代码列表 + * @return {@link java.lang.String} + */ + @Deprecated + public static String getUdfFileAndBuildJar(List codeList) { + // 1. 检查所有jar的版本,通常名字为 udf-${version}.jar;如 udf-1.jar,没有这个目录则跳过 + String md5 = buildJar(codeList).get("md5").get(0); + if (!FileUtil.exist(PathConstant.UDF_PATH)) { + FileUtil.mkdir(PathConstant.UDF_PATH); + } + + try { + // 获取所有的udf jar的 md5 值,放入 map 里面 + if (UDF_MD5_MAP.isEmpty()) { + scanUDFMD5(); + } + // 2. 如果有匹配的,返回对应udf 版本,没有则构建jar,对应信息写入 jar + if (UDF_MD5_MAP.containsKey(md5)) { + FileUtil.del(PathConstant.UDF_JAR_TMP_PATH); + return StrUtil.format("udf-{}.jar", UDF_MD5_MAP.get(md5)); + } + // 3. 生成新版本jar + Integer newVersion = + UDF_MD5_MAP.values().size() > 0 ? CollUtil.max(UDF_MD5_MAP.values()) + 1 : 1; + String jarName = StrUtil.format("udf-{}.jar", newVersion); + String newName = PathConstant.UDF_PATH + jarName; + FileUtil.rename(FileUtil.file(PathConstant.UDF_JAR_TMP_PATH), newName, true); + UDF_MD5_MAP.put(md5, newVersion); + return jarName; + } catch (Exception e) { + log.warn("builder jar failed! please check env. 
msg:{}", e.getMessage()); + throw new RuntimeException(e); + } + } + + /** 扫描udf包文件,写入md5到 UDF_MD5_MAP */ + @Deprecated + private static void scanUDFMD5() { + List fileList = FileUtil.listFileNames(PathConstant.UDF_PATH); + fileList.stream() + .filter(fileName -> ReUtil.isMatch(PathConstant.UDF_JAR_RULE, fileName)) + .distinct() + .forEach( + fileName -> { + Integer version = + Convert.toInt( + ReUtil.getGroup0( + PathConstant.UDF_VERSION_RULE, fileName)); + UDF_MD5_MAP.put(md5sum(PathConstant.UDF_PATH + fileName), version); + }); + } + + private static String md5sum(String filePath) { + return MD5.create().digestHex(FileUtil.file(filePath)); + } + + public static boolean isUdfStatement(Pattern pattern, String statement) { + return !StrUtil.isBlank(statement) + && CollUtil.isNotEmpty(ReUtil.findAll(pattern, statement, 0)); + } + + public static UDF toUDF(String statement) { + Pattern pattern = Pattern.compile(FUNCTION_SQL_REGEX, Pattern.CASE_INSENSITIVE); + if (isUdfStatement(pattern, statement)) { + List groups = CollUtil.removeEmpty(ReUtil.getAllGroups(pattern, statement)); + String udfName = groups.get(1); + String className = groups.get(2); + if (ClassLoaderUtil.isPresent(className)) { + // 获取已经加载在java的类,对应的包路径 + try { + JarPathContextHolder.addUdfPath( + FileUtil.file( + DinkyClassLoaderContextHolder.get() + .loadClass(className) + .getProtectionDomain() + .getCodeSource() + .getLocation() + .getPath())); + } catch (ClassNotFoundException e) { + throw new RuntimeException(e); + } + return null; + } + + UDF udf = UdfCodePool.getUDF(className); + return UDF.builder() + .name(udfName) + .className(className) + .code(udf.getCode()) + .functionLanguage(udf.getFunctionLanguage()) + .build(); + } + return null; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/ZipUtils.java b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/ZipUtils.java new file mode 100644 index 0000000..54ad0ab --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-function/src/main/java/net/srt/flink/function/util/ZipUtils.java @@ -0,0 +1,106 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
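UDFUtil.toUDF above recognizes CREATE FUNCTION statements with FUNCTION_SQL_REGEX and then resolves the class either from the current classloader or from UdfCodePool. A standalone check of which capture groups the method reads (statement text illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FunctionSqlRegexDemo {
    static final String FUNCTION_SQL_REGEX =
            "create\\s+.*function\\s+(.*)\\s+as\\s+'(.*)'(\\s+language (.*))?";

    public static void main(String[] args) {
        String statement = "create temporary function get_key as 'net.srt.flink.function.udf.GetKey'";
        Matcher m = Pattern.compile(FUNCTION_SQL_REGEX, Pattern.CASE_INSENSITIVE).matcher(statement);
        if (m.find()) {
            System.out.println("udf name:   " + m.group(1)); // get_key
            System.out.println("class name: " + m.group(2)); // net.srt.flink.function.udf.GetKey
        }
    }
}
```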
+ *
+ */
+
+package net.srt.flink.function.util;
+
+import cn.hutool.core.compress.ZipWriter;
+import cn.hutool.core.io.FileUtil;
+import cn.hutool.core.io.IORuntimeException;
+import cn.hutool.core.io.IoUtil;
+import cn.hutool.core.util.ArrayUtil;
+import cn.hutool.core.util.StrUtil;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipOutputStream;
+
+/**
+ * Zip archive utility class
+ *
+ * @author ZackYoung
+ * @since 0.6.8
+ */
+public class ZipUtils extends ZipWriter {
+
+	public ZipUtils(File zipFile, Charset charset) {
+		super(zipFile, charset);
+	}
+
+	public ZipUtils(OutputStream out, Charset charset) {
+		super(out, charset);
+	}
+
+	public ZipUtils(ZipOutputStream out) {
+		super(out);
+	}
+
+	@Override
+	public ZipWriter add(String[] paths, InputStream[] ins) throws IORuntimeException {
+		if (ArrayUtil.isEmpty(paths) || ArrayUtil.isEmpty(ins)) {
+			throw new IllegalArgumentException("Paths or ins is empty!");
+		}
+		if (paths.length != ins.length) {
+			throw new IllegalArgumentException("Paths length is not equal to ins length!");
+		}
+		long maxTime = Stream.of(paths).map(FileUtil::file).mapToLong(File::lastModified).max().getAsLong();
+		for (int i = 0; i < paths.length; i++) {
+			add(paths[i], ins[i], maxTime);
+		}
+
+		return this;
+	}
+
+	public ZipWriter add(String path, InputStream in, long fileTime) throws IORuntimeException {
+		path = StrUtil.nullToEmpty(path);
+		if (null == in) {
+			// An empty directory needs a normalized path: directory entries must end with "/"
+			path = StrUtil.addSuffixIfNot(path, StrUtil.SLASH);
+			if (StrUtil.isBlank(path)) {
+				return this;
+			}
+		}
+
+		return putEntry(path, in, fileTime);
+	}
+
+	private ZipWriter putEntry(String path, InputStream in, long fileTime) throws IORuntimeException {
+		try {
+			ZipEntry zipEntry = new ZipEntry(path);
+			zipEntry.setTime(fileTime);
+			super.getOut().putNextEntry(zipEntry);
+			if (null != in) {
+				IoUtil.copy(in, super.getOut());
+			}
+			super.getOut().closeEntry();
+		} catch (IOException e) {
+			throw new IORuntimeException(e);
+		} finally {
+			IoUtil.close(in);
+		}
+
+		IoUtil.flush(super.getOut());
+		return this;
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-gateway/pom.xml
new file mode 100644
index 0000000..7c68d75
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/pom.xml
@@ -0,0 +1,105 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>srt-cloud-flink</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-gateway</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-annotations</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-databind</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>cn.hutool</groupId>
+            <artifactId>hutool-all</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-client-hadoop</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.bouncycastle</groupId>
+            <artifactId>bcpkix-jdk15on</artifactId>
+            <version>1.69</version>
+        </dependency>
+        <dependency>
+            <groupId>org.bouncycastle</groupId>
+            <artifactId>bcprov-jdk15on</artifactId>
+            <version>1.69</version>
+        </dependency>
+        <dependency>
+            <groupId>org.bouncycastle</groupId>
+            <artifactId>bcprov-ext-jdk15on</artifactId>
+            <version>1.69</version>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-process</artifactId>
+            <scope>provided</scope>
+        </dependency>
+    </dependencies>
+
+    <profiles>
+        <profile>
+            <id>flink-1.16</id>
+            <dependencies>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-client-1.16</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-1.16</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+            </dependencies>
+        </profile>
+        <profile>
+            <id>flink-1.14</id>
+            <dependencies>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-client-1.14</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+                <dependency>
+                    <groupId>net.srt</groupId>
+                    <artifactId>flink-1.14</artifactId>
+                    <version>${project.version}</version>
+                    <scope>provided</scope>
+                </dependency>
+            </dependencies>
+        </profile>
+    </profiles>
+
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/AbstractGateway.java
b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/AbstractGateway.java new file mode 100644 index 0000000..08341bf --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/AbstractGateway.java @@ -0,0 +1,63 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway; + +import net.srt.flink.common.model.JobStatus; +import net.srt.flink.gateway.config.GatewayConfig; +import org.apache.flink.configuration.Configuration; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * AbstractGateway + * + * @author zrx + * @since 2021/10/29 + **/ +public abstract class AbstractGateway implements Gateway { + + protected static final Logger logger = LoggerFactory.getLogger(AbstractGateway.class); + protected GatewayConfig config; + protected Configuration configuration; + + public AbstractGateway() { + } + + public AbstractGateway(GatewayConfig config) { + this.config = config; + } + + @Override + public boolean canHandle(GatewayType type) { + return type == getType(); + } + + @Override + public void setGatewayConfig(GatewayConfig config) { + this.config = config; + } + + protected abstract void init(); + + @Override + public JobStatus getJobStatusById(String id) { + return JobStatus.UNKNOWN; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/Gateway.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/Gateway.java new file mode 100644 index 0000000..d12d703 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/Gateway.java @@ -0,0 +1,87 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
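Stepping back to ZipUtils above: its array add() stamps every entry with the maximum lastModified of whatever files the entry paths resolve to (0 when they resolve to nothing), so repeated packaging runs produce byte-stable archives. A hedged usage sketch with illustrative file names:

```java
import cn.hutool.core.io.FileUtil;
import net.srt.flink.function.util.ZipUtils;

import java.io.File;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ZipUtilsDemo {
    public static void main(String[] args) throws Exception {
        // Entry names inside the jar; the add() override derives one shared timestamp from them.
        String[] entries = {"com/example/A.class", "com/example/B.class"};
        InputStream[] streams = {
                FileUtil.getInputStream("build/com/example/A.class"), // illustrative source files
                FileUtil.getInputStream("build/com/example/B.class")
        };
        try (ZipUtils writer = new ZipUtils(new File("udf-tmp.jar"), StandardCharsets.UTF_8)) {
            writer.add(entries, streams); // pairwise: entries[i] is written from streams[i]
        }
    }
}
```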
+ *
+ */
+
+package net.srt.flink.gateway;
+
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.model.JobStatus;
+import net.srt.flink.gateway.config.GatewayConfig;
+import net.srt.flink.gateway.exception.GatewayException;
+import net.srt.flink.gateway.result.GatewayResult;
+import net.srt.flink.gateway.result.SavePointResult;
+import net.srt.flink.gateway.result.TestResult;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+
+import java.util.Iterator;
+import java.util.Optional;
+import java.util.ServiceLoader;
+
+/**
+ * Submitter
+ *
+ * @author zrx
+ * @since 2021/10/29
+ **/
+public interface Gateway {
+
+	static Optional<Gateway> get(GatewayConfig config) {
+		Asserts.checkNotNull(config, "Gateway config must not be null");
+		Asserts.checkNotNull(config.getType(), "Gateway config type must not be null");
+		ServiceLoader<Gateway> loader = ServiceLoader.load(Gateway.class);
+		Iterator<Gateway> iterator = loader.iterator();
+		while (iterator.hasNext()) {
+			Gateway gateway = iterator.next();
+			if (gateway.canHandle(config.getType())) {
+				gateway.setGatewayConfig(config);
+				return Optional.of(gateway);
+			}
+		}
+		return Optional.empty();
+	}
+
+	static Gateway build(GatewayConfig config) {
+		Optional<Gateway> optionalGateway = Gateway.get(config);
+		if (!optionalGateway.isPresent()) {
+			throw new GatewayException("Unsupported Flink Gateway type [" + config.getType().getLongValue() + "], please add the matching extension package");
+		}
+		return optionalGateway.get();
+	}
+
+	boolean canHandle(GatewayType type);
+
+	GatewayType getType();
+
+	void setGatewayConfig(GatewayConfig config);
+
+	GatewayResult submitJobGraph(JobGraph jobGraph);
+
+	GatewayResult submitJar();
+
+	SavePointResult savepointCluster();
+
+	SavePointResult savepointCluster(String savePoint);
+
+	SavePointResult savepointJob();
+
+	SavePointResult savepointJob(String savePoint);
+
+	TestResult test();
+
+	JobStatus getJobStatusById(String id);
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/GatewayType.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/GatewayType.java
new file mode 100644
index 0000000..62c6335
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/GatewayType.java
@@ -0,0 +1,116 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
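Gateway.get above is a plain java.util.ServiceLoader lookup, so the deployment mode is picked by whichever provider listed in META-INF/services/net.srt.flink.gateway.Gateway answers canHandle(type). A sketch, assuming a YARN application gateway implementation module is on the classpath:

```java
import net.srt.flink.gateway.Gateway;
import net.srt.flink.gateway.GatewayType;
import net.srt.flink.gateway.config.GatewayConfig;

public class GatewaySpiDemo {
    public static void main(String[] args) {
        GatewayConfig config = new GatewayConfig(); // pre-creates empty cluster/flink/app configs
        config.setType(GatewayType.YARN_APPLICATION);
        config.getClusterConfig().setFlinkConfigPath("/opt/flink/conf"); // illustrative path

        // Walks the ServiceLoader providers; throws GatewayException when none can handle the type.
        Gateway gateway = Gateway.build(config);
        System.out.println(gateway.getType().getLongValue()); // yarn-application
    }
}
```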
+ * + */ + +package net.srt.flink.gateway; + + +import net.srt.flink.common.assertion.Asserts; + +/** + * SubmitType + * + * @author zrx + * @since 2021/10/29 + **/ +public enum GatewayType { + + LOCAL(0, "l", "local"), + STANDALONE(1, "s", "standalone"), + YARN_SESSION(2, "ys", "yarn-session"), + YARN_PER_JOB(3, "ypj", "yarn-per-job"), + YARN_APPLICATION(4, "ya", "yarn-application"), + KUBERNETES_SESSION(5, "ks", "kubernetes-session"), + KUBERNETES_APPLICATION(6, "ka", "kubernetes-application"); + + private Integer code; + private final String value; + private final String longValue; + + GatewayType(Integer code, String value, String longValue) { + this.code = code; + this.value = value; + this.longValue = longValue; + } + + public String getValue() { + return value; + } + + public String getLongValue() { + return longValue; + } + + public Integer getCode() { + return code; + } + + public static GatewayType get(String value) { + for (GatewayType type : GatewayType.values()) { + if (Asserts.isEquals(type.getValue(), value) || Asserts.isEquals(type.getLongValue(), value)) { + return type; + } + } + return GatewayType.YARN_APPLICATION; + } + + public static GatewayType getByCode(String code) { + for (GatewayType type : GatewayType.values()) { + if (Asserts.isEquals(type.getCode().toString(), code)) { + return type; + } + } + return GatewayType.YARN_APPLICATION; + } + + public boolean equalsValue(String type) { + return Asserts.isEquals(value, type) || Asserts.isEquals(longValue, type); + } + + public static boolean isDeployCluster(String type) { + switch (get(type)) { + case YARN_APPLICATION: + case YARN_PER_JOB: + case KUBERNETES_APPLICATION: + return true; + default: + return false; + } + } + + public boolean isDeployCluster() { + switch (value) { + case "ya": + case "ypj": + case "ka": + return true; + default: + return false; + } + } + + public boolean isApplicationMode() { + switch (value) { + case "ya": + case "ka": + return true; + default: + return false; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ActionType.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ActionType.java new file mode 100644 index 0000000..86d37e0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ActionType.java @@ -0,0 +1,52 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
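GatewayType.get accepts either the short or the long alias and deliberately falls back to YARN_APPLICATION instead of throwing. A quick check of that behavior:

```java
import net.srt.flink.gateway.GatewayType;

public class GatewayTypeDemo {
    public static void main(String[] args) {
        System.out.println(GatewayType.get("ypj"));            // YARN_PER_JOB (short alias)
        System.out.println(GatewayType.get("yarn-per-job"));   // YARN_PER_JOB (long alias)
        System.out.println(GatewayType.get("nonsense"));       // YARN_APPLICATION (fallback)
        System.out.println(GatewayType.getByCode("2"));        // YARN_SESSION
        System.out.println(GatewayType.isDeployCluster("ka")); // true: kubernetes-application deploys a cluster
    }
}
```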
+ * + */ + +package net.srt.flink.gateway.config; + + +import net.srt.flink.common.assertion.Asserts; + +/** + * ActionType + * + * @author zrx + * @since 2021/11/3 21:58 + */ +public enum ActionType { + SAVEPOINT("savepoint"), CANCEL("cancel"); + + private String value; + + ActionType(String value) { + this.value = value; + } + + public String getValue() { + return value; + } + + public static ActionType get(String value) { + for (ActionType type : ActionType.values()) { + if (Asserts.isEquals(type.getValue(), value)) { + return type; + } + } + return ActionType.SAVEPOINT; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/AppConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/AppConfig.java new file mode 100644 index 0000000..74a810c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/AppConfig.java @@ -0,0 +1,56 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.config; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.assertion.Asserts; + +/** + * AppConfig + * + * @author zrx + * @since 2021/11/3 21:55 + */ +@Setter +@Getter +public class AppConfig { + private String userJarPath; + private String[] userJarParas; + private String userJarMainAppClass; + + public AppConfig() { + } + + public AppConfig(String userJarPath, String[] userJarParas, String userJarMainAppClass) { + this.userJarPath = userJarPath; + this.userJarParas = userJarParas; + this.userJarMainAppClass = userJarMainAppClass; + } + + public static AppConfig build(String userJarPath, String userJarParasStr, String userJarMainAppClass) { + if (Asserts.isNotNullString(userJarParasStr)) { + return new AppConfig(userJarPath, userJarParasStr.split(" "), userJarMainAppClass); + } else { + return new AppConfig(userJarPath, new String[]{}, userJarMainAppClass); + + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ClusterConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ClusterConfig.java new file mode 100644 index 0000000..ab94497 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ClusterConfig.java @@ -0,0 +1,69 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
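AppConfig.build splits the single parameter string on single spaces, so arguments with embedded spaces cannot be expressed through this helper. A sketch with illustrative values:

```java
import net.srt.flink.gateway.config.AppConfig;

public class AppConfigDemo {
    public static void main(String[] args) {
        AppConfig app = AppConfig.build(
                "/opt/jobs/my-flink-job.jar",     // illustrative jar path
                "--env prod --checkpoint 60s",    // split on " " into four parameters
                "com.example.MyFlinkJob");        // illustrative main class
        System.out.println(app.getUserJarParas().length); // 4
    }
}
```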
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.config; + +import lombok.Getter; +import lombok.Setter; + +/** + * ClusterConfig + * + * @author zrx + * @since 2021/11/3 21:52 + */ +@Getter +@Setter +public class ClusterConfig { + private String flinkConfigPath; + private String flinkLibPath; + private String yarnConfigPath; + private String appId; + + public ClusterConfig() { + } + + public ClusterConfig(String flinkConfigPath) { + this.flinkConfigPath = flinkConfigPath; + } + + public ClusterConfig(String flinkConfigPath, String flinkLibPath, String yarnConfigPath) { + this.flinkConfigPath = flinkConfigPath; + this.flinkLibPath = flinkLibPath; + this.yarnConfigPath = yarnConfigPath; + } + + public static ClusterConfig build(String flinkConfigPath) { + return new ClusterConfig(flinkConfigPath); + } + + public static ClusterConfig build(String flinkConfigPath, String flinkLibPath, String yarnConfigPath) { + return new ClusterConfig(flinkConfigPath, flinkLibPath, yarnConfigPath); + } + + @Override + public String toString() { + return "ClusterConfig{" + + "flinkConfigPath='" + flinkConfigPath + '\'' + + ", flinkLibPath='" + flinkLibPath + '\'' + + ", yarnConfigPath='" + yarnConfigPath + '\'' + + ", appId='" + appId + '\'' + + '}'; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ConfigPara.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ConfigPara.java new file mode 100644 index 0000000..a4d17d7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/ConfigPara.java @@ -0,0 +1,55 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
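ClusterConfig bundles the three paths a YARN submission needs; appId stays null until a cluster has actually been deployed. A sketch with illustrative paths:

```java
import net.srt.flink.gateway.config.ClusterConfig;

public class ClusterConfigDemo {
    public static void main(String[] args) {
        ClusterConfig cluster = ClusterConfig.build(
                "/opt/flink/conf",     // directory holding flink-conf.yaml
                "/opt/flink/lib",      // jars shipped to the cluster
                "/etc/hadoop/conf");   // yarn-site.xml / core-site.xml directory
        System.out.println(cluster);  // toString() prints all four fields, appId still null
    }
}
```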
+ * + */ + +package net.srt.flink.gateway.config; + +/** + * ConfigPara + * + * @author zrx + * @since 2021/11/2 + **/ +public class ConfigPara { + private String key; + private String value; + + public ConfigPara() { + } + + public ConfigPara(String key, String value) { + this.key = key; + this.value = value; + } + + public String getKey() { + return key; + } + + public void setKey(String key) { + this.key = key; + } + + public String getValue() { + return value; + } + + public void setValue(String value) { + this.value = value; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/FlinkConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/FlinkConfig.java new file mode 100644 index 0000000..a502106 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/FlinkConfig.java @@ -0,0 +1,98 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.config; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.common.assertion.Asserts; + +import java.util.HashMap; +import java.util.Map; + +/** + * FlinkConfig + * + * @author zrx + * @since 2021/11/3 21:56 + */ +@Getter +@Setter +public class FlinkConfig { + private String jobName; + private String jobId; + private ActionType action; + private SavePointType savePointType; + private String savePoint; + // private List configParas; + private Map configuration = new HashMap<>(); + + private static final ObjectMapper mapper = new ObjectMapper(); + + public static final String DEFAULT_SAVEPOINT_PREFIX = "hdfs:///flink/savepoints/"; + + public FlinkConfig() { + } + + public FlinkConfig(Map configuration) { + this.configuration = configuration; + } + + public FlinkConfig(String jobName, String jobId, ActionType action, SavePointType savePointType, String savePoint, Map configuration) { + this.jobName = jobName; + this.jobId = jobId; + this.action = action; + this.savePointType = savePointType; + this.savePoint = savePoint; + this.configuration = configuration; + } + + public static FlinkConfig build(Map paras) { + /*List configParasList = new ArrayList<>(); + for (Map.Entry entry : paras.entrySet()) { + configParasList.add(new ConfigPara(entry.getKey(),entry.getValue())); + }*/ + return new FlinkConfig(paras); + } + + public static FlinkConfig build(String jobName, String jobId, String actionStr, String savePointTypeStr, String savePoint, String configParasStr) { + //List configParasList = new ArrayList<>(); + Map configMap = new HashMap<>(); + JsonNode paras = null; + if (Asserts.isNotNullString(configParasStr)) { + try { + paras = mapper.readTree(configParasStr); + } catch (JsonProcessingException e) { + e.printStackTrace(); + } + paras.forEach((JsonNode node) -> { + configMap.put(node.get("key").asText(), node.get("value").asText()); + }); + } + return new FlinkConfig(jobName, jobId, ActionType.get(actionStr), SavePointType.get(savePointTypeStr), savePoint, configMap); + } + + public static FlinkConfig build(String jobId, String actionStr, String savePointTypeStr, String savePoint) { + return new FlinkConfig(null, jobId, ActionType.get(actionStr), SavePointType.get(savePointTypeStr), savePoint, null); + } +} + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/GatewayConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/GatewayConfig.java new file mode 100644 index 0000000..ef206de --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/GatewayConfig.java @@ -0,0 +1,102 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.config; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.gateway.GatewayType; + +import java.util.HashMap; +import java.util.Map; + +/** + * SubmitConfig + * + * @author zrx + * @since 2021/10/29 + **/ +@Getter +@Setter +public class GatewayConfig { + + private Integer taskId; + private String[] jarPaths; + private GatewayType type; + private ClusterConfig clusterConfig; + private FlinkConfig flinkConfig; + private AppConfig appConfig; + + private static final ObjectMapper mapper = new ObjectMapper(); + + public GatewayConfig() { + clusterConfig = new ClusterConfig(); + flinkConfig = new FlinkConfig(); + appConfig = new AppConfig(); + } + + public static GatewayConfig build(JsonNode para) { + GatewayConfig config = new GatewayConfig(); + if (para.has("taskId")) { + config.setTaskId(para.get("taskId").asInt()); + } + config.setType(GatewayType.get(para.get("type").asText())); + if (para.has("flinkConfigPath")) { + config.getClusterConfig().setFlinkConfigPath(para.get("flinkConfigPath").asText()); + } + if (para.has("flinkLibPath")) { + config.getClusterConfig().setFlinkLibPath(para.get("flinkLibPath").asText()); + } + if (para.has("yarnConfigPath")) { + config.getClusterConfig().setYarnConfigPath(para.get("yarnConfigPath").asText()); + } + if (para.has("jobName")) { + config.getFlinkConfig().setJobName(para.get("jobName").asText()); + } + if (para.has("userJarPath")) { + config.getAppConfig().setUserJarPath(para.get("userJarPath").asText()); + } + if (para.has("userJarParas")) { + config.getAppConfig().setUserJarParas(para.get("userJarParas").asText().split("\\s+")); + } + if (para.has("userJarMainAppClass")) { + config.getAppConfig().setUserJarMainAppClass(para.get("userJarMainAppClass").asText()); + } + if (para.has("savePoint")) { + config.getFlinkConfig().setSavePoint(para.get("savePoint").asText()); + } + if (para.has("configParas")) { + try { + Map configMap = new HashMap<>(); + JsonNode paras = mapper.readTree(para.get("configParas").asText()); + paras.forEach((JsonNode node) -> { + configMap.put(node.get("key").asText(), node.get("value").asText()); + }); + config.getFlinkConfig().setConfiguration(configMap); + } catch (JsonProcessingException e) { + e.printStackTrace(); + } + } + return config; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/SavePointStrategy.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/SavePointStrategy.java new file mode 100644 index 0000000..aad66b4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/SavePointStrategy.java @@ -0,0 +1,49 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.config; + +/** + * SavePointStrategy + * + * @author zrx + * @since 2021/11/23 10:28 + **/ +public enum SavePointStrategy { + NONE(0), LATEST(1), EARLIEST(2), CUSTOM(3); + + private Integer value; + + SavePointStrategy(Integer value) { + this.value = value; + } + + public Integer getValue() { + return value; + } + + public static SavePointStrategy get(Integer value) { + for (SavePointStrategy type : SavePointStrategy.values()) { + if (type.getValue().equals(value)) { + return type; + } + } + return SavePointStrategy.NONE; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/SavePointType.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/SavePointType.java new file mode 100644 index 0000000..ea482ce --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/config/SavePointType.java @@ -0,0 +1,52 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.config; + + +import net.srt.flink.common.assertion.Asserts; + +/** + * SavePointType + * + * @author zrx + * @since 2021/11/3 21:58 + */ +public enum SavePointType { + TRIGGER("trigger"), DISPOSE("dispose"), STOP("stop"), CANCEL("cancel"); + + private String value; + + SavePointType(String value) { + this.value = value; + } + + public String getValue() { + return value; + } + + public static SavePointType get(String value) { + for (SavePointType type : SavePointType.values()) { + if (Asserts.isEqualsIgnoreCase(type.getValue(), value)) { + return type; + } + } + return SavePointType.TRIGGER; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/exception/GatewayException.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/exception/GatewayException.java new file mode 100644 index 0000000..7615d4e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/exception/GatewayException.java @@ -0,0 +1,37 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.exception; + +/** + * GatewayException + * + * @author zrx + * @since 2021/10/29 + **/ +public class GatewayException extends RuntimeException { + + public GatewayException(String message, Throwable cause) { + super(message, cause); + } + + public GatewayException(String message) { + super(message); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/kubernetes/KubernetesApplicationGateway.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/kubernetes/KubernetesApplicationGateway.java new file mode 100644 index 0000000..938ec9b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/kubernetes/KubernetesApplicationGateway.java @@ -0,0 +1,138 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.kubernetes; + +import lombok.SneakyThrows; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.gateway.GatewayType; +import net.srt.flink.gateway.config.AppConfig; +import net.srt.flink.gateway.exception.GatewayException; +import net.srt.flink.gateway.result.GatewayResult; +import net.srt.flink.gateway.result.KubernetesResult; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.client.deployment.ClusterSpecification; +import org.apache.flink.client.deployment.application.ApplicationConfiguration; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.client.program.ClusterClientProvider; +import org.apache.flink.configuration.JobManagerOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.configuration.TaskManagerOptions; +import org.apache.flink.kubernetes.KubernetesClusterDescriptor; +import org.apache.flink.runtime.client.JobStatusMessage; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.http.util.TextUtils; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; + +/** + * KubernetesApplicationGateway + * + * @author zrx + * @since 2021/12/26 14:59 + */ +public class KubernetesApplicationGateway extends KubernetesGateway { + @Override + public GatewayType getType() { + return GatewayType.KUBERNETES_APPLICATION; + } + + @Override + public GatewayResult submitJobGraph(JobGraph jobGraph) { + throw new GatewayException("Couldn't deploy Kubernetes Application Cluster with job graph."); + } + + @SneakyThrows + @Override + public GatewayResult submitJar() { + if (Asserts.isNull(client)) { + init(); + } + KubernetesResult result = KubernetesResult.build(getType()); + AppConfig appConfig = config.getAppConfig(); + configuration.set(PipelineOptions.JARS, Collections.singletonList(appConfig.getUserJarPath())); + String[] userJarParas = appConfig.getUserJarParas(); + if (Asserts.isNull(userJarParas)) { + userJarParas = new String[0]; + } + ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(userJarParas, appConfig.getUserJarMainAppClass()); + KubernetesClusterDescriptor kubernetesClusterDescriptor = new KubernetesClusterDescriptor(configuration, client); + + ClusterSpecification.ClusterSpecificationBuilder clusterSpecificationBuilder = new ClusterSpecification.ClusterSpecificationBuilder(); + if (configuration.contains(JobManagerOptions.TOTAL_PROCESS_MEMORY)) { + clusterSpecificationBuilder.setMasterMemoryMB(configuration.get(JobManagerOptions.TOTAL_PROCESS_MEMORY).getMebiBytes()); + } + if (configuration.contains(TaskManagerOptions.TOTAL_PROCESS_MEMORY)) { + clusterSpecificationBuilder.setTaskManagerMemoryMB(configuration.get(TaskManagerOptions.TOTAL_PROCESS_MEMORY).getMebiBytes()); + } + if (configuration.contains(TaskManagerOptions.NUM_TASK_SLOTS)) { + clusterSpecificationBuilder.setSlotsPerTaskManager(configuration.get(TaskManagerOptions.NUM_TASK_SLOTS)).createClusterSpecification(); + } + + try { + ClusterClientProvider clusterClientProvider = kubernetesClusterDescriptor.deployApplicationCluster( + clusterSpecificationBuilder.createClusterSpecification(), 
applicationConfiguration); + ClusterClient clusterClient = clusterClientProvider.getClusterClient(); + Collection jobStatusMessages = clusterClient.listJobs().get(); + //zrx ProjectSystemConfiguration + ProcessEntity process = ProcessContextHolder.getProcess(); + int counts = ProjectSystemConfiguration.getByProjectId(process.getProjectId()).getJobIdWait(); + while (jobStatusMessages.size() == 0 && counts > 0) { + Thread.sleep(1000); + counts--; + jobStatusMessages = clusterClient.listJobs().get(); + if (jobStatusMessages.size() > 0) { + break; + } + } + if (jobStatusMessages.size() > 0) { + List jids = new ArrayList<>(); + for (JobStatusMessage jobStatusMessage : jobStatusMessages) { + jids.add(jobStatusMessage.getJobId().toHexString()); + } + result.setJids(jids); + } + String jobId = ""; + //application mode only have one job, so we can get any one to be jobId + for (JobStatusMessage jobStatusMessage : jobStatusMessages) { + jobId = jobStatusMessage.getJobId().toHexString(); + } + //if JobStatusMessage not have job id, use timestamp + //and... it`s maybe wrong with submit + if (TextUtils.isEmpty(jobId)) { + jobId = "unknown" + System.currentTimeMillis(); + } + result.setClusterId(jobId); + result.setWebURL(clusterClient.getWebInterfaceURL()); + result.success(); + // zrx + } /*catch (Exception e) { + result.fail(LogUtil.getError(e)); + }*/ finally { + kubernetesClusterDescriptor.close(); + } + return result; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/kubernetes/KubernetesGateway.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/kubernetes/KubernetesGateway.java new file mode 100644 index 0000000..852f819 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/kubernetes/KubernetesGateway.java @@ -0,0 +1,222 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.kubernetes; + +import net.srt.flink.client.utils.FlinkUtil; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.gateway.AbstractGateway; +import net.srt.flink.gateway.config.ActionType; +import net.srt.flink.gateway.config.GatewayConfig; +import net.srt.flink.gateway.exception.GatewayException; +import net.srt.flink.gateway.model.JobInfo; +import net.srt.flink.gateway.result.SavePointResult; +import net.srt.flink.gateway.result.TestResult; +import org.apache.flink.api.common.JobID; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.configuration.CheckpointingOptions; +import org.apache.flink.configuration.DeploymentOptions; +import org.apache.flink.configuration.GlobalConfiguration; +import org.apache.flink.kubernetes.KubernetesClusterClientFactory; +import org.apache.flink.kubernetes.KubernetesClusterDescriptor; +import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions; +import org.apache.flink.kubernetes.kubeclient.FlinkKubeClient; +import org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory; +import org.apache.flink.runtime.client.JobStatusMessage; +import org.apache.flink.runtime.jobgraph.SavepointConfigOptions; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import java.util.concurrent.CompletableFuture; + +/** + * KubernetesGateway + * + * @author zrx + * @since 2021/12/26 14:09 + */ +public abstract class KubernetesGateway extends AbstractGateway { + + protected FlinkKubeClient client; + + public KubernetesGateway() { + } + + public KubernetesGateway(GatewayConfig config) { + super(config); + } + + @Override + public void init() { + initConfig(); + initKubeClient(); + } + + private void initConfig() { + configuration = GlobalConfiguration.loadConfiguration(config.getClusterConfig().getFlinkConfigPath()); + if (Asserts.isNotNull(config.getFlinkConfig().getConfiguration())) { + addConfigParas(config.getFlinkConfig().getConfiguration()); + } + configuration.set(DeploymentOptions.TARGET, getType().getLongValue()); + if (Asserts.isNotNullString(config.getFlinkConfig().getSavePoint())) { + configuration.setString(SavepointConfigOptions.SAVEPOINT_PATH, config.getFlinkConfig().getSavePoint()); + } + if (Asserts.isNotNullString(config.getFlinkConfig().getJobName())) { + configuration.set(KubernetesConfigOptions.CLUSTER_ID, config.getFlinkConfig().getJobName()); + } + if (getType().isApplicationMode()) { + String uuid = UUID.randomUUID().toString().replace("-", ""); + if (configuration.contains(CheckpointingOptions.CHECKPOINTS_DIRECTORY)) { + configuration.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, configuration.getString(CheckpointingOptions.CHECKPOINTS_DIRECTORY) + "/" + uuid); + } + if (configuration.contains(CheckpointingOptions.SAVEPOINT_DIRECTORY)) { + configuration.set(CheckpointingOptions.SAVEPOINT_DIRECTORY, configuration.getString(CheckpointingOptions.SAVEPOINT_DIRECTORY) + "/" + uuid); + } + } + } + + private void initKubeClient() { + client = FlinkKubeClientFactory.getInstance().fromConfiguration(configuration, "client"); + } + + private void addConfigParas(Map configMap) { + if (Asserts.isNotNull(configMap)) { + for (Map.Entry entry : configMap.entrySet()) { + if (Asserts.isAllNotNullString(entry.getKey(), entry.getValue())) { + this.configuration.setString(entry.getKey(), entry.getValue()); + } + } + } + } + + @Override + public 
SavePointResult savepointCluster() {
+		return savepointCluster(null);
+	}
+
+	@Override
+	public SavePointResult savepointCluster(String savePoint) {
+		if (Asserts.isNull(client)) {
+			init();
+		}
+		SavePointResult result = SavePointResult.build(getType());
+		configuration.set(KubernetesConfigOptions.CLUSTER_ID, config.getClusterConfig().getAppId());
+		KubernetesClusterClientFactory clusterClientFactory = new KubernetesClusterClientFactory();
+		String clusterId = clusterClientFactory.getClusterId(configuration);
+		if (Asserts.isNull(clusterId)) {
+			throw new GatewayException("No cluster id was specified. Please specify a cluster to which you would like to connect.");
+		}
+		KubernetesClusterDescriptor clusterDescriptor = clusterClientFactory.createClusterDescriptor(configuration);
+		try (ClusterClient<String> clusterClient = clusterDescriptor.retrieve(clusterId).getClusterClient()) {
+			List<JobInfo> jobInfos = new ArrayList<>();
+			CompletableFuture<Collection<JobStatusMessage>> listJobsFuture = clusterClient.listJobs();
+			for (JobStatusMessage jobStatusMessage : listJobsFuture.get()) {
+				JobInfo jobInfo = new JobInfo(jobStatusMessage.getJobId().toHexString());
+				jobInfo.setStatus(JobInfo.JobStatus.RUN);
+				jobInfos.add(jobInfo);
+			}
+			runSavePointJob(jobInfos, clusterClient, savePoint);
+			result.setJobInfos(jobInfos);
+		} catch (Exception e) {
+			result.fail(LogUtil.getError(e));
+		}
+		return result;
+	}
+
+	@Override
+	public SavePointResult savepointJob() {
+		return savepointJob(null);
+	}
+
+	@Override
+	public SavePointResult savepointJob(String savePoint) {
+		if (Asserts.isNull(client)) {
+			init();
+		}
+		if (Asserts.isNull(config.getFlinkConfig().getJobId())) {
+			throw new GatewayException("No job id was specified. Please specify a job to which you would like to savepoint.");
+		}
+		SavePointResult result = SavePointResult.build(getType());
+		configuration.set(KubernetesConfigOptions.CLUSTER_ID, config.getClusterConfig().getAppId());
+		KubernetesClusterClientFactory clusterClientFactory = new KubernetesClusterClientFactory();
+		String clusterId = clusterClientFactory.getClusterId(configuration);
+		if (Asserts.isNull(clusterId)) {
+			throw new GatewayException("No cluster id was specified.
Please specify a cluster to which you would like to connect."); + } + KubernetesClusterDescriptor clusterDescriptor = clusterClientFactory.createClusterDescriptor(configuration); + try (ClusterClient clusterClient = clusterDescriptor.retrieve(clusterId).getClusterClient()) { + List jobInfos = new ArrayList<>(); + jobInfos.add(new JobInfo(config.getFlinkConfig().getJobId(), JobInfo.JobStatus.FAIL)); + runSavePointJob(jobInfos, clusterClient, savePoint); + result.setJobInfos(jobInfos); + } catch (Exception e) { + result.fail(LogUtil.getError(e)); + // zrx + throw new RuntimeException(LogUtil.getError(e)); + } + return result; + } + + private void runSavePointJob(List jobInfos, ClusterClient clusterClient, String savePoint) throws Exception { + for (JobInfo jobInfo : jobInfos) { + if (ActionType.CANCEL == config.getFlinkConfig().getAction()) { + clusterClient.cancel(JobID.fromHexString(jobInfo.getJobId())); + jobInfo.setStatus(JobInfo.JobStatus.CANCEL); + continue; + } + switch (config.getFlinkConfig().getSavePointType()) { + case TRIGGER: + jobInfo.setSavePoint(FlinkUtil.triggerSavepoint(clusterClient,jobInfo.getJobId(),savePoint)); + break; + case STOP: + jobInfo.setSavePoint(FlinkUtil.stopWithSavepoint(clusterClient,jobInfo.getJobId(),savePoint)); + jobInfo.setStatus(JobInfo.JobStatus.STOP); + break; + case CANCEL: + jobInfo.setSavePoint(FlinkUtil.cancelWithSavepoint(clusterClient,jobInfo.getJobId(),savePoint)); + jobInfo.setStatus(JobInfo.JobStatus.CANCEL); + break; + default: + } + } + } + + @Override + public TestResult test() { + try { + initConfig(); + } catch (Exception e) { + logger.error("测试 Flink 配置失败:" + e.getMessage()); + return TestResult.fail("测试 Flink 配置失败:" + e.getMessage()); + } + try { + initKubeClient(); + logger.info("配置连接测试成功"); + return TestResult.success(); + } catch (Exception e) { + logger.error("测试 Kubernetes 配置失败:" + e.getMessage()); + return TestResult.fail("测试 Kubernetes 配置失败:" + e.getMessage()); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/model/JobInfo.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/model/JobInfo.java new file mode 100644 index 0000000..8a44b8c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/model/JobInfo.java @@ -0,0 +1,56 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.model; + +import lombok.Getter; +import lombok.Setter; + +/** + * JobInfo + * + * @author zrx + * @since 2021/11/3 21:45 + */ +@Getter +@Setter +public class JobInfo { + private String jobId; + private String savePoint; + private JobStatus status; + + public JobInfo(String jobId) { + this.jobId = jobId; + } + + public JobInfo(String jobId, JobStatus status) { + this.jobId = jobId; + this.status = status; + } + + public enum JobStatus { + RUN("run"), STOP("stop"), CANCEL("cancel"), FAIL("fail"); + + private String value; + + JobStatus(String value) { + this.value = value; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/AbstractGatewayResult.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/AbstractGatewayResult.java new file mode 100644 index 0000000..cddfc1d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/AbstractGatewayResult.java @@ -0,0 +1,71 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.result; + +import lombok.Getter; +import lombok.Setter; +import net.srt.flink.gateway.GatewayType; + +import java.time.LocalDateTime; + +/** + * AbstractGatewayResult + * + * @author zrx + * @since 2021/10/29 15:44 + **/ +@Setter +@Getter +public abstract class AbstractGatewayResult implements GatewayResult { + + protected GatewayType type; + protected LocalDateTime startTime; + protected LocalDateTime endTime; + protected boolean isSuccess; + protected String exceptionMsg; + + public AbstractGatewayResult(GatewayType type, LocalDateTime startTime) { + this.type = type; + this.startTime = startTime; + } + + public AbstractGatewayResult(LocalDateTime startTime, LocalDateTime endTime, boolean isSuccess, String exceptionMsg) { + this.startTime = startTime; + this.endTime = endTime; + this.isSuccess = isSuccess; + this.exceptionMsg = exceptionMsg; + } + + public void success() { + this.isSuccess = true; + this.endTime = LocalDateTime.now(); + } + + public void fail(String error) { + this.isSuccess = false; + this.endTime = LocalDateTime.now(); + this.exceptionMsg = error; + } + + @Override + public String getError() { + return exceptionMsg; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/GatewayResult.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/GatewayResult.java new file mode 100644 index 0000000..4f42951 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/GatewayResult.java @@ -0,0 +1,39 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.result; + +import java.util.List; + +/** + * GatewayResult + * + * @author zrx + * @since 2021/10/29 15:39 + **/ +public interface GatewayResult { + + String getAppId(); + + String getWebURL(); + + List getJids(); + + String getError(); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/KubernetesResult.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/KubernetesResult.java new file mode 100644 index 0000000..0b15f0d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/KubernetesResult.java @@ -0,0 +1,82 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.result; + + +import net.srt.flink.gateway.GatewayType; + +import java.time.LocalDateTime; +import java.util.List; + +/** + * KubernetesResult + * + * @author zrx + * @since 2021/12/26 15:06 + */ +public class KubernetesResult extends AbstractGatewayResult { + private String clusterId; + private String webURL; + private List jids; + + public KubernetesResult(GatewayType type, LocalDateTime startTime) { + super(type, startTime); + } + + public KubernetesResult(String clusterId, LocalDateTime startTime, LocalDateTime endTime, boolean isSuccess, String exceptionMsg) { + super(startTime, endTime, isSuccess, exceptionMsg); + this.clusterId = clusterId; + } + + public String getClusterId() { + return clusterId; + } + + @Override + public String getAppId() { + return clusterId; + } + + public void setClusterId(String clusterId) { + this.clusterId = clusterId; + } + + public void setWebURL(String webURL) { + this.webURL = webURL; + } + + @Override + public String getWebURL() { + return webURL; + } + + @Override + public List getJids() { + return jids; + } + + public void setJids(List jids) { + this.jids = jids; + } + + public static KubernetesResult build(GatewayType type) { + return new KubernetesResult(type, LocalDateTime.now()); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/SavePointResult.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/SavePointResult.java new file mode 100644 index 0000000..61c984a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/SavePointResult.java @@ -0,0 +1,69 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.gateway.result;
+
+import lombok.Getter;
+import lombok.Setter;
+import net.srt.flink.gateway.GatewayType;
+import net.srt.flink.gateway.model.JobInfo;
+
+import java.time.LocalDateTime;
+import java.util.List;
+
+/**
+ * SavePointResult
+ *
+ * @author zrx
+ * @since 2021/11/3 22:20
+ */
+@Getter
+@Setter
+public class SavePointResult extends AbstractGatewayResult {
+	private String appId;
+	private List<JobInfo> jobInfos;
+
+	public SavePointResult(GatewayType type, LocalDateTime startTime) {
+		super(type, startTime);
+	}
+
+	public SavePointResult(LocalDateTime startTime, LocalDateTime endTime, boolean isSuccess, String exceptionMsg) {
+		super(startTime, endTime, isSuccess, exceptionMsg);
+	}
+
+	@Override
+	public String getAppId() {
+		return appId;
+	}
+
+	@Override
+	public String getWebURL() {
+		return null;
+	}
+
+	@Override
+	public List<String> getJids() {
+		return null;
+	}
+
+	public static SavePointResult build(GatewayType type) {
+		return new SavePointResult(type, LocalDateTime.now());
+	}
+
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/TestResult.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/TestResult.java new file mode 100644 index 0000000..a7a1070 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/TestResult.java @@ -0,0 +1,52 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.gateway.result;
+
+/**
+ * TestResult
+ *
+ * @author zrx
+ * @since 2021/11/27 16:12
+ **/
+public class TestResult {
+	private boolean isAvailable;
+	private String error;
+
+	public boolean isAvailable() {
+		return isAvailable;
+	}
+
+	public String getError() {
+		return error;
+	}
+
+	public TestResult(boolean isAvailable, String error) {
+		this.isAvailable = isAvailable;
+		this.error = error;
+	}
+
+	public static TestResult success() {
+		return new TestResult(true, null);
+	}
+
+	public static TestResult fail(String error) {
+		return new TestResult(false, error);
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/YarnResult.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/YarnResult.java new file mode 100644 index 0000000..847925c --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/result/YarnResult.java @@ -0,0 +1,80 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.result; + + +import net.srt.flink.gateway.GatewayType; + +import java.time.LocalDateTime; +import java.util.List; + +/** + * YarnResult + * + * @author zrx + * @since 2021/10/29 + **/ +public class YarnResult extends AbstractGatewayResult { + + private String appId; + private String webURL; + private List jids; + + public YarnResult(GatewayType type, LocalDateTime startTime) { + super(type, startTime); + } + + public YarnResult(String appId, LocalDateTime startTime, LocalDateTime endTime, boolean isSuccess, String exceptionMsg) { + super(startTime, endTime, isSuccess, exceptionMsg); + this.appId = appId; + } + + @Override + public String getAppId() { + return appId; + } + + @Override + public String getWebURL() { + return webURL; + } + + public void setAppId(String appId) { + this.appId = appId; + } + + public void setWebURL(String webURL) { + this.webURL = webURL; + } + + @Override + public List getJids() { + return jids; + } + + public void setJids(List jids) { + this.jids = jids; + } + + public static YarnResult build(GatewayType type) { + return new YarnResult(type, LocalDateTime.now()); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnApplicationGateway.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnApplicationGateway.java new file mode 100644 index 0000000..711bd3a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnApplicationGateway.java @@ -0,0 +1,155 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.yarn; + +import cn.hutool.core.io.FileUtil; +import lombok.SneakyThrows; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.model.SystemConfiguration; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.gateway.GatewayType; +import net.srt.flink.gateway.config.AppConfig; +import net.srt.flink.gateway.config.GatewayConfig; +import net.srt.flink.gateway.exception.GatewayException; +import net.srt.flink.gateway.result.GatewayResult; +import net.srt.flink.gateway.result.YarnResult; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.client.deployment.ClusterSpecification; +import org.apache.flink.client.deployment.application.ApplicationConfiguration; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.client.program.ClusterClientProvider; +import org.apache.flink.configuration.JobManagerOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.configuration.TaskManagerOptions; +import org.apache.flink.runtime.client.JobStatusMessage; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.yarn.YarnClientYarnClusterInformationRetriever; +import org.apache.flink.yarn.YarnClusterDescriptor; +import org.apache.hadoop.yarn.api.records.ApplicationId; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * YarnApplicationGateway + * + * @author zrx + * @since 2021/10/29 + **/ +public class YarnApplicationGateway extends YarnGateway { + + public YarnApplicationGateway(GatewayConfig config) { + super(config); + } + + public YarnApplicationGateway() { + } + + @Override + public GatewayType getType() { + return GatewayType.YARN_APPLICATION; + } + + @Override + public GatewayResult submitJobGraph(JobGraph jobGraph) { + throw new GatewayException("Couldn't deploy Yarn Application Cluster with job graph."); + } + + @SneakyThrows + @Override + public GatewayResult submitJar() { + if (Asserts.isNull(yarnClient)) { + init(); + } + YarnResult result = YarnResult.build(getType()); + AppConfig appConfig = config.getAppConfig(); + configuration.set(PipelineOptions.JARS, Collections.singletonList(appConfig.getUserJarPath())); + String[] userJarParas = appConfig.getUserJarParas(); + if (Asserts.isNull(userJarParas)) { + userJarParas = new String[0]; + } + ApplicationConfiguration applicationConfiguration = + new ApplicationConfiguration(userJarParas, appConfig.getUserJarMainAppClass()); + YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor( + configuration, yarnConfiguration, yarnClient, + YarnClientYarnClusterInformationRetriever.create(yarnClient), true); + + ClusterSpecification.ClusterSpecificationBuilder clusterSpecificationBuilder = + new ClusterSpecification.ClusterSpecificationBuilder(); + if (configuration.contains(JobManagerOptions.TOTAL_PROCESS_MEMORY)) { + clusterSpecificationBuilder + .setMasterMemoryMB(configuration.get(JobManagerOptions.TOTAL_PROCESS_MEMORY).getMebiBytes()); + } + if (configuration.contains(TaskManagerOptions.TOTAL_PROCESS_MEMORY)) { + clusterSpecificationBuilder + .setTaskManagerMemoryMB(configuration.get(TaskManagerOptions.TOTAL_PROCESS_MEMORY).getMebiBytes()); + } + if 
(configuration.contains(TaskManagerOptions.NUM_TASK_SLOTS)) { + clusterSpecificationBuilder.setSlotsPerTaskManager(configuration.get(TaskManagerOptions.NUM_TASK_SLOTS)) + .createClusterSpecification(); + } + if (Asserts.isNotNull(config.getJarPaths())) { + yarnClusterDescriptor + .addShipFiles(Arrays.stream(config.getJarPaths()).map(FileUtil::file).collect(Collectors.toList())); + } + + try { + ClusterClientProvider clusterClientProvider = yarnClusterDescriptor.deployApplicationCluster( + clusterSpecificationBuilder.createClusterSpecification(), + applicationConfiguration); + ClusterClient clusterClient = clusterClientProvider.getClusterClient(); + Collection jobStatusMessages = clusterClient.listJobs().get(); + //zrx ProjectSystemConfiguration + ProcessEntity process = ProcessContextHolder.getProcess(); + int counts = ProjectSystemConfiguration.getByProjectId(process.getProjectId()).getJobIdWait(); + while (jobStatusMessages.size() == 0 && counts > 0) { + Thread.sleep(1000); + counts--; + jobStatusMessages = clusterClient.listJobs().get(); + if (jobStatusMessages.size() > 0) { + break; + } + } + if (jobStatusMessages.size() > 0) { + List jids = new ArrayList<>(); + for (JobStatusMessage jobStatusMessage : jobStatusMessages) { + jids.add(jobStatusMessage.getJobId().toHexString()); + } + result.setJids(jids); + } + ApplicationId applicationId = clusterClient.getClusterId(); + result.setAppId(applicationId.toString()); + result.setWebURL(clusterClient.getWebInterfaceURL()); + result.success(); + //zrx + } /*catch (Exception e) { + result.fail(LogUtil.getError(e)); + }*/ finally { + yarnClusterDescriptor.close(); + } + return result; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnGateway.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnGateway.java new file mode 100644 index 0000000..882514f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnGateway.java @@ -0,0 +1,377 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.gateway.yarn; + +import net.srt.flink.client.utils.FlinkUtil; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.JobStatus; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.gateway.AbstractGateway; +import net.srt.flink.gateway.config.ActionType; +import net.srt.flink.gateway.config.GatewayConfig; +import net.srt.flink.gateway.config.SavePointType; +import net.srt.flink.gateway.exception.GatewayException; +import net.srt.flink.gateway.model.JobInfo; +import net.srt.flink.gateway.result.SavePointResult; +import net.srt.flink.gateway.result.TestResult; +import org.apache.flink.api.common.JobID; +import org.apache.flink.client.deployment.ClusterRetrieveException; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.configuration.CheckpointingOptions; +import org.apache.flink.configuration.DeploymentOptions; +import org.apache.flink.configuration.GlobalConfiguration; +import org.apache.flink.configuration.SecurityOptions; +import org.apache.flink.runtime.client.JobStatusMessage; +import org.apache.flink.runtime.jobgraph.SavepointConfigOptions; +import org.apache.flink.runtime.security.SecurityConfiguration; +import org.apache.flink.runtime.security.SecurityUtils; +import org.apache.flink.yarn.YarnClientYarnClusterInformationRetriever; +import org.apache.flink.yarn.YarnClusterClientFactory; +import org.apache.flink.yarn.YarnClusterDescriptor; +import org.apache.flink.yarn.configuration.YarnConfigOptions; +import org.apache.flink.yarn.configuration.YarnLogConfigUtil; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.service.Service; +import org.apache.hadoop.yarn.api.records.ApplicationId; +import org.apache.hadoop.yarn.api.records.ApplicationReport; +import org.apache.hadoop.yarn.api.records.FinalApplicationStatus; +import org.apache.hadoop.yarn.api.records.YarnApplicationState; +import org.apache.hadoop.yarn.client.api.YarnClient; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; + +import java.io.IOException; +import java.io.PrintWriter; +import java.io.StringWriter; +import java.net.URI; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Executors; + +/** + * YarnSubmiter + * + * @author zrx + * @since 2021/10/29 + **/ +public abstract class YarnGateway extends AbstractGateway { + + public static final String HADOOP_CONFIG = "fs.hdfs.hadoopconf"; + + protected YarnConfiguration yarnConfiguration; + protected YarnClient yarnClient; + + public YarnGateway() { + } + + public YarnGateway(GatewayConfig config) { + super(config); + } + + @Override + public void init() { + initConfig(); + initYarnClient(); + } + + private void initConfig() { + configuration = GlobalConfiguration.loadConfiguration(config.getClusterConfig().getFlinkConfigPath()); + if (Asserts.isNotNull(config.getFlinkConfig().getConfiguration())) { + addConfigParas(config.getFlinkConfig().getConfiguration()); + } + configuration.set(DeploymentOptions.TARGET, getType().getLongValue()); + if (Asserts.isNotNullString(config.getFlinkConfig().getSavePoint())) { + configuration.setString(SavepointConfigOptions.SAVEPOINT_PATH, config.getFlinkConfig().getSavePoint()); + } + 
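// yarn.provided.lib.dirs: points the deployment at a pre-staged directory of
+		// Flink jars (typically on HDFS) so each submission can reuse them instead
+		// of re-uploading the whole Flink distribution.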
configuration.set(YarnConfigOptions.PROVIDED_LIB_DIRS, + Collections.singletonList(config.getClusterConfig().getFlinkLibPath())); + /*configuration.set(YarnConfigOptions.FLINK_DIST_JAR, "hdfs://node01:9000/flink-jar/lib/flink-dist_2.12-1.14.3.jar");*/ + if (Asserts.isNotNullString(config.getFlinkConfig().getJobName())) { + configuration.set(YarnConfigOptions.APPLICATION_NAME, config.getFlinkConfig().getJobName()); + } + + if (Asserts.isNotNullString(config.getClusterConfig().getYarnConfigPath())) { + configuration.setString(HADOOP_CONFIG, config.getClusterConfig().getYarnConfigPath()); + } + + if (configuration.containsKey(SecurityOptions.KERBEROS_LOGIN_KEYTAB.key())) { + try { + SecurityUtils.install(new SecurityConfiguration(configuration)); + UserGroupInformation currentUser = UserGroupInformation.getCurrentUser(); + logger.info("安全认证结束,用户和认证方式:" + currentUser.toString()); + } catch (Exception e) { + logger.error(e.getMessage()); + e.printStackTrace(); + } + } + if (getType().isApplicationMode()) { + configuration.set(YarnConfigOptions.APPLICATION_TYPE, "SRT Flink APP"); + String uuid = UUID.randomUUID().toString().replace("-", ""); + if (configuration.contains(CheckpointingOptions.CHECKPOINTS_DIRECTORY)) { + configuration.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, + configuration.getString(CheckpointingOptions.CHECKPOINTS_DIRECTORY) + "/" + uuid); + } + if (configuration.contains(CheckpointingOptions.SAVEPOINT_DIRECTORY)) { + configuration.set(CheckpointingOptions.SAVEPOINT_DIRECTORY, + configuration.getString(CheckpointingOptions.SAVEPOINT_DIRECTORY) + "/" + uuid); + } + } + YarnLogConfigUtil.setLogConfigFileInConfig(configuration, config.getClusterConfig().getFlinkConfigPath()); + } + + private void initYarnClient() { + yarnConfiguration = new YarnConfiguration(); + yarnConfiguration + .addResource(new Path(URI.create(config.getClusterConfig().getYarnConfigPath() + "/yarn-site.xml"))); + yarnConfiguration + .addResource(new Path(URI.create(config.getClusterConfig().getYarnConfigPath() + "/core-site.xml"))); + yarnConfiguration + .addResource(new Path(URI.create(config.getClusterConfig().getYarnConfigPath() + "/hdfs-site.xml"))); + yarnClient = YarnClient.createYarnClient(); + yarnClient.init(yarnConfiguration); + yarnClient.start(); + } + + private void addConfigParas(Map configMap) { + if (Asserts.isNotNull(configMap)) { + for (Map.Entry entry : configMap.entrySet()) { + if (Asserts.isAllNotNullString(entry.getKey(), entry.getValue())) { + this.configuration.setString(entry.getKey(), entry.getValue()); + } + } + } + } + + @Override + public SavePointResult savepointCluster() { + return savepointCluster(null); + } + + @Override + public SavePointResult savepointCluster(String savePoint) { + if (Asserts.isNull(yarnClient)) { + init(); + } + /* + * if(Asserts.isNotNullString(config.getClusterConfig().getYarnConfigPath())) { configuration = + * GlobalConfiguration.loadConfiguration(config.getClusterConfig().getYarnConfigPath()); }else { configuration = + * new Configuration(); } + */ + SavePointResult result = SavePointResult.build(getType()); + ApplicationId applicationId = getApplicationId(); + /* + * YarnClusterDescriptor clusterDescriptor = clusterClientFactory .createClusterDescriptor( configuration); + */ + YarnClusterDescriptor clusterDescriptor = new YarnClusterDescriptor( + configuration, yarnConfiguration, yarnClient, + YarnClientYarnClusterInformationRetriever.create(yarnClient), true); + try ( + ClusterClient clusterClient = clusterDescriptor.retrieve( + 
applicationId).getClusterClient()) {
+			List<JobInfo> jobInfos = new ArrayList<>();
+			CompletableFuture<Collection<JobStatusMessage>> listJobsFuture = clusterClient.listJobs();
+			for (JobStatusMessage jobStatusMessage : listJobsFuture.get()) {
+				JobInfo jobInfo = new JobInfo(jobStatusMessage.getJobId().toHexString());
+				jobInfo.setStatus(JobInfo.JobStatus.RUN);
+				jobInfos.add(jobInfo);
+			}
+			runSavePointJob(jobInfos, clusterClient, savePoint);
+			result.setJobInfos(jobInfos);
+		} catch (Exception e) {
+			e.printStackTrace();
+			logger.error(e.getMessage());
+			result.fail(e.getMessage());
+		}
+		return result;
+	}
+
+	@Override
+	public SavePointResult savepointJob() {
+		return savepointJob(null);
+	}
+
+	@Override
+	public SavePointResult savepointJob(String savePoint) {
+		if (Asserts.isNull(yarnClient)) {
+			init();
+		}
+		if (Asserts.isNull(config.getFlinkConfig().getJobId())) {
+			throw new GatewayException(
+					"No job id was specified. Please specify a job to which you would like to savepoint.");
+		}
+		/*
+		 * if(Asserts.isNotNullString(config.getClusterConfig().getYarnConfigPath())) { configuration =
+		 * GlobalConfiguration.loadConfiguration(config.getClusterConfig().getYarnConfigPath()); }else { configuration =
+		 * new Configuration(); }
+		 */
+		SavePointResult result = SavePointResult.build(getType());
+		ApplicationId applicationId = getApplicationId();
+		/*
+		 * YarnClusterDescriptor clusterDescriptor = clusterClientFactory .createClusterDescriptor( configuration);
+		 */
+		YarnClusterDescriptor clusterDescriptor = new YarnClusterDescriptor(
+				configuration, yarnConfiguration, yarnClient,
+				YarnClientYarnClusterInformationRetriever.create(yarnClient), true);
+		try (
+				ClusterClient<ApplicationId> clusterClient = clusterDescriptor.retrieve(
+						applicationId).getClusterClient()) {
+			List<JobInfo> jobInfos = new ArrayList<>();
+			jobInfos.add(new JobInfo(config.getFlinkConfig().getJobId(), JobInfo.JobStatus.FAIL));
+			runSavePointJob(jobInfos, clusterClient, savePoint);
+			result.setJobInfos(jobInfos);
+		} catch (Exception e) {
+			result.fail(LogUtil.getError(e));
+			// zrx
+			throw new RuntimeException(LogUtil.getError(e));
+		}
+		if (ActionType.CANCEL == config.getFlinkConfig().getAction()
+				|| SavePointType.CANCEL.equals(config.getFlinkConfig().getSavePointType())) {
+			try {
+				autoCancelCluster(clusterDescriptor.retrieve(applicationId).getClusterClient());
+			} catch (ClusterRetrieveException e) {
+				e.printStackTrace();
+				// zrx
+				throw new RuntimeException(LogUtil.getError(e));
+			}
+		}
+		return result;
+	}
+
+	private void runSavePointJob(List<JobInfo> jobInfos, ClusterClient<ApplicationId> clusterClient,
+			String savePoint) throws Exception {
+		for (JobInfo jobInfo : jobInfos) {
+			if (ActionType.CANCEL == config.getFlinkConfig().getAction()) {
+				clusterClient.cancel(JobID.fromHexString(jobInfo.getJobId()));
+				jobInfo.setStatus(JobInfo.JobStatus.CANCEL);
+				continue;
+			}
+			switch (config.getFlinkConfig().getSavePointType()) {
+				case TRIGGER:
+					jobInfo.setSavePoint(FlinkUtil.triggerSavepoint(clusterClient, jobInfo.getJobId(), savePoint));
+					break;
+				case STOP:
+					jobInfo.setSavePoint(FlinkUtil.stopWithSavepoint(clusterClient, jobInfo.getJobId(), savePoint));
+					jobInfo.setStatus(JobInfo.JobStatus.STOP);
+					break;
+				case CANCEL:
+					jobInfo.setSavePoint(FlinkUtil.cancelWithSavepoint(clusterClient, jobInfo.getJobId(), savePoint));
+					jobInfo.setStatus(JobInfo.JobStatus.CANCEL);
+					break;
+				default:
+			}
+		}
+	}
+
+	private void autoCancelCluster(ClusterClient<ApplicationId> clusterClient) {
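+		// Give the final cancel/savepoint RPC a moment to complete, then shut the
+		// whole per-job cluster down from a background thread.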
Executors.newCachedThreadPool().submit(() -> { + try { + Thread.sleep(3000); + clusterClient.shutDownCluster(); + } catch (InterruptedException e) { + e.printStackTrace(); + } finally { + clusterClient.close(); + } + }); + } + + @Override + public TestResult test() { + try { + initConfig(); + } catch (Exception e) { + logger.error("测试 Flink 配置失败:" + e.getMessage()); + return TestResult.fail("测试 Flink 配置失败:" + e.getMessage()); + } + try { + initYarnClient(); + if (yarnClient.isInState(Service.STATE.STARTED)) { + logger.info("配置连接测试成功"); + return TestResult.success(); + } else { + logger.error("该配置无对应 Yarn 集群存在"); + return TestResult.fail("该配置无对应 Yarn 集群存在"); + } + } catch (Exception e) { + logger.error("测试 Yarn 配置失败:" + e.getMessage()); + return TestResult.fail("测试 Yarn 配置失败:" + e.getMessage()); + } + } + + private ApplicationId getApplicationId() { + YarnClusterClientFactory clusterClientFactory = new YarnClusterClientFactory(); + configuration.set(YarnConfigOptions.APPLICATION_ID, config.getClusterConfig().getAppId()); + ApplicationId applicationId = clusterClientFactory.getClusterId(configuration); + if (Asserts.isNull(applicationId)) { + throw new GatewayException( + "No cluster id was specified. Please specify a cluster to which you would like to connect."); + } + return applicationId; + } + + @Override + public JobStatus getJobStatusById(String id) { + if (Asserts.isNull(yarnClient)) { + init(); + } + config.getClusterConfig().setAppId(id); + ApplicationReport applicationReport = null; + try { + applicationReport = yarnClient.getApplicationReport(getApplicationId()); + YarnApplicationState yarnApplicationState = applicationReport.getYarnApplicationState(); + FinalApplicationStatus finalApplicationStatus = applicationReport.getFinalApplicationStatus(); + switch (yarnApplicationState) { + case FINISHED: + switch (finalApplicationStatus) { + case KILLED: + return JobStatus.CANCELED; + case FAILED: + return JobStatus.FAILED; + default: + return JobStatus.FINISHED; + } + case RUNNING: + return JobStatus.RUNNING; + case FAILED: + return JobStatus.FAILED; + case KILLED: + return JobStatus.CANCELED; + case SUBMITTED: + return JobStatus.CREATED; + default: + return JobStatus.INITIALIZING; + } + } catch (YarnException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return JobStatus.UNKNOWN; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnPerJobGateway.java b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnPerJobGateway.java new file mode 100644 index 0000000..303ab15 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/java/net/srt/flink/gateway/yarn/YarnPerJobGateway.java @@ -0,0 +1,143 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.gateway.yarn; + +import cn.hutool.core.io.FileUtil; +import cn.hutool.core.util.URLUtil; +import lombok.SneakyThrows; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.ProjectSystemConfiguration; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.gateway.GatewayType; +import net.srt.flink.gateway.config.GatewayConfig; +import net.srt.flink.gateway.exception.GatewayException; +import net.srt.flink.gateway.result.GatewayResult; +import net.srt.flink.gateway.result.YarnResult; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.apache.flink.client.deployment.ClusterSpecification; +import org.apache.flink.client.program.ClusterClient; +import org.apache.flink.client.program.ClusterClientProvider; +import org.apache.flink.configuration.JobManagerOptions; +import org.apache.flink.configuration.TaskManagerOptions; +import org.apache.flink.runtime.client.JobStatusMessage; +import org.apache.flink.runtime.jobgraph.JobGraph; +import org.apache.flink.yarn.YarnClientYarnClusterInformationRetriever; +import org.apache.flink.yarn.YarnClusterDescriptor; +import org.apache.hadoop.yarn.api.records.ApplicationId; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.List; +import java.util.stream.Collectors; + +/** + * YarnApplicationGateway + * + * @author zrx + * @since 2021/10/29 + **/ +public class YarnPerJobGateway extends YarnGateway { + + public YarnPerJobGateway(GatewayConfig config) { + super(config); + } + + public YarnPerJobGateway() { + } + + @Override + public GatewayType getType() { + return GatewayType.YARN_PER_JOB; + } + + @SneakyThrows + @Override + public GatewayResult submitJobGraph(JobGraph jobGraph) { + if (Asserts.isNull(yarnClient)) { + init(); + } + YarnResult result = YarnResult.build(getType()); + YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor( + configuration, yarnConfiguration, yarnClient, + YarnClientYarnClusterInformationRetriever.create(yarnClient), true); + + ClusterSpecification.ClusterSpecificationBuilder clusterSpecificationBuilder = + new ClusterSpecification.ClusterSpecificationBuilder(); + if (configuration.contains(JobManagerOptions.TOTAL_PROCESS_MEMORY)) { + clusterSpecificationBuilder + .setMasterMemoryMB(configuration.get(JobManagerOptions.TOTAL_PROCESS_MEMORY).getMebiBytes()); + } + if (configuration.contains(TaskManagerOptions.TOTAL_PROCESS_MEMORY)) { + clusterSpecificationBuilder + .setTaskManagerMemoryMB(configuration.get(TaskManagerOptions.TOTAL_PROCESS_MEMORY).getMebiBytes()); + } + if (configuration.contains(TaskManagerOptions.NUM_TASK_SLOTS)) { + clusterSpecificationBuilder.setSlotsPerTaskManager(configuration.get(TaskManagerOptions.NUM_TASK_SLOTS)) + .createClusterSpecification(); + } + + if (Asserts.isNotNull(config.getJarPaths())) { + jobGraph.addJars(Arrays.stream(config.getJarPaths()).map(path -> URLUtil.getURL(FileUtil.file(path))) + .collect(Collectors.toList())); + } + + try { + ClusterClientProvider clusterClientProvider = yarnClusterDescriptor.deployJobCluster( + clusterSpecificationBuilder.createClusterSpecification(), jobGraph, true); + ClusterClient clusterClient = clusterClientProvider.getClusterClient(); + ApplicationId applicationId = clusterClient.getClusterId(); + result.setAppId(applicationId.toString()); + 
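+            // also expose the JobManager web UI address to the caller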
+            result.setWebURL(clusterClient.getWebInterfaceURL());
+            Collection<JobStatusMessage> jobStatusMessages = clusterClient.listJobs().get();
+            //zrx ProjectSystemConfiguration
+            ProcessEntity process = ProcessContextHolder.getProcess();
+            int counts = ProjectSystemConfiguration.getByProjectId(process.getProjectId()).getJobIdWait();
+            while (jobStatusMessages.size() == 0 && counts > 0) {
+                Thread.sleep(1000);
+                counts--;
+                jobStatusMessages = clusterClient.listJobs().get();
+                if (jobStatusMessages.size() > 0) {
+                    break;
+                }
+            }
+            if (jobStatusMessages.size() > 0) {
+                List<String> jids = new ArrayList<>();
+                for (JobStatusMessage jobStatusMessage : jobStatusMessages) {
+                    jids.add(jobStatusMessage.getJobId().toHexString());
+                }
+                result.setJids(jids);
+            }
+            result.success();
+            // zrx
+        } /*catch (Exception e) {
+            result.fail(LogUtil.getError(e));
+        }*/ finally {
+            yarnClusterDescriptor.close();
+        }
+        return result;
+    }
+
+    @Override
+    public GatewayResult submitJar() {
+        throw new GatewayException("Couldn't deploy Yarn Per-Job Cluster with User Application Jar.");
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/resources/META-INF/services/net.srt.flink.gateway.Gateway b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/resources/META-INF/services/net.srt.flink.gateway.Gateway
new file mode 100644
index 0000000..66d74f2
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-gateway/src/main/resources/META-INF/services/net.srt.flink.gateway.Gateway
@@ -0,0 +1,3 @@
+net.srt.flink.gateway.yarn.YarnApplicationGateway
+net.srt.flink.gateway.yarn.YarnPerJobGateway
+net.srt.flink.gateway.kubernetes.KubernetesApplicationGateway
\ No newline at end of file
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/pom.xml
new file mode 100644
index 0000000..89e104a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/pom.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>flink-metadata</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-metadata-base</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-process</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-annotations</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-databind</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.alibaba</groupId>
+            <artifactId>druid-spring-boot-starter</artifactId>
+        </dependency>
+    </dependencies>
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/convert/ITypeConvert.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/convert/ITypeConvert.java
new file mode 100644
index 0000000..9a1caf7
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/convert/ITypeConvert.java
@@ -0,0 +1,82 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.base.convert; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.ColumnType; + +import java.sql.ResultSet; +import java.sql.SQLException; + +/** + * ITypeConvert + * + * @author zrx + * @since 2021/7/20 14:39 + **/ +public interface ITypeConvert { + + default String convertToDB(Column column) { + return convertToDB(column.getJavaType()); + } + + ColumnType convert(Column column); + + String convertToDB(ColumnType columnType); + + default Object convertValue(ResultSet results, String columnName, String javaType) throws SQLException { + if (Asserts.isNull(javaType)) { + return results.getString(columnName); + } + switch (javaType.toLowerCase()) { + case "string": + return results.getString(columnName); + case "double": + return results.getDouble(columnName); + case "int": + return results.getInt(columnName); + case "float": + return results.getFloat(columnName); + case "bigint": + return results.getLong(columnName); + case "decimal": + return results.getBigDecimal(columnName); + case "date": + case "localdate": + return results.getDate(columnName); + case "time": + case "localtime": + return results.getTime(columnName); + case "timestamp": + return results.getTimestamp(columnName); + case "blob": + return results.getBlob(columnName); + case "boolean": + return results.getBoolean(columnName); + case "byte": + return results.getByte(columnName); + case "bytes": + return results.getBytes(columnName); + default: + return results.getString(columnName); + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/AbstractDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/AbstractDriver.java new file mode 100644 index 0000000..daae644 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/AbstractDriver.java @@ -0,0 +1,97 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.metadata.base.driver; + + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.metadata.base.convert.ITypeConvert; +import net.srt.flink.metadata.base.query.IDBQuery; + +import java.util.List; +import java.util.Map; +import java.util.stream.Collectors; + +/** + * AbstractDriver + * + * @author zrx + * @since 2021/7/19 23:32 + */ +public abstract class AbstractDriver implements Driver { + + protected DriverConfig config; + + public abstract IDBQuery getDBQuery(); + + public abstract ITypeConvert getTypeConvert(); + + @Override + public boolean canHandle(String type) { + return Asserts.isEqualsIgnoreCase(getType(), type); + } + + @Override + public Driver setDriverConfig(DriverConfig config) { + this.config = config; + return this; + } + + @Override + public boolean isHealth() { + return false; + } + + @Override + public List getSchemasAndTables() { + return listSchemas().stream().peek(schema -> schema.setTables(listTables(schema.getName()))).sorted().collect(Collectors.toList()); + } + + @Override + public List
<Table> getTablesAndColumns(String schema) {
+        return listTables(schema).stream().peek(table -> table.setColumns(listColumns(schema, table.getName()))).sorted().collect(Collectors.toList());
+    }
+
+    @Override
+    public Table getTable(String schemaName, String tableName) {
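+        // linear scan for a name match; only the matched table gets its columns loaded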
+        List<Table> tables = listTables(schemaName);
+        Table table = null;
+        for (Table item : tables) {
+            if (Asserts.isEquals(item.getName(), tableName)) {
+                table = item;
+            }
+        }
+        if (Asserts.isNotNull(table)) {
+            table.setColumns(listColumns(schemaName, table.getName()));
+        }
+        return table;
+    }
+
+    @Override
+    public boolean existTable(Table table) {
+        return listTables(table.getSchema()).stream().anyMatch(tableItem -> Asserts.isEquals(tableItem.getName(), table.getName()));
+    }
+
+    @Override
+    public List<Map<String, String>> getSplitSchemaList() {
+        throw new RuntimeException("该数据源暂不支持分库分表");
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/AbstractJdbcDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/AbstractJdbcDriver.java
new file mode 100644
index 0000000..0fe9ccf
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/AbstractJdbcDriver.java
@@ -0,0 +1,803 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.metadata.base.driver; + +import static net.srt.flink.common.utils.SplitUtil.contains; +import static net.srt.flink.common.utils.SplitUtil.getReValue; +import static net.srt.flink.common.utils.SplitUtil.isSplit; + +import cn.hutool.core.text.CharSequenceUtil; +import com.alibaba.druid.pool.DruidDataSource; +import com.alibaba.druid.pool.DruidPooledConnection; +import com.alibaba.druid.sql.SQLUtils; +import com.alibaba.druid.sql.ast.SQLStatement; +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.constant.CommonConstant; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.QueryData; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.model.TableType; +import net.srt.flink.common.result.SqlExplainResult; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.common.utils.TextUtil; +import net.srt.flink.metadata.base.query.IDBQuery; +import net.srt.flink.metadata.base.result.JdbcSelectResult; +import net.srt.flink.process.context.ProcessContextHolder; +import net.srt.flink.process.model.ProcessEntity; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.TreeSet; +import java.util.stream.Collectors; + + +/** + * AbstractJdbcDriver + * + * @author zrx + * @since 2021/7/20 14:09 + **/ +public abstract class AbstractJdbcDriver extends AbstractDriver { + + protected static Logger logger = LoggerFactory.getLogger(AbstractJdbcDriver.class); + + protected ThreadLocal conn = new ThreadLocal<>(); + + private DruidDataSource dataSource; + + public abstract String getDriverClass(); + + @Override + public String test() { + Asserts.checkNotNull(config, "无效的数据源配置"); + try { + Class.forName(getDriverClass()); + DriverManager.getConnection(config.getUrl(), config.getUsername(), config.getPassword()).close(); + } catch (Exception e) { + logger.error("Jdbc链接测试失败!错误信息为:" + e.getMessage(), e); + return e.getMessage(); + } + return CommonConstant.HEALTHY; + } + + public DruidDataSource createDataSource() throws SQLException { + if (null == dataSource) { + synchronized (this.getClass()) { + if (null == dataSource) { + DruidDataSource ds = new DruidDataSource(); + createDataSource(ds, config); + ds.init(); + this.dataSource = ds; + } + } + } + return dataSource; + } + + @Override + public Driver setDriverConfig(DriverConfig config) { + this.config = config; + try { + this.dataSource = createDataSource(); + } catch (SQLException e) { + throw new RuntimeException(e); + } + return this; + } + + protected void createDataSource(DruidDataSource ds, DriverConfig config) { + ds.setName(config.getName().replaceAll(":", "")); + ds.setUrl(config.getUrl()); + ds.setDriverClassName(getDriverClass()); + ds.setUsername(config.getUsername()); + ds.setPassword(config.getPassword()); + if (getDriverClass().contains("oracle")) { + ds.setValidationQuery("SELECT 'Hello' from DUAL"); + // 
https://blog.csdn.net/qq_20960159/article/details/78593936 + System.getProperties().setProperty("oracle.jdbc.J2EE13Compliant", "true"); + } else if (getDriverClass().contains("db2")) { + ds.setValidationQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1"); + } else { + ds.setValidationQuery("select 1"); + } + ds.setTestWhileIdle(true); + ds.setBreakAfterAcquireFailure(true); + ds.setFailFast(true); + ds.setConnectionErrorRetryAttempts(3); + ds.setLoginTimeout(10); + ds.setInitialSize(1); + ds.setMaxActive(8); + ds.setMinIdle(5); + } + + @Override + public Driver connect() { + if (Asserts.isNull(conn.get())) { + try { + Class.forName(getDriverClass()); + DruidPooledConnection connection = createDataSource().getConnection(); + conn.set(connection); + } catch (ClassNotFoundException | SQLException e) { + throw new RuntimeException(e); + } + } + return this; + } + + @Override + public boolean isHealth() { + try { + if (Asserts.isNotNull(conn.get())) { + return !conn.get().isClosed(); + } + return false; + } catch (Exception e) { + e.printStackTrace(); + return false; + } + } + + @Override + public void close() { + try { + if (Asserts.isNotNull(conn.get())) { + conn.get().close(); + conn.remove(); + } + } catch (SQLException e) { + e.printStackTrace(); + } + } + + public void close(PreparedStatement preparedStatement, ResultSet results) { + try { + if (Asserts.isNotNull(results)) { + results.close(); + } + if (Asserts.isNotNull(preparedStatement)) { + preparedStatement.close(); + } + } catch (SQLException e) { + e.printStackTrace(); + } + } + + @Override + public List listSchemas() { + List schemas = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + String schemasSql = getDBQuery().schemaAllSql(); + try { + preparedStatement = conn.get().prepareStatement(schemasSql); + results = preparedStatement.executeQuery(); + while (results.next()) { + String schemaName = results.getString(getDBQuery().schemaName()); + if (Asserts.isNotNullString(schemaName)) { + Schema schema = new Schema(schemaName); + schemas.add(schema); + } + } + } catch (Exception e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return schemas; + } + + @Override + public boolean existSchema(String schemaName) { + return listSchemas().stream().anyMatch(schemaItem -> Asserts.isEquals(schemaItem.getName(), schemaName)); + } + + @Override + public boolean createSchema(String schemaName) throws Exception { + String sql = generateCreateSchemaSql(schemaName).replaceAll("\r\n", " "); + if (Asserts.isNotNull(sql)) { + return execute(sql); + } else { + return false; + } + } + + @Override + public String generateCreateSchemaSql(String schemaName) { + StringBuilder sb = new StringBuilder(); + sb.append("CREATE DATABASE ").append(schemaName); + return sb.toString(); + } + + @Override + public List
<Table> listTables(String schemaName) {
+        List<Table>
tableList = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + IDBQuery dbQuery = getDBQuery(); + String sql = dbQuery.tablesSql(schemaName); + try { + preparedStatement = conn.get().prepareStatement(sql); + results = preparedStatement.executeQuery(); + ResultSetMetaData metaData = results.getMetaData(); + List columnList = new ArrayList<>(); + for (int i = 1; i <= metaData.getColumnCount(); i++) { + columnList.add(metaData.getColumnLabel(i)); + } + while (results.next()) { + String tableName = results.getString(dbQuery.tableName()); + if (Asserts.isNotNullString(tableName)) { + Table tableInfo = new Table(); + tableInfo.setName(tableName); + if (columnList.contains(dbQuery.tableComment())) { + tableInfo.setComment(results.getString(dbQuery.tableComment())); + } + tableInfo.setSchema(schemaName); + if (columnList.contains(dbQuery.tableType())) { + tableInfo.setType(results.getString(dbQuery.tableType())); + } + if (columnList.contains(dbQuery.catalogName())) { + tableInfo.setCatalog(results.getString(dbQuery.catalogName())); + } + if (columnList.contains(dbQuery.engine())) { + tableInfo.setEngine(results.getString(dbQuery.engine())); + } + if (columnList.contains(dbQuery.options())) { + tableInfo.setOptions(results.getString(dbQuery.options())); + } + if (columnList.contains(dbQuery.rows())) { + tableInfo.setRows(results.getLong(dbQuery.rows())); + } + if (columnList.contains(dbQuery.createTime())) { + tableInfo.setCreateTime(results.getTimestamp(dbQuery.createTime())); + } + if (columnList.contains(dbQuery.updateTime())) { + tableInfo.setUpdateTime(results.getTimestamp(dbQuery.updateTime())); + } + tableList.add(tableInfo); + } + } + } catch (SQLException e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return tableList; + } + + @Override + public List listColumns(String schemaName, String tableName) { + List columns = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + IDBQuery dbQuery = getDBQuery(); + String tableFieldsSql = dbQuery.columnsSql(schemaName, tableName); + try { + preparedStatement = conn.get().prepareStatement(tableFieldsSql); + results = preparedStatement.executeQuery(); + ResultSetMetaData metaData = results.getMetaData(); + List columnList = new ArrayList<>(); + for (int i = 1; i <= metaData.getColumnCount(); i++) { + columnList.add(metaData.getColumnLabel(i)); + } + while (results.next()) { + Column field = new Column(); + String columnName = results.getString(dbQuery.columnName()); + if (columnList.contains(dbQuery.columnKey())) { + String key = results.getString(dbQuery.columnKey()); + field.setKeyFlag(Asserts.isNotNullString(key) && Asserts.isEqualsIgnoreCase(dbQuery.isPK(), key)); + } + field.setName(columnName); + if (columnList.contains(dbQuery.columnType())) { + String columnType = results.getString(dbQuery.columnType()); + if (columnType.contains("(")) { + String type = columnType.replaceAll("\\(.*\\)", ""); + if (!columnType.contains(",")) { + Integer length = Integer.valueOf(columnType.replaceAll("\\D", "")); + field.setLength(length); + } else { + // some database does not have precision + if (dbQuery.precision() != null) { + // 例如浮点类型的长度和精度是一样的,decimal(10,2) + field.setLength(results.getInt(dbQuery.precision())); + } + } + field.setType(type); + } else { + field.setType(columnType); + } + } + if (columnList.contains(dbQuery.columnComment()) + && Asserts.isNotNull(results.getString(dbQuery.columnComment()))) { + String columnComment = 
results.getString(dbQuery.columnComment()).replaceAll("\"|'", ""); + field.setComment(columnComment); + } + if (columnList.contains(dbQuery.columnLength())) { + int length = results.getInt(dbQuery.columnLength()); + if (!results.wasNull()) { + field.setLength(length); + } + } + if (columnList.contains(dbQuery.isNullable())) { + field.setNullable(Asserts.isEqualsIgnoreCase(results.getString(dbQuery.isNullable()), + dbQuery.nullableValue())); + } + if (columnList.contains(dbQuery.characterSet())) { + field.setCharacterSet(results.getString(dbQuery.characterSet())); + } + if (columnList.contains(dbQuery.collation())) { + field.setCollation(results.getString(dbQuery.collation())); + } + if (columnList.contains(dbQuery.columnPosition())) { + field.setPosition(results.getInt(dbQuery.columnPosition())); + } + if (columnList.contains(dbQuery.precision())) { + field.setPrecision(results.getInt(dbQuery.precision())); + } + if (columnList.contains(dbQuery.scale())) { + field.setScale(results.getInt(dbQuery.scale())); + } + if (columnList.contains(dbQuery.defaultValue())) { + field.setDefaultValue(results.getString(dbQuery.defaultValue())); + } + if (columnList.contains(dbQuery.autoIncrement())) { + field.setAutoIncrement( + Asserts.isEqualsIgnoreCase(results.getString(dbQuery.autoIncrement()), "auto_increment")); + } + if (columnList.contains(dbQuery.defaultValue())) { + field.setDefaultValue(results.getString(dbQuery.defaultValue())); + } + field.setJavaType(getTypeConvert().convert(field)); + columns.add(field); + } + } catch (SQLException e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return columns; + } + + @Override + public List listColumnsSortByPK(String schemaName, String tableName) { + List columnList = listColumns(schemaName, tableName); + columnList.sort(Comparator.comparing(Column::isKeyFlag).reversed()); + return columnList; + } + + @Override + public boolean createTable(Table table) throws Exception { + String sql = getCreateTableSql(table).replaceAll("\r\n", " "); + if (Asserts.isNotNull(sql)) { + return execute(sql); + } else { + return false; + } + } + + @Override + public boolean generateCreateTable(Table table) throws Exception { + String sql = generateCreateTableSql(table).replaceAll("\r\n", " "); + if (Asserts.isNotNull(sql)) { + return execute(sql); + } else { + return false; + } + } + + @Override + public boolean dropTable(Table table) throws Exception { + String sql = getDropTableSql(table).replaceAll("\r\n", " "); + if (Asserts.isNotNull(sql)) { + return execute(sql); + } else { + return false; + } + } + + @Override + public boolean truncateTable(Table table) throws Exception { + String sql = getTruncateTableSql(table).replaceAll("\r\n", " "); + if (Asserts.isNotNull(sql)) { + return execute(sql); + } else { + return false; + } + } + + @Override + public String getCreateTableSql(Table table) { + String createTable = null; + PreparedStatement preparedStatement = null; + ResultSet results = null; + String createTableSql = getDBQuery().createTableSql(table.getSchema(), table.getName()); + try { + preparedStatement = conn.get().prepareStatement(createTableSql); + results = preparedStatement.executeQuery(); + if (results.next()) { + ResultSetMetaData rsmd = results.getMetaData(); + int columns = rsmd.getColumnCount(); + for (int x = 1; x <= columns; x++) { + if (getDBQuery().createTableName().equals(rsmd.getColumnName(x))) { + createTable = results.getString(getDBQuery().createTableName()); + break; + } + if 
(getDBQuery().createViewName().equals(rsmd.getColumnName(x))) {
+                        createTable = results.getString(getDBQuery().createViewName());
+                        break;
+                    }
+                }
+            }
+        } catch (Exception e) {
+            e.printStackTrace();
+        } finally {
+            close(preparedStatement, results);
+        }
+        return createTable;
+    }
+
+    @Override
+    public String getDropTableSql(Table table) {
+        StringBuilder sb = new StringBuilder();
+        sb.append("DROP TABLE ");
+        if (Asserts.isNotNullString(table.getSchema())) {
+            sb.append(table.getSchema() + ".");
+        }
+        sb.append(table.getName());
+        return sb.toString();
+    }
+
+    @Override
+    public String getTruncateTableSql(Table table) {
+        StringBuilder sb = new StringBuilder();
+        sb.append("TRUNCATE TABLE ");
+        if (Asserts.isNotNullString(table.getSchema())) {
+            sb.append(table.getSchema() + ".");
+        }
+        sb.append(table.getName());
+        return sb.toString();
+    }
+
+    // todo: implemented by subclasses
+    @Override
+    public String generateCreateTableSql(Table table) {
+        StringBuilder sb = new StringBuilder();
+        return sb.toString();
+    }
+
+    @Override
+    public boolean execute(String sql) throws Exception {
+        Asserts.checkNullString(sql, "Sql 语句为空");
+        try (Statement statement = conn.get().createStatement()) {
+            // logger.info("执行sql的连接id:" + ((DruidPooledConnection) conn).getTransactionInfo().getId());
+            statement.execute(sql);
+        }
+        return true;
+    }
+
+    @Override
+    public int executeUpdate(String sql) throws Exception {
+        Asserts.checkNullString(sql, "Sql 语句为空");
+        int res = 0;
+        try (Statement statement = conn.get().createStatement()) {
+            res = statement.executeUpdate(sql);
+        }
+        return res;
+    }
+
+    /**
+     * Standard SQL shares the same WHERE and ORDER BY syntax, but the LIMIT clause
+     * differs between databases (e.g. Oracle), so each driver handles it separately.
+     * {@link #query(String, Integer)} truncates the returned data instead, but with
+     * large result sets that puts a heavy load on the database.
+     */
+    @Override
+    public StringBuilder genQueryOption(QueryData queryData) {
+
+        String where = queryData.getOption().getWhere();
+        String order = queryData.getOption().getOrder();
+        String limitStart = queryData.getOption().getLimitStart();
+        String limitEnd = queryData.getOption().getLimitEnd();
+
+        StringBuilder optionBuilder = new StringBuilder()
+                .append("select * from ")
+                .append(queryData.getSchemaName())
+                .append(".")
+                .append(queryData.getTableName());
+
+        if (where != null && !where.equals("")) {
+            optionBuilder.append(" where ").append(where);
+        }
+        if (order != null && !order.equals("")) {
+            optionBuilder.append(" order by ").append(order);
+        }
+
+        if (TextUtil.isEmpty(limitStart)) {
+            limitStart = "0";
+        }
+        if (TextUtil.isEmpty(limitEnd)) {
+            limitEnd = "100";
+        }
+        optionBuilder.append(" limit ")
+                .append(limitStart)
+                .append(",")
+                .append(limitEnd);
+
+        return optionBuilder;
+    }
+
+    @Override
+    public JdbcSelectResult query(String sql, Integer limit) {
+        ProcessEntity process = ProcessContextHolder.getProcess();
+        if (Asserts.isNull(limit)) {
+            limit = 100;
+        }
+        JdbcSelectResult result = new JdbcSelectResult();
+        List<LinkedHashMap<String, Object>> datas = new ArrayList<>();
+        List<Column> columns = new ArrayList<>();
+        List<String> columnNameList = new ArrayList<>();
+        PreparedStatement preparedStatement = null;
+        ResultSet results = null;
+        int count = 0;
+        try {
+            preparedStatement = conn.get().prepareStatement(sql);
+            results = preparedStatement.executeQuery();
+            if (Asserts.isNull(results)) {
+                result.setSuccess(true);
+                close(preparedStatement, results);
+                return result;
+            }
+            ResultSetMetaData metaData = results.getMetaData();
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                columnNameList.add(metaData.getColumnLabel(i));
+                Column column = new Column();
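+                // describe each result column from the JDBC metadata before reading rows
+                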
column.setName(metaData.getColumnLabel(i)); + column.setType(metaData.getColumnTypeName(i)); + column.setAutoIncrement(metaData.isAutoIncrement(i)); + column.setNullable(metaData.isNullable(i) != ResultSetMetaData.columnNoNulls); + column.setJavaType(getTypeConvert().convert(column)); + columns.add(column); + } + result.setColumns(columnNameList); + while (results.next()) { + LinkedHashMap data = new LinkedHashMap<>(); + for (Column column : columns) { + data.put(column.getName(), + getTypeConvert().convertValue(results, column.getName(), column.getType())); + } + datas.add(data); + count++; + if (count >= limit) { + break; + } + } + result.setSuccess(true); + } catch (Exception e) { + result.setError(LogUtil.getError(e)); + result.setSuccess(false); + process.error(e.getMessage()); + } finally { + close(preparedStatement, results); + result.setRowData(datas); + return result; + } + } + + /** + * 如果执行多条语句返回最后一条语句执行结果 + * + * @param sql + * @param limit + * @return + */ + @Override + public JdbcSelectResult executeSql(String sql, Integer limit) { + ProcessEntity process = ProcessContextHolder.getProcess(); + process.info("Start parse sql..."); + List stmtList = SQLUtils.parseStatements(sql, config.getType().toLowerCase()); + process.info(CharSequenceUtil.format("A total of {} statement have been Parsed.", stmtList.size())); + List resList = new ArrayList<>(); + JdbcSelectResult result = JdbcSelectResult.buildResult(); + process.info("Start execute sql..."); + for (SQLStatement item : stmtList) { + String type = item.getClass().getSimpleName(); + if (type.toUpperCase().contains("SELECT") || type.toUpperCase().contains("SHOW") + || type.toUpperCase().contains("DESC") || type.toUpperCase().contains("SQLEXPLAINSTATEMENT")) { + process.info("Execute query."); + result = query(item.toString(), limit); + } else if (type.toUpperCase().contains("INSERT") || type.toUpperCase().contains("UPDATE") + || type.toUpperCase().contains("DELETE")) { + try { + process.info("Execute update."); + resList.add(executeUpdate(item.toString())); + result.setStatusList(resList); + } catch (Exception e) { + resList.add(0); + result.setStatusList(resList); + result.error(LogUtil.getError(e)); + process.error(e.getMessage()); + return result; + } + } else { + try { + process.info("Execute DDL."); + execute(item.toString()); + resList.add(1); + result.setStatusList(resList); + } catch (Exception e) { + resList.add(0); + result.setStatusList(resList); + result.error(LogUtil.getError(e)); + process.error(e.getMessage()); + return result; + } + } + } + result.success(); + return result; + } + + @Override + public List explain(String sql) { + ProcessEntity process = ProcessContextHolder.getProcess(); + List sqlExplainResults = new ArrayList<>(); + String current = null; + process.info("Start check sql..."); + try { + List stmtList = SQLUtils.parseStatements(sql, config.getType().toLowerCase()); + for (SQLStatement item : stmtList) { + current = item.toString(); + String type = item.getClass().getSimpleName(); + sqlExplainResults.add(SqlExplainResult.success(type, current, null)); + } + process.info("Sql is correct."); + + } catch (Exception e) { + sqlExplainResults.add(SqlExplainResult.fail(current, LogUtil.getError(e))); + process.error(e.getMessage()); + } + return sqlExplainResults; + } + + @Override + public Map getFlinkColumnTypeConversion() { + return new HashMap<>(); + } + + @Override + public List> getSplitSchemaList() { + PreparedStatement preparedStatement = null; + ResultSet results = null; + IDBQuery dbQuery = 
getDBQuery();
+        String sql = "select DATA_LENGTH,TABLE_NAME AS `NAME`,TABLE_SCHEMA AS `Database`,TABLE_COMMENT AS COMMENT,TABLE_CATALOG AS `CATALOG`,TABLE_TYPE"
+                + " AS `TYPE`,ENGINE AS `ENGINE`,CREATE_OPTIONS AS `OPTIONS`,TABLE_ROWS AS `ROWS`,CREATE_TIME,UPDATE_TIME from information_schema.tables WHERE TABLE_TYPE='BASE TABLE'";
+        List<Map<String, String>> schemas = null;
+        try {
+            preparedStatement = conn.get().prepareStatement(sql);
+            results = preparedStatement.executeQuery();
+            ResultSetMetaData metaData = results.getMetaData();
+            List<String> columnList = new ArrayList<>();
+            schemas = new ArrayList<>();
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                columnList.add(metaData.getColumnLabel(i));
+            }
+            while (results.next()) {
+                Map<String, String> map = new HashMap<>();
+                for (String column : columnList) {
+                    map.put(column, results.getString(column));
+                }
+                schemas.add(map);
+
+            }
+        } catch (SQLException e) {
+            e.printStackTrace();
+        } finally {
+            close(preparedStatement, results);
+        }
+        return schemas;
+    }
+
+    @Override
+    public Set<Table> getSplitTables(List<String> tableRegList, Map<String, String> splitConfig) {
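+        // each tableRegList entry looks like "schemaPattern\.tablePattern"; matching
+        // physical shards are de-duplicated into logical tables below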
+        Set<Table> set = new HashSet<>();
+        List<Map<String, String>> schemaList = getSplitSchemaList();
+        IDBQuery dbQuery = getDBQuery();
+
+        for (String table : tableRegList) {
+            String[] split = table.split("\\\\.");
+            String database = split[0];
+            String tableName = split[1];
+            // match the corresponding tables
+            List<Map<String, String>> mapList = schemaList.stream()
+                    // filter out tables that do not match
+                    .filter(x -> contains(database, x.get(dbQuery.schemaName()))
+                            && contains(tableName, x.get(dbQuery.tableName())))
+                    .collect(Collectors.toList());
+            List<Table>
tableList = mapList.stream() + // 去重 + .collect(Collectors.collectingAndThen(Collectors.toCollection( + () -> new TreeSet<>( + Comparator.comparing(x -> getReValue(x.get(dbQuery.schemaName()), splitConfig) + "." + + getReValue(x.get(dbQuery.tableName()), splitConfig)))), + ArrayList::new)) + .stream().map(x -> { + Table tableInfo = new Table(); + tableInfo.setName(getReValue(x.get(dbQuery.tableName()), splitConfig)); + tableInfo.setComment(x.get(dbQuery.tableComment())); + tableInfo.setSchema(getReValue(x.get(dbQuery.schemaName()), splitConfig)); + tableInfo.setType(x.get(dbQuery.tableType())); + tableInfo.setCatalog(x.get(dbQuery.catalogName())); + tableInfo.setEngine(x.get(dbQuery.engine())); + tableInfo.setOptions(x.get(dbQuery.options())); + tableInfo.setRows(Long.valueOf(x.get(dbQuery.rows()))); + try { + tableInfo.setCreateTime( + SimpleDateFormat.getDateInstance().parse(x.get(dbQuery.createTime()))); + String updateTime = x.get(dbQuery.updateTime()); + if (Asserts.isNotNullString(updateTime)) { + tableInfo.setUpdateTime(SimpleDateFormat.getDateInstance().parse(updateTime)); + } + } catch (ParseException ignored) { + logger.warn("set date fail"); + + } + TableType tableType = TableType.type(isSplit(x.get(dbQuery.schemaName()), splitConfig), + isSplit(x.get(dbQuery.tableName()), splitConfig)); + tableInfo.setTableType(tableType); + + if (tableType != TableType.SINGLE_DATABASE_AND_TABLE) { + String currentSchemaName = getReValue(x.get(dbQuery.schemaName()), splitConfig) + "." + + getReValue(x.get(dbQuery.tableName()), splitConfig); + List schemaTableNameList = mapList.stream() + .filter(y -> (getReValue(y.get(dbQuery.schemaName()), splitConfig) + "." + + getReValue(y.get(dbQuery.tableName()), splitConfig)) + .equals(currentSchemaName)) + .map(y -> y.get(dbQuery.schemaName()) + "." + y.get(dbQuery.tableName())) + .collect(Collectors.toList()); + tableInfo.setSchemaTableNameList(schemaTableNameList); + } else { + tableInfo.setSchemaTableNameList(Collections + .singletonList(x.get(dbQuery.schemaName()) + "." + x.get(dbQuery.tableName()))); + } + return tableInfo; + }).collect(Collectors.toList()); + set.addAll(tableList); + + } + return set; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/Driver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/Driver.java new file mode 100644 index 0000000..4be0039 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/Driver.java @@ -0,0 +1,204 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.metadata.base.driver; + + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.exception.MetaDataException; +import net.srt.flink.common.exception.SplitTableException; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.QueryData; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.result.SqlExplainResult; +import net.srt.flink.metadata.base.result.JdbcSelectResult; + +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.ServiceLoader; +import java.util.Set; + +/** + * Driver + * + * @author zrx + * @since 2021/7/19 23:15 + */ +public interface Driver extends AutoCloseable { + + static Optional get(DriverConfig config) { + Asserts.checkNotNull(config, "数据源配置不能为空"); + ServiceLoader drivers = ServiceLoader.load(Driver.class); + for (Driver driver : drivers) { + if (driver.canHandle(config.getType())) { + return Optional.of(driver.setDriverConfig(config)); + } + } + return Optional.empty(); + } + + static Driver build(DriverConfig config) { + String key = config.getName(); + if (DriverPool.exist(key)) { + return getHealthDriver(key); + } + synchronized (Driver.class) { + Optional optionalDriver = Driver.get(config); + if (!optionalDriver.isPresent()) { + throw new MetaDataException("缺少数据源类型【" + config.getType() + "】的依赖,请在 lib 下添加对应的扩展依赖"); + } + Driver driver = optionalDriver.get().connect(); + DriverPool.push(key, driver); + return driver; + } + } + + static Driver getHealthDriver(String key) { + Driver driver = DriverPool.get(key); + if (driver.isHealth()) { + return driver; + } else { + return driver.connect(); + } + } + + static Driver build(String connector, String url, String username, String password) { + String type = null; + if (Asserts.isEqualsIgnoreCase(connector, "doris")) { + type = "Doris"; + } else if (Asserts.isEqualsIgnoreCase(connector, "starrocks")) { + type = "StarRocks"; + } else if (Asserts.isEqualsIgnoreCase(connector, "clickhouse")) { + type = "ClickHouse"; + } else if (Asserts.isEqualsIgnoreCase(connector, "jdbc")) { + if (url.startsWith("jdbc:mysql")) { + type = "MySQL"; + } else if (url.startsWith("jdbc:postgresql")) { + type = "PostgreSql"; + } else if (url.startsWith("jdbc:oracle")) { + type = "Oracle"; + } else if (url.startsWith("jdbc:sqlserver")) { + type = "SQLServer"; + } else if (url.startsWith("jdbc:phoenix")) { + type = "Phoenix"; + } else if (url.startsWith("jdbc:pivotal")) { + type = "Greenplum"; + } + } + if (Asserts.isNull(type)) { + throw new MetaDataException("缺少数据源类型:【" + connector + "】"); + } + DriverConfig driverConfig = new DriverConfig(url, type, url, username, password); + return build(driverConfig); + } + + Driver setDriverConfig(DriverConfig config); + + boolean canHandle(String type); + + String getType(); + + String getName(); + + String test(); + + boolean isHealth(); + + Driver connect(); + + @Override + void close(); + + List listSchemas(); + + boolean existSchema(String schemaName); + + boolean createSchema(String schemaName) throws Exception; + + String generateCreateSchemaSql(String schemaName); + + List
<Table> listTables(String schemaName);
+
+    List<Column> listColumns(String schemaName, String tableName);
+
+    List<Column> listColumnsSortByPK(String schemaName, String tableName);
+
+    List<Schema> getSchemasAndTables();
+
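+    /**
+     * list the tables under a schema together with their column definitions
+     */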
+    List<Table> getTablesAndColumns(String schemaName);
+
+    Table getTable(String schemaName, String tableName);
+
+    boolean existTable(Table table);
+
+    boolean createTable(Table table) throws Exception;
+
+    boolean generateCreateTable(Table table) throws Exception;
+
+    boolean dropTable(Table table) throws Exception;
+
+    boolean truncateTable(Table table) throws Exception;
+
+    String getCreateTableSql(Table table);
+
+    String getDropTableSql(Table table);
+
+    String getTruncateTableSql(Table table);
+
+    String generateCreateTableSql(Table table);
+
+    /* boolean insert(Table table, JsonNode data);
+
+    boolean update(Table table, JsonNode data);
+
+    boolean delete(Table table, JsonNode data);
+
+    SelectResult select(String sql);*/
+
+    boolean execute(String sql) throws Exception;
+
+    int executeUpdate(String sql) throws Exception;
+
+    JdbcSelectResult query(String sql, Integer limit);
+
+    StringBuilder genQueryOption(QueryData queryData);
+
+    JdbcSelectResult executeSql(String sql, Integer limit);
+
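+    /**
+     * parse the statements and report, per statement, whether they are valid
+     */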
+    List<SqlExplainResult> explain(String sql);
+
+    Map<String, String> getFlinkColumnTypeConversion();
+
+    /**
+     * get the split (sharded) tables
+     *
+     * @param tableRegList list of table name patterns
+     * @param splitConfig  split-database configuration
+     * @return {@link Set}<{@link Table}>
+     */
+    default Set<Table> getSplitTables(List<String> tableRegList, Map<String, String> splitConfig) {
+        throw new SplitTableException("目前此数据源不支持分库分表");
+    }
+
+    List<Map<String, String>> getSplitSchemaList();
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/DriverConfig.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/DriverConfig.java
new file mode 100644
index 0000000..8551182
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/DriverConfig.java
@@ -0,0 +1,62 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.base.driver;
+
+import lombok.Getter;
+import lombok.Setter;
+import net.srt.flink.common.assertion.Asserts;
+
+import java.util.Map;
+
+/**
+ * DriverConfig
+ *
+ * @author zrx
+ * @since 2021/7/19 23:21
+ */
+@Getter
+@Setter
+public class DriverConfig {
+
+    private String name;
+    private String type;
+    private String ip;
+    private Integer port;
+    private String url;
+    private String username;
+    private String password;
+
+    public DriverConfig() {
+    }
+
+    public DriverConfig(String name, String type, String url, String username, String password) {
+        this.name = name;
+        this.type = type;
+        this.url = url;
+        this.username = username;
+        this.password = password;
+    }
+
+    public static DriverConfig build(Map<String, String> confMap) {
+        Asserts.checkNull(confMap, "数据源配置不能为空");
+        return new DriverConfig(confMap.get("name"), confMap.get("type"), confMap.get("url"), confMap.get("username"),
+                confMap.get("password"));
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/DriverPool.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/DriverPool.java
new file mode 100644
index 0000000..d24ebb1
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/driver/DriverPool.java
@@ -0,0 +1,56 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.base.driver; + +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +/** + * DriverPool + * + * @author zrx + * @since 2022/2/17 15:29 + **/ +public class DriverPool { + + private static volatile Map driverMap = new ConcurrentHashMap<>(); + + public static boolean exist(String key) { + if (driverMap.containsKey(key)) { + return true; + } + return false; + } + + public static Integer push(String key, Driver gainer) { + driverMap.put(key, gainer); + return driverMap.size(); + } + + public static Integer remove(String key) { + driverMap.remove(key); + return driverMap.size(); + } + + public static Driver get(String key) { + return driverMap.get(key); + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/query/AbstractDBQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/query/AbstractDBQuery.java new file mode 100644 index 0000000..18cc734 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/query/AbstractDBQuery.java @@ -0,0 +1,174 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.base.query; + +/** + * AbstractDBQuery + * + * @author zrx + * @since 2021/7/20 13:50 + **/ +public abstract class AbstractDBQuery implements IDBQuery { + + @Override + public String createTableSql(String schemaName, String tableName) { + return "show create table " + schemaName + "." 
+ tableName; + } + + @Override + public String createTableName() { + return "Create Table"; + } + + @Override + public String createViewName() { + return "Create View"; + } + + @Override + public String[] columnCustom() { + return null; + } + + @Override + public String schemaName() { + return "SCHEMA"; + } + + @Override + public String catalogName() { + return "CATALOG"; + } + + @Override + public String tableName() { + return "NAME"; + } + + @Override + public String tableComment() { + return "COMMENT"; + } + + @Override + public String tableType() { + return "TYPE"; + } + + @Override + public String engine() { + return "ENGINE"; + } + + @Override + public String options() { + return "OPTIONS"; + } + + @Override + public String rows() { + return "ROWS"; + } + + @Override + public String createTime() { + return "CREATE_TIME"; + } + + @Override + public String updateTime() { + return "UPDATE_TIME"; + } + + @Override + public String columnName() { + return "COLUMN_NAME"; + } + + @Override + public String columnPosition() { + return "ORDINAL_POSITION"; + } + + @Override + public String columnType() { + return "DATA_TYPE"; + } + + @Override + public String columnComment() { + return "COLUMN_COMMENT"; + } + + @Override + public String columnKey() { + return "COLUMN_KEY"; + } + + @Override + public String autoIncrement() { + return "AUTO_INCREMENT"; + } + + @Override + public String defaultValue() { + return "COLUMN_DEFAULT"; + } + + @Override + public String columnLength() { + return "LENGTH"; + } + + @Override + public String isNullable() { + return "IS_NULLABLE"; + } + + @Override + public String precision() { + return "NUMERIC_PRECISION"; + } + + @Override + public String scale() { + return "NUMERIC_SCALE"; + } + + @Override + public String characterSet() { + return "CHARACTER_SET_NAME"; + } + + @Override + public String collation() { + return "COLLATION_NAME"; + } + + @Override + public String isPK() { + return "PRI"; + } + + @Override + public String nullableValue() { + return "YES"; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/query/IDBQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/query/IDBQuery.java new file mode 100644 index 0000000..ffbbf26 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/query/IDBQuery.java @@ -0,0 +1,189 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.metadata.base.query; + +/** + * IDBQuery + * + * @author zrx + * @since 2021/7/20 13:44 + **/ +public interface IDBQuery { + + /** + * 所有数据库信息查询 SQL + */ + String schemaAllSql(); + + /** + * 表信息查询 SQL + */ + String tablesSql(String schemaName); + + /** + * 表字段信息查询 SQL + */ + String columnsSql(String schemaName, String tableName); + + /** + * 建表 SQL + */ + String createTableSql(String schemaName, String tableName); + + /** + * 建表语句列名 + */ + String createTableName(); + + /** + * 建视图语句列名 + */ + String createViewName(); + + /** + * 数据库、模式、组织名称 + */ + String schemaName(); + + /** + * catalog 名称 + */ + String catalogName(); + + /** + * 表名称 + */ + String tableName(); + + /** + * 表注释 + */ + String tableComment(); + + /** + * 表类型 + */ + String tableType(); + + /** + * 表引擎 + */ + String engine(); + + /** + * 表配置 + */ + String options(); + + /** + * 表记录数 + */ + String rows(); + + /** + * 创建时间 + */ + String createTime(); + + /** + * 更新时间 + */ + String updateTime(); + + /** + * 字段名称 + */ + String columnName(); + + /** + * 字段序号 + */ + String columnPosition(); + + /** + * 字段类型 + */ + String columnType(); + + /** + * 字段长度 + */ + String columnLength(); + + /** + * 字段注释 + */ + String columnComment(); + + /** + * 主键字段 + */ + String columnKey(); + + /** + * 主键自增 + */ + String autoIncrement(); + + /** + * 默认值 + */ + String defaultValue(); + + /** + * @return 是否允许为 NULL + */ + String isNullable(); + + /** + * @return 精度 + */ + String precision(); + + /** + * @return 小数范围 + */ + String scale(); + + /** + * @return 字符集名称 + */ + String characterSet(); + + /** + * @return 排序规则 + */ + String collation(); + + /** + * 自定义字段名称 + */ + String[] columnCustom(); + + /** + * @return 主键值 + */ + String isPK(); + + /** + * @return 允许为空的值 + */ + String nullableValue(); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/result/ExplainResult.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/result/ExplainResult.java new file mode 100644 index 0000000..a356b1f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/result/ExplainResult.java @@ -0,0 +1,30 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.metadata.base.result; + +/** + * ExplainResult + * + * @author qiwenkai + * @since 2021/12/13 19:14 + **/ +public class ExplainResult { + private String sql; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/result/JdbcSelectResult.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/result/JdbcSelectResult.java new file mode 100644 index 0000000..9ef0ed4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/result/JdbcSelectResult.java @@ -0,0 +1,116 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.base.result; + + +import net.srt.flink.common.result.AbstractResult; +import net.srt.flink.common.result.IResult; + +import java.time.LocalDateTime; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; + +/** + * SelectResult + * + * @author zrx + * @since 2021/7/19 23:31 + */ +public class JdbcSelectResult extends AbstractResult implements IResult { + private List columns; + private List> rowData; + private Integer total; + private Integer page; + private Integer limit; + + private static final String STATUS = "status"; + private static final List STATUS_COLUMN = new ArrayList() { + { + add("status"); + } + }; + + public JdbcSelectResult() { + } + + public static JdbcSelectResult buildResult() { + JdbcSelectResult result = new JdbcSelectResult(); + result.setStartTime(LocalDateTime.now()); + return result; + } + + public void setStatusList(List statusList) { + this.setColumns(STATUS_COLUMN); + List> dataList = new ArrayList<>(); + for (Object item : statusList) { + LinkedHashMap map = new LinkedHashMap(); + map.put(STATUS, item); + dataList.add(map); + } + this.setRowData(dataList); + this.setTotal(statusList.size()); + } + + @Override + public String getJobId() { + return null; + } + + public List getColumns() { + return columns; + } + + public void setColumns(List columns) { + this.columns = columns; + } + + public List> getRowData() { + return rowData; + } + + public void setRowData(List> rowData) { + this.rowData = rowData; + } + + public Integer getTotal() { + return total; + } + + public void setTotal(Integer total) { + this.total = total; + } + + public Integer getPage() { + return page; + } + + public void setPage(Integer page) { + this.page = page; + } + + public Integer getLimit() { + return limit; + } + + public void setLimit(Integer limit) { + this.limit = limit; + } +} diff --git 
a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/rules/DbColumnType.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/rules/DbColumnType.java new file mode 100644 index 0000000..4f8c9eb --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/rules/DbColumnType.java @@ -0,0 +1,97 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.base.rules; + +/** + * DbColumnType + * + * @author zrx + * @since 2021/7/20 14:44 + **/ +public enum DbColumnType implements IColumnType { + + // 基本类型 + BASE_BYTE("byte", null), + BASE_SHORT("short", null), + BASE_CHAR("char", null), + BASE_INT("int", null), + BASE_LONG("long", null), + BASE_FLOAT("float", null), + BASE_DOUBLE("double", null), + BASE_BOOLEAN("boolean", null), + + // 包装类型 + BYTE("Byte", null), + SHORT("Short", null), + CHARACTER("Character", null), + INTEGER("Integer", null), + LONG("Long", null), + FLOAT("Float", null), + DOUBLE("Double", null), + BOOLEAN("Boolean", null), + STRING("String", null), + + // sql 包下数据类型 + DATE_SQL("Date", "java.sql.Date"), + TIME("Time", "java.sql.Time"), + TIMESTAMP("Timestamp", "java.sql.Timestamp"), + BLOB("Blob", "java.sql.Blob"), + CLOB("Clob", "java.sql.Clob"), + + // java8 新时间类型 + LOCAL_DATE("LocalDate", "java.time.LocalDate"), + LOCAL_TIME("LocalTime", "java.time.LocalTime"), + YEAR("Year", "java.time.Year"), + YEAR_MONTH("YearMonth", "java.time.YearMonth"), + LOCAL_DATE_TIME("LocalDateTime", "java.time.LocalDateTime"), + INSTANT("Instant", "java.time.Instant"), + + // 其他杂类 + BYTE_ARRAY("byte[]", null), + OBJECT("Object", null), + DATE("Date", "java.util.Date"), + BIG_INTEGER("BigInteger", "java.math.BigInteger"), + BIG_DECIMAL("BigDecimal", "java.math.BigDecimal"); + + /** + * 类型 + */ + private final String type; + + /** + * 包路径 + */ + private final String pkg; + + DbColumnType(final String type, final String pkg) { + this.type = type; + this.pkg = pkg; + } + + @Override + public String getType() { + return type; + } + + @Override + public String getPkg() { + return pkg; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/rules/IColumnType.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/rules/IColumnType.java new file mode 100644 index 0000000..d45c06b --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-base/src/main/java/net/srt/flink/metadata/base/rules/IColumnType.java @@ -0,0 +1,38 @@ +/* + * + * Licensed to the Apache 
Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.base.rules; + +/** + * IColumnType + * + * @author zrx + * @since 2021/7/20 14:43 + **/ +public interface IColumnType { + /** + * 获取字段类型 + */ + String getType(); + + /** + * 获取字段类型完整名 + */ + String getPkg(); +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/pom.xml new file mode 100644 index 0000000..e0ff3f9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/pom.xml @@ -0,0 +1,114 @@ + + + + flink-metadata + net.srt + 2.0.0 + + 4.0.0 + + flink-metadata-hive + + + + net.srt + flink-metadata-base + ${project.version} + + + com.alibaba + druid-spring-boot-starter + + + + org.slf4j + slf4j-nop + 1.6.1 + + + + junit + junit + provided + + + org.apache.commons + commons-lang3 + 3.4 + + + org.apache.hive + hive-jdbc + ${hive-jdbc.version} + + + org.eclipse.jetty.aggregate + jetty-all + + + org.apache.hive + hive-shims + + + slf4j-log4j12 + org.slf4j + + + ch.qos.logback + logback-classic + + + tomcat + * + + + javax.servlet + * + + + org.eclipse.jetty.orbit + * + + + org.eclipse.jetty.aggregate + * + + + org.mortbay.jetty + * + + + org.eclipse.jetty + * + + + org.apache.hbase + * + + + org.apache.logging.log4j + * + + + log4j + log4j + + + guava + com.google.guava + + + + + + + + + cloudera + https://repository.cloudera.com/artifactory/cloudera-repos/ + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/constant/HiveConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/constant/HiveConstant.java new file mode 100644 index 0000000..6cc4154 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/constant/HiveConstant.java @@ -0,0 +1,52 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.hive.constant; + +public interface HiveConstant { + + /** + * 查询所有database + */ + String QUERY_ALL_DATABASE = " show databases"; + /** + * 查询所有schema下的所有表 + */ + String QUERY_ALL_TABLES_BY_SCHEMA = "show tables"; + /** + * 扩展信息Key + */ + String DETAILED_TABLE_INFO = "Detailed Table Information"; + /** + * 查询指定schema.table的扩展信息 + */ + String QUERY_TABLE_SCHEMA_EXTENED_INFOS = " describe extended `%s`.`%s`"; + /** + * 查询指定schema.table的信息 列 列类型 列注释 + */ + String QUERY_TABLE_SCHEMA = " describe `%s`.`%s`"; + /** + * 使用 DB + */ + String USE_DB = "use `%s`"; + /** + * 只查询指定schema.table的列名 + */ + String QUERY_TABLE_COLUMNS_ONLY = "show columns in `%s`.`%s`"; +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/convert/HiveTypeConvert.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/convert/HiveTypeConvert.java new file mode 100644 index 0000000..994dee1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/convert/HiveTypeConvert.java @@ -0,0 +1,134 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.metadata.hive.convert; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.ColumnType; +import net.srt.flink.metadata.base.convert.ITypeConvert; + +public class HiveTypeConvert implements ITypeConvert { + @Override + public ColumnType convert(Column column) { + ColumnType columnType = ColumnType.STRING; + if (Asserts.isNull(column)) { + return columnType; + } + String t = column.getType().toLowerCase().trim(); + boolean isNullable = !column.isKeyFlag() && column.isNullable(); + if (t.contains("char")) { + columnType = ColumnType.STRING; + } else if (t.contains("boolean")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_BOOLEAN; + } else { + columnType = ColumnType.BOOLEAN; + } + } else if (t.contains("tinyint")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_BYTE; + } else { + columnType = ColumnType.BYTE; + } + } else if (t.contains("smallint")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_SHORT; + } else { + columnType = ColumnType.SHORT; + } + } else if (t.contains("bigint")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_LONG; + } else { + columnType = ColumnType.LONG; + } + } else if (t.contains("largeint")) { + columnType = ColumnType.STRING; + } else if (t.contains("int")) { + if (isNullable) { + columnType = ColumnType.INTEGER; + } else { + columnType = ColumnType.INT; + } + } else if (t.contains("float")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_FLOAT; + } else { + columnType = ColumnType.FLOAT; + } + } else if (t.contains("double")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_DOUBLE; + } else { + columnType = ColumnType.DOUBLE; + } + } else if (t.contains("timestamp")) { + columnType = ColumnType.TIMESTAMP; + } else if (t.contains("date")) { + columnType = ColumnType.STRING; + } else if (t.contains("datetime")) { + columnType = ColumnType.STRING; + } else if (t.contains("decimal")) { + columnType = ColumnType.DECIMAL; + } else if (t.contains("time")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_DOUBLE; + } else { + columnType = ColumnType.DOUBLE; + } + } + return columnType; + } + + @Override + public String convertToDB(ColumnType columnType) { + switch (columnType) { + case STRING: + return "varchar"; + case BOOLEAN: + case JAVA_LANG_BOOLEAN: + return "boolean"; + case BYTE: + case JAVA_LANG_BYTE: + return "tinyint"; + case SHORT: + case JAVA_LANG_SHORT: + return "smallint"; + case LONG: + case JAVA_LANG_LONG: + return "bigint"; + case FLOAT: + case JAVA_LANG_FLOAT: + return "float"; + case DOUBLE: + case JAVA_LANG_DOUBLE: + return "double"; + case DECIMAL: + return "decimal"; + case INT: + case INTEGER: + return "int"; + case TIMESTAMP: + return "timestamp"; + default: + return "varchar"; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/driver/HiveDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/driver/HiveDriver.java new file mode 100644 index 0000000..036347e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/driver/HiveDriver.java @@ -0,0 +1,313 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.hive.driver; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.Schema; +import net.srt.flink.common.model.Table; +import net.srt.flink.common.utils.LogUtil; +import net.srt.flink.metadata.base.convert.ITypeConvert; +import net.srt.flink.metadata.base.driver.AbstractJdbcDriver; +import net.srt.flink.metadata.base.driver.Driver; +import net.srt.flink.metadata.base.query.IDBQuery; +import net.srt.flink.metadata.base.result.JdbcSelectResult; +import net.srt.flink.metadata.hive.constant.HiveConstant; +import net.srt.flink.metadata.hive.convert.HiveTypeConvert; +import net.srt.flink.metadata.hive.query.HiveQuery; +import org.apache.commons.lang3.StringUtils; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +public class HiveDriver extends AbstractJdbcDriver implements Driver { + + @Override + public Table getTable(String schemaName, String tableName) { + List
tables = listTables(schemaName); + Table table = null; + for (Table item : tables) { + if (Asserts.isEquals(item.getName(), tableName)) { + table = item; + break; + } + } + if (Asserts.isNotNull(table)) { + table.setColumns(listColumns(schemaName, table.getName())); + } + return table; + } + + @Override + public List
listTables(String schemaName) { + List
tableList = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + IDBQuery dbQuery = getDBQuery(); + String sql = dbQuery.tablesSql(schemaName); + try { + execute(String.format(HiveConstant.USE_DB, schemaName)); + preparedStatement = conn.get().prepareStatement(sql); + results = preparedStatement.executeQuery(); + ResultSetMetaData metaData = results.getMetaData(); + List columnList = new ArrayList<>(); + for (int i = 1; i <= metaData.getColumnCount(); i++) { + columnList.add(metaData.getColumnLabel(i)); + } + while (results.next()) { + String tableName = results.getString(dbQuery.tableName()); + if (Asserts.isNotNullString(tableName)) { + Table tableInfo = new Table(); + tableInfo.setName(tableName); + if (columnList.contains(dbQuery.tableComment())) { + tableInfo.setComment(results.getString(dbQuery.tableComment())); + } + tableInfo.setSchema(schemaName); + if (columnList.contains(dbQuery.tableType())) { + tableInfo.setType(results.getString(dbQuery.tableType())); + } + if (columnList.contains(dbQuery.catalogName())) { + tableInfo.setCatalog(results.getString(dbQuery.catalogName())); + } + if (columnList.contains(dbQuery.engine())) { + tableInfo.setEngine(results.getString(dbQuery.engine())); + } + tableList.add(tableInfo); + } + } + } catch (Exception e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return tableList; + } + + @Override + public List getSchemasAndTables() { + return listSchemas(); + } + + @Override + public List listSchemas() { + + List schemas = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + String schemasSql = getDBQuery().schemaAllSql(); + try { + preparedStatement = conn.get().prepareStatement(schemasSql); + results = preparedStatement.executeQuery(); + while (results.next()) { + String schemaName = results.getString(getDBQuery().schemaName()); + if (Asserts.isNotNullString(schemaName)) { + Schema schema = new Schema(schemaName); + if (execute(String.format(HiveConstant.USE_DB, schemaName))) { + schema.setTables(listTables(schema.getName())); + } + schemas.add(schema); + } + } + } catch (Exception e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return schemas; + } + + @Override + public List listColumns(String schemaName, String tableName) { + List columns = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + IDBQuery dbQuery = getDBQuery(); + String tableFieldsSql = dbQuery.columnsSql(schemaName, tableName); + try { + preparedStatement = conn.get().prepareStatement(tableFieldsSql); + results = preparedStatement.executeQuery(); + ResultSetMetaData metaData = results.getMetaData(); + List columnList = new ArrayList<>(); + for (int i = 1; i <= metaData.getColumnCount(); i++) { + columnList.add(metaData.getColumnLabel(i)); + } + Integer positionId = 1; + while (results.next()) { + Column field = new Column(); + if (StringUtils.isEmpty(results.getString(dbQuery.columnName()))) { + break; + } else { + if (columnList.contains(dbQuery.columnName())) { + String columnName = results.getString(dbQuery.columnName()); + field.setName(columnName); + } + if (columnList.contains(dbQuery.columnType())) { + field.setType(results.getString(dbQuery.columnType())); + } + if (columnList.contains(dbQuery.columnComment()) && Asserts.isNotNull(results.getString(dbQuery.columnComment()))) { + String columnComment = results.getString(dbQuery.columnComment()).replaceAll("\"|'", ""); + 
field.setComment(columnComment); + } + field.setPosition(positionId++); + field.setJavaType(getTypeConvert().convert(field)); + } + columns.add(field); + } + } catch (SQLException e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return columns; + } + + @Override + public String getCreateTableSql(Table table) { + StringBuilder createTable = new StringBuilder(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + String createTableSql = getDBQuery().createTableSql(table.getSchema(), table.getName()); + try { + preparedStatement = conn.get().prepareStatement(createTableSql); + results = preparedStatement.executeQuery(); + while (results.next()) { + createTable.append(results.getString(getDBQuery().createTableName())).append("\n"); + } + } catch (Exception e) { + e.printStackTrace(); + } finally { + close(preparedStatement, results); + } + return createTable.toString(); + } + + @Override + public int executeUpdate(String sql) throws Exception { + Asserts.checkNullString(sql, "Sql 语句为空"); + String querySQL = sql.trim().replaceAll(";$", ""); + int res = 0; + try (Statement statement = conn.get().createStatement()) { + res = statement.executeUpdate(querySQL); + } + return res; + } + + @Override + public JdbcSelectResult query(String sql, Integer limit) { + if (Asserts.isNull(limit)) { + limit = 100; + } + JdbcSelectResult result = new JdbcSelectResult(); + List> datas = new ArrayList<>(); + List columns = new ArrayList<>(); + List columnNameList = new ArrayList<>(); + PreparedStatement preparedStatement = null; + ResultSet results = null; + int count = 0; + try { + String querySQL = sql.trim().replaceAll(";$", ""); + preparedStatement = conn.get().prepareStatement(querySQL); + results = preparedStatement.executeQuery(); + if (Asserts.isNull(results)) { + result.setSuccess(true); + close(preparedStatement, results); + return result; + } + ResultSetMetaData metaData = results.getMetaData(); + for (int i = 1; i <= metaData.getColumnCount(); i++) { + columnNameList.add(metaData.getColumnLabel(i)); + Column column = new Column(); + column.setName(metaData.getColumnLabel(i)); + column.setType(metaData.getColumnTypeName(i)); + column.setAutoIncrement(metaData.isAutoIncrement(i)); + column.setNullable(metaData.isNullable(i) == 0 ? 
false : true);
+				column.setJavaType(getTypeConvert().convert(column));
+				columns.add(column);
+			}
+			result.setColumns(columnNameList);
+			while (results.next()) {
+				LinkedHashMap<String, Object> data = new LinkedHashMap<>();
+				for (int i = 0; i < columns.size(); i++) {
+					data.put(columns.get(i).getName(), getTypeConvert().convertValue(results, columns.get(i).getName(), columns.get(i).getType()));
+				}
+				datas.add(data);
+				count++;
+				if (count >= limit) {
+					break;
+				}
+			}
+			result.setSuccess(true);
+		} catch (Exception e) {
+			result.setError(LogUtil.getError(e));
+			result.setSuccess(false);
+		} finally {
+			close(preparedStatement, results);
+			result.setRowData(datas);
+			return result;
+		}
+	}
+
+	@Override
+	public IDBQuery getDBQuery() {
+		return new HiveQuery();
+	}
+
+	@Override
+	public ITypeConvert getTypeConvert() {
+		return new HiveTypeConvert();
+	}
+
+	@Override
+	public String getDriverClass() {
+		return "org.apache.hive.jdbc.HiveDriver";
+	}
+
+	@Override
+	public String getType() {
+		return "Hive";
+	}
+
+	@Override
+	public String getName() {
+		return "Hive";
+	}
+
+	@Override
+	public Map<String, String> getFlinkColumnTypeConversion() {
+		HashMap<String, String> map = new HashMap<>();
+		map.put("BOOLEAN", "BOOLEAN");
+		map.put("TINYINT", "TINYINT");
+		map.put("SMALLINT", "SMALLINT");
+		map.put("INT", "INT");
+		map.put("VARCHAR", "STRING");
+		map.put("TEXT", "STRING");
+		map.put("DATETIME", "TIMESTAMP");
+		return map;
+	}
+}
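Each metadata module in this commit also registers its driver under META-INF/services/net.srt.flink.metadata.base.driver.Driver (see the service files below), so implementations can be discovered through the JDK ServiceLoader. A hedged sketch, assuming only the getType()/getDriverClass() accessors shown above:

```java
import java.util.ServiceLoader;

import net.srt.flink.metadata.base.driver.Driver;

public class DriverDiscoveryDemo {
	public static void main(String[] args) {
		// Iterate every Driver registered via META-INF/services on the classpath;
		// with this commit that includes the Hive, MySql, Oracle and PostgreSql modules.
		for (Driver driver : ServiceLoader.load(Driver.class)) {
			System.out.println(driver.getType() + " -> " + driver.getDriverClass());
		}
	}
}
```

diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/query/HiveQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/query/HiveQuery.java
new file mode 100644
index 0000000..dc8604a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/java/net/srt/flink/metadata/hive/query/HiveQuery.java
@@ -0,0 +1,76 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.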
+ * + */ + +package net.srt.flink.metadata.hive.query; + +import net.srt.flink.metadata.base.query.AbstractDBQuery; +import net.srt.flink.metadata.hive.constant.HiveConstant; + +public class HiveQuery extends AbstractDBQuery { + @Override + public String schemaAllSql() { + return HiveConstant.QUERY_ALL_DATABASE; + } + + @Override + public String tablesSql(String schemaName) { + return HiveConstant.QUERY_ALL_TABLES_BY_SCHEMA; + } + + @Override + public String columnsSql(String schemaName, String tableName) { + return String.format(HiveConstant.QUERY_TABLE_SCHEMA, schemaName, tableName); + } + + @Override + public String schemaName() { + return "database_name"; + } + + @Override + public String createTableName() { + return "createtab_stmt"; + } + + @Override + public String tableName() { + return "tab_name"; + } + + @Override + public String tableComment() { + return "comment"; + } + + @Override + public String columnName() { + return "col_name"; + } + + @Override + public String columnType() { + return "data_type"; + } + + @Override + public String columnComment() { + return "comment"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver new file mode 100644 index 0000000..e1a03b0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-hive/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver @@ -0,0 +1 @@ +net.srt.flink.metadata.hive.driver.HiveDriver diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/pom.xml new file mode 100644 index 0000000..a998a07 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/pom.xml @@ -0,0 +1,31 @@ + + + + flink-metadata + net.srt + 2.0.0 + + 4.0.0 + + flink-metadata-mysql + + + + net.srt + flink-metadata-base + ${project.version} + + + junit + junit + provided + + + mysql + mysql-connector-java + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/convert/MySqlTypeConvert.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/convert/MySqlTypeConvert.java new file mode 100644 index 0000000..2cd5a2a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/convert/MySqlTypeConvert.java @@ -0,0 +1,128 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.mysql.convert; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.ColumnType; +import net.srt.flink.metadata.base.convert.ITypeConvert; + +/** + * MySqlTypeConvert + * + * @author zrx + * @since 2021/7/20 15:21 + **/ +public class MySqlTypeConvert implements ITypeConvert { + @Override + public ColumnType convert(Column column) { + ColumnType columnType = ColumnType.STRING; + if (Asserts.isNull(column)) { + return columnType; + } + String t = column.getType().toLowerCase(); + boolean isNullable = !column.isKeyFlag() && column.isNullable(); + if (t.contains("numeric") || t.contains("decimal")) { + columnType = ColumnType.DECIMAL; + } else if (t.contains("bigint")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_LONG; + } else { + columnType = ColumnType.LONG; + } + } else if (t.contains("float")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_FLOAT; + } else { + columnType = ColumnType.FLOAT; + } + } else if (t.contains("double")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_DOUBLE; + } else { + columnType = ColumnType.DOUBLE; + } + } else if (t.contains("boolean") || t.contains("tinyint(1)") || t.contains("bit")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_BOOLEAN; + } else { + columnType = ColumnType.BOOLEAN; + } + } else if (t.contains("datetime")) { + columnType = ColumnType.TIMESTAMP; + } else if (t.contains("date")) { + columnType = ColumnType.DATE; + } else if (t.contains("timestamp")) { + columnType = ColumnType.TIMESTAMP; + } else if (t.contains("time")) { + columnType = ColumnType.TIME; + } else if (t.contains("char") || t.contains("text")) { + columnType = ColumnType.STRING; + } else if (t.contains("binary") || t.contains("blob")) { + columnType = ColumnType.BYTES; + } else if (t.contains("tinyint") || t.contains("mediumint") || t.contains("smallint") || t.contains("int")) { + if (isNullable) { + columnType = ColumnType.INTEGER; + } else { + columnType = ColumnType.INT; + } + } + return columnType; + } + + @Override + public String convertToDB(ColumnType columnType) { + switch (columnType) { + case STRING: + return "varchar"; + case BYTE: + return "tinyint"; + case SHORT: + return "smallint"; + case DECIMAL: + return "decimal"; + case LONG: + case JAVA_LANG_LONG: + return "bigint"; + case FLOAT: + case JAVA_LANG_FLOAT: + return "float"; + case DOUBLE: + case JAVA_LANG_DOUBLE: + return "double"; + case BOOLEAN: + case JAVA_LANG_BOOLEAN: + return "boolean"; + case TIMESTAMP: + return "datetime"; + case DATE: + return "date"; + case TIME: + return "time"; + case BYTES: + return "binary"; + case INTEGER: + case INT: + return "int"; + default: + return "varchar"; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/driver/MySqlDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/driver/MySqlDriver.java new file mode 100644 index 0000000..919c04e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/driver/MySqlDriver.java @@ -0,0 +1,142 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.mysql.driver;
+
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.model.Column;
+import net.srt.flink.common.model.Table;
+import net.srt.flink.metadata.base.convert.ITypeConvert;
+import net.srt.flink.metadata.base.driver.AbstractJdbcDriver;
+import net.srt.flink.metadata.base.query.IDBQuery;
+import net.srt.flink.metadata.mysql.convert.MySqlTypeConvert;
+import net.srt.flink.metadata.mysql.query.MySqlQuery;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * MysqlDriver
+ *
+ * @author zrx
+ * @since 2021/7/20 14:06
+ **/
+public class MySqlDriver extends AbstractJdbcDriver {
+
+	@Override
+	public IDBQuery getDBQuery() {
+		return new MySqlQuery();
+	}
+
+	@Override
+	public ITypeConvert getTypeConvert() {
+		return new MySqlTypeConvert();
+	}
+
+	@Override
+	public String getType() {
+		return "MySql";
+	}
+
+	@Override
+	public String getName() {
+		return "MySql数据库";
+	}
+
+	@Override
+	public String getDriverClass() {
+		return "com.mysql.jdbc.Driver";
+	}
+
+	@Override
+	public Map<String, String> getFlinkColumnTypeConversion() {
+		HashMap<String, String> map = new HashMap<>();
+		map.put("VARCHAR", "STRING");
+		map.put("TEXT", "STRING");
+		map.put("INT", "INT");
+		map.put("DATETIME", "TIMESTAMP");
+		return map;
+	}
+
+	@Override
+	public String generateCreateTableSql(Table table) {
+		StringBuilder key = new StringBuilder();
+		StringBuilder sb = new StringBuilder();
+
+		sb.append("CREATE TABLE IF NOT EXISTS ").append(table.getSchemaTableName()).append(" (\n");
+		for (int i = 0; i < table.getColumns().size(); i++) {
+			Column column = table.getColumns().get(i);
+			sb.append("  `")
+				.append(column.getName()).append("` ")
+				.append(column.getType());
+			// floating-point types: render precision and scale
+			if (column.getPrecision() > 0 && column.getScale() > 0) {
+				sb.append("(")
+					.append(column.getLength())
+					.append(",").append(column.getScale())
+					.append(")");
+			} else if (null != column.getLength()) { // string and integral types: render length only
+				sb.append("(").append(column.getLength()).append(")");
+			}
+			if (Asserts.isNotNull(column.getDefaultValue())) {
+				if ("".equals(column.getDefaultValue())) {
+					sb.append(" DEFAULT ").append("\"\"");
+				} else {
+					sb.append(" DEFAULT ").append(column.getDefaultValue());
+				}
+			} else {
+				if (!column.isNullable()) {
+					sb.append(" NOT");
+				}
+				sb.append(" NULL");
+			}
+			if (column.isAutoIncrement()) {
+				sb.append(" AUTO_INCREMENT");
+			}
+			if (Asserts.isNotNullString(column.getComment())) {
+				sb.append(" COMMENT '").append(column.getComment()).append("'");
+			}
+			if (column.isKeyFlag()) {
+				key.append("`").append(column.getName()).append("`,");
+			}
+			// only add a separator when another column or a PRIMARY KEY clause follows
+			if (i < table.getColumns().size() - 1 || key.length() > 0) {
+				sb.append(",");
+			}
+			sb.append("\n");
+		}
+
+		if (key.length() > 0) {
+			sb.append("  PRIMARY KEY (");
+			sb.append(key.substring(0, key.length() - 1));
+			sb.append(")\n");
+		}
+
+		sb.append(")\n ENGINE=").append(table.getEngine());
+		if (Asserts.isNotNullString(table.getOptions())) {
+			sb.append(" ").append(table.getOptions());
+		}
+
+		if (Asserts.isNotNullString(table.getComment())) {
+			sb.append(" COMMENT='").append(table.getComment()).append("'");
+		}
+		sb.append(";");
+		logger.info("Auto generateCreateTableSql {}", sb);
+		return sb.toString();
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/query/MySqlQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/query/MySqlQuery.java
new file mode 100644
index 0000000..19d98b4
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/java/net/srt/flink/metadata/mysql/query/MySqlQuery.java
@@ -0,0 +1,63 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.mysql.query;
+
+import net.srt.flink.metadata.base.query.AbstractDBQuery;
+
+/**
+ * MySqlQuery
+ *
+ * @author zrx
+ * @since 2021/7/20 14:01
+ **/
+public class MySqlQuery extends AbstractDBQuery {
+
+	@Override
+	public String schemaAllSql() {
+		return "show databases";
+	}
+
+	@Override
+	public String tablesSql(String schemaName) {
+		return "select TABLE_NAME AS `NAME`,TABLE_SCHEMA AS `Database`,TABLE_COMMENT AS COMMENT,TABLE_CATALOG AS `CATALOG`"
+				+ ",TABLE_TYPE AS `TYPE`,ENGINE AS `ENGINE`,CREATE_OPTIONS AS `OPTIONS`,TABLE_ROWS AS `ROWS`"
+				+ ",CREATE_TIME,UPDATE_TIME from information_schema.tables"
+				+ " where TABLE_SCHEMA = '" + schemaName + "'";
+	}
+
+	@Override
+	public String columnsSql(String schemaName, String tableName) {
+		return "select COLUMN_NAME,COLUMN_TYPE,COLUMN_COMMENT,COLUMN_KEY,EXTRA AS AUTO_INCREMENT"
+				+ ",COLUMN_DEFAULT,IS_NULLABLE,NUMERIC_PRECISION,NUMERIC_SCALE,CHARACTER_SET_NAME"
+				+ ",COLLATION_NAME,ORDINAL_POSITION from INFORMATION_SCHEMA.COLUMNS "
+				+ "where TABLE_SCHEMA = '" + schemaName + "' and TABLE_NAME = '" + tableName + "' "
+				+ "order by ORDINAL_POSITION";
+	}
+
+	@Override
+	public String schemaName() {
+		return "Database";
+	}
+
+	@Override
+	public String columnType() {
+		return "COLUMN_TYPE";
+	}
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver
new file mode 100644
index 0000000..c3934ee
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-mysql/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver
@@ -0,0 +1 @@
+net.srt.flink.metadata.mysql.driver.MySqlDriver
\ No newline at end of file
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/pom.xml
new file mode 100644
index 0000000..138beb5
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/pom.xml
@@ -0,0 +1,41 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>flink-metadata</artifactId>
+        <groupId>net.srt</groupId>
+        <version>2.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>flink-metadata-oracle</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>net.srt</groupId>
+            <artifactId>flink-metadata-base</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <!-- Oracle JDBC driver -->
+        <dependency>
+            <groupId>com.oracle.ojdbc</groupId>
+            <artifactId>ojdbc8</artifactId>
+            <version>${ojdbc8.version}</version>
+        </dependency>
+        <!-- Oracle globalization support -->
+        <dependency>
+            <groupId>com.oracle.ojdbc</groupId>
+            <artifactId>orai18n</artifactId>
+            <version>${ojdbc8.version}</version>
+        </dependency>
+    </dependencies>
+</project>
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/convert/OracleTypeConvert.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/convert/OracleTypeConvert.java
new file mode 100644
index 0000000..db681f6
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/convert/OracleTypeConvert.java
@@ -0,0 +1,102 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.metadata.oracle.convert; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.ColumnType; +import net.srt.flink.metadata.base.convert.ITypeConvert; + +/** + * OracleTypeConvert + * + * @author zrx + * @since 2021/7/21 16:00 + **/ +public class OracleTypeConvert implements ITypeConvert { + @Override + public ColumnType convert(Column column) { + ColumnType columnType = ColumnType.STRING; + if (Asserts.isNull(column)) { + return columnType; + } + String t = column.getType().toLowerCase(); + boolean isNullable = !column.isKeyFlag() && column.isNullable(); + if (t.contains("char")) { + columnType = ColumnType.STRING; + } else if (t.contains("date")) { + columnType = ColumnType.LOCALDATETIME; + } else if (t.contains("timestamp")) { + columnType = ColumnType.TIMESTAMP; + } else if (t.contains("number")) { + if (t.matches("number\\(+\\d\\)")) { + if (isNullable) { + columnType = ColumnType.INTEGER; + } else { + columnType = ColumnType.INT; + } + } else if (t.matches("number\\(+\\d{2}+\\)")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_LONG; + } else { + columnType = ColumnType.LONG; + } + } else { + columnType = ColumnType.DECIMAL; + } + } else if (t.contains("float")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_FLOAT; + } else { + columnType = ColumnType.FLOAT; + } + } else if (t.contains("clob")) { + columnType = ColumnType.STRING; + } else if (t.contains("blob")) { + columnType = ColumnType.BYTES; + } + return columnType; + } + + @Override + public String convertToDB(ColumnType columnType) { + switch (columnType) { + case STRING: + return "varchar"; + case DATE: + return "date"; + case TIMESTAMP: + return "timestamp"; + case INTEGER: + case INT: + case LONG: + case JAVA_LANG_LONG: + case DECIMAL: + return "number"; + case FLOAT: + case JAVA_LANG_FLOAT: + return "float"; + case BYTES: + return "blob"; + default: + return "varchar"; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/driver/OracleDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/driver/OracleDriver.java new file mode 100644 index 0000000..09da3a7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/driver/OracleDriver.java @@ -0,0 +1,151 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.metadata.oracle.driver;
+
+import com.alibaba.druid.pool.DruidDataSource;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.model.Column;
+import net.srt.flink.common.model.QueryData;
+import net.srt.flink.common.model.Table;
+import net.srt.flink.metadata.base.convert.ITypeConvert;
+import net.srt.flink.metadata.base.driver.AbstractJdbcDriver;
+import net.srt.flink.metadata.base.driver.DriverConfig;
+import net.srt.flink.metadata.base.query.IDBQuery;
+import net.srt.flink.metadata.oracle.convert.OracleTypeConvert;
+import net.srt.flink.metadata.oracle.query.OracleQuery;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+/**
+ * OracleDriver
+ *
+ * @author zrx
+ * @since 2021/7/21 15:52
+ **/
+public class OracleDriver extends AbstractJdbcDriver {
+
+	@Override
+	public String getDriverClass() {
+		return "oracle.jdbc.driver.OracleDriver";
+	}
+
+	@Override
+	public IDBQuery getDBQuery() {
+		return new OracleQuery();
+	}
+
+	@Override
+	public ITypeConvert getTypeConvert() {
+		return new OracleTypeConvert();
+	}
+
+	@Override
+	public String getType() {
+		return "Oracle";
+	}
+
+	@Override
+	public String getName() {
+		return "Oracle数据库";
+	}
+
+	/**
+	 * Builds the Oracle query SQL; the limit option is not implemented yet.
+	 */
+	@Override
+	public StringBuilder genQueryOption(QueryData queryData) {
+
+		String where = queryData.getOption().getWhere();
+		String order = queryData.getOption().getOrder();
+
+		StringBuilder optionBuilder = new StringBuilder()
+			.append("select * from ")
+			.append(queryData.getSchemaName())
+			.append(".")
+			.append(queryData.getTableName());
+
+		if (where != null && !where.equals("")) {
+			optionBuilder.append(" where ").append(where);
+		}
+		if (order != null && !order.equals("")) {
+			optionBuilder.append(" order by ").append(order);
+		}
+
+		return optionBuilder;
+	}
+
+	@Override
+	public String getCreateTableSql(Table table) {
+		StringBuilder sb = new StringBuilder();
+		sb.append("CREATE TABLE ");
+		sb.append(table.getName() + " (");
+		List<Column> columns = table.getColumns();
+		for (int i = 0; i < columns.size(); i++) {
+			if (i > 0) {
+				sb.append(",");
+			}
+			sb.append(columns.get(i).getName() + " " + getTypeConvert().convertToDB(columns.get(i)));
+			if (!columns.get(i).isNullable()) {
+				sb.append(" NOT NULL");
+			}
+		}
+		sb.append(");");
+		sb.append("\r\n");
+		List<Column> pks = columns.stream().filter(column -> column.isKeyFlag()).collect(Collectors.toList());
+		if (Asserts.isNotNullCollection(pks)) {
+			sb.append("ALTER TABLE " + table.getName() + " ADD CONSTRAINT " + table.getName() + "_PK PRIMARY KEY (");
+			for (int i = 0; i < pks.size(); i++) {
+				if (i > 0) {
+					sb.append(",");
+				}
+				sb.append(pks.get(i).getName());
+			}
+			sb.append(");\r\n");
+		}
+		for (int i = 0; i < columns.size(); i++) {
+			sb.append("COMMENT ON COLUMN " + table.getName() + "."
+ columns.get(i).getName() + " IS '" + columns.get(i).getComment() + "';"); + } + return sb.toString(); + } + + @Override + public Map getFlinkColumnTypeConversion() { + return new HashMap<>(); + } + + @Override + protected void createDataSource(DruidDataSource ds, DriverConfig config) { + ds.setName(config.getName().replaceAll(":", "")); + ds.setUrl(config.getUrl()); + ds.setDriverClassName(getDriverClass()); + ds.setUsername(config.getUsername()); + ds.setPassword(config.getPassword()); + ds.setValidationQuery("select 1 from dual"); + ds.setTestWhileIdle(true); + ds.setBreakAfterAcquireFailure(true); + ds.setFailFast(true); + ds.setInitialSize(1); + ds.setMaxActive(8); + ds.setMinIdle(5); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/query/OracleQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/query/OracleQuery.java new file mode 100644 index 0000000..28c8658 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/java/net/srt/flink/metadata/oracle/query/OracleQuery.java @@ -0,0 +1,111 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.metadata.oracle.query; + +import net.srt.flink.metadata.base.query.AbstractDBQuery; + +/** + * OracleQuery + * + * @author zrx + * @since 2021/7/21 15:54 + **/ +public class OracleQuery extends AbstractDBQuery { + + @Override + public String schemaAllSql() { + return "SELECT DISTINCT OWNER FROM ALL_TAB_COMMENTS"; + } + + @Override + public String tablesSql(String schemaName) { + return "SELECT * FROM ALL_TAB_COMMENTS WHERE OWNER='" + schemaName + "'"; + } + + @Override + public String columnsSql(String schemaName, String tableName) { + return "SELECT A.COLUMN_NAME, CASE WHEN A.DATA_TYPE='NUMBER' THEN " + + "(CASE WHEN A.DATA_PRECISION IS NULL THEN A.DATA_TYPE " + + "WHEN NVL(A.DATA_SCALE, 0) > 0 THEN A.DATA_TYPE||'('||A.DATA_PRECISION||','||A.DATA_SCALE||')' " + + "ELSE A.DATA_TYPE||'('||A.DATA_PRECISION||')' END) " + + "ELSE A.DATA_TYPE END DATA_TYPE,A.DATA_PRECISION NUMERIC_PRECISION,A.DATA_SCALE NUMERIC_SCALE," + + " B.COMMENTS,A.NULLABLE,DECODE((select count(1) from all_constraints pc,all_cons_columns pcc" + + " where pcc.column_name = A.column_name" + + " and pcc.constraint_name = pc.constraint_name" + + " and pc.constraint_type ='P'" + + " and pcc.owner = upper(A.OWNER)" + + " and pcc.table_name = upper(A.TABLE_NAME)),0,'','PRI') KEY " + + "FROM ALL_TAB_COLUMNS A " + + " INNER JOIN ALL_COL_COMMENTS B ON A.TABLE_NAME = B.TABLE_NAME AND A.COLUMN_NAME = B.COLUMN_NAME AND B.OWNER = '" + schemaName + "'" + + " LEFT JOIN ALL_CONSTRAINTS D ON D.TABLE_NAME = A.TABLE_NAME AND D.CONSTRAINT_TYPE = 'P' AND D.OWNER = '" + schemaName + "'" + + " LEFT JOIN ALL_CONS_COLUMNS C ON C.CONSTRAINT_NAME = D.CONSTRAINT_NAME AND C.COLUMN_NAME=A.COLUMN_NAME AND C.OWNER = '" + schemaName + "'" + + "WHERE A.OWNER = '" + schemaName + "' AND A.TABLE_NAME = '" + tableName + "' ORDER BY A.COLUMN_ID "; + } + + @Override + public String schemaName() { + return "OWNER"; + } + + @Override + public String tableName() { + return "TABLE_NAME"; + } + + @Override + public String tableComment() { + return "COMMENTS"; + } + + @Override + public String tableType() { + return "TABLE_TYPE"; + } + + @Override + public String columnName() { + return "COLUMN_NAME"; + } + + @Override + public String columnType() { + return "DATA_TYPE"; + } + + @Override + public String columnComment() { + return "COMMENTS"; + } + + @Override + public String columnKey() { + return "KEY"; + } + + @Override + public String isNullable() { + return "NULLABLE"; + } + + @Override + public String nullableValue() { + return "Y"; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver new file mode 100644 index 0000000..e555e34 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-oracle/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver @@ -0,0 +1 @@ +net.srt.flink.metadata.oracle.driver.OracleDriver diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/pom.xml new file mode 100644 index 0000000..f50914d --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/pom.xml @@ -0,0 +1,31 @@ + + + + flink-metadata + net.srt + 2.0.0 + + 4.0.0 + + 
flink-metadata-postgresql + + + + net.srt + flink-metadata-base + ${project.version} + + + junit + junit + provided + + + org.postgresql + postgresql + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/convert/PostgreSqlTypeConvert.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/convert/PostgreSqlTypeConvert.java new file mode 100644 index 0000000..f41e855 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/convert/PostgreSqlTypeConvert.java @@ -0,0 +1,134 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.postgresql.convert; + +import net.srt.flink.common.assertion.Asserts; +import net.srt.flink.common.model.Column; +import net.srt.flink.common.model.ColumnType; +import net.srt.flink.metadata.base.convert.ITypeConvert; + +/** + * PostgreSqlTypeConvert + * + * @author zrx + * @since 2021/7/22 9:33 + **/ +public class PostgreSqlTypeConvert implements ITypeConvert { + + @Override + public ColumnType convert(Column column) { + ColumnType columnType = ColumnType.STRING; + if (Asserts.isNull(column)) { + return columnType; + } + String t = column.getType().toLowerCase(); + boolean isNullable = !column.isKeyFlag() && column.isNullable(); + if (t.contains("smallint") || t.contains("int2") || t.contains("smallserial") || t.contains("serial2")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_SHORT; + } else { + columnType = ColumnType.SHORT; + } + } else if (t.contains("integer") || t.contains("int4") || t.contains("serial")) { + if (isNullable) { + columnType = ColumnType.INTEGER; + } else { + columnType = ColumnType.INT; + } + } else if (t.contains("bigint") || t.contains("bigserial")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_LONG; + } else { + columnType = ColumnType.LONG; + } + } else if (t.contains("real") || t.contains("float4")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_FLOAT; + } else { + columnType = ColumnType.FLOAT; + } + } else if (t.contains("float8") || t.contains("double precision")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_DOUBLE; + } else { + columnType = ColumnType.DOUBLE; + } + } else if (t.contains("numeric") || t.contains("decimal")) { + columnType = ColumnType.DECIMAL; + } else if (t.contains("boolean")) { + if (isNullable) { + columnType = ColumnType.JAVA_LANG_BOOLEAN; + } else { + columnType = ColumnType.BOOLEAN; + } + } else if (t.contains("timestamp")) { + columnType = ColumnType.TIMESTAMP; + } else if (t.contains("date")) { + columnType = ColumnType.DATE; + 
} else if (t.contains("time")) { + columnType = ColumnType.TIME; + } else if (t.contains("char") || t.contains("text")) { + columnType = ColumnType.STRING; + } else if (t.contains("bytea")) { + columnType = ColumnType.BYTES; + } else if (t.contains("array")) { + columnType = ColumnType.T; + } + return columnType; + } + + @Override + public String convertToDB(ColumnType columnType) { + switch (columnType) { + case SHORT: + case JAVA_LANG_SHORT: + return "int2"; + case INTEGER: + case INT: + return "integer"; + case LONG: + case JAVA_LANG_LONG: + return "bigint"; + case FLOAT: + case JAVA_LANG_FLOAT: + return "float4"; + case DOUBLE: + case JAVA_LANG_DOUBLE: + return "float8"; + case DECIMAL: + return "decimal"; + case BOOLEAN: + case JAVA_LANG_BOOLEAN: + return "boolean"; + case TIMESTAMP: + return "timestamp"; + case DATE: + return "date"; + case TIME: + return "time"; + case BYTES: + return "bytea"; + case T: + return "array"; + default: + return "varchar"; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/driver/PostgreSqlDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/driver/PostgreSqlDriver.java new file mode 100644 index 0000000..7b0a268 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/driver/PostgreSqlDriver.java @@ -0,0 +1,159 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/driver/PostgreSqlDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/driver/PostgreSqlDriver.java
new file mode 100644
index 0000000..7b0a268
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/driver/PostgreSqlDriver.java
@@ -0,0 +1,159 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.postgresql.driver;
+
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.model.Column;
+import net.srt.flink.common.model.QueryData;
+import net.srt.flink.common.model.Table;
+import net.srt.flink.common.utils.TextUtil;
+import net.srt.flink.metadata.base.convert.ITypeConvert;
+import net.srt.flink.metadata.base.driver.AbstractJdbcDriver;
+import net.srt.flink.metadata.base.query.IDBQuery;
+import net.srt.flink.metadata.postgresql.convert.PostgreSqlTypeConvert;
+import net.srt.flink.metadata.postgresql.query.PostgreSqlQuery;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * PostgreSqlDriver
+ *
+ * @author zrx
+ * @since 2021/7/22 9:28
+ **/
+public class PostgreSqlDriver extends AbstractJdbcDriver {
+
+    @Override
+    public String getDriverClass() {
+        return "org.postgresql.Driver";
+    }
+
+    @Override
+    public IDBQuery getDBQuery() {
+        return new PostgreSqlQuery();
+    }
+
+    @Override
+    public ITypeConvert getTypeConvert() {
+        return new PostgreSqlTypeConvert();
+    }
+
+    @Override
+    public String getType() {
+        return "PostgreSql";
+    }
+
+    @Override
+    public String getName() {
+        return "PostgreSql 数据库";
+    }
+
+    @Override
+    public Map getFlinkColumnTypeConversion() {
+        return new HashMap<>();
+    }
+
+    @Override
+    public String generateCreateSchemaSql(String schemaName) {
+        StringBuilder sb = new StringBuilder();
+        sb.append("CREATE SCHEMA ").append(schemaName);
+        return sb.toString();
+    }
+
+    @Override
+    public String getCreateTableSql(Table table) {
+        StringBuilder sb = new StringBuilder();
+        StringBuilder comments = new StringBuilder();
+
+        sb.append("CREATE TABLE \"").append(table.getSchema()).append("\".\"").append(table.getName())
+                .append("\" (\r\n");
+
+        for (Column column : table.getColumns()) {
+            sb.append("    \"").append(column.getName()).append("\" ");
+            sb.append(column.getType());
+            if (column.getPrecision() > 0 && column.getScale() > 0) {
+                // numeric(precision, scale)
+                sb.append("(")
+                        .append(column.getPrecision())
+                        .append(",").append(column.getScale())
+                        .append(")");
+            } else if (null != column.getLength()) { // character types carry a length
+                sb.append("(").append(column.getLength()).append(")");
+            }
+            if (!column.isNullable()) {
+                sb.append(" NOT NULL");
+            }
+            if (Asserts.isNotNullString(column.getDefaultValue()) && !column.getDefaultValue().contains("nextval")) {
+                sb.append(" DEFAULT ").append(column.getDefaultValue());
+            }
+            sb.append(",\r\n");
+
+            // column comments
+            if (Asserts.isNotNullString(column.getComment())) {
+                comments.append("COMMENT ON COLUMN \"").append(table.getSchema()).append("\".\"")
+                        .append(table.getName()).append("\".\"")
+                        .append(column.getName()).append("\" IS '").append(column.getComment()).append("';\r\n");
+            }
+        }
+        // drop the comma after the last column (it sits before the trailing \r\n)
+        sb.deleteCharAt(sb.length() - 3);
+
+        if (Asserts.isNotNullString(table.getComment())) {
+            comments.append("COMMENT ON TABLE \"").append(table.getSchema()).append("\".\"")
+                    .append(table.getName()).append("\" IS '").append(table.getComment()).append("';");
+        }
+        sb.append(")\r\n;\r\n").append(comments);
+
+        return sb.toString();
+    }
+
+    @Override
+    public StringBuilder genQueryOption(QueryData queryData) {
+
+        String where = queryData.getOption().getWhere();
+        String order = queryData.getOption().getOrder();
+        String limitStart = queryData.getOption().getLimitStart();
+        String limitEnd = queryData.getOption().getLimitEnd();
+
+        StringBuilder optionBuilder = new StringBuilder()
+                .append("select * from ")
+                .append(queryData.getSchemaName())
+                .append(".")
+                .append(queryData.getTableName());
+
+        if (where != null && !"".equals(where)) {
+            optionBuilder.append(" where ").append(where);
+        }
+        if (order != null && !"".equals(order)) {
+            optionBuilder.append(" order by ").append(order);
+        }
+
+        if (TextUtil.isEmpty(limitStart)) {
+            limitStart = "0";
+        }
+        if (TextUtil.isEmpty(limitEnd)) {
+            limitEnd = "100";
+        }
+        // PostgreSQL paging: limitEnd is the page size, limitStart the offset
+        optionBuilder.append(" limit ")
+                .append(limitEnd)
+                .append(" offset ")
+                .append(limitStart);
+
+        return optionBuilder;
+    }
+}
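For reference, with a QueryData describing schema public, table t_user, filter age > 18 and no explicit paging (an assumed example, not taken from this commit), genQueryOption above assembles:

    select * from public.t_user where age > 18 limit 100 offset 0

limitEnd acts as the page size and limitStart as the offset, matching PostgreSQL's LIMIT ... OFFSET ... syntax. The schema, table and filter strings are concatenated verbatim, so callers must pass trusted, pre-validated values.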
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/query/PostgreSqlQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/query/PostgreSqlQuery.java
new file mode 100644
index 0000000..0a55a4d
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/java/net/srt/flink/metadata/postgresql/query/PostgreSqlQuery.java
@@ -0,0 +1,144 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.postgresql.query;
+
+import net.srt.flink.metadata.base.query.AbstractDBQuery;
+
+/**
+ * PostgreSqlQuery
+ *
+ * @author zrx
+ * @since 2021/7/22 9:29
+ **/
+public class PostgreSqlQuery extends AbstractDBQuery {
+
+    @Override
+    public String schemaAllSql() {
+        return "SELECT nspname AS \"schema_name\" FROM pg_namespace WHERE nspname NOT LIKE 'pg_%' AND nspname != 'information_schema' ORDER BY nspname";
+    }
+
+    @Override
+    public String tablesSql(String schemaName) {
+        return "SELECT n.nspname AS schema_name\n" +
+                "     , c.relname AS tablename\n" +
+                "     , obj_description(c.oid) AS comments\n" +
+                "     , c.reltuples as rows\n" +
+                "FROM pg_class c\n" +
+                "         LEFT JOIN pg_namespace n ON n.oid = c.relnamespace\n" +
+                "WHERE ((c.relkind = 'r'::\"char\") OR (c.relkind = 'f'::\"char\") OR (c.relkind = 'p'::\"char\"))\n" +
+                "  AND n.nspname = '" + schemaName + "'\n" +
+                "ORDER BY n.nspname, tablename";
+    }
+
+    @Override
+    public String columnsSql(String schemaName, String tableName) {
+
+        return "SELECT col.column_name as name\n" +
+                "     , col.character_maximum_length as length\n" +
+                // the alias must match the isNullable() key below
+                "     , col.is_nullable as is_nullable\n" +
+                "     , col.numeric_precision as numeric_precision\n" +
+                "     , col.numeric_scale as numeric_scale\n" +
+                "     , col.ordinal_position as ordinal_position\n" +
+                "     , col.udt_name as type\n" +
+                "     , (CASE\n" +
+                "            WHEN (SELECT COUNT(*) FROM pg_constraint AS PC WHERE b.attnum = PC.conkey[1] AND PC.contype = 'p') > 0\n" +
+                "                THEN 'PRI'\n" +
+                "            ELSE '' END) AS key\n" +
+                "     , col_description(c.oid, col.ordinal_position) AS comment\n" +
+                "     , col.column_default AS column_default\n" +
+                "FROM information_schema.columns AS col\n" +
+                "         LEFT JOIN pg_namespace ns ON ns.nspname = col.table_schema\n" +
+                "         LEFT JOIN pg_class c ON col.table_name = c.relname AND c.relnamespace = ns.oid\n" +
+                "         LEFT JOIN pg_attribute b ON b.attrelid = c.oid AND b.attname = col.column_name\n" +
+                "WHERE col.table_schema = '" + schemaName + "'\n" +
+                "  AND col.table_name = '" + tableName + "'\n" +
+                "ORDER BY col.table_schema, col.table_name, col.ordinal_position";
+    }
+
+    @Override
+    public String schemaName() {
+        return "schema_name";
+    }
+
+    @Override
+    public String tableName() {
+        return "tablename";
+    }
+
+    @Override
+    public String tableComment() {
+        return "comments";
+    }
+
+    @Override
+    public String rows() {
+        return "rows";
+    }
+
+    @Override
+    public String columnName() {
+        return "name";
+    }
+
+    @Override
+    public String columnType() {
+        return "type";
+    }
+
+    @Override
+    public String columnLength() {
+        return "length";
+    }
+
+    @Override
+    public String columnComment() {
+        return "comment";
+    }
+
+    @Override
+    public String columnKey() {
+        return "key";
+    }
+
+    @Override
+    public String precision() {
+        return "numeric_precision";
+    }
+
+    @Override
+    public String scale() {
+        return "numeric_scale";
+    }
+
+    @Override
+    public String columnPosition() {
+        return "ordinal_position";
+    }
+
+    @Override
+    public String defaultValue() {
+        return "column_default";
+    }
+
+    @Override
+    public String isNullable() {
+        return "is_nullable";
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver
new file mode 100644
index 0000000..bd4b276
--- /dev/null
+++
b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-postgresql/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver @@ -0,0 +1 @@ +net.srt.flink.metadata.postgresql.driver.PostgreSqlDriver diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/pom.xml new file mode 100644 index 0000000..e109591 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/pom.xml @@ -0,0 +1,33 @@ + + + + flink-metadata + net.srt + 2.0.0 + + 4.0.0 + + flink-metadata-sqlserver + + + + net.srt + flink-metadata-base + ${project.version} + + + junit + junit + provided + + + com.microsoft.sqlserver + mssql-jdbc + + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/constant/SqlServerConstant.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/constant/SqlServerConstant.java new file mode 100644 index 0000000..ee19c5a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/constant/SqlServerConstant.java @@ -0,0 +1,59 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.metadata.sqlserver.constant;
+
+/**
+ * SqlServer constant
+ */
+public interface SqlServerConstant {
+
+    /**
+     * SQL template for adding a column comment
+     */
+    String COMMENT_SQL = " EXECUTE sp_addextendedproperty N'MS_Description', N'%s', N'SCHEMA', N'%s', N'table', N'%s', N'column', N'%s' ";
+
+    /**
+     * SQL template for querying column information
+     */
+    String QUERY_COLUMNS_SQL = " SELECT cast(a.name AS VARCHAR(500)) AS TABLE_NAME,cast(b.name AS VARCHAR(500)) AS COLUMN_NAME, isnull(CAST ( c.VALUE AS NVARCHAR ( 500 ) ),'') AS COMMENTS, " +
+            " CASE b.is_nullable WHEN 1 THEN 'YES' ELSE 'NO' END as NULLVALUE,cast(sys.types.name AS VARCHAR (500)) AS DATA_TYPE," +
+            " ( SELECT CASE count(1) WHEN 1 then 'PRI' ELSE '' END FROM syscolumns,sysobjects,sysindexes,sysindexkeys,systypes WHERE syscolumns.xusertype = systypes.xusertype " +
+            " AND syscolumns.id = object_id (a.name) AND sysobjects.xtype = 'PK' AND sysobjects.parent_obj = syscolumns.id " +
+            " AND sysindexes.id = syscolumns.id AND sysobjects.name = sysindexes.name AND sysindexkeys.id = syscolumns.id AND sysindexkeys.indid = sysindexes.indid " +
+            "AND syscolumns.colid = sysindexkeys.colid " +
+            " AND syscolumns.name = b.name) as 'KEY', b.is_identity isIdentity , '' as CHARACTER_SET_NAME, '' as COLLATION_NAME, " +
+            "0 as ORDINAL_POSITION, b.PRECISION as NUMERIC_PRECISION, b.scale as NUMERIC_SCALE," +
+            "'' as AUTO_INCREMENT " +
+            "FROM ( select name,object_id from sys.tables UNION all select name,object_id from sys.views ) a INNER JOIN sys.columns b " +
+            " ON b.object_id = a.object_id LEFT JOIN sys.types ON b.user_type_id = sys.types.user_type_id LEFT JOIN sys.extended_properties c ON c.major_id = b.object_id " +
+            "AND c.minor_id = b.column_id WHERE a.name = '%s' and sys.types.name !='sysname' ";
+
+    /**
+     * SQL template for querying schemas
+     */
+    String QUERY_SCHEMA_SQL = " SELECT distinct table_schema from INFORMATION_SCHEMA.tables ";
+
+    /**
+     * SQL template for querying table information by schema
+     */
+    String QUERY_TABLE_BY_SCHEMA_SQL =
+            " SELECT table_name ,table_schema, '' as type, '' as CATALOG, '' as ENGINE , '' as OPTIONS ,0 as rows , null as CREATE_TIME, null as UPDATE_TIME,null AS COMMENTS " +
+                    "FROM INFORMATION_SCHEMA.tables WHERE TABLE_SCHEMA = '%s' ";
+}
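The placeholders in COMMENT_SQL are filled via String.format, e.g. (a minimal sketch with made-up names):

    String sql = String.format(SqlServerConstant.COMMENT_SQL,
            "user id", "dbo", "t_user", "id");
    // EXECUTE sp_addextendedproperty N'MS_Description', N'user id',
    //     N'SCHEMA', N'dbo', N'table', N't_user', N'column', N'id'

All four specifiers must be a lowercase %s: an uppercase %S would make String.format upper-case the table and column names, which breaks sp_addextendedproperty on case-sensitive objects.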
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/convert/SqlServerTypeConvert.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/convert/SqlServerTypeConvert.java
new file mode 100644
index 0000000..4fd620a
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/convert/SqlServerTypeConvert.java
@@ -0,0 +1,117 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.sqlserver.convert;
+
+
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.common.model.Column;
+import net.srt.flink.common.model.ColumnType;
+import net.srt.flink.metadata.base.convert.ITypeConvert;
+
+public class SqlServerTypeConvert implements ITypeConvert {
+    @Override
+    public ColumnType convert(Column column) {
+        ColumnType columnType = ColumnType.STRING;
+        if (Asserts.isNull(column)) {
+            return columnType;
+        }
+        String t = column.getType().toLowerCase();
+        boolean isNullable = !column.isKeyFlag() && column.isNullable();
+        if (t.contains("char") || t.contains("varchar") || t.contains("text")
+                || t.contains("nchar") || t.contains("nvarchar") || t.contains("ntext")
+                || t.contains("uniqueidentifier") || t.contains("sql_variant")) {
+            columnType = ColumnType.STRING;
+        } else if (t.contains("bigint")) {
+            if (isNullable) {
+                columnType = ColumnType.JAVA_LANG_LONG;
+            } else {
+                columnType = ColumnType.LONG;
+            }
+        } else if (t.contains("bit")) {
+            if (isNullable) {
+                columnType = ColumnType.JAVA_LANG_BOOLEAN;
+            } else {
+                columnType = ColumnType.BOOLEAN;
+            }
+        } else if (t.contains("int") || t.contains("tinyint") || t.contains("smallint")) {
+            if (isNullable) {
+                columnType = ColumnType.INTEGER;
+            } else {
+                columnType = ColumnType.INT;
+            }
+        } else if (t.contains("float")) {
+            if (isNullable) {
+                columnType = ColumnType.JAVA_LANG_DOUBLE;
+            } else {
+                columnType = ColumnType.DOUBLE;
+            }
+        } else if (t.contains("decimal") || t.contains("money") || t.contains("smallmoney") || t.contains("numeric")) {
+            columnType = ColumnType.DECIMAL;
+        } else if (t.contains("real")) {
+            if (isNullable) {
+                columnType = ColumnType.JAVA_LANG_FLOAT;
+            } else {
+                columnType = ColumnType.FLOAT;
+            }
+        } else if (t.equalsIgnoreCase("datetime") || t.equalsIgnoreCase("smalldatetime")) {
+            columnType = ColumnType.TIMESTAMP;
+        } else if (t.equalsIgnoreCase("datetime2")) {
+            // datetime2 has 100 ns precision; still mapped to TIMESTAMP
+            columnType = ColumnType.TIMESTAMP;
+        } else if (t.equalsIgnoreCase("datetimeoffset")) {
+            // datetimeoffset also carries a time-zone offset; mapped to TIMESTAMP
+            columnType = ColumnType.TIMESTAMP;
+        } else if (t.equalsIgnoreCase("date")) {
+            columnType = ColumnType.LOCALDATE;
+        } else if (t.equalsIgnoreCase("time")) {
+            columnType = ColumnType.LOCALTIME;
+        } else if (t.contains("timestamp") || t.contains("binary") || t.contains("varbinary") || t.contains("image")) {
+            columnType = ColumnType.BYTES;
+        }
+        return columnType;
+    }
+
+    @Override
+    public String convertToDB(ColumnType columnType) {
+        switch (columnType) {
+            case STRING:
+                return "varchar";
+            case BOOLEAN:
+            case JAVA_LANG_BOOLEAN:
+                return "bit";
+            case LONG:
+            case JAVA_LANG_LONG:
+                return "bigint";
+            case INTEGER:
+            case INT:
+                return "int";
+            case DOUBLE:
+            case JAVA_LANG_DOUBLE:
+                // SQL Server has no "double" type; float(53) is double precision
+                return "float";
+            case FLOAT:
+            case JAVA_LANG_FLOAT:
+                // "real" is SQL Server's single-precision type
+                return "real";
+            case TIMESTAMP:
+                // datetime accepts no precision argument; datetime2(0) does
+                return "datetime2(0)";
+            default:
+                return "varchar";
+        }
+    }
+}
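Note that convert and convertToDB are not exact inverses: several SQL Server types collapse onto one ColumnType, and convertToDB then picks a single representative type, e.g. (a minimal sketch):

    SqlServerTypeConvert typeConvert = new SqlServerTypeConvert();
    typeConvert.convertToDB(ColumnType.TIMESTAMP);         // "datetime2(0)" stands in for datetime, smalldatetime, datetime2 and datetimeoffset
    typeConvert.convertToDB(ColumnType.JAVA_LANG_DOUBLE);  // "float" (SQL Server float(53) is double precision)
    typeConvert.convertToDB(ColumnType.FLOAT);             // "real" (the single-precision type)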
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/driver/SqlServerDriver.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/driver/SqlServerDriver.java
new file mode 100644
index 0000000..01b667e
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/driver/SqlServerDriver.java
@@ -0,0 +1,135 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.metadata.sqlserver.driver;
+
+import net.srt.flink.common.model.Column;
+import net.srt.flink.common.model.QueryData;
+import net.srt.flink.common.model.Table;
+import net.srt.flink.metadata.base.convert.ITypeConvert;
+import net.srt.flink.metadata.base.driver.AbstractJdbcDriver;
+import net.srt.flink.metadata.base.query.IDBQuery;
+import net.srt.flink.metadata.sqlserver.constant.SqlServerConstant;
+import net.srt.flink.metadata.sqlserver.convert.SqlServerTypeConvert;
+import net.srt.flink.metadata.sqlserver.query.SqlServerQuery;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class SqlServerDriver extends AbstractJdbcDriver {
+    @Override
+    public IDBQuery getDBQuery() {
+        return new SqlServerQuery();
+    }
+
+    @Override
+    public ITypeConvert getTypeConvert() {
+        return new SqlServerTypeConvert();
+    }
+
+    @Override
+    public String getDriverClass() {
+        return "com.microsoft.sqlserver.jdbc.SQLServerDriver";
+    }
+
+    @Override
+    public String getType() {
+        return "SqlServer";
+    }
+
+    @Override
+    public String getName() {
+        return "SqlServer数据库";
+    }
+
+    /**
+     * Builds the preview query; SQL Server paging (OFFSET ... FETCH) is not implemented yet.
+     */
+    @Override
+    public StringBuilder genQueryOption(QueryData queryData) {
+
+        String where = queryData.getOption().getWhere();
+        String order = queryData.getOption().getOrder();
+
+        StringBuilder optionBuilder = new StringBuilder()
+                .append("select * from ")
+                .append(queryData.getSchemaName())
+                .append(".")
+                .append(queryData.getTableName());
+
+        if (where != null && !where.equals("")) {
+            optionBuilder.append(" where ").append(where);
+        }
+        if (order != null && !order.equals("")) {
+            optionBuilder.append(" order by ").append(order);
+        }
+
+        return optionBuilder;
+    }
+
+    @Override
+    public String getCreateTableSql(Table table) {
+        StringBuilder sb = new StringBuilder();
+        sb.append("CREATE TABLE [" + table.getName() + "] (");
+        List<Column> columns = table.getColumns();
+        for (int i = 0; i < columns.size(); i++) {
+            if (i > 0) {
+                sb.append(",");
+            }
+            // a space is required between the column name and its type;
+            // relies on the ITypeConvert overload that accepts a Column
+            sb.append("[" + columns.get(i).getName() + "] " + getTypeConvert().convertToDB(columns.get(i)));
+            if (columns.get(i).isNullable()) {
+                sb.append(" NULL");
+            } else {
+                sb.append(" NOT NULL");
+            }
+        }
+        List<String> pks = new ArrayList<>();
+        for (int i = 0; i < columns.size(); i++) {
+            if (columns.get(i).isKeyFlag()) {
+                pks.add(columns.get(i).getName());
+            }
+        }
+        if (pks.size() > 0) {
+            sb.append(", PRIMARY KEY ( ");
+            for (int i = 0; i < pks.size(); i++) {
+                if (i > 0) {
+                    sb.append(",");
+                }
+                sb.append("[" + pks.get(i) + "]");
+            }
+            sb.append(" ) ");
+        }
+        sb.append(") GO ");
+        for (Column column : columns) {
+            String comment = column.getComment();
+            if (comment
!= null && !comment.isEmpty()) { + sb.append(String.format(SqlServerConstant.COMMENT_SQL, comment, table.getSchema() == null || table.getSchema().isEmpty() ? "dbo" : table.getSchema(), + table.getName(), column.getName()) + " GO "); + } + } + return sb.toString(); + } + + @Override + public Map getFlinkColumnTypeConversion() { + return new HashMap<>(); + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/query/SqlServerQuery.java b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/query/SqlServerQuery.java new file mode 100644 index 0000000..ae14191 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/java/net/srt/flink/metadata/sqlserver/query/SqlServerQuery.java @@ -0,0 +1,93 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package net.srt.flink.metadata.sqlserver.query; + +import net.srt.flink.metadata.base.query.AbstractDBQuery; +import net.srt.flink.metadata.sqlserver.constant.SqlServerConstant; + +import java.sql.ResultSet; +import java.sql.SQLException; + +public class SqlServerQuery extends AbstractDBQuery { + + @Override + public String schemaAllSql() { + return SqlServerConstant.QUERY_SCHEMA_SQL; + } + + @Override + public String tablesSql(String schemaName) { + return String.format(SqlServerConstant.QUERY_TABLE_BY_SCHEMA_SQL, schemaName); + } + + @Override + public String columnsSql(String schemaName, String tableName) { + return String.format(SqlServerConstant.QUERY_COLUMNS_SQL, tableName); + } + + @Override + public String schemaName() { + return "TABLE_SCHEMA"; + } + + @Override + public String tableName() { + return "TABLE_NAME"; + } + + @Override + public String tableType() { + return "TYPE"; + } + + @Override + public String tableComment() { + return "COMMENTS"; + } + + @Override + public String columnName() { + return "COLUMN_NAME"; + } + + @Override + public String columnType() { + return "DATA_TYPE"; + } + + @Override + public String columnComment() { + return "COMMENTS"; + } + + @Override + public String columnKey() { + return "KEY"; + } + + public boolean isKeyIdentity(ResultSet results) throws SQLException { + return 1 == results.getInt("isIdentity"); + } + + public String isNullable() { + return "NULLVALUE"; + } + +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver new file mode 100644 index 0000000..cfb6ce7 --- /dev/null 
+++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/flink-metadata-sqlserver/src/main/resources/META-INF/services/net.srt.flink.metadata.base.driver.Driver @@ -0,0 +1 @@ +net.srt.flink.metadata.sqlserver.driver.SqlServerDriver diff --git a/srt-cloud-framework/srt-cloud-flink/flink-metadata/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-metadata/pom.xml new file mode 100644 index 0000000..d22f221 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-metadata/pom.xml @@ -0,0 +1,24 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-metadata + pom + + flink-metadata-base + flink-metadata-mysql + flink-metadata-oracle + flink-metadata-postgresql + flink-metadata-sqlserver + flink-metadata-hive + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-process/pom.xml new file mode 100644 index 0000000..b717f3a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-process/pom.xml @@ -0,0 +1,34 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-process + + + + net.srt + flink-common + ${project.version} + + + com.fasterxml.jackson.core + jackson-annotations + + + com.fasterxml.jackson.core + jackson-databind + + + cn.hutool + hutool-all + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/context/ProcessContextHolder.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/context/ProcessContextHolder.java new file mode 100644 index 0000000..c8b8cc7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/context/ProcessContextHolder.java @@ -0,0 +1,84 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.process.context;
+
+import cn.hutool.core.util.StrUtil;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.process.model.ProcessEntity;
+import net.srt.flink.process.pool.ProcessPool;
+
+/**
+ * ProcessContextHolder
+ *
+ * @author zrx
+ * @since 2022/10/16 16:57
+ */
+public class ProcessContextHolder {
+
+    private static final ThreadLocal<ProcessEntity> PROCESS_CONTEXT = new ThreadLocal<>();
+
+    private static final ThreadLocal<ProcessEntity> FLOW_PROCESS_CONTEXT = new ThreadLocal<>();
+
+    public static void setProcess(ProcessEntity process) {
+        PROCESS_CONTEXT.set(process);
+    }
+
+    public static void setFlowProcess(ProcessEntity process) {
+        FLOW_PROCESS_CONTEXT.set(process);
+    }
+
+    public static ProcessEntity getFlowProcess() {
+        if (Asserts.isNull(FLOW_PROCESS_CONTEXT.get())) {
+            return ProcessEntity.NULL_PROCESS;
+        }
+        return FLOW_PROCESS_CONTEXT.get();
+    }
+
+    public static ProcessEntity getProcess() {
+        if (Asserts.isNull(PROCESS_CONTEXT.get())) {
+            return ProcessEntity.NULL_PROCESS;
+        }
+        return PROCESS_CONTEXT.get();
+    }
+
+    public static void clear() {
+        PROCESS_CONTEXT.remove();
+    }
+
+    public static void clearFlow() {
+        FLOW_PROCESS_CONTEXT.remove();
+    }
+
+    public static ProcessEntity registerProcess(ProcessEntity process) {
+        Asserts.checkNull(process, "Process can not be null.");
+        setProcess(process);
+        if (StrUtil.isNotBlank(process.getAccessToken())) {
+            ProcessPool.getInstance().push(process.getAccessToken(), process);
+        }
+        return process;
+    }
+
+    public static ProcessEntity registerFlowProcess(ProcessEntity process) {
+        Asserts.checkNull(process, "Process can not be null.");
+        setFlowProcess(process);
+        return process;
+    }
+
+}
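ProcessContextHolder binds the process to the current thread, so the typical pattern is: register once at the start of a task thread, log through the entity anywhere on that thread, and clear in a finally block. A minimal sketch (userId and accessToken are assumed to be in scope; init, start, info and finish are defined on ProcessEntity below):

    ProcessEntity process = ProcessContextHolder.registerProcess(
            ProcessEntity.init(ProcessType.FLINKEXECUTE, userId, accessToken));
    try {
        process.start();
        process.info("submitting job ...");
        // ... actual work; any code on this thread can reach the same
        // entity via ProcessContextHolder.getProcess() ...
        process.finish("done");
    } finally {
        ProcessContextHolder.clear();   // avoid leaking the ThreadLocal on pooled threads
    }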
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessEntity.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessEntity.java
new file mode 100644
index 0000000..6b12321
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessEntity.java
@@ -0,0 +1,367 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.process.model;
+
+import cn.hutool.core.text.CharSequenceUtil;
+import net.srt.flink.common.assertion.Asserts;
+import net.srt.flink.process.pool.ConsolePool;
+
+import java.time.LocalDateTime;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+/**
+ * Process
+ *
+ * @author zrx
+ * @since 2022/10/16 16:30
+ */
+public class ProcessEntity {
+
+    public static final String SUCCESS_END = "Program runs successfully.";
+    public static final String FAILED_END = "Program failed to run.";
+    public static final String INFO_END = "Program runs end.";
+
+    private Long projectId;
+    private String pid;
+    private String name;
+    private Integer taskId;
+    private ProcessType type;
+    private ProcessStatus status;
+    private LocalDateTime startTime;
+    private LocalDateTime endTime;
+    private long time;
+    private int stepIndex = 0;
+    private List<ProcessStep> steps;
+    private Integer userId;
+    private String accessToken;
+    /**
+     * id of the scheduling-node log record
+     */
+    private Integer nodeRecordId;
+
+    public static final ProcessEntity NULL_PROCESS = new ProcessEntity();
+
+    public ProcessEntity() {
+    }
+
+    public ProcessEntity(String pid, String name, Integer taskId, ProcessType type, Integer userId) {
+        this.pid = pid;
+        this.name = name;
+        this.taskId = taskId;
+        this.type = type;
+        this.userId = userId;
+    }
+
+    public ProcessEntity(String pid, String name, Integer taskId, ProcessType type, Integer userId, String accessToken) {
+        this.pid = pid;
+        this.name = name;
+        this.taskId = taskId;
+        this.type = type;
+        this.userId = userId;
+        this.accessToken = accessToken;
+    }
+
+    public ProcessEntity(String name, Integer taskId, ProcessType type, ProcessStatus status, LocalDateTime startTime,
+                         LocalDateTime endTime, long time,
+                         List<ProcessStep> steps, Integer userId) {
+        this.name = name;
+        this.taskId = taskId;
+        this.type = type;
+        this.status = status;
+        this.startTime = startTime;
+        this.endTime = endTime;
+        this.time = time;
+        this.steps = steps;
+        this.userId = userId;
+    }
+
+    public static ProcessEntity init(ProcessType type, Integer userId) {
+        return init(type.getValue() + "_TEMP", null, type, userId);
+    }
+
+    public static ProcessEntity init(ProcessType type, String accessToken) {
+        return init(type.getValue() + "_TEMP", null, type, null, accessToken);
+    }
+
+    public static ProcessEntity init(Integer taskId, ProcessType type, Integer userId) {
+        return init(type.getValue() + taskId, taskId, type, userId);
+    }
+
+    public static ProcessEntity init(String name, Integer taskId, ProcessType type, Integer userId) {
+        ProcessEntity process = new ProcessEntity(UUID.randomUUID().toString(), name, taskId, type, userId);
+        process.setStatus(ProcessStatus.INITIALIZING);
+        process.setStartTime(LocalDateTime.now());
+        process.setSteps(new ArrayList<>());
+        process.getSteps().add(ProcessStep.init());
+        process.nextStep();
+        return process;
+    }
+
+    public static ProcessEntity init(ProcessType type, Integer userId, String accessToken) {
+        return init(type.getValue() + "_TEMP", null, type, userId, accessToken);
+    }
+
+    public static ProcessEntity init(Integer taskId, ProcessType type, Integer userId, String accessToken) {
+        return init(type.getValue() + taskId, taskId, type, userId, accessToken);
+    }
+
+    public static ProcessEntity init(String name, Integer taskId, ProcessType type, Integer userId, String accessToken) {
+        ProcessEntity process = new ProcessEntity(UUID.randomUUID().toString(), name, taskId, type, userId, accessToken);
+        process.setStatus(ProcessStatus.INITIALIZING);
+        process.setStartTime(LocalDateTime.now());
+        process.setSteps(new ArrayList<>());
+        process.getSteps().add(ProcessStep.init());
+        process.nextStep();
+        return process;
+    }
+
+    public void start() {
+        if (isNullProcess()) {
+            return;
+        }
+        steps.get(stepIndex - 1).setEndTime(LocalDateTime.now());
+        setStatus(ProcessStatus.RUNNING);
+        steps.add(ProcessStep.run());
+        nextStep();
+    }
+
+    public void finish() {
+        if (isNullProcess()) {
+            return;
+        }
+        steps.get(stepIndex - 1).setEndTime(LocalDateTime.now());
+        setStatus(ProcessStatus.FINISHED);
+        setEndTime(LocalDateTime.now());
+        // elapsed time in milliseconds; LocalDateTime.compareTo would only yield -1/0/1
+        setTime(java.time.Duration.between(getStartTime(), getEndTime()).toMillis());
+    }
+
+    public void finish(String str) {
+        if (isNullProcess()) {
+            return;
+        }
+        steps.get(stepIndex - 1).setEndTime(LocalDateTime.now());
+        String message = CharSequenceUtil.format("\n[{}] {} INFO: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), str);
+        steps.get(stepIndex - 1).appendInfo(message);
+        setStatus(ProcessStatus.FINISHED);
+        setEndTime(LocalDateTime.now());
+        // elapsed time in milliseconds; LocalDateTime.compareTo would only yield -1/0/1
+        setTime(java.time.Duration.between(getStartTime(), getEndTime()).toMillis());
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void config(String str) {
+        if (isNullProcess()) {
+            return;
+        }
+        String message = CharSequenceUtil.format("\n[{}] {} CONFIG: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), str);
+        steps.get(stepIndex - 1).appendInfo(message);
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void info(String str) {
+        if (isNullProcess()) {
+            return;
+        }
+        String message = CharSequenceUtil.format("\n[{}] {} INFO: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), str);
+        steps.get(stepIndex - 1).appendInfo(message);
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void infoEnd() {
+        if (isNullProcess()) {
+            return;
+        }
+        String message = CharSequenceUtil.format("\n[{}] {} INFO: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), INFO_END);
+        steps.get(stepIndex - 1).appendInfo(message);
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void infoSuccessfully() {
+        if (isNullProcess()) {
+            return;
+        }
+        String message = CharSequenceUtil.format("\n[{}] {} INFO: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), SUCCESS_END);
+        steps.get(stepIndex - 1).appendInfo(message);
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void infoFailed() {
+        if (isNullProcess()) {
+            return;
+        }
+        String message = CharSequenceUtil.format("\n[{}] {} INFO: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), FAILED_END);
+        steps.get(stepIndex - 1).appendInfo(message);
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void infoSuccess() {
+        if (isNullProcess()) {
+            return;
+        }
+        steps.get(stepIndex - 1).appendInfo("Success.");
+        ConsolePool.write("Success.", accessToken);
+    }
+
+    public void infoFail() {
+        if (isNullProcess()) {
+            return;
+        }
+        steps.get(stepIndex - 1).appendInfo("Fail.");
+        ConsolePool.write("Fail.", accessToken);
+    }
+
+    public void error(String str) {
+        if (isNullProcess()) {
+            return;
+        }
+        String message = CharSequenceUtil.format("\n[{}] {} ERROR: {}", type.getValue(), LocalDateTime.now().toString().replace("T", " "), str);
+        steps.get(stepIndex - 1).appendInfo(message);
+        steps.get(stepIndex - 1).appendError(message);
+        ConsolePool.write(message, accessToken);
+    }
+
+    public void nextStep() {
+        if (isNullProcess()) {
+            return;
+        }
+        stepIndex++;
+    }
+
+    public boolean isNullProcess() {
+        return Asserts.isNullString(pid);
+    }
+
+    public boolean
isActiveProcess() { + return status.isActiveStatus(); + } + + public String getPid() { + return pid; + } + + public void setPid(String pid) { + this.pid = pid; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public Integer getTaskId() { + return taskId; + } + + public void setTaskId(Integer taskId) { + this.taskId = taskId; + } + + public ProcessType getType() { + return type; + } + + public void setType(ProcessType type) { + this.type = type; + } + + public ProcessStatus getStatus() { + return status; + } + + public void setStatus(ProcessStatus status) { + this.status = status; + } + + public LocalDateTime getStartTime() { + return startTime; + } + + public void setStartTime(LocalDateTime startTime) { + this.startTime = startTime; + } + + public LocalDateTime getEndTime() { + return endTime; + } + + public void setEndTime(LocalDateTime endTime) { + this.endTime = endTime; + } + + public long getTime() { + return time; + } + + public void setTime(long time) { + this.time = time; + } + + public Integer getUserId() { + return userId; + } + + public void setUserId(Integer userId) { + this.userId = userId; + } + + public List getSteps() { + return steps; + } + + public void setSteps(List steps) { + this.steps = steps; + } + + public int getStepIndex() { + return stepIndex; + } + + public void setStepIndex(int stepIndex) { + this.stepIndex = stepIndex; + } + + public String getAccessToken() { + return accessToken; + } + + public void setAccessToken(String accessToken) { + this.accessToken = accessToken; + } + + public Long getProjectId() { + return projectId; + } + + public void setProjectId(Long projectId) { + this.projectId = projectId; + } + + public Integer getNodeRecordId() { + return nodeRecordId; + } + + public void setNodeRecordId(Integer nodeRecordId) { + this.nodeRecordId = nodeRecordId; + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessStatus.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessStatus.java new file mode 100644 index 0000000..3b1bfd9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessStatus.java @@ -0,0 +1,75 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package net.srt.flink.process.model; + + +import net.srt.flink.common.assertion.Asserts; + +/** + * ProcessStatus + * + * @author zrx + * @since 2022/10/16 16:33 + */ +public enum ProcessStatus { + + INITIALIZING("INITIALIZING"), + RUNNING("RUNNING"), + FAILED("FAILED"), + CANCELED("CANCELED"), + FINISHED("FINISHED"), + UNKNOWN("UNKNOWN"); + + private String value; + + ProcessStatus(String value) { + this.value = value; + } + + public String getValue() { + return value; + } + + public static ProcessStatus get(String value) { + for (ProcessStatus type : ProcessStatus.values()) { + if (Asserts.isEquals(type.getValue(), value)) { + return type; + } + } + return ProcessStatus.UNKNOWN; + } + + public boolean equalsValue(String type) { + if (Asserts.isEquals(value, type)) { + return true; + } + return false; + } + + public boolean isActiveStatus() { + switch (this) { + case INITIALIZING: + case RUNNING: + return true; + default: + return false; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessStep.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessStep.java new file mode 100644 index 0000000..0fa4585 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessStep.java @@ -0,0 +1,130 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package net.srt.flink.process.model;
+
+import java.time.LocalDateTime;
+
+/**
+ * ProcessStep
+ *
+ * @author zrx
+ * @since 2022/10/16 16:46
+ */
+public class ProcessStep {
+
+    private ProcessStatus stepStatus;
+    private LocalDateTime startTime;
+    private LocalDateTime endTime;
+    private long time;
+    private StringBuilder info = new StringBuilder();
+    private StringBuilder error = new StringBuilder();
+    private boolean isError = false;
+
+    public ProcessStep() {
+    }
+
+    public ProcessStep(ProcessStatus stepStatus, LocalDateTime startTime) {
+        this(stepStatus, startTime, null, 0, new StringBuilder(), new StringBuilder());
+    }
+
+    public ProcessStep(ProcessStatus stepStatus, LocalDateTime startTime, LocalDateTime endTime, long time,
+                       StringBuilder info, StringBuilder error) {
+        this.stepStatus = stepStatus;
+        this.startTime = startTime;
+        this.endTime = endTime;
+        this.time = time;
+        this.info = info;
+        this.error = error;
+    }
+
+    public static ProcessStep init() {
+        return new ProcessStep(ProcessStatus.INITIALIZING, LocalDateTime.now());
+    }
+
+    public static ProcessStep run() {
+        return new ProcessStep(ProcessStatus.RUNNING, LocalDateTime.now());
+    }
+
+    public void appendInfo(String str) {
+        info.append(str);
+    }
+
+    public void appendError(String str) {
+        error.append(str);
+        isError = true;
+    }
+
+    public ProcessStatus getStepStatus() {
+        return stepStatus;
+    }
+
+    public void setStepStatus(ProcessStatus stepStatus) {
+        this.stepStatus = stepStatus;
+    }
+
+    public LocalDateTime getStartTime() {
+        return startTime;
+    }
+
+    public void setStartTime(LocalDateTime startTime) {
+        this.startTime = startTime;
+    }
+
+    public LocalDateTime getEndTime() {
+        return endTime;
+    }
+
+    public void setEndTime(LocalDateTime endTime) {
+        this.endTime = endTime;
+        // elapsed time in milliseconds; compareTo would only yield -1/0/1
+        this.time = java.time.Duration.between(startTime, endTime).toMillis();
+    }
+
+    public long getTime() {
+        return time;
+    }
+
+    public void setTime(long time) {
+        this.time = time;
+    }
+
+    public StringBuilder getInfo() {
+        return info;
+    }
+
+    public void setInfo(StringBuilder info) {
+        this.info = info;
+    }
+
+    public StringBuilder getError() {
+        return error;
+    }
+
+    public void setError(StringBuilder error) {
+        this.error = error;
+    }
+
+    public boolean isError() {
+        return isError;
+    }
+
+    public void setError(boolean error) {
+        isError = error;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessType.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessType.java
new file mode 100644
index 0000000..2a0c517
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/model/ProcessType.java
@@ -0,0 +1,69 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.process.model;
+
+
+import net.srt.flink.common.assertion.Asserts;
+
+/**
+ * ProcessType
+ *
+ * @author zrx
+ * @since 2022/10/16 16:33
+ */
+public enum ProcessType {
+
+    FLINKEXPLAIN("FlinkExplain"),
+    FLINKEXECUTE("FlinkExecute"),
+    FLINKSUBMIT("FlinkSubmit"),
+    SQLEXPLAIN("SQLExplain"),
+    SQLEXECUTE("SQLExecute"),
+    SQLSUBMIT("SQLSubmit"),
+    SAVEPOINT("Savepoint"),
+    FLOWEXECUTE("FlowExecute"),
+    UNKNOWN("Unknown");
+
+    private String value;
+
+    ProcessType(String value) {
+        this.value = value;
+    }
+
+    public String getValue() {
+        return value;
+    }
+
+    public static ProcessType get(String value) {
+        for (ProcessType type : ProcessType.values()) {
+            if (Asserts.isEquals(type.getValue(), value)) {
+                return type;
+            }
+        }
+        return ProcessType.UNKNOWN;
+    }
+
+    public boolean equalsValue(String type) {
+        if (Asserts.isEquals(value, type)) {
+            return true;
+        }
+        return false;
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/pool/ConsolePool.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/pool/ConsolePool.java
new file mode 100644
index 0000000..75ee482
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/pool/ConsolePool.java
@@ -0,0 +1,66 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package net.srt.flink.process.pool;
+
+
+import net.srt.flink.common.pool.AbstractPool;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * ConsolePool
+ *
+ * @author zrx
+ * @since 2022/10/18 22:51
+ */
+public class ConsolePool extends AbstractPool<StringBuilder> {
+
+    private static final Map<String, StringBuilder> consoleEntityMap = new ConcurrentHashMap<>();
+
+    private static final ConsolePool instance = new ConsolePool();
+
+    public static ConsolePool getInstance() {
+        return instance;
+    }
+
+    @Override
+    public Map<String, StringBuilder> getMap() {
+        return consoleEntityMap;
+    }
+
+    @Override
+    public void refresh(StringBuilder entity) {
+
+    }
+
+    public static void write(String str, Integer userId) {
+        String user = String.valueOf(userId);
+        consoleEntityMap.computeIfAbsent(user, k -> new StringBuilder("Console log:")).append(str);
+    }
+
+    public static void write(String str, String accessToken) {
+        consoleEntityMap.computeIfAbsent(accessToken, k -> new StringBuilder("Console log:")).append(str);
+    }
+
+    public static void clear() {
+        consoleEntityMap.clear();
+    }
+}
diff --git a/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/pool/ProcessPool.java b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/pool/ProcessPool.java
new file mode 100644
index 0000000..6846a88
--- /dev/null
+++ b/srt-cloud-framework/srt-cloud-flink/flink-process/src/main/java/net/srt/flink/process/pool/ProcessPool.java
@@ -0,0 +1,58 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ * + */ + +package net.srt.flink.process.pool; + + +import net.srt.flink.common.pool.AbstractPool; +import net.srt.flink.process.model.ProcessEntity; + +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +/** + * ProcessPool + * + * @author zrx + * @since 2022/10/16 17:00 + */ +public class ProcessPool extends AbstractPool { + + private static final Map processEntityMap = new ConcurrentHashMap<>(); + + private static final ProcessPool instance = new ProcessPool(); + + public static ProcessPool getInstance() { + return instance; + } + + @Override + public Map getMap() { + return processEntityMap; + } + + public static void clear() { + processEntityMap.clear(); + } + + @Override + public void refresh(ProcessEntity entity) { + + } +} diff --git a/srt-cloud-framework/srt-cloud-flink/flink-version/flink-1.14/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-version/flink-1.14/pom.xml new file mode 100644 index 0000000..900c7a9 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-version/flink-1.14/pom.xml @@ -0,0 +1,184 @@ + + + + flink-version + net.srt + 2.0.0 + + 4.0.0 + + flink-1.14 + + + 1.14.6 + 2.12 + 2.3.0 + UTF-8 + 14.0 + 1.3.1 + 4.12 + + + + + + org.apache.flink + flink-table-planner_${scala.binary.version} + ${flink.version} + + + org.slf4j + slf4j-api + + + + + org.apache.flink + flink-table-runtime_${scala.binary.version} + ${flink.version} + + + org.slf4j + slf4j-api + + + org.apache.flink + flink-shaded-guava + + + + + org.apache.flink + flink-clients_${scala.binary.version} + ${flink.version} + + + org.slf4j + slf4j-api + + + + + org.apache.flink + flink-streaming-java_${scala.binary.version} + ${flink.version} + + + org.apache.flink + flink-yarn_${scala.binary.version} + ${flink.version} + + + org.apache.hadoop + hadoop-yarn-common + + + org.apache.hadoop + hadoop-common + + + org.apache.hadoop + hadoop-hdfs + + + org.apache.hadoop + hadoop-yarn-client + + + org.apache.hadoop + hadoop-mapreduce-client-core + + + + + org.apache.flink + flink-kubernetes_${scala.binary.version} + ${flink.version} + + + org.apache.flink + flink-python_${scala.binary.version} + ${flink.version} + + + org.apache.flink + flink-connector-kafka_${scala.binary.version} + ${flink.version} + + + org.apache.flink + flink-shaded-guava + 30.1.1-jre-${flink.guava.version} + + + com.ververica + flink-sql-connector-mysql-cdc + ${flinkcdc.version} + + + org.apache.flink + flink-shaded-guava + + + + + com.ververica + flink-sql-connector-sqlserver-cdc + ${flinkcdc.version} + + + com.ververica + flink-sql-connector-oracle-cdc + ${flinkcdc.version} + + + com.ververica + flink-sql-connector-postgres-cdc + ${flinkcdc.version} + + + org.apache.flink + flink-connector-jdbc_${scala.binary.version} + ${flink.version} + + + org.slf4j + slf4j-api + + + commons-cli + commons-cli + ${commons.version} + + + org.apache.doris + flink-doris-connector-1.14_${scala.binary.version} + 1.1.1 + + + com.starrocks + flink-connector-starrocks + 1.2.3_flink-1.14_${scala.binary.version} + + + com.github.jsqlparser + jsqlparser + + + + + org.apache.flink + flink-json + ${flink.version} + + + org.apache.flink + flink-runtime-web_${scala.binary.version} + ${flink.version} + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-version/flink-1.16/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-version/flink-1.16/pom.xml new file mode 100644 index 0000000..97b21b0 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-version/flink-1.16/pom.xml @@ -0,0 +1,157 @@ + + + + flink-version + net.srt 
+ 2.0.0 + + 4.0.0 + + flink-1.16 + + + 1.16.0 + 16.0 + 2.3.0 + 1.3.1 + + + + + org.apache.flink + flink-python + ${flink.version} + + + org.apache.flink + flink-table-planner_2.12 + ${flink.version} + + + org.slf4j + slf4j-api + + + + + org.apache.flink + flink-table-runtime + ${flink.version} + + + org.apache.flink + flink-connector-jdbc + ${flink.version} + + + org.apache.flink + flink-table-api-java-bridge + ${flink.version} + + + org.apache.flink + flink-table-api-scala-bridge_2.12 + ${flink.version} + + + org.apache.flink + flink-core + ${flink.version} + + + org.apache.flink + flink-table-common + ${flink.version} + + + org.apache.flink + flink-table-api-java + ${flink.version} + + + org.apache.flink + flink-clients + ${flink.version} + + + org.slf4j + slf4j-api + + + + + org.apache.flink + flink-yarn + ${flink.version} + + + org.apache.hadoop + hadoop-yarn-common + + + org.apache.hadoop + hadoop-common + + + org.apache.hadoop + hadoop-hdfs + + + org.apache.hadoop + hadoop-yarn-client + + + org.apache.hadoop + hadoop-mapreduce-client-core + + + + + org.apache.flink + flink-kubernetes + ${flink.version} + + + org.apache.flink + flink-connector-kafka + ${flink.version} + + + org.apache.flink + flink-shaded-guava + 30.1.1-jre-${flink.guava.version} + + + com.ververica + flink-sql-connector-mysql-cdc + ${flinkcdc.version} + + + com.ververica + flink-sql-connector-oracle-cdc + ${flinkcdc.version} + + + org.slf4j + slf4j-api + + + commons-cli + commons-cli + ${commons.version} + + + org.apache.doris + flink-doris-connector-1.16 + 1.3.0 + + + org.apache.flink + flink-runtime-web + ${flink.version} + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/flink-version/pom.xml b/srt-cloud-framework/srt-cloud-flink/flink-version/pom.xml new file mode 100644 index 0000000..6d9c915 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/flink-version/pom.xml @@ -0,0 +1,32 @@ + + + + srt-cloud-flink + net.srt + 2.0.0 + + 4.0.0 + + flink-version + pom + + + + + flink-1.14 + + flink-1.14 + + + + + flink-1.16 + + flink-1.16 + + + + + diff --git a/srt-cloud-framework/srt-cloud-flink/pom.xml b/srt-cloud-framework/srt-cloud-flink/pom.xml new file mode 100644 index 0000000..52e594a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-flink/pom.xml @@ -0,0 +1,334 @@ + + + + srt-cloud-framework + net.srt + 2.0.0 + + 4.0.0 + pom + + flink-core-all + flink-common + flink-gateway + flink-version + flink-client + flink-executor + flink-metadata + flink-process + flink-daemon + flink-function + flink-alert + flink-catalog + flink-app + + srt-cloud-flink + + + 2.19.0 + 1.3.2 + 2.5.0 + 1.2.8 + 31.1-jre + 6.2.0.Final + 4.1.2 + 1.6.2 + 3.2.13 + 1.5 + 2.3.0 + + + + + + org.apache.logging.log4j + log4j-core + ${log4j.version} + + + org.apache.logging.log4j + log4j-api + ${log4j.version} + + + org.apache.logging.log4j + log4j-jul + ${log4j.version} + + + org.apache.logging.log4j + log4j-slf4j-impl + ${log4j.version} + + + org.apache.logging.log4j + log4j-to-slf4j + ${log4j.version} + + + javax.annotation + javax.annotation-api + ${javax.annotation-api.version} + + + + + com.google.protobuf + protobuf-java + ${protobuf-java.version} + + + + cn.hutool + hutool-all + ${hutool.version} + + + + com.alibaba + druid-spring-boot-starter + ${druid-starter} + + + org.projectlombok + lombok + ${lombok.version} + + + com.google.guava + guava + ${guava.version} + + + org.apache.commons + commons-lang3 + ${commons-lang3.version} + + + mysql + mysql-connector-java + ${mysql-connector-java.version} + + + com.oracle.ojdbc + ojdbc8 + 
${ojdbc8.version} + + + org.postgresql + postgresql + ${postgresql.version} + + + org.hibernate + hibernate-validator + ${hibernate-validator.version} + + + junit + junit + ${junit.version} + provided + + + net.srt + flink-core-all + ${project.version} + + + net.srt + flink-client-base + ${project.version} + + + net.srt + flink-client-1.14 + ${project.version} + + + net.srt + flink-client-1.16 + ${project.version} + + + net.srt + flink-1.14 + ${project.version} + + + net.srt + flink-1.16 + ${project.version} + + + net.srt + flink-catalog-mysql-1.14 + ${project.version} + + + net.srt + flink-catalog-mysql-1.16 + ${project.version} + + + net.srt + flink-function + ${project.version} + + + net.srt + flink-common + ${project.version} + + + net.srt + flink-metadata-base + ${project.version} + + + net.srt + flink-metadata-mysql + ${project.version} + + + net.srt + flink-metadata-oracle + ${project.version} + + + net.srt + flink-metadata-sqlserver + ${project.version} + + + net.srt + flink-metadata-postgresql + ${project.version} + + + net.srt + flink-metadata-hive + ${project.version} + + + net.srt + flink-gateway + ${project.version} + + + net.srt + flink-executor + ${project.version} + + + net.srt + flink-client-hadoop + ${project.version} + + + net.srt + flink-alert-base + ${project.version} + + + net.srt + flink-alert-dingtalk + ${project.version} + + + net.srt + flink-alert-wechat + ${project.version} + + + net.srt + flink-alert-feishu + ${project.version} + + + net.srt + flink-alert-email + ${project.version} + + + net.srt + flink-daemon + ${project.version} + + + net.srt + flink-app-base + ${project.version} + + + org.apache.httpcomponents + httpclient + ${httpclient.version} + + + org.apache.poi + poi + ${poi.version} + + + org.apache.poi + poi-ooxml + ${poi.version} + + + org.apache.poi + poi-ooxml-schemas + + + + + org.apache.commons + commons-email + ${commons-email} + + + com.sun.mail + javax.mail + ${javax.mail} + + + net.srt + flink-process + ${project.version} + + + javax.xml.bind + jaxb-api + ${jaxb.version} + + + com.sun.xml.bind + jaxb-impl + ${jaxb.version} + + + com.sun.xml.bind + jaxb-core + ${jaxb.version} + + + javax.activation + activation + 1.1.1 + + + com.github.docker-java + docker-java-core + ${docker.java.version} + + + com.github.docker-java + docker-java-transport-httpclient5 + ${docker.java.version} + + + + + diff --git a/srt-cloud-framework/srt-cloud-mybatis/pom.xml b/srt-cloud-framework/srt-cloud-mybatis/pom.xml new file mode 100644 index 0000000..ee02735 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/pom.xml @@ -0,0 +1,34 @@ + + + net.srt + srt-cloud-framework + 2.0.0 + + 4.0.0 + srt-cloud-mybatis + jar + + + + net.srt + srt-cloud-security + 2.0.0 + + + com.alibaba + druid-spring-boot-starter + + + com.baomidou + mybatis-plus-boot-starter + + + mysql + mysql-connector-java + + + com.dameng + DmJdbcDriver18 + + + diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/config/MybatisPlusConfig.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/config/MybatisPlusConfig.java new file mode 100644 index 0000000..09bcfe1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/config/MybatisPlusConfig.java @@ -0,0 +1,39 @@ +package net.srt.framework.mybatis.config; + +import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor; +import com.baomidou.mybatisplus.extension.plugins.inner.BlockAttackInnerInterceptor; +import 
com.baomidou.mybatisplus.extension.plugins.inner.OptimisticLockerInnerInterceptor; +import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor; +import net.srt.framework.mybatis.handler.FieldMetaObjectHandler; +import net.srt.framework.mybatis.interceptor.DataScopeInnerInterceptor; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; + +/** + * mybatis-plus 配置 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class MybatisPlusConfig { + + @Bean + public MybatisPlusInterceptor mybatisPlusInterceptor() { + MybatisPlusInterceptor mybatisPlusInterceptor = new MybatisPlusInterceptor(); + // 数据权限 + mybatisPlusInterceptor.addInnerInterceptor(new DataScopeInnerInterceptor()); + // 分页插件 + mybatisPlusInterceptor.addInnerInterceptor(new PaginationInnerInterceptor()); + // 乐观锁 + mybatisPlusInterceptor.addInnerInterceptor(new OptimisticLockerInnerInterceptor()); + // 防止全表更新与删除 + mybatisPlusInterceptor.addInnerInterceptor(new BlockAttackInnerInterceptor()); + + return mybatisPlusInterceptor; + } + + @Bean + public FieldMetaObjectHandler fieldMetaObjectHandler(){ + return new FieldMetaObjectHandler(); + } +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/constant/DataScopeEnum.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/constant/DataScopeEnum.java new file mode 100644 index 0000000..9cc0d80 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/constant/DataScopeEnum.java @@ -0,0 +1,35 @@ +package net.srt.framework.mybatis.constant; + +import lombok.AllArgsConstructor; +import lombok.Getter; + +/** + * 数据范围枚举 + */ +@Getter +@AllArgsConstructor +public enum DataScopeEnum { + /** + * 全部数据 + */ + ALL(0), + /** + * 本部门及子部门数据 + */ + DEPT_AND_CHILD(1), + /** + * 本部门数据 + */ + DEPT_ONLY(2), + /** + * 本人数据 + */ + SELF(3), + /** + * 自定义数据 + */ + CUSTOM(4); + + private final Integer value; + +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/dao/BaseDao.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/dao/BaseDao.java new file mode 100644 index 0000000..3d95a11 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/dao/BaseDao.java @@ -0,0 +1,13 @@ +package net.srt.framework.mybatis.dao; + + +import com.baomidou.mybatisplus.core.mapper.BaseMapper; + +/** + * 基础Dao + * + * @author 阿沐 babamu@126.com + */ +public interface BaseDao extends BaseMapper { + +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/entity/BaseEntity.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/entity/BaseEntity.java new file mode 100644 index 0000000..291c65e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/entity/BaseEntity.java @@ -0,0 +1,64 @@ +package net.srt.framework.mybatis.entity; + +import com.baomidou.mybatisplus.annotation.*; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; +import lombok.experimental.SuperBuilder; + +import java.util.Date; + +/** + * Entity基类 + * + * @author 阿沐 babamu@126.com + */ +@Data +@SuperBuilder +@AllArgsConstructor +@NoArgsConstructor +public abstract class BaseEntity { + /** + * id + */ + @TableId(value = "id", type = IdType.AUTO) + private Long id; + + /** + * 创建者 + */ + @TableField(fill = 
FieldFill.INSERT) + private Long creator; + + /** + * 创建时间 + */ + @TableField(fill = FieldFill.INSERT) + private Date createTime; + + /** + * 更新者 + */ + @TableField(fill = FieldFill.INSERT_UPDATE) + private Long updater; + + /** + * 更新时间 + */ + @TableField(fill = FieldFill.INSERT_UPDATE) + private Date updateTime; + + /** + * 版本号 + */ + @Version + @TableField(fill = FieldFill.INSERT) + private Integer version; + + /** + * 删除标记 + */ + @TableLogic + @TableField(fill = FieldFill.INSERT) + private Integer deleted; +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/handler/FieldMetaObjectHandler.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/handler/FieldMetaObjectHandler.java new file mode 100644 index 0000000..33a9fef --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/handler/FieldMetaObjectHandler.java @@ -0,0 +1,55 @@ +package net.srt.framework.mybatis.handler; + +import com.baomidou.mybatisplus.core.handlers.MetaObjectHandler; +import net.srt.framework.security.user.SecurityUser; +import net.srt.framework.security.user.UserDetail; +import org.apache.ibatis.reflection.MetaObject; + +import java.util.Date; + +/** + * mybatis-plus 自动填充字段 + * + * @author 阿沐 babamu@126.com + */ +public class FieldMetaObjectHandler implements MetaObjectHandler { + private final static String CREATE_TIME = "createTime"; + private final static String CREATOR = "creator"; + private final static String UPDATE_TIME = "updateTime"; + private final static String UPDATER = "updater"; + private final static String ORG_ID = "orgId"; + private final static String VERSION = "version"; + private final static String DELETED = "deleted"; + + @Override + public void insertFill(MetaObject metaObject) { + UserDetail user = SecurityUser.getUser(); + if (user.getId() != null) { + // 创建者 + setFieldValByName(CREATOR, user.getId(), metaObject); + // 更新者 + setFieldValByName(UPDATER, user.getId(), metaObject); + // 创建者所属机构 + setFieldValByName(ORG_ID, user.getOrgId(), metaObject); + } + // 创建时间 + setFieldValByName(CREATE_TIME, new Date(), metaObject); + // 更新时间 + setFieldValByName(UPDATE_TIME, new Date(), metaObject); + // 版本号 + setFieldValByName(VERSION, 0, metaObject); + // 删除标识 + setFieldValByName(DELETED, 0, metaObject); + } + + @Override + public void updateFill(MetaObject metaObject) { + Long userId = SecurityUser.getUserId(); + if (userId != null) { + // 更新者 + setFieldValByName(UPDATER, userId, metaObject); + } + // 更新时间 + setFieldValByName(UPDATE_TIME, new Date(), metaObject); + } +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/interceptor/DataScope.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/interceptor/DataScope.java new file mode 100644 index 0000000..64193ff --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/interceptor/DataScope.java @@ -0,0 +1,16 @@ +package net.srt.framework.mybatis.interceptor; + +import lombok.AllArgsConstructor; +import lombok.Data; + +/** + * 数据范围 + * + * @author 阿沐 babamu@126.com + */ +@Data +@AllArgsConstructor +public class DataScope { + private String sqlFilter; + +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/interceptor/DataScopeInnerInterceptor.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/interceptor/DataScopeInnerInterceptor.java new file mode 100644 
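With BaseEntity, BaseDao and FieldMetaObjectHandler now all in place, a hedged sketch of how they combine; DemoUser and its mapper are hypothetical and exist only for illustration:

```java
import com.baomidou.mybatisplus.annotation.TableName;
import net.srt.framework.mybatis.dao.BaseDao;
import net.srt.framework.mybatis.entity.BaseEntity;

// Hypothetical table/entity, purely for illustration.
@TableName("demo_user")
class DemoUser extends BaseEntity {
    private String name;
}

interface DemoUserDao extends BaseDao<DemoUser> {
}

// demoUserDao.insert(new DemoUser()) fires FieldMetaObjectHandler.insertFill():
// creator/updater/orgId are copied from the cached login user, createTime and
// updateTime are set to now, version starts at 0 (@Version) and deleted at 0
// (@TableLogic), so callers never set audit columns by hand.
```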
index 0000000..7a5949a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/interceptor/DataScopeInnerInterceptor.java @@ -0,0 +1,81 @@ +package net.srt.framework.mybatis.interceptor; + +import cn.hutool.core.util.StrUtil; +import com.baomidou.mybatisplus.core.toolkit.PluginUtils; +import com.baomidou.mybatisplus.extension.plugins.inner.InnerInterceptor; +import net.sf.jsqlparser.JSQLParserException; +import net.sf.jsqlparser.expression.Expression; +import net.sf.jsqlparser.expression.StringValue; +import net.sf.jsqlparser.expression.operators.conditional.AndExpression; +import net.sf.jsqlparser.parser.CCJSqlParserUtil; +import net.sf.jsqlparser.statement.select.PlainSelect; +import net.sf.jsqlparser.statement.select.Select; +import org.apache.ibatis.executor.Executor; +import org.apache.ibatis.mapping.BoundSql; +import org.apache.ibatis.mapping.MappedStatement; +import org.apache.ibatis.session.ResultHandler; +import org.apache.ibatis.session.RowBounds; + +import java.util.Map; + +/** + * 数据权限 + * + * @author 阿沐 babamu@126.com + */ +public class DataScopeInnerInterceptor implements InnerInterceptor { + + @Override + public void beforeQuery(Executor executor, MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) { + DataScope scope = getDataScope(parameter); + // 不进行数据过滤 + if(scope == null || StrUtil.isBlank(scope.getSqlFilter())){ + return; + } + + // 拼接新SQL + String buildSql = getSelect(boundSql.getSql(), scope); + + // 重写SQL + PluginUtils.mpBoundSql(boundSql).sql(buildSql); + } + + private DataScope getDataScope(Object parameter){ + if (parameter == null){ + return null; + } + + // 判断参数里是否有DataScope对象 + if (parameter instanceof Map) { + Map parameterMap = (Map) parameter; + for (Map.Entry entry : parameterMap.entrySet()) { + if (entry.getValue() != null && entry.getValue() instanceof DataScope) { + return (DataScope) entry.getValue(); + } + } + } else if (parameter instanceof DataScope) { + return (DataScope) parameter; + } + + return null; + } + + private String getSelect(String buildSql, DataScope scope){ + try { + Select select = (Select) CCJSqlParserUtil.parse(buildSql); + PlainSelect plainSelect = (PlainSelect) select.getSelectBody(); + + Expression expression = plainSelect.getWhere(); + if(expression == null){ + plainSelect.setWhere(new StringValue(scope.getSqlFilter())); + }else{ + AndExpression andExpression = new AndExpression(expression, new StringValue(scope.getSqlFilter())); + plainSelect.setWhere(andExpression); + } + + return select.toString().replaceAll("'", ""); + }catch (JSQLParserException e){ + return buildSql; + } + } +} diff --git a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/service/BaseService.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/service/BaseService.java new file mode 100644 index 0000000..46bb80a --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/service/BaseService.java @@ -0,0 +1,17 @@ +package net.srt.framework.mybatis.service; + +import com.baomidou.mybatisplus.extension.service.IService; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; + +/** + * 基础服务接口,所有Service接口都要继承 + * + * @author 阿沐 babamu@126.com + */ +public interface BaseService extends IService { + + + Long getProjectId(); + + DataProjectCacheBean getProject(); +} diff --git 
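DataScopeInnerInterceptor is now complete; a worked example of its rewrite (query and filter values are illustrative):

```java
// What beforeQuery/getSelect do to a mapped statement:
//   original SQL : SELECT id, name FROM sys_user WHERE status = 1
//   scope filter : ( 1=1 AND org_id IN( 1,2 ) )
// JSqlParser AND-s the filter onto the existing WHERE clause, yielding:
//   SELECT id, name FROM sys_user WHERE status = 1 AND ( 1=1 AND org_id IN( 1,2 ) )
// Caveat worth knowing: the final replaceAll("'", "") removes the quotes that
// StringValue wraps around the injected filter, but it strips every other
// single quote in the statement too, so string literals in the original SQL
// would be corrupted by this pass.
```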
a/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/service/impl/BaseServiceImpl.java b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/service/impl/BaseServiceImpl.java new file mode 100644 index 0000000..89a427e --- /dev/null +++ b/srt-cloud-framework/srt-cloud-mybatis/src/main/java/net/srt/framework/mybatis/service/impl/BaseServiceImpl.java @@ -0,0 +1,218 @@ +package net.srt.framework.mybatis.service.impl; + +import cn.hutool.core.util.StrUtil; +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.baomidou.mybatisplus.core.mapper.BaseMapper; +import com.baomidou.mybatisplus.core.metadata.IPage; +import com.baomidou.mybatisplus.core.metadata.OrderItem; +import com.baomidou.mybatisplus.core.toolkit.StringUtils; +import com.baomidou.mybatisplus.extension.plugins.pagination.Page; +import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; +import net.srt.framework.common.constant.Constant; +import net.srt.framework.common.exception.ErrorCode; +import net.srt.framework.common.exception.ServerException; +import net.srt.framework.common.query.Query; +import net.srt.framework.mybatis.constant.DataScopeEnum; +import net.srt.framework.mybatis.interceptor.DataScope; +import net.srt.framework.mybatis.service.BaseService; +import net.srt.framework.security.cache.TokenStoreCache; +import net.srt.framework.security.user.SecurityUser; +import net.srt.framework.security.user.UserDetail; +import net.srt.framework.security.utils.TokenUtils; +import org.springframework.beans.factory.annotation.Autowired; + +import javax.servlet.http.HttpServletRequest; +import java.util.List; + + +/** + * 基础服务类,所有Service都要继承 + * + * @author 阿沐 babamu@126.com + */ +public class BaseServiceImpl, T> extends ServiceImpl implements BaseService { + + @Autowired + private HttpServletRequest request; + @Autowired + private TokenStoreCache storeCache; + + /** + * 获取分页对象 + * + * @param query 分页参数 + */ + protected IPage getPage(Query query) { + Page page = new Page<>(query.getPage(), query.getLimit()); + + // 排序 + if (StringUtils.isNotBlank(query.getOrder())) { + if (query.isAsc()) { + return page.addOrder(OrderItem.asc(query.getOrder())); + } else { + return page.addOrder(OrderItem.desc(query.getOrder())); + } + } + + return page; + } + + /** + * 原生SQL 数据权限 + * + * @param projectTableAlias 表别名,多表关联时,需要填写表别名 + * @param orgTableAlias 表别名,多表关联时,需要填写表别名 + * @param orgIdAlias 机构ID别名,null:表示org_id + * @param orgIdAlias 项目idID别名,null:表示project_id + * @return 返回数据权限 + */ + protected DataScope getDataScope(String projectTableAlias, String orgTableAlias, String orgIdAlias, String projectIdAlias, boolean filterOrgId, boolean filterProjectId) { + UserDetail user = SecurityUser.getUser(); + List projectIds = user.getProjectIds(); + // 如果是超级管理员,则不进行数据过滤 + + // 如果为null,则设置成空字符串 + if (orgTableAlias == null) { + orgTableAlias = ""; + } + + if (projectTableAlias == null) { + projectTableAlias = ""; + } + + // 获取表的别名 + if (StringUtils.isNotBlank(orgTableAlias)) { + orgTableAlias += "."; + } + + // 获取表的别名 + if (StringUtils.isNotBlank(projectTableAlias)) { + projectTableAlias += "."; + } + + StringBuilder sqlFilter = new StringBuilder(); + sqlFilter.append(" ( 1=1 "); + + // 数据权限范围 + List dataScopeList = user.getDataScopeList(); + + if (!user.getSuperAdmin().equals(Constant.SUPER_ADMIN)) { + // 机构数据过滤,如果角色分配了机构的数据权限,则过滤,仅适用于有机构id的表 + if (dataScopeList != null && dataScopeList.size() > 0 && 
filterOrgId) { + sqlFilter.append(" AND "); + if (StringUtils.isBlank(orgIdAlias)) { + orgIdAlias = "org_id"; + } + sqlFilter.append(orgTableAlias).append(orgIdAlias); + sqlFilter.append(" IN( ").append(StrUtil.join(",", dataScopeList)).append(" ) "); + } + } + + Long projectId = getProjectId(); + + if (StringUtils.isBlank(projectIdAlias)) { + projectIdAlias = "project_id"; + } + //查看全局项目的时候不需要过滤 + if (filterProjectId) { + sqlFilter.append(" AND "); + sqlFilter.append(projectTableAlias).append(projectIdAlias).append("=").append(projectId); + } + + //始终需要过滤 + if (projectIds != null && projectIds.size() > 0) { + if (StringUtils.isBlank(projectIdAlias)) { + projectIdAlias = "project_id"; + } + sqlFilter.append(" AND "); + sqlFilter.append(projectTableAlias).append(projectIdAlias); + sqlFilter.append(" IN( ").append(StrUtil.join(",", projectIds)).append(" ) "); + } + + if (!user.getSuperAdmin().equals(Constant.SUPER_ADMIN)) { + if (DataScopeEnum.SELF.getValue().equals(user.getDataScope())) { + sqlFilter.append(" AND "); + // 查询本人的数据 + sqlFilter.append(projectTableAlias).append("creator").append("=").append(user.getId()); + } + + } + + sqlFilter.append(")"); + + return new DataScope(sqlFilter.toString()); + } + + /** + * 获取当前的项目id + * + * @return + */ + @Override + public Long getProjectId() { + Long projectId = storeCache.getProjectId(TokenUtils.getAccessToken(request)); + //项目id过期了,重新登录 + if (projectId == null) { + throw new ServerException(ErrorCode.UNAUTHORIZED); + } + return projectId; + } + + /** + * 获取当前的项目数据库信息 + * + * @return + */ + @Override + public DataProjectCacheBean getProject() { + DataProjectCacheBean dataProjectCacheBean = storeCache.getProject(getProjectId()); + if (dataProjectCacheBean == null) { + throw new ServerException("没有查询到当前的项目信息,请尝试重启服务解决!"); + } + return dataProjectCacheBean; + } + + /** + * 获取当前的项目数据库信息 + * + * @return + */ + protected String getAccessToken() { + return TokenUtils.getAccessToken(request); + } + + /** + * 根据项目id获取 + * + * @return + */ + protected DataProjectCacheBean getProject(Long projectId) { + DataProjectCacheBean dataProjectCacheBean = storeCache.getProject(projectId); + if (dataProjectCacheBean == null) { + throw new ServerException("没有查询到当前的项目信息,请尝试重启服务解决!"); + } + return dataProjectCacheBean; + } + + /** + * MyBatis-Plus 数据权限 + */ + protected void dataScopeWithoutOrgId(LambdaQueryWrapper queryWrapper) { + DataScope dataScope = getDataScope(null, null, null, null, false, true); + if (dataScope != null) { + queryWrapper.apply(dataScope.getSqlFilter()); + } + } + + /** + * MyBatis-Plus 数据权限 + */ + protected void dataScopeWithOrgId(LambdaQueryWrapper queryWrapper) { + DataScope dataScope = getDataScope(null, null, null, null, true, true); + if (dataScope != null) { + queryWrapper.apply(dataScope.getSqlFilter()); + } + } + +} diff --git a/srt-cloud-framework/srt-cloud-security/pom.xml b/srt-cloud-framework/srt-cloud-security/pom.xml new file mode 100644 index 0000000..36485c7 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/pom.xml @@ -0,0 +1,26 @@ + + + net.srt + srt-cloud-framework + 2.0.0 + + 4.0.0 + srt-cloud-security + jar + + + + net.srt + srt-cloud-common + 2.0.0 + + + org.springframework.boot + spring-boot-starter-web + + + org.springframework.boot + spring-boot-starter-security + + + diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/cache/TokenStoreCache.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/cache/TokenStoreCache.java new file mode 
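To make getDataScope above concrete, here is the filter it assembles under one assumed set of inputs: a non-super-admin with dataScopeList = [1, 2] and projectIds = [10, 11], current project 10, filterOrgId and filterProjectId both true, no table aliases, and a dataScope other than SELF:

```java
// The StringBuilder yields, whitespace condensed:
//   ( 1=1 AND org_id IN( 1,2 ) AND project_id=10 AND project_id IN( 10,11 ) )
// dataScopeWithOrgId(...) hands exactly this string to queryWrapper.apply(),
// and DataScopeInnerInterceptor (earlier in this diff) splices it into the SQL.
```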
100644 index 0000000..2f1dbc3 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/cache/TokenStoreCache.java @@ -0,0 +1,83 @@ +package net.srt.framework.security.cache; + +import lombok.AllArgsConstructor; +import net.srt.framework.common.cache.RedisCache; +import net.srt.framework.common.cache.RedisKeys; +import net.srt.framework.common.cache.bean.DataProjectCacheBean; +import net.srt.framework.common.cache.bean.Neo4jInfo; +import net.srt.framework.common.utils.JsonUtils; +import net.srt.framework.security.user.UserDetail; +import org.springframework.stereotype.Component; + +/** + * 认证 Cache + * + * @author 阿沐 babamu@126.com + */ +@Component +@AllArgsConstructor +public class TokenStoreCache { + private final RedisCache redisCache; + + public void saveUser(String accessToken, UserDetail user) { + String key = RedisKeys.getAccessTokenKey(accessToken); + redisCache.set(key, user); + } + + public UserDetail getUser(String accessToken) { + String key = RedisKeys.getAccessTokenKey(accessToken); + return (UserDetail) redisCache.get(key); + } + + public void deleteUser(String accessToken) { + String key = RedisKeys.getAccessTokenKey(accessToken); + redisCache.delete(key); + } + + public void saveProjectId(String accessToken, Long projectId) { + String key = RedisKeys.getProjectIdKey(accessToken); + redisCache.set(key, projectId); + } + + public Long getProjectId(String accessToken) { + String key = RedisKeys.getProjectIdKey(accessToken); + Object projectId = redisCache.get(key); + if (projectId == null) { + return null; + } + return Long.parseLong(projectId.toString()); + } + + public void saveProject(Long projectId, DataProjectCacheBean projectCacheBean) { + String key = RedisKeys.getProjectKey(projectId); + redisCache.set(key, JsonUtils.toJsonString(projectCacheBean), RedisCache.NOT_EXPIRE); + } + + public DataProjectCacheBean getProject(Long projectId) { + String key = RedisKeys.getProjectKey(projectId); + String projectJson = (String) redisCache.get(key); + if (projectJson == null) { + return null; + } + return JsonUtils.parseObject(projectJson, DataProjectCacheBean.class); + } + + public void deleteProject(Long projectId) { + String key = RedisKeys.getProjectKey(projectId); + redisCache.delete(key); + } + + public void saveNeo4jInfo(Long projectId, Neo4jInfo neo4jInfo) { + String key = RedisKeys.getNeo4jKey(projectId); + redisCache.set(key, JsonUtils.toJsonString(neo4jInfo), RedisCache.NOT_EXPIRE); + } + + public Neo4jInfo getNeo4jInfo(Long projectId) { + String key = RedisKeys.getNeo4jKey(projectId); + String projectJson = (String) redisCache.get(key); + if (projectJson == null) { + return null; + } + return JsonUtils.parseObject(projectJson, Neo4jInfo.class); + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/PasswordConfig.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/PasswordConfig.java new file mode 100644 index 0000000..ed3ac09 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/PasswordConfig.java @@ -0,0 +1,20 @@ +package net.srt.framework.security.config; + +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.security.crypto.factory.PasswordEncoderFactories; +import org.springframework.security.crypto.password.PasswordEncoder; + +/** + * 加密配置 + * + * @author 阿沐 babamu@126.com + */ 
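PasswordConfig, continuing below, registers Spring Security's delegating encoder; a quick illustration of what that bean produces (hash value abbreviated):

```java
import org.springframework.security.crypto.factory.PasswordEncoderFactories;
import org.springframework.security.crypto.password.PasswordEncoder;

PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
String hash = encoder.encode("123456");        // -> "{bcrypt}$2a$10$..." (bcrypt is the default)
boolean ok = encoder.matches("123456", hash);  // true; algorithm chosen via the "{bcrypt}" prefix
```

The prefix means stored hashes stay verifiable even if the default algorithm changes in a later Spring Security release.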
+@Configuration +public class PasswordConfig { + + @Bean + public PasswordEncoder passwordEncoder(){ + return PasswordEncoderFactories.createDelegatingPasswordEncoder(); + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/PermitResource.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/PermitResource.java new file mode 100644 index 0000000..c5f7a8f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/PermitResource.java @@ -0,0 +1,59 @@ +package net.srt.framework.security.config; + +import lombok.SneakyThrows; +import org.apache.commons.lang3.StringUtils; +import org.springframework.beans.factory.config.YamlPropertiesFactoryBean; +import org.springframework.core.io.Resource; +import org.springframework.core.io.support.PathMatchingResourcePatternResolver; +import org.springframework.core.io.support.ResourcePatternResolver; +import org.springframework.stereotype.Component; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +/** + * 允许访问的资源 + * + * @author 阿沐 babamu@126.com + */ +@Component +public class PermitResource { + /** + * 指定被 spring security 忽略的URL + */ + @SneakyThrows + public List getPermitList(){ + ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(); + Resource[] resources = resolver.getResources("classpath*:auth.yml"); + String key = "auth.ignore_urls"; + + return getPropertiesList(key, resources); + } + + private List getPropertiesList(String key, Resource... resources){ + List list = new ArrayList<>(); + + // 解析资源文件 + for(Resource resource : resources) { + Properties properties = loadYamlProperties(resource); + + for (Map.Entry entry : properties.entrySet()) { + String tmpKey = StringUtils.substringBefore(entry.getKey().toString(), "["); + if(tmpKey.equalsIgnoreCase(key)){ + list.add(entry.getValue().toString()); + } + } + } + return list; + } + + private Properties loadYamlProperties(Resource... 
resources) { + YamlPropertiesFactoryBean factory = new YamlPropertiesFactoryBean(); + factory.setResources(resources); + factory.afterPropertiesSet(); + + return factory.getObject(); + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/SecurityFilterConfig.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/SecurityFilterConfig.java new file mode 100644 index 0000000..329a8c1 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/config/SecurityFilterConfig.java @@ -0,0 +1,51 @@ +package net.srt.framework.security.config; + +import lombok.AllArgsConstructor; +import net.srt.framework.security.exception.SecurityAuthenticationEntryPoint; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.http.HttpMethod; +import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity; +import org.springframework.security.config.annotation.web.builders.HttpSecurity; +import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; +import org.springframework.security.config.http.SessionCreationPolicy; +import org.springframework.security.web.SecurityFilterChain; +import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter; +import org.springframework.web.filter.OncePerRequestFilter; + +import java.util.List; + +/** + * Spring SecurityFilter 配置文件 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +@AllArgsConstructor +@EnableWebSecurity +@EnableGlobalMethodSecurity(prePostEnabled = true) +public class SecurityFilterConfig { + private final OncePerRequestFilter authenticationTokenFilter; + private final PermitResource permitResource; + + @Bean + SecurityFilterChain filterChain(HttpSecurity http) throws Exception { + // 忽略授权的地址列表 + List permitList = permitResource.getPermitList(); + String[] permits = permitList.toArray(new String[0]); + + http + .addFilterBefore(authenticationTokenFilter, UsernamePasswordAuthenticationFilter.class) + .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS) + .and().authorizeRequests() + .antMatchers(permits).permitAll() + .antMatchers(HttpMethod.OPTIONS).permitAll() + .anyRequest().authenticated() + .and().exceptionHandling().authenticationEntryPoint(new SecurityAuthenticationEntryPoint()) + .and().headers().frameOptions().disable() + .and().csrf().disable() + ; + + return http.build(); + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/exception/SecurityAuthenticationEntryPoint.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/exception/SecurityAuthenticationEntryPoint.java new file mode 100644 index 0000000..c324810 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/exception/SecurityAuthenticationEntryPoint.java @@ -0,0 +1,29 @@ +package net.srt.framework.security.exception; + +import net.srt.framework.common.exception.ErrorCode; +import net.srt.framework.common.utils.HttpContextUtils; +import net.srt.framework.common.utils.JsonUtils; +import net.srt.framework.common.utils.Result; +import org.springframework.security.core.AuthenticationException; +import org.springframework.security.web.AuthenticationEntryPoint; + +import javax.servlet.http.HttpServletRequest; +import 
javax.servlet.http.HttpServletResponse; +import java.io.IOException; + +/** + * 匿名用户(token不存在、错误),异常处理器 + * + * @author 阿沐 babamu@126.com + */ +public class SecurityAuthenticationEntryPoint implements AuthenticationEntryPoint { + + @Override + public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException { + response.setContentType("application/json; charset=utf-8"); + response.setHeader("Access-Control-Allow-Credentials", "true"); + response.setHeader("Access-Control-Allow-Origin", HttpContextUtils.getOrigin()); + + response.getWriter().print(JsonUtils.toJsonString(Result.error(ErrorCode.UNAUTHORIZED))); + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/filter/AuthenticationTokenFilter.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/filter/AuthenticationTokenFilter.java new file mode 100644 index 0000000..4f2f3ba --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/filter/AuthenticationTokenFilter.java @@ -0,0 +1,58 @@ +package net.srt.framework.security.filter; + +import lombok.AllArgsConstructor; +import net.srt.framework.common.cache.RedisCache; +import net.srt.framework.security.cache.TokenStoreCache; +import net.srt.framework.security.user.UserDetail; +import net.srt.framework.security.utils.TokenUtils; +import org.apache.commons.lang3.StringUtils; +import org.springframework.security.authentication.UsernamePasswordAuthenticationToken; +import org.springframework.security.core.Authentication; +import org.springframework.security.core.context.SecurityContext; +import org.springframework.security.core.context.SecurityContextHolder; +import org.springframework.stereotype.Component; +import org.springframework.web.filter.OncePerRequestFilter; + +import javax.servlet.FilterChain; +import javax.servlet.ServletException; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; +import java.io.IOException; + +/** + * 认证过滤器 + * + * @author 阿沐 babamu@126.com + */ +@Component +@AllArgsConstructor +public class AuthenticationTokenFilter extends OncePerRequestFilter { + private final TokenStoreCache tokenStoreCache; + + @Override + protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain) throws ServletException, IOException { + String accessToken = TokenUtils.getAccessToken(request); + // accessToken为空,表示未登录 + if (StringUtils.isBlank(accessToken)) { + chain.doFilter(request, response); + return; + } + + // 获取登录用户信息 + UserDetail user = tokenStoreCache.getUser(accessToken); + if (user == null) { + chain.doFilter(request, response); + return; + } + + // 用户存在 + Authentication authentication = new UsernamePasswordAuthenticationToken(user, null, user.getAuthorities()); + + // 新建 SecurityContext + SecurityContext context = SecurityContextHolder.createEmptyContext(); + context.setAuthentication(authentication); + SecurityContextHolder.setContext(context); + + chain.doFilter(request, response); + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileAuthenticationProvider.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileAuthenticationProvider.java new file mode 100644 index 0000000..761b317 --- /dev/null +++ 
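AuthenticationTokenFilter above resolves the caller from Redis by the raw token, so a client simply sends the token verbatim. A hypothetical call (endpoint, port and token are placeholders; assumes a method that propagates IOException/InterruptedException):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8082/sys/user/info"))  // hypothetical route via the gateway
        .header("Authorization", accessToken)                    // or append ?access_token=...
        .build();
HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
```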
b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileAuthenticationProvider.java @@ -0,0 +1,87 @@ +package net.srt.framework.security.mobile; + +import org.springframework.beans.factory.InitializingBean; +import org.springframework.context.MessageSource; +import org.springframework.context.MessageSourceAware; +import org.springframework.context.support.MessageSourceAccessor; +import org.springframework.security.authentication.AuthenticationProvider; +import org.springframework.security.authentication.BadCredentialsException; +import org.springframework.security.core.Authentication; +import org.springframework.security.core.AuthenticationException; +import org.springframework.security.core.SpringSecurityMessageSource; +import org.springframework.security.core.authority.mapping.GrantedAuthoritiesMapper; +import org.springframework.security.core.authority.mapping.NullAuthoritiesMapper; +import org.springframework.security.core.userdetails.UserDetails; +import org.springframework.security.core.userdetails.UsernameNotFoundException; +import org.springframework.util.Assert; + +/** + * 手机短信登录 AuthenticationProvider + * + * @author 阿沐 babamu@126.com + */ +public class MobileAuthenticationProvider implements AuthenticationProvider, InitializingBean, MessageSourceAware { + protected MessageSourceAccessor messages = SpringSecurityMessageSource.getAccessor(); + private final GrantedAuthoritiesMapper authoritiesMapper = new NullAuthoritiesMapper(); + private final MobileUserDetailsService mobileUserDetailsService; + private final MobileVerifyCodeService mobileVerifyCodeService; + + public MobileAuthenticationProvider(MobileUserDetailsService mobileUserDetailsService, MobileVerifyCodeService mobileVerifyCodeService) { + this.mobileUserDetailsService = mobileUserDetailsService; + this.mobileVerifyCodeService = mobileVerifyCodeService; + } + + @Override + public Authentication authenticate(Authentication authentication) throws AuthenticationException { + Assert.isInstanceOf(MobileAuthenticationToken.class, authentication, + () -> messages.getMessage( + "MobileAuthenticationProvider.onlySupports", + "Only MobileAuthenticationProvider is supported")); + + MobileAuthenticationToken authenticationToken = (MobileAuthenticationToken) authentication; + String mobile = authenticationToken.getName(); + String code = (String) authenticationToken.getCredentials(); + + try { + UserDetails userDetails = mobileUserDetailsService.loadUserByMobile(mobile); + if (userDetails == null) { + throw new BadCredentialsException("Bad credentials"); + } + + // 短信验证码效验 + if (mobileVerifyCodeService.verifyCode(mobile, code)) { + return createSuccessAuthentication(authentication, userDetails); + } else { + throw new BadCredentialsException("mobile code is not matched"); + } + } catch (UsernameNotFoundException ex) { + throw new BadCredentialsException(this.messages + .getMessage("MobileAuthenticationProvider.badCredentials", "Bad credentials")); + } + + } + + protected Authentication createSuccessAuthentication(Authentication authentication, UserDetails user) { + MobileAuthenticationToken result = new MobileAuthenticationToken(user, null, + authoritiesMapper.mapAuthorities(user.getAuthorities())); + result.setDetails(authentication.getDetails()); + return result; + } + + @Override + public boolean supports(Class authentication) { + return MobileAuthenticationToken.class.isAssignableFrom(authentication); + } + + @Override + public void afterPropertiesSet() throws Exception { + 
Assert.notNull(mobileUserDetailsService, "mobileUserDetailsService must not be null"); + Assert.notNull(mobileVerifyCodeService, "mobileVerifyCodeService must not be null"); + } + + @Override + public void setMessageSource(MessageSource messageSource) { + this.messages = new MessageSourceAccessor(messageSource); + } + +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileAuthenticationToken.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileAuthenticationToken.java new file mode 100644 index 0000000..2fc91d4 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileAuthenticationToken.java @@ -0,0 +1,56 @@ +package net.srt.framework.security.mobile; + +import org.springframework.security.authentication.AbstractAuthenticationToken; +import org.springframework.security.core.GrantedAuthority; +import org.springframework.security.core.SpringSecurityCoreVersion; +import org.springframework.util.Assert; + +import java.util.Collection; + +/** + * 手机短信登录 AuthenticationToken + * + * @author 阿沐 babamu@126.com + */ +public class MobileAuthenticationToken extends AbstractAuthenticationToken { + private static final long serialVersionUID = SpringSecurityCoreVersion.SERIAL_VERSION_UID; + private final Object principal; + private String code; + + public MobileAuthenticationToken(Object principal, String code) { + super(null); + this.principal = principal; + this.code = code; + setAuthenticated(false); + } + + public MobileAuthenticationToken(Object principal, String code, Collection authorities) { + super(authorities); + this.principal = principal; + this.code = code; + super.setAuthenticated(true); + } + + @Override + public void setAuthenticated(boolean isAuthenticated) throws IllegalArgumentException { + Assert.isTrue(!isAuthenticated, + "Cannot set this token to trusted - use constructor which takes a GrantedAuthority list instead"); + super.setAuthenticated(false); + } + + @Override + public Object getCredentials() { + return this.code; + } + + @Override + public Object getPrincipal() { + return this.principal; + } + + @Override + public void eraseCredentials() { + super.eraseCredentials(); + this.code = null; + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileUserDetailsService.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileUserDetailsService.java new file mode 100644 index 0000000..a63e044 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileUserDetailsService.java @@ -0,0 +1,21 @@ +package net.srt.framework.security.mobile; + +import org.springframework.security.core.userdetails.UserDetails; +import org.springframework.security.core.userdetails.UsernameNotFoundException; + +/** + * 手机短信登录,UserDetailsService + * + * @author 阿沐 babamu@126.com + */ +public interface MobileUserDetailsService { + + /** + * 通过手机号加载用户信息 + * + * @param mobile 手机号 + * @return 用户信息 + * @throws UsernameNotFoundException 不存在异常 + */ + UserDetails loadUserByMobile(String mobile) throws UsernameNotFoundException; +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileVerifyCodeService.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileVerifyCodeService.java new file mode 100644 index 
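Before MobileVerifyCodeService continues below, a hedged wiring sketch of the mobile-login pieces; both lambdas stand in for real beans, and loadFromDb is hypothetical:

```java
import org.springframework.security.core.Authentication;

MobileUserDetailsService userDetailsService = mobile -> loadFromDb(mobile);        // hypothetical lookup
MobileVerifyCodeService verifyCodeService = (mobile, code) -> "6626".equals(code); // demo-only check

MobileAuthenticationProvider provider =
        new MobileAuthenticationProvider(userDetailsService, verifyCodeService);

// Unauthenticated token in, authenticated token (with mapped authorities) out;
// a wrong code surfaces as BadCredentialsException instead.
Authentication result = provider.authenticate(
        new MobileAuthenticationToken("13800000000", "6626"));
```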
0000000..56c7238 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/mobile/MobileVerifyCodeService.java @@ -0,0 +1,11 @@ +package net.srt.framework.security.mobile; + +/** + * 手机短信登录,验证码效验 + * + * @author 阿沐 babamu@126.com + */ +public interface MobileVerifyCodeService { + + boolean verifyCode(String mobile, String code); +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/user/SecurityUser.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/user/SecurityUser.java new file mode 100644 index 0000000..96ea716 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/user/SecurityUser.java @@ -0,0 +1,37 @@ +package net.srt.framework.security.user; + +import org.springframework.security.core.Authentication; +import org.springframework.security.core.context.SecurityContextHolder; + +/** + * 用户 + * + * @author 阿沐 babamu@126.com + */ +public class SecurityUser { + + /** + * 获取用户信息 + */ + public static UserDetail getUser() { + UserDetail user; + try { + //不在同一个线程执行会获取不到,如远程调用就不行 + Authentication authentication = SecurityContextHolder.getContext().getAuthentication(); + Object principal = authentication.getPrincipal(); + user = (UserDetail) principal; + } catch (Exception e) { + return new UserDetail(); + } + + return user; + } + + /** + * 获取用户ID + */ + public static Long getUserId() { + return getUser().getId(); + } + +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/user/UserDetail.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/user/UserDetail.java new file mode 100644 index 0000000..9575047 --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/user/UserDetail.java @@ -0,0 +1,97 @@ +package net.srt.framework.security.user; + +import com.fasterxml.jackson.annotation.JsonIgnore; +import lombok.Data; +import org.springframework.security.core.GrantedAuthority; +import org.springframework.security.core.authority.SimpleGrantedAuthority; +import org.springframework.security.core.userdetails.UserDetails; + +import java.util.Collection; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * 登录用户信息 + * + * @author 阿沐 babamu@126.com + */ +@Data +public class UserDetail implements UserDetails { + private static final long serialVersionUID = 1L; + + private Long id; + private String username; + private String password; + private String realName; + private String avatar; + private Integer gender; + private String email; + private String mobile; + private Long orgId; + private Integer status; + private Integer superAdmin; + + /** + * 数据权限范围 + *
+ * null:表示全部数据权限 + */ + private List dataScopeList; + + /** + * 数据权限 + */ + private Integer dataScope; + /** + * 帐户是否过期 + */ + private boolean isAccountNonExpired = true; + /** + * 帐户是否被锁定 + */ + private boolean isAccountNonLocked = true; + /** + * 密码是否过期 + */ + private boolean isCredentialsNonExpired = true; + /** + * 帐户是否可用 + */ + private boolean isEnabled = true; + /** + * 拥有权限集合 + */ + private Set authoritySet; + + /** + * 拥有的项目id列表 + */ + private List projectIds; + + @Override + @JsonIgnore + public Collection getAuthorities() { + return authoritySet.stream().map(SimpleGrantedAuthority::new).collect(Collectors.toSet()); + } + + @Override + public boolean isAccountNonExpired() { + return this.isAccountNonExpired; + } + + @Override + public boolean isAccountNonLocked() { + return this.isAccountNonLocked; + } + + @Override + public boolean isCredentialsNonExpired() { + return this.isCredentialsNonExpired; + } + + @Override + public boolean isEnabled() { + return this.isEnabled; + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/utils/TokenUtils.java b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/utils/TokenUtils.java new file mode 100644 index 0000000..18ae9fd --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/java/net/srt/framework/security/utils/TokenUtils.java @@ -0,0 +1,34 @@ +package net.srt.framework.security.utils; + +import cn.hutool.core.lang.UUID; +import org.apache.commons.lang3.StringUtils; +import org.springframework.web.context.request.RequestContextHolder; +import org.springframework.web.context.request.ServletRequestAttributes; + +import javax.servlet.http.HttpServletRequest; + +/** + * Token 工具类 + * + * @author 阿沐 babamu@126.com + */ +public class TokenUtils { + + /** + * 生成 AccessToken + */ + public static String generator() { + return UUID.fastUUID().toString(true); + } + + /** + * 获取 AccessToken + */ + public static String getAccessToken(HttpServletRequest request) { + String accessToken = request.getHeader("Authorization"); + if (StringUtils.isBlank(accessToken)) { + accessToken = request.getParameter("access_token"); + } + return accessToken; + } +} diff --git a/srt-cloud-framework/srt-cloud-security/src/main/resources/auth.yml b/srt-cloud-framework/srt-cloud-security/src/main/resources/auth.yml new file mode 100644 index 0000000..bb6867f --- /dev/null +++ b/srt-cloud-framework/srt-cloud-security/src/main/resources/auth.yml @@ -0,0 +1,10 @@ +auth: + ignore_urls: + - /actuator/** + - /v3/api-docs/** + - /webjars/** + - /swagger/** + - /swagger-resources/** + - /swagger-ui.html + - /swagger-ui/** + - /doc.html \ No newline at end of file diff --git a/srt-cloud-gateway/pom.xml b/srt-cloud-gateway/pom.xml new file mode 100644 index 0000000..ca668eb --- /dev/null +++ b/srt-cloud-gateway/pom.xml @@ -0,0 +1,198 @@ + + + net.srt + srt-cloud + 2.0.0 + + 4.0.0 + srt-cloud-gateway + jar + + + + org.springframework.cloud + spring-cloud-starter-bootstrap + + + org.springframework.boot + spring-boot-configuration-processor + true + + + org.springframework.cloud + spring-cloud-starter-gateway + + + org.springframework.boot + spring-boot-starter-webflux + + + spring-boot-starter-logging + org.springframework.boot + + + + + + org.springframework.boot + spring-boot-starter-log4j2 + + + org.springframework.cloud + spring-cloud-starter-loadbalancer + + + com.github.ben-manes.caffeine + caffeine + + + org.springframework.boot + spring-boot-starter-actuator + + + com.alibaba.cloud + 
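Tying TokenUtils and auth.yml together: tokens are opaque 32-character UUIDs that double as Redis keys, and only the auth.ignore_urls paths bypass authentication. A lifecycle sketch (bean names assumed):

```java
// Login: issue a token and cache the user under it (TokenStoreCache, earlier in this diff).
String accessToken = TokenUtils.generator();             // hutool fastUUID with dashes stripped
tokenStoreCache.saveUser(accessToken, userDetail);

// Every later request: AuthenticationTokenFilter re-resolves the user.
String incoming = TokenUtils.getAccessToken(request);    // "Authorization" header or ?access_token=
UserDetail user = tokenStoreCache.getUser(incoming);     // null -> request proceeds as anonymous
```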
spring-cloud-starter-alibaba-nacos-discovery + + + org.springdoc + springdoc-openapi-webflux-ui + + + + + + + + + + org.codehaus.mojo + appassembler-maven-plugin + 2.1.0 + + + + + generate-jsw-scripts + package + + generate-daemons + + + + + + + flat + + src/main/resources + true + + true + + conf + + lib + + bin + UTF-8 + logs + + + + ${project.artifactId} + net.srt.GatewayApplication + + jsw + + + + jsw + + linux-x86-32 + linux-x86-64 + windows-x86-32 + windows-x86-64 + + + + configuration.directory.in.classpath.first + conf + + + wrapper.ping.timeout + 120 + + + set.default.REPO_DIR + lib + + + wrapper.logfile + logs/wrapper.log + + + + + + + + + -server + -Dfile.encoding=utf-8 + -Xms128m + -Xmx2048m + -XX:+PrintGCDetails + -XX:+PrintGCDateStamps + -Xloggc:logs/gc.log + + + + + + + net.srt.GatewayApplication + ${project.artifactId} + + + + + + + + maven-assembly-plugin + + + ${project.parent.basedir}/assembly/assembly-win.xml + ${project.parent.basedir}/assembly/assembly-linux.xml + + + + + make-assembly + package + + single + + + + + + + org.apache.maven.plugins + maven-surefire-plugin + + true + + + + + + diff --git a/srt-cloud-gateway/src/main/java/net/srt/GatewayApplication.java b/srt-cloud-gateway/src/main/java/net/srt/GatewayApplication.java new file mode 100644 index 0000000..88f2e9a --- /dev/null +++ b/srt-cloud-gateway/src/main/java/net/srt/GatewayApplication.java @@ -0,0 +1,20 @@ +package net.srt; + +import org.springframework.boot.SpringApplication; +import org.springframework.boot.autoconfigure.SpringBootApplication; +import org.springframework.cloud.client.discovery.EnableDiscoveryClient; + +/** + * 网关服务 + * + * @author 阿沐 babamu@126.com + */ +@SpringBootApplication +@EnableDiscoveryClient +public class GatewayApplication { + + public static void main(String[] args) { + SpringApplication.run(GatewayApplication.class, args); + } + +} diff --git a/srt-cloud-gateway/src/main/java/net/srt/config/CorsConfig.java b/srt-cloud-gateway/src/main/java/net/srt/config/CorsConfig.java new file mode 100644 index 0000000..4cb04f5 --- /dev/null +++ b/srt-cloud-gateway/src/main/java/net/srt/config/CorsConfig.java @@ -0,0 +1,53 @@ +package net.srt.config; + +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.http.HttpHeaders; +import org.springframework.http.HttpMethod; +import org.springframework.http.HttpStatus; +import org.springframework.http.server.reactive.ServerHttpRequest; +import org.springframework.http.server.reactive.ServerHttpResponse; +import org.springframework.web.cors.reactive.CorsUtils; +import org.springframework.web.server.ServerWebExchange; +import org.springframework.web.server.WebFilter; +import org.springframework.web.server.WebFilterChain; +import reactor.core.publisher.Mono; + +/** + * Cors跨域 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class CorsConfig { + private static final String MAX_AGE = "18000L"; + + @Bean + public WebFilter corsFilter() { + return (ServerWebExchange ctx, WebFilterChain chain) -> { + ServerHttpRequest request = ctx.getRequest(); + if (!CorsUtils.isCorsRequest(request)) { + return chain.filter(ctx); + } + HttpHeaders requestHeaders = request.getHeaders(); + ServerHttpResponse response = ctx.getResponse(); + HttpMethod requestMethod = requestHeaders.getAccessControlRequestMethod(); + HttpHeaders headers = response.getHeaders(); + headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_ORIGIN, requestHeaders.getOrigin()); + 
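// Why the Origin is echoed instead of "*": this filter also sends
// Access-Control-Allow-Credentials: true below, and browsers reject a
// wildcard Access-Control-Allow-Origin when credentials are allowed.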
headers.addAll(HttpHeaders.ACCESS_CONTROL_ALLOW_HEADERS, requestHeaders.getAccessControlRequestHeaders()); + if (requestMethod != null) { + headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_METHODS, requestMethod.name()); + } + headers.add(HttpHeaders.ACCESS_CONTROL_ALLOW_CREDENTIALS, "true"); + headers.add(HttpHeaders.ACCESS_CONTROL_EXPOSE_HEADERS, "*"); + headers.add(HttpHeaders.ACCESS_CONTROL_MAX_AGE, MAX_AGE); + if (request.getMethod() == HttpMethod.OPTIONS) { + response.setStatusCode(HttpStatus.OK); + return Mono.empty(); + } + return chain.filter(ctx); + }; + } + +} + diff --git a/srt-cloud-gateway/src/main/java/net/srt/config/SpringDocConfig.java b/srt-cloud-gateway/src/main/java/net/srt/config/SpringDocConfig.java new file mode 100644 index 0000000..17dc394 --- /dev/null +++ b/srt-cloud-gateway/src/main/java/net/srt/config/SpringDocConfig.java @@ -0,0 +1,59 @@ +package net.srt.config; + +import io.swagger.v3.oas.models.OpenAPI; +import io.swagger.v3.oas.models.info.Contact; +import io.swagger.v3.oas.models.info.Info; +import io.swagger.v3.oas.models.info.License; +import org.springdoc.core.GroupedOpenApi; +import org.springdoc.core.SwaggerUiConfigParameters; +import org.springframework.cloud.gateway.route.RouteDefinition; +import org.springframework.cloud.gateway.route.RouteDefinitionLocator; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.context.annotation.Lazy; + +import java.util.ArrayList; +import java.util.List; + +/** + * SpringDoc 配置 + * + * @author 阿沐 babamu@126.com + */ +@Configuration +public class SpringDocConfig { + + @Bean + @Lazy(false) + public List apis(SwaggerUiConfigParameters swaggerUiConfigParameters, RouteDefinitionLocator locator) { + List groups = new ArrayList<>(); + List definitions = locator.getRouteDefinitions().collectList().block(); + assert definitions != null; + definitions.stream().filter(routeDefinition -> !routeDefinition.getId().equals("openapi")).forEach(routeDefinition -> { + if (routeDefinition.getId().startsWith("ReactiveCompositeDiscoveryClient")) { + return; + } + String name = routeDefinition.getPredicates().get(0).getArgs().values().stream().findFirst().get(); + name = name.replace("/**", "").replace("/", ""); + swaggerUiConfigParameters.addGroup(name); + GroupedOpenApi.builder().pathsToMatch("/" + name + "/**").group(routeDefinition.getId()).build(); + }); + return groups; + } + + + @Bean + public OpenAPI customOpenAPI() { + Contact contact= new Contact(); + contact.setName("阿沐 babamu@126.com"); + + return new OpenAPI().info(new Info() + .title("SrtCloud") + .description( "SrtCloud") + .contact(contact) + .version("1.0") + .termsOfService("https://zrxlh.top") + .license(new License().name("MIT") + .url("https://zrxlh.top"))); + } +} diff --git a/srt-cloud-gateway/src/main/resources/bootstrap.yml b/srt-cloud-gateway/src/main/resources/bootstrap.yml new file mode 100644 index 0000000..ba4752d --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/bootstrap.yml @@ -0,0 +1,99 @@ +server: + port: 8082 + +spring: + application: + name: srt-cloud-gateway + profiles: + active: dev + cloud: + gateway: + httpclient: + connect-timeout: 120000 + metrics: + enabled: true + discovery: + locator: + enabled: true + routes: + - id: srt-cloud-system + uri: lb://srt-cloud-system + order: 1 + predicates: + - Path=/sys/** + filters: + - StripPrefix=1 + - id: srt-cloud-quartz + uri: lb://srt-cloud-quartz + order: 2 + predicates: + - Path=/schedule/** + filters: + - 
StripPrefix=1 + - id: srt-cloud-message + uri: lb://srt-cloud-message + order: 3 + predicates: + - Path=/message/** + filters: + - StripPrefix=1 + - id: srt-cloud-data-integrate + uri: lb://srt-cloud-data-integrate + order: 4 + predicates: + - Path=/data-integrate/** + filters: + - StripPrefix=1 + - id: srt-cloud-data-development + uri: lb://srt-cloud-data-development + order: 5 + predicates: + - Path=/data-development/** + filters: + - StripPrefix=1 + - id: srt-cloud-data-service + uri: lb://srt-cloud-data-service + order: 6 + predicates: + - Path=/data-service/** + filters: + - StripPrefix=1 + - id: srt-cloud-data-governance + uri: lb://srt-cloud-data-governance + order: 7 + predicates: + - Path=/data-governance/** + filters: + - StripPrefix=1 + - id: srt-cloud-data-assets + uri: lb://srt-cloud-data-assets + order: 8 + predicates: + - Path=/data-assets/** + filters: + - StripPrefix=1 + - id: openapi + uri: http://localhost:${server.port} + predicates: + - Path=/v3/api-docs/** + filters: + - RewritePath=/v3/api-docs/(?.*), /$\{path}/v3/api-docs + nacos: + discovery: + server-addr: 124.223.48.209:8848 + # 命名空间,默认:public + namespace: c370afdb-9c55-4068-a78b-3b35b1ac1420 + service: ${spring.application.name} + group: srt2.0 + +springdoc: + swagger-ui: + path: doc.html + + +logging: + level: + org: + springframework: + cloud: + gateway: TRACE diff --git a/srt-cloud-gateway/src/main/resources/log4j2.xml b/srt-cloud-gateway/src/main/resources/log4j2.xml new file mode 100644 index 0000000..7015e91 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/log4j2.xml @@ -0,0 +1,48 @@ + + + + + ./logs/ + srt-cloud-gateway + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/srt-cloud-gateway/src/main/resources/static/assets/404.0bf5647b.css b/srt-cloud-gateway/src/main/resources/static/assets/404.0bf5647b.css new file mode 100644 index 0000000..b229f86 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/404.0bf5647b.css @@ -0,0 +1 @@ +.layout-error[data-v-50e1ea3b]{display:flex;flex-direction:column;align-items:center}.layout-error img[data-v-50e1ea3b]{width:800px} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/404.87516382.png b/srt-cloud-gateway/src/main/resources/static/assets/404.87516382.png new file mode 100644 index 0000000..520dcc1 Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/404.87516382.png differ diff --git a/srt-cloud-gateway/src/main/resources/static/assets/404.cf929ef3.js b/srt-cloud-gateway/src/main/resources/static/assets/404.cf929ef3.js new file mode 100644 index 0000000..65da041 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/404.cf929ef3.js @@ -0,0 +1 @@ +import{d,B as u,r as i,o as m,c as b,a as r,b as a,w as c,l as n,t as _,z as f,A as h,_ as k}from"./index.e3896b23.js";const v=""+new URL("404.87516382.png",import.meta.url).href,y=e=>(f("data-v-50e1ea3b"),e=e(),h(),e),B={class:"layout-error"},C=y(()=>r("img",{src:v,alt:"404"},null,-1)),g=d({__name:"404",setup(e){const t=u(),p=()=>{t.back()},l=()=>{t.replace("/")};return(o,w)=>{const s=i("el-button");return m(),b("div",B,[C,r("div",null,[a(s,{type:"primary",onClick:p},{default:c(()=>[n(_(o.$t("back")),1)]),_:1}),a(s,{type:"success",onClick:l},{default:c(()=>[n(_(o.$t("router.home")),1)]),_:1})])])}}});const I=k(g,[["__scopeId","data-v-50e1ea3b"]]);export{I as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Bold.ad6d653f.otf 
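For the route table in bootstrap.yml above, each service route's Path predicate pairs with StripPrefix=1 (only the openapi route rewrites instead), so module prefixes exist solely at the gateway. A worked example (host and path are illustrative):

```java
// GET http://localhost:8082/sys/menu/nav
//   -> matches route "srt-cloud-system" (Path=/sys/**)
//   -> StripPrefix=1 drops the first segment "/sys"
//   -> forwarded load-balanced to lb://srt-cloud-system as GET /menu/nav
```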
diff --git a/srt-cloud-gateway/src/main/resources/log4j2.xml b/srt-cloud-gateway/src/main/resources/log4j2.xml
new file mode 100644
index 0000000..7015e91
--- /dev/null
+++ b/srt-cloud-gateway/src/main/resources/log4j2.xml
@@ -0,0 +1,48 @@
+<!-- log4j2 configuration: log directory ./logs/, log file name prefix srt-cloud-gateway -->
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/404.0bf5647b.css b/srt-cloud-gateway/src/main/resources/static/assets/404.0bf5647b.css
new file mode 100644
index 0000000..b229f86
--- /dev/null
+++ b/srt-cloud-gateway/src/main/resources/static/assets/404.0bf5647b.css
@@ -0,0 +1 @@
+.layout-error[data-v-50e1ea3b]{display:flex;flex-direction:column;align-items:center}.layout-error img[data-v-50e1ea3b]{width:800px}
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/404.87516382.png b/srt-cloud-gateway/src/main/resources/static/assets/404.87516382.png
new file mode 100644
index 0000000..520dcc1
Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/404.87516382.png differ
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/404.cf929ef3.js b/srt-cloud-gateway/src/main/resources/static/assets/404.cf929ef3.js
new file mode 100644
index 0000000..65da041
--- /dev/null
+++ b/srt-cloud-gateway/src/main/resources/static/assets/404.cf929ef3.js
@@ -0,0 +1 @@
+import{d,B as u,r as i,o as m,c as b,a as r,b as a,w as c,l as n,t as _,z as f,A as h,_ as k}from"./index.e3896b23.js";const v=""+new URL("404.87516382.png",import.meta.url).href,y=e=>(f("data-v-50e1ea3b"),e=e(),h(),e),B={class:"layout-error"},C=y(()=>r("img",{src:v,alt:"404"},null,-1)),g=d({__name:"404",setup(e){const t=u(),p=()=>{t.back()},l=()=>{t.replace("/")};return(o,w)=>{const s=i("el-button");return m(),b("div",B,[C,r("div",null,[a(s,{type:"primary",onClick:p},{default:c(()=>[n(_(o.$t("back")),1)]),_:1}),a(s,{type:"success",onClick:l},{default:c(()=>[n(_(o.$t("router.home")),1)]),_:1})])])}}});const I=k(g,[["__scopeId","data-v-50e1ea3b"]]);export{I as default};
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Bold.ad6d653f.otf b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Bold.ad6d653f.otf
new file mode 100644
index 0000000..9ec95bc
Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Bold.ad6d653f.otf differ
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Heavy.fef3f912.otf b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Heavy.fef3f912.otf
new file mode 100644
index 0000000..7be3ffb
Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Heavy.fef3f912.otf differ
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Light.ef0b03fc.otf b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Light.ef0b03fc.otf
new file mode 100644
index 0000000..d9743d2
Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Light.ef0b03fc.otf differ
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Medium.f9d31c2c.otf b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Medium.f9d31c2c.otf
new file mode 100644
index 0000000..0ebd1f4
Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Medium.f9d31c2c.otf differ
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Regular.7a060a79.otf b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Regular.7a060a79.otf
new file mode 100644
index 0000000..237e1aa
Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/Alibaba-PuHuiTi-Regular.7a060a79.otf differ
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/Redirect.e1ccd893.js b/srt-cloud-gateway/src/main/resources/static/assets/Redirect.e1ccd893.js
new file mode 100644
index 0000000..ea1b11a
--- /dev/null
+++ b/srt-cloud-gateway/src/main/resources/static/assets/Redirect.e1ccd893.js
@@ -0,0 +1 @@
+import{d as o,C as s,B as n,X as c}from"./index.e3896b23.js";const d=o({created(){const{params:e,query:r}=s(),{path:t}=e;n().replace({path:"/"+t,query:r}).catch(a=>{console.warn(a)})},render(){return c("div")}});export{d as default};
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/TaskNode.33048c4b.css b/srt-cloud-gateway/src/main/resources/static/assets/TaskNode.33048c4b.css
new file mode 100644
index 0000000..86dd87d
--- /dev/null
+++ b/srt-cloud-gateway/src/main/resources/static/assets/TaskNode.33048c4b.css
@@ -0,0 +1 @@
+.task-node[data-v-001a1d98]{width:220px;height:100px;font-size:15px;color:#737063;padding:15px;border-radius:15px;border:3px solid #8bc9ff;background-color:#fffdfc;box-sizing:border-box}
diff --git a/srt-cloud-gateway/src/main/resources/static/assets/TaskNode.9477af8e.js b/srt-cloud-gateway/src/main/resources/static/assets/TaskNode.9477af8e.js
new file mode 100644
index 0000000..2adae56
--- /dev/null
+++ b/srt-cloud-gateway/src/main/resources/static/assets/TaskNode.9477af8e.js
@@ -0,0 +1 @@
+import{d as o,o as s,c as a,t as n,x as l,_ as i}from"./index.e3896b23.js";const
d={key:0},r=["title"],p={style:{"margin-top":"10px"}},u=o({__name:"TaskNode",props:{properties:{type:Object,default:()=>({name:"",taskType:"",taskTypeVal:"",taskId:"",weight:1,failGoOn:0,style:{}})}},setup(e){return(c,_)=>(s(),a("div",{class:"task-node",style:l(e.properties.style)},[t("div",null,[e.properties.name.length<=10?(s(),a("div",d,[t("b",null,"\u540D\u79F0\uFF1A"+n(e.properties.name),1)])):(s(),a("div",{key:1,title:e.properties.name},[t("b",null,"\u540D\u79F0\uFF1A"+n(e.properties.name.substring(0,10)+"..."),1)],8,r))]),t("div",p,[t("div",null,[t("b",null,"\u7C7B\u578B\uFF1A"+n(e.properties.taskTypeVal),1)])])],4))}});const m=i(u,[["__scopeId","data-v-001a1d98"]]);export{m as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/abap.15cc56c3.js b/srt-cloud-gateway/src/main/resources/static/assets/abap.15cc56c3.js new file mode 100644 index 0000000..cfccf92 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/abap.15cc56c3.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. + * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={comments:{lineComment:"*"},brackets:[["[","]"],["(",")"]]},t={defaultToken:"invalid",ignoreCase:!0,tokenPostfix:".abap",keywords:["abap-source","abbreviated","abstract","accept","accepting","according","activation","actual","add","add-corresponding","adjacent","after","alias","aliases","align","all","allocate","alpha","analysis","analyzer","and","append","appendage","appending","application","archive","area","arithmetic","as","ascending","aspect","assert","assign","assigned","assigning","association","asynchronous","at","attributes","authority","authority-check","avg","back","background","backup","backward","badi","base","before","begin","between","big","binary","bintohex","bit","black","blank","blanks","blob","block","blocks","blue","bound","boundaries","bounds","boxed","break-point","buffer","by","bypassing","byte","byte-order","call","calling","case","cast","casting","catch","center","centered","chain","chain-input","chain-request","change","changing","channels","character","char-to-hex","check","checkbox","ci_","circular","class","class-coding","class-data","class-events","class-methods","class-pool","cleanup","clear","client","clob","clock","close","coalesce","code","coding","col_background","col_group","col_heading","col_key","col_negative","col_normal","col_positive","col_total","collect","color","column","columns","comment","comments","commit","common","communication","comparing","component","components","compression","compute","concat","concat_with_space","concatenate","cond","condense","condition","connect","connection","constants","context","contexts","continue","control","controls","conv","conversion","convert","copies","copy","corresponding","country","cover","cpi","create","creating","critical","currency","currency_conversion","current","cursor","cursor-selection","customer","customer-function","dangerous","data","database","datainfo","dataset","date","dats_add_days","dats_add_months","dats_days_between","dats_is_valid","daylight","dd/mm/yy","dd/mm/yyyy","ddmmyy","deallocate","decimal_shift","decimals","declarations","deep","default","deferred","define","defining","definition","delete","deleting","dema
nd","department","descending","describe","destination","detail","dialog","directory","disconnect","display","display-mode","distinct","divide","divide-corresponding","division","do","dummy","duplicate","duplicates","duration","during","dynamic","dynpro","edit","editor-call","else","elseif","empty","enabled","enabling","encoding","end","endat","endcase","endcatch","endchain","endclass","enddo","endenhancement","end-enhancement-section","endexec","endform","endfunction","endian","endif","ending","endinterface","end-lines","endloop","endmethod","endmodule","end-of-definition","end-of-editing","end-of-file","end-of-page","end-of-selection","endon","endprovide","endselect","end-test-injection","end-test-seam","endtry","endwhile","endwith","engineering","enhancement","enhancement-point","enhancements","enhancement-section","entries","entry","enum","environment","equiv","errormessage","errors","escaping","event","events","exact","except","exception","exceptions","exception-table","exclude","excluding","exec","execute","exists","exit","exit-command","expand","expanding","expiration","explicit","exponent","export","exporting","extend","extended","extension","extract","fail","fetch","field","field-groups","fields","field-symbol","field-symbols","file","filter","filters","filter-table","final","find","first","first-line","fixed-point","fkeq","fkge","flush","font","for","form","format","forward","found","frame","frames","free","friends","from","function","functionality","function-pool","further","gaps","generate","get","giving","gkeq","gkge","global","grant","green","group","groups","handle","handler","harmless","hashed","having","hdb","header","headers","heading","head-lines","help-id","help-request","hextobin","hide","high","hint","hold","hotspot","icon","id","identification","identifier","ids","if","ignore","ignoring","immediately","implementation","implementations","implemented","implicit","import","importing","in","inactive","incl","include","includes","including","increment","index","index-line","infotypes","inheriting","init","initial","initialization","inner","inout","input","insert","instance","instances","instr","intensified","interface","interface-pool","interfaces","internal","intervals","into","inverse","inverted-date","is","iso","job","join","keep","keeping","kernel","key","keys","keywords","kind","language","last","late","layout","leading","leave","left","left-justified","leftplus","leftspace","legacy","length","let","level","levels","like","line","lines","line-count","linefeed","line-selection","line-size","list","listbox","list-processing","little","llang","load","load-of-program","lob","local","locale","locator","logfile","logical","log-point","long","loop","low","lower","lpad","lpi","ltrim","mail","main","major-id","mapping","margin","mark","mask","match","matchcode","max","maximum","medium","members","memory","mesh","message","message-id","messages","messaging","method","methods","min","minimum","minor-id","mm/dd/yy","mm/dd/yyyy","mmddyy","mode","modif","modifier","modify","module","move","move-corresponding","multiply","multiply-corresponding","name","nametab","native","nested","nesting","new","new-line","new-page","new-section","next","no","no-display","no-extension","no-gap","no-gaps","no-grouping","no-heading","no-scrolling","no-sign","no-title","no-topofpage","no-zero","node","nodes","non-unicode","non-unique","not","null","number","object","objects","obligatory","occurrence","occurrences","occurs","of","off","offset","ole","on","only","open","option","optional","options","or",
"order","other","others","out","outer","output","output-length","overflow","overlay","pack","package","pad","padding","page","pages","parameter","parameters","parameter-table","part","partially","pattern","percentage","perform","performing","person","pf1","pf10","pf11","pf12","pf13","pf14","pf15","pf2","pf3","pf4","pf5","pf6","pf7","pf8","pf9","pf-status","pink","places","pool","pos_high","pos_low","position","pragmas","precompiled","preferred","preserving","primary","print","print-control","priority","private","procedure","process","program","property","protected","provide","public","push","pushbutton","put","queue-only","quickinfo","radiobutton","raise","raising","range","ranges","read","reader","read-only","receive","received","receiver","receiving","red","redefinition","reduce","reduced","ref","reference","refresh","regex","reject","remote","renaming","replace","replacement","replacing","report","request","requested","reserve","reset","resolution","respecting","responsible","result","results","resumable","resume","retry","return","returncode","returning","returns","right","right-justified","rightplus","rightspace","risk","rmc_communication_failure","rmc_invalid_status","rmc_system_failure","role","rollback","rows","rpad","rtrim","run","sap","sap-spool","saving","scale_preserving","scale_preserving_scientific","scan","scientific","scientific_with_leading_zero","scroll","scroll-boundary","scrolling","search","secondary","seconds","section","select","selection","selections","selection-screen","selection-set","selection-sets","selection-table","select-options","send","separate","separated","set","shared","shift","short","shortdump-id","sign_as_postfix","single","size","skip","skipping","smart","some","sort","sortable","sorted","source","specified","split","spool","spots","sql","sqlscript","stable","stamp","standard","starting","start-of-editing","start-of-selection","state","statement","statements","static","statics","statusinfo","step-loop","stop","structure","structures","style","subkey","submatches","submit","subroutine","subscreen","subtract","subtract-corresponding","suffix","sum","summary","summing","supplied","supply","suppress","switch","switchstates","symbol","syncpoints","syntax","syntax-check","syntax-trace","system-call","system-exceptions","system-exit","tab","tabbed","table","tables","tableview","tabstrip","target","task","tasks","test","testing","test-injection","test-seam","text","textpool","then","throw","time","times","timestamp","timezone","tims_is_valid","title","titlebar","title-lines","to","tokenization","tokens","top-lines","top-of-page","trace-file","trace-table","trailing","transaction","transfer","transformation","translate","transporting","trmac","truncate","truncation","try","tstmp_add_seconds","tstmp_current_utctimestamp","tstmp_is_valid","tstmp_seconds_between","type","type-pool","type-pools","types","uline","unassign","under","unicode","union","unique","unit_conversion","unix","unpack","until","unwind","up","update","upper","user","user-command","using","utf-8","valid","value","value-request","values","vary","varying","verification-message","version","via","view","visible","wait","warning","when","whenever","where","while","width","window","windows","with","with-heading","without","with-title","word","work","write","writer","xml","xsd","yellow","yes","yymmdd","zero","zone","abap_system_timezone","abap_user_timezone","access","action","adabas","adjust_numbers","allow_precision_loss","allowed","amdp","applicationuser","as_geo_json","as400","associations","balanc
e","behavior","breakup","bulk","cds","cds_client","check_before_save","child","clients","corr","corr_spearman","cross","cycles","datn_add_days","datn_add_months","datn_days_between","dats_from_datn","dats_tims_to_tstmp","dats_to_datn","db2","db6","ddl","dense_rank","depth","deterministic","discarding","entities","entity","error","failed","finalize","first_value","fltp_to_dec","following","fractional","full","graph","grouping","hierarchy","hierarchy_ancestors","hierarchy_ancestors_aggregate","hierarchy_descendants","hierarchy_descendants_aggregate","hierarchy_siblings","incremental","indicators","lag","last_value","lead","leaves","like_regexpr","link","locale_sap","lock","locks","many","mapped","matched","measures","median","mssqlnt","multiple","nodetype","ntile","nulls","occurrences_regexpr","one","operations","oracle","orphans","over","parent","parents","partition","pcre","period","pfcg_mapping","preceding","privileged","product","projection","rank","redirected","replace_regexpr","reported","response","responses","root","row","row_number","sap_system_date","save","schema","session","sets","shortdump","siblings","spantree","start","stddev","string_agg","subtotal","sybase","tims_from_timn","tims_to_timn","to_blob","to_clob","total","trace-entry","tstmp_to_dats","tstmp_to_dst","tstmp_to_tims","tstmpl_from_utcl","tstmpl_to_utcl","unbounded","utcl_add_seconds","utcl_current","utcl_seconds_between","uuid","var","verbatim"],builtinFunctions:["abs","acos","asin","atan","bit-set","boolc","boolx","ceil","char_off","charlen","cmax","cmin","concat_lines_of","contains","contains_any_not_of","contains_any_of","cos","cosh","count","count_any_not_of","count_any_of","dbmaxlen","distance","escape","exp","find_any_not_of","find_any_of","find_end","floor","frac","from_mixed","ipow","line_exists","line_index","log","log10","matches","nmax","nmin","numofchar","repeat","rescale","reverse","round","segment","shift_left","shift_right","sign","sin","sinh","sqrt","strlen","substring","substring_after","substring_before","substring_from","substring_to","tan","tanh","to_lower","to_mixed","to_upper","trunc","utclong_add","utclong_current","utclong_diff","xsdbool","xstrlen"],typeKeywords:["b","c","d","decfloat16","decfloat34","f","i","int8","n","p","s","string","t","utclong","x","xstring","any","clike","csequence","decfloat","numeric","simple","xsequence","accp","char","clnt","cuky","curr","datn","dats","d16d","d16n","d16r","d34d","d34n","d34r","dec","df16_dec","df16_raw","df34_dec","df34_raw","fltp","geom_ewkb","int1","int2","int4","lang","lchr","lraw","numc","quan","raw","rawstring","sstring","timn","tims","unit","utcl","df16_scl","df34_scl","prec","varc","abap_bool","abap_false","abap_true","abap_undefined","me","screen","space","super","sy","syst","table_line","*sys*"],builtinMethods:["class_constructor","constructor"],derivedTypes:["%CID","%CID_REF","%CONTROL","%DATA","%ELEMENT","%FAIL","%KEY","%MSG","%PARAM","%PID","%PID_ASSOC","%PID_PARENT","%_HINTS"],cdsLanguage:["@AbapAnnotation","@AbapCatalog","@AccessControl","@API","@ClientDependent","@ClientHandling","@CompatibilityContract","@DataAging","@EndUserText","@Environment","@LanguageDependency","@MappingRole","@Metadata","@MetadataExtension","@ObjectModel","@Scope","@Semantics","$EXTENSION","$SELF"],selectors:["->","->*","=>","~","~*"],operators:[" +"," -","/","*","**","div","mod","=","#","@","+=","-=","*=","/=","**=","&&=","?=","&","&&","bit-and","bit-not","bit-or","bit-xor","m","o","z","<"," 
>","<=",">=","<>","><","=<","=>","bt","byte-ca","byte-cn","byte-co","byte-cs","byte-na","byte-ns","ca","cn","co","cp","cs","eq","ge","gt","le","lt","na","nb","ne","np","ns","*/","*:","--","/*","//"],symbols:/[=>))*/,{cases:{"@typeKeywords":"type","@keywords":"keyword","@cdsLanguage":"annotation","@derivedTypes":"type","@builtinFunctions":"type","@builtinMethods":"type","@operators":"key","@default":"identifier"}}],[/<[\w]+>/,"identifier"],[/##[\w|_]+/,"comment"],{include:"@whitespace"},[/[:,.]/,"delimiter"],[/[{}()\[\]]/,"@brackets"],[/@symbols/,{cases:{"@selectors":"tag","@operators":"key","@default":""}}],[/'/,{token:"string",bracket:"@open",next:"@stringquote"}],[/`/,{token:"string",bracket:"@open",next:"@stringping"}],[/\|/,{token:"string",bracket:"@open",next:"@stringtemplate"}],[/\d+/,"number"]],stringtemplate:[[/[^\\\|]+/,"string"],[/\\\|/,"string"],[/\|/,{token:"string",bracket:"@close",next:"@pop"}]],stringping:[[/[^\\`]+/,"string"],[/`/,{token:"string",bracket:"@close",next:"@pop"}]],stringquote:[[/[^\\']+/,"string"],[/'/,{token:"string",bracket:"@close",next:"@pop"}]],whitespace:[[/[ \t\r\n]+/,""],[/^\*.*$/,"comment"],[/\".*$/,"comment"]]}};export{e as conf,t as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/access-task-detail.3b2de10f.js b/srt-cloud-gateway/src/main/resources/static/assets/access-task-detail.3b2de10f.js new file mode 100644 index 0000000..026304f --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/access-task-detail.3b2de10f.js @@ -0,0 +1 @@ +import"./access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js";import{_ as t}from"./access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js b/srt-cloud-gateway/src/main/resources/static/assets/access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js new file mode 100644 index 0000000..c80d3ee --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js @@ -0,0 +1 @@ +import{d as _,h as D,Y as z,ai as V,r as n,aj as k,o as p,f as g,w as u,b as e,k as r,a2 as N,a7 as A,a as q,t as L,l as T}from"./index.e3896b23.js";const H=T("\u67E5\u8BE2"),I={style:{"margin-left":"20px"}},M=_({name:"SrtAccessTaskDetailIndex"}),$=_({...M,setup(U,{expose:m}){const s=D(!1),t=z({createdIsNeed:!1,dataListUrl:"/data-integrate/access/task-detail-page",queryForm:{ifSuccess:"",taskId:""}}),h=c=>{t.queryForm.taskId=c,s.value=!0,i()},{getDataList:i,selectionChangeHandle:F,sizeChangeHandle:f,currentChangeHandle:b,deleteBatchHandle:j}=V(t);return m({init:h}),(c,l)=>{const E=n("fast-select"),d=n("el-form-item"),C=n("el-button"),y=n("el-form"),a=n("el-table-column"),x=n("fast-table-column"),w=n("el-table"),v=n("el-pagination"),B=n("el-dialog"),S=k("loading");return 
p(),g(B,{modelValue:s.value,"onUpdate:modelValue":l[3]||(l[3]=o=>s.value=o),title:"\u540C\u6B65\u7ED3\u679C"},{default:u(()=>[e(y,{inline:!0,model:t.queryForm,onKeyup:l[2]||(l[2]=N(o=>r(i)(),["enter"]))},{default:u(()=>[e(d,null,{default:u(()=>[e(E,{modelValue:t.queryForm.ifSuccess,"onUpdate:modelValue":l[0]||(l[0]=o=>t.queryForm.ifSuccess=o),placeholder:"\u662F\u5426\u6210\u529F","dict-type":"yes_or_no",clearable:""},null,8,["modelValue"])]),_:1}),e(d,null,{default:u(()=>[e(C,{onClick:l[1]||(l[1]=o=>r(i)()),type:"primary"},{default:u(()=>[H]),_:1})]),_:1})]),_:1},8,["model"]),A((p(),g(w,{data:t.dataList,border:"",style:{width:"100%"},onSelectionChange:r(F)},{default:u(()=>[e(a,{prop:"id",label:"\u6267\u884C\u5E8F\u53F7","header-align":"center",align:"center",width:"130px"}),e(a,{prop:"sourceSchemaName",label:"\u6E90\u7AEF\u5E93\u540D","header-align":"center",align:"center",width:"150px"}),e(a,{prop:"targetSchemaName",label:"\u76EE\u7684\u7AEF\u5E93\u540D","header-align":"center",align:"center",width:"130px"}),e(a,{prop:"sourceTableName",label:"\u6E90\u7AEF\u8868\u540D","header-align":"center",align:"center",width:"130px"}),e(a,{prop:"targetTableName",label:"\u76EE\u7684\u7AEF\u8868\u540D","header-align":"center",align:"center",width:"130px"}),e(a,{prop:"syncCount",label:"\u540C\u6B65\u8BB0\u5F55\u6570","header-align":"center",align:"center",width:"150px"}),e(a,{prop:"syncBytes",label:"\u540C\u6B65\u6570\u636E\u91CF","header-align":"center",align:"center",width:"150px"}),e(x,{prop:"ifSuccess",label:"\u662F\u5426\u6210\u529F","header-align":"center",align:"center","dict-type":"yes_or_no",width:"130px"}),e(a,{prop:"successMsg",label:"\u6210\u529F\u4FE1\u606F","header-align":"center",align:"center",width:"150px"}),e(a,{type:"expand",label:"\u5931\u8D25\u4FE1\u606F",width:"150px"},{default:u(o=>[q("div",I,L(o.row.errorMsg?o.row.errorMsg:"\u65E0\u4FE1\u606F\u53EF\u67E5\u770B\uFF01"),1)]),_:1}),e(a,{prop:"createTime",label:"\u521B\u5EFA\u65F6\u95F4","header-align":"center",align:"center",width:"130px"})]),_:1},8,["data","onSelectionChange"])),[[S,t.dataListLoading]]),e(v,{"current-page":t.page,"page-sizes":t.pageSizes,"page-size":t.limit,total:t.total,layout:"total, sizes, prev, pager, next, jumper",onSizeChange:r(f),onCurrentChange:r(b)},null,8,["current-page","page-sizes","page-size","total","onSizeChange","onCurrentChange"])]),_:1},8,["modelValue"])}}});export{$ as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/access-task.f8ef05cd.js b/srt-cloud-gateway/src/main/resources/static/assets/access-task.f8ef05cd.js new file mode 100644 index 0000000..4cdc864 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/access-task.f8ef05cd.js @@ -0,0 +1 @@ +import"./access-task.vue_vue_type_script_setup_true_name_SrtTaskIndex_lang.b6a89e68.js";import{_ as s}from"./access-task.vue_vue_type_script_setup_true_name_SrtTaskIndex_lang.b6a89e68.js";import"./index.e3896b23.js";import"./access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js";import"./access.ed3b6ac4.js";import"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";import"./toggleHighContrast.483b4227.js";export{s as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/access-task.vue_vue_type_script_setup_true_name_SrtTaskIndex_lang.b6a89e68.js b/srt-cloud-gateway/src/main/resources/static/assets/access-task.vue_vue_type_script_setup_true_name_SrtTaskIndex_lang.b6a89e68.js new file mode 100644 index 0000000..119bb53 --- /dev/null +++ 
b/srt-cloud-gateway/src/main/resources/static/assets/access-task.vue_vue_type_script_setup_true_name_SrtTaskIndex_lang.b6a89e68.js @@ -0,0 +1 @@ +import{d as w,h as c,Y as A,ai as H,r,aj as j,o as v,f as k,w as a,b as e,k as i,a2 as K,a7 as Y,a as G,l as d}from"./index.e3896b23.js";import{_ as J}from"./access-task-detail.vue_vue_type_script_setup_true_name_SrtAccessTaskDetailIndex_lang.19261ba8.js";import{g as B}from"./access.ed3b6ac4.js";import{_ as M}from"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";const O=d("\u67E5\u8BE2"),P=d("\u5220\u9664"),Q=d("\u5B9E\u65F6\u65E5\u5FD7"),W=d("\u540C\u6B65\u7ED3\u679C"),X=d("\u5220\u9664"),Z=d("\u83B7\u53D6\u6700\u65B0\u65E5\u5FD7"),ee={style:{padding:"15px"}},te=w({name:"SrtTaskIndex"}),re=w({...te,setup(ae,{expose:T}){const p=c(!1),l=A({createdIsNeed:!1,dataListUrl:"/data-integrate/access/task-page",deleteUrl:"/data-integrate/access/task",queryForm:{dataAccessId:"",runStatus:""}}),D=u=>{l.queryForm.dataAccessId=u,p.value=!0,m()},C=c(),E=u=>{C.value.init(u)},_=c(!1),g=c(),h=c(),V=u=>{_.value=!0,h.value=u,B(u).then(t=>{g.value.setEditorValue(t.data.realTimeLog)})},x=()=>{B(h.value).then(u=>{g.value.setEditorValue(u.data.realTimeLog)})},{getDataList:m,selectionChangeHandle:L,sizeChangeHandle:S,currentChangeHandle:R,deleteBatchHandle:F}=H(l);return T({init:D}),(u,t)=>{const z=r("fast-select"),f=r("el-form-item"),s=r("el-button"),$=r("el-form"),o=r("el-table-column"),I=r("fast-table-column"),q=r("el-table"),N=r("el-pagination"),y=r("el-dialog"),U=j("loading");return v(),k(y,{modelValue:p.value,"onUpdate:modelValue":t[5]||(t[5]=n=>p.value=n),title:"\u6267\u884C\u8BB0\u5F55"},{default:a(()=>[e($,{inline:!0,model:l.queryForm,onKeyup:t[3]||(t[3]=K(n=>i(m)(),["enter"]))},{default:a(()=>[e(f,null,{default:a(()=>[e(z,{modelValue:l.queryForm.runStatus,"onUpdate:modelValue":t[0]||(t[0]=n=>l.queryForm.runStatus=n),placeholder:"\u8FD0\u884C\u72B6\u6001","dict-type":"run_status",clearable:""},null,8,["modelValue"])]),_:1}),e(f,null,{default:a(()=>[e(s,{type:"primary",onClick:t[1]||(t[1]=n=>i(m)())},{default:a(()=>[O]),_:1})]),_:1}),e(f,null,{default:a(()=>[e(s,{type:"danger",onClick:t[2]||(t[2]=n=>i(F)())},{default:a(()=>[P]),_:1})]),_:1})]),_:1},8,["model"]),Y((v(),k(q,{data:l.dataList,border:"",style:{width:"100%"},onSelectionChange:i(L)},{default:a(()=>[e(o,{type:"selection","header-align":"center",align:"center",width:"50"}),e(o,{prop:"id",label:"\u5E8F\u53F7","header-align":"center",align:"center"}),e(I,{prop:"runStatus",label:"\u8FD0\u884C\u72B6\u6001","header-align":"center",align:"center","dict-type":"run_status",width:"120px"}),e(o,{prop:"startTime",label:"\u5F00\u59CB\u65F6\u95F4","header-align":"center",align:"center",width:"160px"}),e(o,{prop:"endTime",label:"\u7ED3\u675F\u65F6\u95F4","header-align":"center",align:"center",width:"160px"}),e(o,{prop:"dataCount",label:"\u66F4\u65B0\u6570\u636E\u91CF","header-align":"center",align:"center"}),e(o,{prop:"tableSuccessCount",label:"\u6210\u529F\u8868\u6570\u91CF","header-align":"center",align:"center"}),e(o,{prop:"tableFailCount",label:"\u5931\u8D25\u8868\u6570\u91CF","header-align":"center",align:"center"}),e(o,{prop:"byteCount",label:"\u6570\u636E\u91CF\u5927\u5C0F","header-align":"center",align:"center"}),e(o,{prop:"createTime",label:"\u521B\u5EFA\u65F6\u95F4","header-align":"center",align:"center",width:"160px"}),e(o,{label:"\u64CD\u4F5C",fixed:"right","header-align":"center",align:"center",width:"260"},{default:a(n=>[e(s,{type:"primary",link:"",onClick:b=>V(n.row.id)},{default:a(()=>[
Q]),_:2},1032,["onClick"]),e(s,{type:"primary",link:"",onClick:b=>E(n.row.id)},{default:a(()=>[W]),_:2},1032,["onClick"]),e(s,{type:"primary",link:"",onClick:b=>i(F)(n.row.id)},{default:a(()=>[X]),_:2},1032,["onClick"])]),_:1})]),_:1},8,["data","onSelectionChange"])),[[U,l.dataListLoading]]),e(N,{"current-page":l.page,"page-sizes":l.pageSizes,"page-size":l.limit,total:l.total,layout:"total, sizes, prev, pager, next, jumper",onSizeChange:i(S),onCurrentChange:i(R)},null,8,["current-page","page-sizes","page-size","total","onSizeChange","onCurrentChange"]),e(J,{ref_key:"accessTaskDetailRef",ref:C},null,512),e(y,{modelValue:_.value,"onUpdate:modelValue":t[4]||(t[4]=n=>_.value=n),title:"\u5B9E\u65F6\u65E5\u5FD7",width:"65%"},{default:a(()=>[e(s,{type:"primary",onClick:x},{default:a(()=>[Z]),_:1}),G("div",ee,[e(M,{id:"accessRealTimeLog",ref_key:"accessRealTimeLogRef",ref:g,style:{height:"500px"}},null,512)])]),_:1},8,["modelValue"])]),_:1},8,["modelValue"])}}});export{re as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/access.ed3b6ac4.js b/srt-cloud-gateway/src/main/resources/static/assets/access.ed3b6ac4.js new file mode 100644 index 0000000..366751a --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/access.ed3b6ac4.js @@ -0,0 +1 @@ +import{ad as a}from"./index.e3896b23.js";const t=e=>a.get("/data-integrate/access/"+e),c=e=>e.id?a.put("/data-integrate/access",e):a.post("/data-integrate/access",e),n=e=>a.post("/data-integrate/access/preview-table-name-map",e),r=e=>a.post("/data-integrate/access/preview-column-name-map",e),i=e=>a.post("/data-integrate/access/release/"+e),p=e=>a.post("/data-integrate/access/cancle/"+e),u=e=>a.post("/data-integrate/access/hand-run/"+e),o=e=>a.get("/data-integrate/access/task/"+e);export{r as a,c as b,i as c,p as d,o as g,u as h,n as p,t as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/account.30ae67aa.js b/srt-cloud-gateway/src/main/resources/static/assets/account.30ae67aa.js new file mode 100644 index 0000000..c7d83c6 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/account.30ae67aa.js @@ -0,0 +1 @@ +import{d as V,B as k,u as x,h as i,Y as C,L as $,Z as q,r as l,o as B,f as I,w as n,a as g,t as h,b as t,k as d,$ as R,a0 as A,a1 as F,l as K,a2 as L,s as N,_ as U}from"./index.e3896b23.js";const S={class:"login-title"},D=["src"],M=V({__name:"account",setup(T){const y=k(),{t:u}=x(),m=i(),_=i(),a=C({username:"admin",password:"admin",key:"",captcha:""}),b=i({username:[{required:!0,message:u("required"),trigger:"blur"}],password:[{required:!0,message:u("required"),trigger:"blur"}],captcha:[{required:!0,message:u("required"),trigger:"blur"}]});$(()=>{p()});const p=async()=>{const{data:e}=await q();a.key=e.key,_.value=e.image},f=()=>{m.value.validate(e=>{if(!e)return!1;N.userStore.accountLoginAction(a).then(()=>{y.push({path:"/home"})}).catch(()=>{p()})})};return(e,o)=>{const c=l("el-input"),r=l("el-form-item"),v=l("el-button"),w=l("el-form");return 
B(),I(w,{ref_key:"loginFormRef",ref:m,model:a,rules:b.value,onKeyup:L(f,["enter"])},{default:n(()=>[g("div",S,h(e.$t("app.signIn")),1),t(r,{prop:"username"},{default:n(()=>[t(c,{modelValue:a.username,"onUpdate:modelValue":o[0]||(o[0]=s=>a.username=s),"prefix-icon":d(R),placeholder:e.$t("app.username")},null,8,["modelValue","prefix-icon","placeholder"])]),_:1}),t(r,{prop:"password"},{default:n(()=>[t(c,{modelValue:a.password,"onUpdate:modelValue":o[1]||(o[1]=s=>a.password=s),"prefix-icon":d(A),"show-password":"",placeholder:e.$t("app.password")},null,8,["modelValue","prefix-icon","placeholder"])]),_:1}),t(r,{prop:"captcha",class:"login-captcha"},{default:n(()=>[t(c,{modelValue:a.captcha,"onUpdate:modelValue":o[2]||(o[2]=s=>a.captcha=s),placeholder:e.$t("app.captcha"),"prefix-icon":d(F)},null,8,["modelValue","placeholder","prefix-icon"]),g("img",{src:_.value,onClick:p},null,8,D)]),_:1}),t(r,{class:"login-button"},{default:n(()=>[t(v,{type:"primary",onClick:o[3]||(o[3]=s=>f())},{default:n(()=>[K(h(e.$t("app.signIn")),1)]),_:1})]),_:1})]),_:1},8,["model","rules","onKeyup"])}}});const Z=U(M,[["__scopeId","data-v-b7bb6692"]]);export{Z as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/account.81a25ae6.css b/srt-cloud-gateway/src/main/resources/static/assets/account.81a25ae6.css new file mode 100644 index 0000000..495b517 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/account.81a25ae6.css @@ -0,0 +1 @@ +.login-title[data-v-b7bb6692]{display:flex;justify-content:center;margin-bottom:35px;font-size:24px;color:#444;letter-spacing:4px}.login-captcha[data-v-b7bb6692] .el-input{width:200px}.login-captcha img[data-v-b7bb6692]{width:150px;height:40px;margin:5px 0 0 10px;cursor:pointer}.login-button[data-v-b7bb6692] .el-button--primary{margin-top:10px;width:100%;height:45px;font-size:18px;letter-spacing:8px} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-mount.318a87c3.js b/srt-cloud-gateway/src/main/resources/static/assets/add-mount.318a87c3.js new file mode 100644 index 0000000..548e800 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-mount.318a87c3.js @@ -0,0 +1 @@ +import"./add-mount.vue_vue_type_script_setup_true_lang.e1fed4db.js";import{_ as h}from"./add-mount.vue_vue_type_script_setup_true_lang.e1fed4db.js";import"./index.e3896b23.js";import"./folder.ea536bf2.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./column.79595943.js";import"./model.45425835.js";import"./api.55cd055b.js";import"./file.2b66c011.js";import"./resourceMount.bc55130a.js";import"./metadata.0c954be9.js";import"./apiGroup.d1155eaa.js";import"./fileCategory.cc511701.js";import"./api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js";import"./database.32bfd96d.js";import"./file-mount.vue_vue_type_script_setup_true_name_Data-integrateFileIndex_lang.b616d921.js";export{h as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-mount.vue_vue_type_script_setup_true_lang.e1fed4db.js b/srt-cloud-gateway/src/main/resources/static/assets/add-mount.vue_vue_type_script_setup_true_lang.e1fed4db.js new file mode 100644 index 0000000..b8d373b --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-mount.vue_vue_type_script_setup_true_lang.e1fed4db.js @@ -0,0 +1 @@ +import{d as Y,L as z,h as n,Y as J,a6 as x,r as f,o as l,f as k,w as r,b as m,a as d,c as i,H as s,t as h,a2 as Q,l as L,E as F}from"./index.e3896b23.js";import{f as 
V}from"./folder.ea536bf2.js";import{_ as W}from"./database.235d7a89.js";import{_ as X}from"./table.e1c1b00a.js";import{_ as Z}from"./column.79595943.js";import{_ as ee}from"./model.45425835.js";import{a as te}from"./api.55cd055b.js";import{_ as oe}from"./file.2b66c011.js";import{u as ue,a as ae}from"./resourceMount.bc55130a.js";import{l as le}from"./metadata.0c954be9.js";import{u as se}from"./apiGroup.d1155eaa.js";import{l as ne}from"./fileCategory.cc511701.js";import{_ as re}from"./api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js";import{_ as ie}from"./file-mount.vue_vue_type_script_setup_true_name_Data-integrateFileIndex_lang.b616d921.js";const me={key:0,src:V},de={key:1,src:W},pe={key:2,src:X},fe={key:3,src:Z},ce={key:4,src:ee},_e={style:{"margin-left":"8px"}},ye={key:0,src:V},ve={key:1,src:te},ge={style:{"margin-left":"8px"}},be={key:0,src:V},ke={key:1,src:oe},he={style:{"margin-left":"8px"}},Ce={key:0},Ie={key:1},Ee=L("\u53D6\u6D88"),Fe=L("\u786E\u5B9A"),qe=Y({__name:"add-mount",emits:["refreshDataList"],setup(Ve,{expose:P,emit:$}){z(()=>{le().then(o=>{D.value=o.data}),se().then(o=>{T.value=o.data}),ne().then(o=>{B.value=o.data})});const c=n(!1),v=n(),D=n([]),T=n([]),B=n([]),C={label:"name",children:"children",isLeaf:"leaf",disabled:"disabled"},t=J({mountType:"",treeId:"",mountId:"",mountName:""}),U=(o,e)=>{c.value=!0,t.id="",v.value&&v.value.resetFields(),t.resourceId=e,g.value=!1,o&&S(o)},S=o=>{ue(o).then(e=>{Object.assign(t,e.data)})},q=n({mountType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],treeId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),g=n(!1),M=n(),_=n();x("apiMountInfo",_);const H=(o,e,b,p)=>{console.log(e.data),e.data.type=="2"?(g.value=!0,setTimeout(()=>{M.value.init(e.data.id,e.data.path)},500)):g.value=!1},I=n(!1),w=n(),y=n();x("fileMountInfo",y);const K=(o,e,b,p)=>{console.log(e.data),e.data.type=="1"?(I.value=!0,setTimeout(()=>{w.value.init(e.data.id,e.data.path)},500)):I.value=!1},A=n(),j=(o,e,b,p)=>{console.log(e.data),e.data.disabled||(A.value=e.data)},N=()=>{v.value.validate(o=>{if(!o)return!1;if(t.mountType==1)t.mountId=t.treeId,t.mountName=A.value.path;else if(t.mountType==2)if(_.value)t.mountId=_.value.id,t.mountName=_.value.parentPath+"/"+_.value.name;else return F.warning("\u8BF7\u5355\u51FB\u5217\u8868\u9879\u9009\u62E9\u8981\u6302\u8F7D\u7684api"),!1;else if(t.mountType==3)if(y.value)t.mountId=y.value.id,t.mountName=y.value.parentPath+"/"+y.value.name;else return F.warning("\u8BF7\u5355\u51FB\u5217\u8868\u9879\u9009\u62E9\u8981\u6302\u8F7D\u7684\u6587\u4EF6"),!1;ae(t).then(()=>{F.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{c.value=!1,$("refreshDataList")}})})})};return P({init:U}),(o,e)=>{const b=f("fast-select"),p=f("el-form-item"),E=f("el-tree-select"),G=f("el-form"),R=f("el-button"),O=f("el-dialog");return 
l(),k(O,{modelValue:c.value,"onUpdate:modelValue":e[7]||(e[7]=u=>c.value=u),title:t.id?"\u4FEE\u6539":"\u6302\u8F7D\u8D44\u6E90","close-on-click-modal":!1},{footer:r(()=>[m(R,{onClick:e[5]||(e[5]=u=>c.value=!1)},{default:r(()=>[Ee]),_:1}),m(R,{type:"primary",onClick:e[6]||(e[6]=u=>N())},{default:r(()=>[Fe]),_:1})]),default:r(()=>[m(G,{ref_key:"dataFormRef",ref:v,model:t,rules:q.value,"label-width":"100px",onKeyup:e[4]||(e[4]=Q(u=>N(),["enter"]))},{default:r(()=>[m(p,{label:"\u8D44\u6E90\u7C7B\u578B",prop:"mountType","label-width":"auto"},{default:r(()=>[m(b,{modelValue:t.mountType,"onUpdate:modelValue":e[0]||(e[0]=u=>t.mountType=u),"dict-type":"mount_type",placeholder:"\u8D44\u6E90\u7C7B\u578B",clearable:""},null,8,["modelValue"])]),_:1}),t.mountType==1?(l(),k(p,{key:0,label:"\u9009\u62E9\u5E93\u8868",prop:"treeId","label-width":"auto"},{default:r(()=>[m(E,{"check-strictly":"",modelValue:t.treeId,"onUpdate:modelValue":e[1]||(e[1]=u=>t.treeId=u),props:C,data:D.value,filterable:"",clearable:"",onNodeClick:j},{default:r(({node:u,data:a})=>[d("div",null,[d("span",null,[a.icon=="/src/assets/folder.png"?(l(),i("img",me)):s("",!0),a.icon=="/src/assets/database.png"?(l(),i("img",de)):s("",!0),a.icon=="/src/assets/table.png"?(l(),i("img",pe)):s("",!0),a.icon=="/src/assets/column.png"?(l(),i("img",fe)):s("",!0),a.icon=="/src/assets/model.png"?(l(),i("img",ce)):s("",!0),d("span",_e,h(a.name)+"\u2003"+h(a.code),1)])])]),_:1},8,["modelValue","data"])]),_:1})):s("",!0),t.mountType==2?(l(),k(p,{key:1,label:"\u9009\u62E9api",prop:"treeId","label-width":"auto"},{default:r(()=>[m(E,{modelValue:t.treeId,"onUpdate:modelValue":e[2]||(e[2]=u=>t.treeId=u),props:C,data:T.value,filterable:"",clearable:"",onNodeClick:H},{default:r(({node:u,data:a})=>[d("div",null,[d("span",null,[a.type=="1"?(l(),i("img",ye)):s("",!0),a.type=="2"?(l(),i("img",ve)):s("",!0),d("span",ge,h(a.name),1)])])]),_:1},8,["modelValue","data"])]),_:1})):s("",!0),t.mountType==3?(l(),k(p,{key:2,label:"\u9009\u62E9\u6587\u4EF6",prop:"treeId","label-width":"auto"},{default:r(()=>[m(E,{modelValue:t.treeId,"onUpdate:modelValue":e[3]||(e[3]=u=>t.treeId=u),props:C,data:B.value,filterable:"",clearable:"",onNodeClick:K},{default:r(({node:u,data:a})=>[d("div",null,[d("span",null,[a.type=="0"?(l(),i("img",be)):s("",!0),a.type=="1"?(l(),i("img",ke)):s("",!0),d("span",he,h(a.name),1)])])]),_:1},8,["modelValue","data"])]),_:1})):s("",!0)]),_:1},8,["model","rules"]),t.mountType==2&&g.value?(l(),i("div",Ce,[m(re,{ref_key:"apiMountRef",ref:M},null,512)])):s("",!0),t.mountType==3&&I.value?(l(),i("div",Ie,[m(ie,{ref_key:"fileMountRef",ref:w},null,512)])):s("",!0)]),_:1},8,["modelValue","title"])}}});export{qe as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.0374c52c.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.0374c52c.js new file mode 100644 index 0000000..34ed174 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.0374c52c.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.e5899b02.js";import{_ as t}from"./add-or-update.vue_vue_type_script_setup_true_lang.e5899b02.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.0a155606.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.0a155606.js new file mode 100644 index 0000000..f1d6826 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.0a155606.js @@ -0,0 +1 @@ 
+import"./add-or-update.vue_vue_type_script_setup_true_lang.d8c4f684.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.d8c4f684.js";import"./index.e3896b23.js";import"./post.de075824.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.24fc7dfb.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.24fc7dfb.js new file mode 100644 index 0000000..2c66b14 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.24fc7dfb.js @@ -0,0 +1 @@ +import{d as mu,h as f,Y as bu,L as cu,r as c,o as r,f as F,w as a,H as n,b as e,a7 as w,a8 as N,a as o,c as D,e as h,F as M,l as p,t as i,a2 as Bu,E as U,z as Du,A as Cu,_ as gu}from"./index.e3896b23.js";import{u as fu,p as _u,a as Au,b as vu}from"./access.ed3b6ac4.js";import{c as yu,h as K}from"./database.32bfd96d.js";const _=S=>(Du("data-v-9c325ed7"),S=S(),Cu(),S),Vu=_(()=>o("br",null,null,-1)),ku=_(()=>o("br",null,null,-1)),wu=_(()=>o("div",{class:"tip-content"},[o("p",null,"\u8BF4\u660E\uFF1A"),o("br"),o("p",null," (1) \u652F\u6301\u6B63\u5219\u8868\u8FBE\u5F0F\u5339\u914D\uFF0C\u4E5F\u53EF\u76F4\u63A5\u586B\u5199\u9700\u8981\u6620\u5C04\u7684\u8868\u540D\u3002 "),o("br"),o("p",null," (2) \u5F53\u8868\u540D\u6620\u5C04\u89C4\u5219\u4E3A\u7A7A\u65F6\uFF0C\u82E5\u63A5\u5165\u65B9\u5F0F\u4E3A\u63A5\u5165\u5230ods\u5C42\uFF0C\u4F1A\u81EA\u52A8\u7ED9\u8868\u540D\u6DFB\u52A0ods_\u524D\u7F00\uFF0C\u5426\u5219\u4EE3\u8868\u76EE\u6807\u8868\u540D\u4E0E\u6E90\u8868\u540D\u7684\u540D\u79F0\u76F8\u540C; \u4E0D\u4E3A\u7A7A\u65F6\uFF0C\u82E5\u63A5\u5165\u65B9\u5F0F\u4E3A\u63A5\u5165\u5230ods\u5C42\uFF0C\u4F1A\u81EA\u52A8\u7ED9\u6620\u5C04\u7684\u8868\u540D\u6DFB\u52A0ods_\u524D\u7F00\uFF0C\u5426\u5219\u4E0E\u6620\u5C04\u7684\u8868\u540D\u4E00\u81F4\u3002"),o("br"),o("p",null," (3) \u5F53\u5B57\u6BB5\u540D\u6620\u5C04\u89C4\u5219\u8BB0\u5F55\u4E3A\u7A7A\u65F6\uFF0C\u4EE3\u8868\u76EE\u6807\u8868\u7684\u5B57\u6BB5\u540D\u4E0E\u6E90\u8868\u540D\u7684\u5B57\u6BB5\u540D\u76F8\u540C\uFF1B\u4E0D\u4E3A\u7A7A\u65F6\uFF0C\u4E0E\u6620\u5C04\u7684\u5B57\u6BB5\u540D\u4E00\u81F4\u3002"),o("br"),o("p",null," (4) \u82E5\u4E0D\u60F3\u540C\u6B65\u67D0\u4E2A\u5B57\u6BB5\uFF0C\u586B\u5199\u6E90\u7AEF\u5B57\u6BB5\u4E4B\u540E\uFF0C\u76EE\u6807\u5B57\u6BB5\u540D\u6620\u5C04\u7559\u7A7A\u5373\u53EF\u3002 ")],-1)),Nu=_(()=>o("br",null,null,-1)),Tu=p("\u6DFB\u52A0\u8868\u540D\u6620\u5C04"),hu=p("\u9884\u89C8\u8868\u540D\u6620\u5C04"),Mu=_(()=>o("span",null,'\u8BF7\u70B9\u51FB"\u6DFB\u52A0\u8868\u540D\u6620\u5C04"\u6309\u94AE\u6DFB\u52A0\u8868\u540D\u6620\u5C04\u5173\u7CFB\u8BB0\u5F55',-1)),Iu=p("\u5220\u9664"),Uu=p("\u6DFB\u52A0\u5B57\u6BB5\u540D\u6620\u5C04"),xu=p("\u9884\u89C8\u5B57\u6BB5\u540D\u6620\u5C04"),Su=_(()=>o("span",null,'\u8BF7\u70B9\u51FB"\u6DFB\u52A0\u5B57\u6BB5\u540D\u6620\u5C04"\u6309\u94AE\u6DFB\u52A0\u5B57\u6BB5\u540D\u6620\u5C04\u5173\u7CFB\u8BB0\u5F55',-1)),Lu=p("\u5220\u9664"),Ou={key:0},zu={key:1},qu={key:2},$u={key:0},Pu={key:1},Ru=_(()=>o("b",null,"\u6240\u6709\u7269\u7406\u8868",-1)),ju=[Ru],Hu={key:0,class:"name-mapper-table"},Ju=_(()=>o("tr",null,[o("th",null,"\u8868\u540D\u5339\u914D\u7684\u6B63\u5219\u540D"),o("th",null,"\u66FF\u6362\u7684\u76EE\u6807\u503C")],-1)),Ku={key:0,class:"name-mapper-table"},Qu=_(()=>o("tr",null,[o("th",null,"\u5B57\u6BB5\u540D\u5339\u914D\u7684\u6B63\u5219\u540D"),o("th",null,"\u66FF\u6362\u7684\u76EE\u6807\u503C")],-1)),Yu=p(" \u4E0A\u4E00\u6B65 "),Gu=p(" \u4E0B\u4E00\u6B65 "),Wu=p(" \u63D0\u4EA4 
"),Xu=_(()=>o("br",null,null,-1)),Zu=_(()=>o("br",null,null,-1)),ue=mu({__name:"add-or-update",emits:["refreshDataList"],setup(S,{expose:Q,emit:Y}){const L=f(!1),O=f(),C=f(1),u=bu({taskName:"",description:"",projectId:"",sourceDatabaseId:"",sourceDatabase:{},taskType:"",cron:"",databases:[],includeOrExclude:1,sourceTables:[],sourceSelectedTables:[],accessMode:1,targetDatabaseId:"",targetDatabase:{},targetOnlyCreate:!1,targetSyncExit:!0,targetDropTable:!0,targetDataSync:!1,targetIndexCreate:!1,targetLowerCase:!1,targetUpperCase:!1,targetAutoIncrement:!1,batchSize:1e3,tableNameMapper:[],columnNameMapper:[]});f(),cu(()=>{yu().then(d=>{u.databases=d.data})});const G=d=>{L.value=!0,u.id="",O.value&&O.value.resetFields(),d&&W(d)},W=d=>{fu(d).then(t=>{Object.assign(u,t.data),u.sourceDatabaseId&&(u.sourceDatabase=u.databases.find(B=>B.id==u.sourceDatabaseId),K(u.sourceDatabaseId).then(B=>{u.sourceTables=B.data})),u.targetDatabaseId&&(u.targetDatabase=u.databases.find(B=>B.id==u.targetDatabaseId))})},X=f({taskName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],sourceDatabaseId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],taskType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],cron:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],includeOrExclude:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],accessMode:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetOnlyCreate:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetSyncExit:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetDropTable:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetDataSync:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetIndexCreate:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetLowerCase:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetUpperCase:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetDatabaseId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],targetAutoIncrement:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],batchSize:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),Z=()=>{C.value--<2&&(C.value=1)},uu=()=>{C.value++>4&&(C.value=5)},eu=d=>{!d||(u.sourceDatabase=u.databases.find(t=>t.id==d),u.sourceSelectedTables=[],K(d).then(t=>{u.sourceTables=t.data}))},au=d=>{!d||(u.targetDatabase=u.databases.find(t=>t.id==d))},lu=()=>{u.tableNameMapper.push({fromPattern:"",toValue:""})},tu=d=>{u.tableNameMapper.splice(d,1)},ru=()=>{u.columnNameMapper.push({fromPattern:"",toValue:""})},ou=d=>{u.columnNameMapper.splice(d,1)},j=f([]),q=f(!1),su=()=>{if(!u.sourceDatabaseId){U({message:"\u8BF7\u9009\u62E9\u3010\u6E90\u7AEF\u6570\u636E\u5E93\u3011\uFF01",type:"warning"});return}_u(JSON.stringify({sourceDatabaseId:u.sourceDatabaseId,includeOrExclude:u.includeOrExclude,sourceSelectedTables:u.sourceSelectedTables,tableNameMapper:u.tableNameMapper,targetLowerCase:u.accessMode=="1"?!0:u.targetLowerCase,targetUpperCase:u.accessMode=="1"?!1:u.targetUpperCase,tablePrefix:u.accessMode=="1"?"ods_":""})).then(d=>{j.value=d.data,q.value=!0})},I=f([]
),x=f(""),$=f([]),P=f(!1),du=()=>{if(!u.sourceDatabaseId){U({message:"\u8BF7\u9009\u62E9\u3010\u6E90\u7AEF\u6570\u636E\u5E93\u3011\uFF01",type:"warning"});return}if(u.includeOrExclude=="1")u.sourceSelectedTables.length==0?I.value=u.sourceTables.map(B=>B.tableName):I.value=u.sourceSelectedTables;else{I.value=u.sourceTables.map(B=>B.tableName);for(var d=0;d{if(B==t)return I.value.splice(R,1),!0})}}x.value="",$.value=[],P.value=!0},nu=()=>{if(!u.sourceDatabaseId){U({message:"\u8BF7\u9009\u62E9\u3010\u6E90\u7AEF\u6570\u636E\u5E93\u3011\uFF01",type:"warning"});return}if(!x.value){U({message:"\u8BF7\u9009\u62E9\u4E00\u4E2A\u8868\u540D\uFF01",type:"warning"});return}Au(JSON.stringify({sourceDatabaseId:u.sourceDatabaseId,includeOrExclude:u.includeOrExclude,preiveTableName:x.value,columnNameMapper:u.columnNameMapper,targetLowerCase:u.accessMode=="1"?!0:u.targetLowerCase,targetUpperCase:u.accessMode=="1"?!1:u.targetUpperCase,tablePrefix:u.accessMode=="1"?"ods_":""})).then(d=>{$.value=d.data})},H=()=>{O.value.validate(d=>{if(!d)return U({message:"\u8BF7\u5C06\u6BCF\u4E00\u6B65\u7684\u53C2\u6570\u5FC5\u586B\u9879\u586B\u5199\u5B8C\u6BD5\u540E\u518D\u63D0\u4EA4\uFF01",type:"warning"}),!1;u.taskType!="3"&&(u.cron=""),u.accessMode=="1"&&(u.targetDatabaseId="",u.targetLowerCase=!0,u.targetDatabase={}),vu(u).then(()=>{U.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{L.value=!1,Y("refreshDataList")}})})})};return Q({init:G}),(d,t)=>{const B=c("el-step"),R=c("el-steps"),T=c("el-input"),m=c("el-form-item"),pu=c("fast-radio-group"),s=c("el-option"),g=c("el-select"),y=c("QuestionFilled"),V=c("el-icon"),k=c("el-tooltip"),v=c("el-button"),A=c("el-table-column"),z=c("el-table"),E=c("el-descriptions-item"),Fu=c("el-descriptions"),iu=c("el-form"),J=c("el-dialog"),Eu=c("el-drawer");return 
r(),F(Eu,{modelValue:L.value,"onUpdate:modelValue":t[29]||(t[29]=l=>L.value=l),title:u.id?"\u4FEE\u6539":"\u65B0\u589E",size:"100%"},{footer:a(()=>[C.value>1?(r(),F(v,{key:0,round:"",size:"large",onClick:t[23]||(t[23]=l=>Z())},{default:a(()=>[Yu]),_:1})):n("",!0),C.value>0&&C.value<5?(r(),F(v,{key:1,round:"",size:"large",onClick:t[24]||(t[24]=l=>uu())},{default:a(()=>[Gu]),_:1})):n("",!0),C.value==5?(r(),F(v,{key:2,type:"primary",round:"",size:"large",onClick:t[25]||(t[25]=l=>H())},{default:a(()=>[Wu]),_:1})):n("",!0)]),default:a(()=>[Vu,e(R,{active:C.value},{default:a(()=>[e(B,{title:"\u57FA\u672C\u4FE1\u606F\u914D\u7F6E"}),e(B,{title:"\u540C\u6B65\u6E90\u7AEF\u914D\u7F6E"}),e(B,{title:"\u76EE\u6807\u7AEF(ods)\u914D\u7F6E"}),e(B,{title:"\u6620\u5C04\u8F6C\u6362\u914D\u7F6E"}),e(B,{title:"\u914D\u7F6E\u786E\u8BA4\u63D0\u4EA4"})]),_:1},8,["active"]),ku,e(iu,{ref_key:"dataFormRef",ref:O,model:u,rules:X.value,"label-width":"100px",onKeyup:t[22]||(t[22]=Bu(l=>H(),["enter"]))},{default:a(()=>[w(o("div",null,[e(m,{label:"\u4EFB\u52A1\u540D\u79F0",prop:"taskName"},{default:a(()=>[e(T,{modelValue:u.taskName,"onUpdate:modelValue":t[0]||(t[0]=l=>u.taskName=l),placeholder:"\u4EFB\u52A1\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e(m,{label:"\u63CF\u8FF0",prop:"description"},{default:a(()=>[e(T,{modelValue:u.description,"onUpdate:modelValue":t[1]||(t[1]=l=>u.description=l),placeholder:"\u63CF\u8FF0"},null,8,["modelValue"])]),_:1}),e(m,{label:"\u8C03\u5EA6\u7C7B\u578B",prop:"taskType"},{default:a(()=>[e(pu,{modelValue:u.taskType,"onUpdate:modelValue":t[2]||(t[2]=l=>u.taskType=l),"dict-type":"task_type"},null,8,["modelValue"])]),_:1}),u.taskType=="3"?(r(),F(m,{key:0,label:"cron\u8868\u8FBE\u5F0F",prop:"cron"},{default:a(()=>[e(T,{modelValue:u.cron,"onUpdate:modelValue":t[3]||(t[3]=l=>u.cron=l),placeholder:"cron\u8868\u8FBE\u5F0F"},null,8,["modelValue"])]),_:1})):n("",!0)],512),[[N,C.value==1]]),w(o("div",null,[e(m,{label:"\u6E90\u7AEF\u6570\u636E\u5E93",prop:"sourceDatabaseId"},{default:a(()=>[e(g,{modelValue:u.sourceDatabaseId,"onUpdate:modelValue":t[4]||(t[4]=l=>u.sourceDatabaseId=l),onChange:eu,clearable:"",filterable:"",placeholder:"\u8BF7\u9009\u62E9"},{default:a(()=>[(r(!0),D(M,null,h(u.databases,(l,b)=>(r(),F(s,{key:l.id,label:`[${l.id}]${l.name}`,value:l.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1}),e(m,{label:"\u914D\u7F6E\u65B9\u5F0F",prop:"includeOrExclude"},{default:a(()=>[e(g,{placeholder:"\u8BF7\u9009\u62E9\u914D\u7F6E\u65B9\u5F0F",modelValue:u.includeOrExclude,"onUpdate:modelValue":t[5]||(t[5]=l=>u.includeOrExclude=l)},{default:a(()=>[e(s,{label:"\u5305\u542B\u8868",value:1}),e(s,{label:"\u6392\u9664\u8868",value:0})]),_:1},8,["modelValue"])]),_:1}),e(m,{label:"\u8868\u540D\u914D\u7F6E",prop:"sourceSelectedTables"},{default:a(()=>[e(g,{placeholder:"\u8BF7\u9009\u62E9\u8868\u540D",multiple:"",filterable:"",clearable:"",modelValue:u.sourceSelectedTables,"onUpdate:modelValue":t[6]||(t[6]=l=>u.sourceSelectedTables=l)},{default:a(()=>[(r(!0),D(M,null,h(u.sourceTables,(l,b)=>(r(),F(s,{key:l.tableName,label:l.tableName,value:l.tableName},null,8,["label","value"]))),128))]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u5F53\u4E3A\u5305\u542B\u8868\u65F6\uFF0C\u9009\u62E9\u6240\u8981\u7CBE\u786E\u5305\u542B\u7684\u8868\u540D\uFF0C\u5982\u679C\u4E0D\u9009\u5219\u4EE3\u8868\u9009\u62E9\u6240\u6709\uFF1B\u5F53\u4E3A\u6392\u9664\u8868\u65F6\uFF0C\u9009\u62E9\u9700\u8981\u7CBE\u786E\u6392\u9664\u7684\u8868\u540D\u3002"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_
:1})]),_:1})]),_:1})],512),[[N,C.value==2]]),w(o("div",null,[e(m,{label:"\u63A5\u5165\u65B9\u5F0F","label-width":"auto",prop:"accessMode"},{default:a(()=>[e(g,{placeholder:"\u8BF7\u9009\u62E9\u63A5\u5165\u65B9\u5F0F",modelValue:u.accessMode,"onUpdate:modelValue":t[7]||(t[7]=l=>u.accessMode=l)},{default:a(()=>[e(s,{label:"\u63A5\u5165\u5230ods\u5C42",value:1}),e(s,{label:"\u81EA\u5B9A\u4E49\u63A5\u5165",value:2})]),_:1},8,["modelValue"])]),_:1}),u.accessMode=="2"?(r(),F(m,{key:0,label:"\u63A5\u5165\u6570\u636E\u5E93","label-width":"auto",prop:"targetDatabaseId"},{default:a(()=>[e(g,{modelValue:u.targetDatabaseId,"onUpdate:modelValue":t[8]||(t[8]=l=>u.targetDatabaseId=l),onChange:au,clearable:"",filterable:"",placeholder:"\u8BF7\u9009\u62E9"},{default:a(()=>[(r(!0),D(M,null,h(u.databases,(l,b)=>(r(),F(s,{key:l.id,label:`[${l.id}]${l.name}`,value:l.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1})):n("",!0),e(m,{label:"\u53EA\u521B\u5EFA\u8868","label-width":"auto",prop:"targetOnlyCreate"},{default:a(()=>[e(g,{modelValue:u.targetOnlyCreate,"onUpdate:modelValue":t[9]||(t[9]=l=>u.targetOnlyCreate=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u53EA\u5728\u76EE\u6807\u7AEF\u521B\u5EFA\u8868\uFF0C\u4E0D\u540C\u6B65\u6570\u636E\u5185\u5BB9\u3002"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1}),e(m,{label:"\u662F\u5426\u540C\u6B65\u5DF2\u5B58\u5728\u7684\u8868","label-width":"auto",prop:"targetSyncExit"},{default:a(()=>[e(g,{modelValue:u.targetSyncExit,"onUpdate:modelValue":t[10]||(t[10]=l=>u.targetSyncExit=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"])]),_:1}),e(m,{label:"\u540C\u6B65\u524D\u662F\u5426\u5220\u9664\u76EE\u7684\u8868","label-width":"auto",prop:"targetDropTable"},{default:a(()=>[e(g,{modelValue:u.targetDropTable,"onUpdate:modelValue":t[11]||(t[11]=l=>u.targetDropTable=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"])]),_:1}),u.targetDropTable?n("",!0):(r(),F(m,{key:1,label:"\u662F\u5426\u542F\u7528\u589E\u91CF\u53D8\u66F4\u540C\u6B65","label-width":"auto",prop:"targetDataSync"},{default:a(()=>[e(g,{modelValue:u.targetDataSync,"onUpdate:modelValue":t[12]||(t[12]=l=>u.targetDataSync=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u8868\u4E0D\u5B58\u5728\u65F6\u4F1A\u81EA\u52A8\u5EFA\u8868"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1})),e(m,{label:"\u662F\u5426\u540C\u6B65\u7D22\u5F15","label-width":"auto",prop:"targetIndexCreate"},{default:a(()=>[e(g,{modelValue:u.targetIndexCreate,"onUpdate:modelValue":t[13]||(t[13]=l=>u.targetIndexCreate=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u4EC5\u751F\u6548\u4E8E\u90E8\u5206\u6570\u636E\u5E93,\u8868\u4E0D\u5B58\u5728\u65F6\u5EFA\u8868\u65F6\u624D\u751F\u6548"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1}),!u.targetUpperCase&&u.accessMode=="2"?(r(),F(m,{key:2,label:"\u662F\u5426\u8868\u540D\u5B57\u6BB5\u540D\u8F6C\u5C0F\u5199","label-width":"auto",prop:"targetLowerCase"},{default:a(()=>[e(g,{modelValue:u.targetLowerCase,"onUpdate:modelValue":t[14]||(t[14]=l=>u.targetLowerCase=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["m
odelValue"]),e(k,{placement:"top",content:"\u8868\u4E0D\u5B58\u5728\u65F6\u5EFA\u8868\u65F6\u624D\u751F\u6548"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1})):n("",!0),!u.targetLowerCase&&u.accessMode=="2"?(r(),F(m,{key:3,label:"\u662F\u5426\u8868\u540D\u5B57\u6BB5\u540D\u8F6C\u5927\u5199","label-width":"auto",prop:"targetUpperCase"},{default:a(()=>[e(g,{modelValue:u.targetUpperCase,"onUpdate:modelValue":t[15]||(t[15]=l=>u.targetUpperCase=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u8868\u4E0D\u5B58\u5728\u65F6\u5EFA\u8868\u65F6\u624D\u751F\u6548"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1})):n("",!0),e(m,{label:"\u662F\u5426\u4E3B\u952E\u81EA\u52A8\u9012\u589E","label-width":"auto",prop:"targetAutoIncrement"},{default:a(()=>[e(g,{modelValue:u.targetAutoIncrement,"onUpdate:modelValue":t[16]||(t[16]=l=>u.targetAutoIncrement=l)},{default:a(()=>[e(s,{label:"\u662F",value:!0}),e(s,{label:"\u5426",value:!1})]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u4EC5\u751F\u6548\u4E8E\u90E8\u5206\u6570\u636E\u5E93,\u8868\u4E0D\u5B58\u5728\u65F6\u751F\u6548"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1}),e(m,{label:"\u6570\u636E\u5904\u7406\u6279\u6B21\u5927\u5C0F","label-width":"auto",prop:"batchSize"},{default:a(()=>[e(g,{modelValue:u.batchSize,"onUpdate:modelValue":t[17]||(t[17]=l=>u.batchSize=l)},{default:a(()=>[e(s,{label:"1000",value:1e3}),e(s,{label:"5000",value:5e3}),e(s,{label:"10000",value:1e4}),e(s,{label:"50000",value:5e4}),e(s,{label:"100000",value:1e5})]),_:1},8,["modelValue"]),e(k,{placement:"top",content:"\u6570\u636E\u540C\u6B65\u65F6\u5355\u4E2A\u6279\u6B21\u5904\u7406\u7684\u884C\u8BB0\u5F55\u603B\u6570\uFF0C\u8BE5\u503C\u8D8A\u5927\u8D8A\u5360\u7528\u5185\u5B58\u7A7A\u95F4\u3002\u5EFA\u8BAE\uFF1A\u5C0F\u5B57\u6BB5\u8868\u8BBE\u7F6E\u4E3A10000\uFF0C\u5927\u5B57\u6BB5\u8868\u8BBE\u7F6E\u4E3A1000"},{default:a(()=>[e(V,null,{default:a(()=>[e(y)]),_:1})]),_:1})]),_:1})],512),[[N,C.value==3]]),w(o("div",null,[wu,Nu,e(v,{type:"success",onClick:t[18]||(t[18]=l=>lu()),round:""},{default:a(()=>[Tu]),_:1}),e(v,{type:"primary",onClick:t[19]||(t[19]=l=>su()),round:""},{default:a(()=>[hu]),_:1}),e(z,{data:u.tableNameMapper,size:"small",border:"",height:"200",style:{width:"90%","margin-top":"15px"}},{empty:a(()=>[Mu]),default:a(()=>[e(A,{label:"\u8868\u540D\u5339\u914D\u7684\u6B63\u5219\u540D",width:"320"},{default:a(l=>[e(T,{modelValue:l.row.fromPattern,"onUpdate:modelValue":b=>l.row.fromPattern=b,type:"string"},null,8,["modelValue","onUpdate:modelValue"])]),_:1}),e(A,{label:"\u66FF\u6362\u7684\u76EE\u6807\u503C",width:"320"},{default:a(l=>[e(T,{modelValue:l.row.toValue,"onUpdate:modelValue":b=>l.row.toValue=b,type:"string"},null,8,["modelValue","onUpdate:modelValue"])]),_:1}),e(A,{label:"\u64CD\u4F5C",width:"220"},{default:a(l=>[e(v,{size:"small",type:"danger",onClick:b=>tu(l.$index)},{default:a(()=>[Iu]),_:2},1032,["onClick"])]),_:1})]),_:1},8,["data"]),e(v,{type:"success",onClick:t[20]||(t[20]=l=>ru()),round:""},{default:a(()=>[Uu]),_:1}),e(v,{type:"primary",onClick:t[21]||(t[21]=l=>du()),round:""},{default:a(()=>[xu]),_:1}),e(z,{data:u.columnNameMapper,size:"small",border:"",height:"200",style:{width:"90%","margin-top":"15px"}},{empty:a(()=>[Su]),default:a(()=>[e(A,{label:"\u5B57\u6BB5\u540D\u5339\u914D\u7684\u6B63\u5219\u540D",width:"320"},{default:a(l=>[e(T,{modelValue:l.row.fromPattern,"onUpdate:modelValue":b=>l.row.
fromPattern=b,type:"string"},null,8,["modelValue","onUpdate:modelValue"])]),_:1}),e(A,{label:"\u66FF\u6362\u7684\u76EE\u6807\u503C",width:"320"},{default:a(l=>[e(T,{modelValue:l.row.toValue,"onUpdate:modelValue":b=>l.row.toValue=b,type:"string"},null,8,["modelValue","onUpdate:modelValue"])]),_:1}),e(A,{label:"\u64CD\u4F5C",width:"220"},{default:a(l=>[e(v,{size:"small",type:"danger",onClick:b=>ou(l.$index)},{default:a(()=>[Lu]),_:2},1032,["onClick"])]),_:1})]),_:1},8,["data"])],512),[[N,C.value==4]]),w(o("div",null,[e(Fu,{size:"default",column:1,"label-class-name":"el-descriptions-item-label-class",border:""},{default:a(()=>[e(E,{label:"\u4EFB\u52A1\u540D\u79F0"},{default:a(()=>[p(i(u.taskName),1)]),_:1}),e(E,{label:"\u4EFB\u52A1\u63CF\u8FF0"},{default:a(()=>[p(i(u.description),1)]),_:1}),e(E,{label:"\u8C03\u5EA6\u7C7B\u578B"},{default:a(()=>[u.taskType=="1"?(r(),D("span",Ou," \u5B9E\u65F6\u540C\u6B65 ")):n("",!0),u.taskType=="2"?(r(),D("span",zu," \u4E00\u6B21\u6027\u5168\u91CF\u540C\u6B65 ")):n("",!0),u.taskType=="3"?(r(),D("span",qu," \u4E00\u6B21\u6027\u5168\u91CF\u5468\u671F\u6027\u589E\u91CF ")):n("",!0)]),_:1}),u.taskType=="3"?(r(),F(E,{key:0,label:"cron\u8868\u8FBE\u5F0F"},{default:a(()=>[p(i(u.cron),1)]),_:1})):n("",!0),e(E,{label:"\u6E90\u7AEF\u6570\u636E\u5E93"},{default:a(()=>[p("["+i(u.sourceDatabaseId)+"]"+i(u.sourceDatabase.name),1)]),_:1}),e(E,{label:"\u6E90\u7AEF\u8868\u9009\u62E9\u65B9\u5F0F"},{default:a(()=>[u.includeOrExclude=="1"?(r(),D("span",$u," \u5305\u542B\u8868 ")):n("",!0),u.includeOrExclude=="0"?(r(),D("span",Pu," \u6392\u9664\u8868 ")):n("",!0)]),_:1}),e(E,{label:"\u6E90\u7AEF\u8868\u540D\u5217\u8868"},{default:a(()=>[w(o("span",null,ju,512),[[N,u.includeOrExclude=="1"&&(!u.sourceSelectedTables||u.sourceSelectedTables.length==0)]]),(r(!0),D(M,null,h(u.sourceSelectedTables,l=>(r(),D("p",{key:l},i(l),1))),128))]),_:1}),u.accessMode=="2"?(r(),F(E,{key:1,label:"\u76EE\u5730\u7AEF\u6570\u636E\u6E90"},{default:a(()=>[p("["+i(u.targetDatabaseId)+"]"+i(u.targetDatabase.name),1)]),_:1})):n("",!0),e(E,{label:"\u53EA\u521B\u5EFA\u8868"},{default:a(()=>[p(i(u.targetOnlyCreate),1)]),_:1}),e(E,{label:"\u662F\u5426\u540C\u6B65\u5DF2\u5B58\u5728\u7684\u8868"},{default:a(()=>[p(i(u.targetSyncExit),1)]),_:1}),e(E,{label:"\u540C\u6B65\u524D\u662F\u5426\u5148\u5220\u9664\u76EE\u7684\u8868"},{default:a(()=>[p(i(u.targetDropTable),1)]),_:1}),u.targetDropTable?n("",!0):(r(),F(E,{key:2,label:"\u662F\u5426\u542F\u7528\u589E\u91CF\u53D8\u66F4\u540C\u6B65"},{default:a(()=>[p(i(u.targetDataSync),1)]),_:1})),u.targetDropTable?(r(),F(E,{key:3,label:"\u662F\u5426\u521B\u5EFA\u7D22\u5F15"},{default:a(()=>[p(i(u.targetIndexCreate),1)]),_:1})):n("",!0),u.targetDropTable?(r(),F(E,{key:4,label:"\u662F\u5426\u8868\u540D\u5B57\u6BB5\u540D\u8F6C\u5C0F\u5199"},{default:a(()=>[p(i(u.targetLowerCase),1)]),_:1})):n("",!0),u.targetDropTable?(r(),F(E,{key:5,label:"\u662F\u5426\u8868\u540D\u5B57\u6BB5\u540D\u8F6C\u5927\u5199"},{default:a(()=>[p(i(u.targetUpperCase),1)]),_:1})):n("",!0),u.targetDropTable?(r(),F(E,{key:6,label:"\u662F\u5426\u4E3B\u952E\u81EA\u52A8\u9012\u589E"},{default:a(()=>[p(i(u.targetAutoIncrement),1)]),_:1})):n("",!0),e(E,{label:"\u6570\u636E\u5904\u7406\u6279\u6B21\u91CF"},{default:a(()=>[p(i(u.batchSize),1)]),_:1}),e(E,{label:"\u8868\u540D\u6620\u5C04\u89C4\u5219"},{default:a(()=>[w(o("span",null,"[\u6620\u5C04\u5173\u7CFB\u4E3A\u7A7A]",512),[[N,u.tableNameMapper.length==0]]),u.tableNameMapper.length>0?(r(),D("table",Hu,[Ju,(r(!0),D(M,null,h(u.tableNameMapper,(l,b)=>(r(),D("tr",{key:b
},[o("td",null,i(l.fromPattern),1),o("td",null,i(l.toValue),1)]))),128))])):n("",!0)]),_:1}),e(E,{label:"\u5B57\u6BB5\u540D\u6620\u5C04\u89C4\u5219"},{default:a(()=>[w(o("span",null,"[\u6620\u5C04\u5173\u7CFB\u4E3A\u7A7A]",512),[[N,u.columnNameMapper.length==0]]),u.columnNameMapper.length>0?(r(),D("table",Ku,[Qu,(r(!0),D(M,null,h(u.columnNameMapper,(l,b)=>(r(),D("tr",{key:b},[o("td",null,i(l.fromPattern),1),o("td",null,i(l.toValue),1)]))),128))])):n("",!0)]),_:1})]),_:1})],512),[[N,C.value==5]])]),_:1},8,["model","rules"]),C.value==4?(r(),F(J,{key:0,title:"\u67E5\u770B\u8868\u540D\u6620\u5C04\u5173\u7CFB",modelValue:q.value,"onUpdate:modelValue":t[26]||(t[26]=l=>q.value=l)},{default:a(()=>[e(z,{"header-cell-style":{background:"#eef1f6",color:"#606266"},data:j.value,size:"small",border:""},{default:a(()=>[e(A,{prop:"originalName",label:"\u6E90\u7AEF\u8868\u540D","min-width":"20%"}),e(A,{prop:"targetName",label:"\u76EE\u6807\u8868\u540D","min-width":"20%"})]),_:1},8,["data"])]),_:1},8,["modelValue"])):n("",!0),C.value==4?(r(),F(J,{key:1,title:"\u67E5\u770B\u5B57\u6BB5\u6620\u5C04\u5173\u7CFB",modelValue:P.value,"onUpdate:modelValue":t[28]||(t[28]=l=>P.value=l)},{default:a(()=>[e(g,{onChange:nu,modelValue:x.value,"onUpdate:modelValue":t[27]||(t[27]=l=>x.value=l),clearable:"",filterable:"",placeholder:"\u8BF7\u9009\u62E9"},{default:a(()=>[(r(!0),D(M,null,h(I.value,(l,b)=>(r(),F(s,{key:b,label:l,value:l},null,8,["label","value"]))),128))]),_:1},8,["modelValue"]),Xu,Zu,e(z,{"header-cell-style":{background:"#eef1f6",color:"#606266"},data:$.value,size:"small",border:""},{default:a(()=>[e(A,{prop:"originalName",label:"\u539F\u59CB\u5B57\u6BB5\u540D","min-width":"20%"}),e(A,{prop:"targetName",label:"\u76EE\u6807\u8868\u5B57\u6BB5\u540D","min-width":"20%"})]),_:1},8,["data"])]),_:1},8,["modelValue"])):n("",!0)]),_:1},8,["modelValue","title"])}}});const te=gu(ue,[["__scopeId","data-v-9c325ed7"]]);export{te as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.31f826ac.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.31f826ac.js new file mode 100644 index 0000000..9fb2564 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.31f826ac.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_style_index_0_lang.1148389a.js";import{_ as d}from"./add-or-update.vue_vue_type_style_index_0_lang.1148389a.js";import"./index.e3896b23.js";import"./folder.ea536bf2.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./column.79595943.js";import"./model.45425835.js";import"./metamodel.a560a346.js";import"./metadata.0c954be9.js";export{d as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.3712f2ef.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.3712f2ef.js new file mode 100644 index 0000000..a9ecd41 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.3712f2ef.js @@ -0,0 +1 @@ +import{d as U,h as u,Y as $,r as n,o as b,f as A,w as l,b as r,aE as H,H as K,a as R,a2 as T,l as E,E as q,_ as M}from"./index.e3896b23.js";import{u as P,a as S,b as j}from"./orgs.0a892ff5.js";const I={class:"popover-pop-body"},Y=E("\u53D6\u6D88"),z=E("\u786E\u5B9A"),G=U({__name:"add-or-update",emits:["refreshDataList"],setup(J,{expose:C,emit:V}){const 
s=u(!1),m=u([]),c=u(),d=u(),_=u(),e=$({id:"",name:"",pid:"",parentName:"",sort:0}),F=o=>{s.value=!0,e.id="",d.value&&d.value.resetFields(),o?k(o):i(),y()},y=()=>P().then(o=>{m.value=o.data}),k=o=>{S(o).then(t=>{if(Object.assign(e,t.data),e.pid=="0")return i();c.value.setCurrentKey(e.pid)})},i=()=>{e.pid="0",e.parentName="\u4E00\u7EA7\u673A\u6784"},D=o=>{e.pid=o.id,e.parentName=o.name,_.value.hide()},N=u({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],parentName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),f=()=>{d.value.validate(o=>{if(!o)return!1;j(e).then(()=>{q.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{s.value=!1,V("refreshDataList")}})})})};return C({init:F}),(o,t)=>{const g=n("el-input"),p=n("el-form-item"),h=n("svg-icon"),x=n("el-tree"),L=n("el-popover"),O=n("el-input-number"),B=n("el-form"),v=n("el-button"),w=n("el-dialog");return b(),A(w,{modelValue:s.value,"onUpdate:modelValue":t[7]||(t[7]=a=>s.value=a),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,draggable:""},{footer:l(()=>[r(v,{onClick:t[5]||(t[5]=a=>s.value=!1)},{default:l(()=>[Y]),_:1}),r(v,{type:"primary",onClick:t[6]||(t[6]=a=>f())},{default:l(()=>[z]),_:1})]),default:l(()=>[r(B,{ref_key:"dataFormRef",ref:d,model:e,rules:N.value,"label-width":"120px",onKeyup:t[4]||(t[4]=T(a=>f(),["enter"]))},{default:l(()=>[r(p,{prop:"name",label:"\u540D\u79F0"},{default:l(()=>[r(g,{modelValue:e.name,"onUpdate:modelValue":t[0]||(t[0]=a=>e.name=a),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),r(p,{prop:"parentName",label:"\u4E0A\u7EA7\u673A\u6784",class:"org-list"},{default:l(()=>[r(L,{ref_key:"orgListPopover",ref:_,placement:"bottom-start",trigger:"click",width:400,"popper-class":"popover-pop"},{reference:l(()=>[r(g,{modelValue:e.parentName,"onUpdate:modelValue":t[2]||(t[2]=a=>e.parentName=a),readonly:!0,placeholder:"\u4E0A\u7EA7\u673A\u6784"},{suffix:l(()=>[e.pid!=="0"?(b(),A(h,{key:0,icon:"icon-close-circle",onClick:t[1]||(t[1]=H(a=>i(),["stop"]))})):K("",!0)]),_:1},8,["modelValue"])]),default:l(()=>[R("div",I,[r(x,{ref_key:"orgListTree",ref:c,data:m.value,props:{label:"name",children:"children"},"node-key":"id","highlight-current":!0,"expand-on-click-node":!1,accordion:"",onCurrentChange:D},null,8,["data"])])]),_:1},512)]),_:1}),r(p,{prop:"sort",label:"\u6392\u5E8F"},{default:l(()=>[r(O,{modelValue:e.sort,"onUpdate:modelValue":t[3]||(t[3]=a=>e.sort=a),"controls-position":"right",min:0,label:"\u6392\u5E8F"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});const X=M(G,[["__scopeId","data-v-90d2d3fe"]]);export{X as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.4a501d57.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.4a501d57.js new file mode 100644 index 0000000..cec8b37 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.4a501d57.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.763f60cc.js";import{_ as l}from"./add-or-update.vue_vue_type_script_setup_true_lang.763f60cc.js";import"./index.e3896b23.js";import"./folder.ea536bf2.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./column.79595943.js";import"./model.45425835.js";import"./database.32bfd96d.js";import"./metadataCollect.47d67406.js";import"./metadata.0c954be9.js";export{l as default}; diff --git 
a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.5cab3e60.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.5cab3e60.js new file mode 100644 index 0000000..9d6ba06 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.5cab3e60.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.c8657069.js";import{_ as c}from"./add-or-update.vue_vue_type_script_setup_true_lang.c8657069.js";import"./index.e3896b23.js";import"./folder.ea536bf2.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./column.79595943.js";import"./model.45425835.js";import"./metadata.0c954be9.js";export{c as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.6d328016.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.6d328016.js new file mode 100644 index 0000000..27b3e68 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.6d328016.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.ae2729f2.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.ae2729f2.js";import"./index.e3896b23.js";import"./metamodel.a560a346.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.6fa912a0.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.6fa912a0.js new file mode 100644 index 0000000..2dcf9cf --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.6fa912a0.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.0eb89f0d.js";import{_ as t}from"./add-or-update.vue_vue_type_script_setup_true_lang.0eb89f0d.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.7b508ca5.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.7b508ca5.js new file mode 100644 index 0000000..701b05d --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.7b508ca5.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.8d00c1b3.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.8d00c1b3.js";import"./index.e3896b23.js";import"./catalog.f6d809a5.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.7b961085.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.7b961085.js new file mode 100644 index 0000000..e0beccf --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.7b961085.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.150fc434.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.150fc434.js";import"./index.e3896b23.js";import"./cluster.85454835.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.83709828.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.83709828.js new file mode 100644 index 0000000..876e6b6 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.83709828.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.8494cd6a.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.8494cd6a.js";import"./index.e3896b23.js";import"./apiGroup.d1155eaa.js";export{i as default}; diff --git 
a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.88f61cfd.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.88f61cfd.js new file mode 100644 index 0000000..41553df --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.88f61cfd.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.bc99adba.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.bc99adba.js";import"./index.e3896b23.js";import"./fileCategory.cc511701.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.8b2c05ad.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.8b2c05ad.js new file mode 100644 index 0000000..65ca7b7 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.8b2c05ad.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js";import"./index.e3896b23.js";import"./database.32bfd96d.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.8fc8bffa.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.8fc8bffa.js new file mode 100644 index 0000000..72a9121 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.8fc8bffa.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.c217bb4d.js";import{_}from"./add-or-update.vue_vue_type_script_setup_true_lang.c217bb4d.js";import"./index.e3896b23.js";import"./orgs.0a892ff5.js";import"./post.de075824.js";import"./role.44f4fe5e.js";export{_ as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.996b6c7f.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.996b6c7f.js new file mode 100644 index 0000000..83f3280 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.996b6c7f.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.a947df21.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.a947df21.js";import"./index.e3896b23.js";import"./constant.71632d98.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.9afbce90.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.9afbce90.js new file mode 100644 index 0000000..044c01f --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.9afbce90.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.d627c8a1.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.d627c8a1.js";import"./index.e3896b23.js";import"./clusterConfiguration.e495cab8.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.a615f121.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.a615f121.js new file mode 100644 index 0000000..e00ac1a --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.a615f121.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.aceeddd9.js";import{_ as t}from"./add-or-update.vue_vue_type_script_setup_true_lang.aceeddd9.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.b2587438.js 
b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.b2587438.js new file mode 100644 index 0000000..b79d384 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.b2587438.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.52b8e47c.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.52b8e47c.js";import"./index.e3896b23.js";import"./role.44f4fe5e.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.ba397537.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.ba397537.js new file mode 100644 index 0000000..e293aa3 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.ba397537.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.7504fbe3.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.7504fbe3.js";import"./index.e3896b23.js";import"./sms.f04d455c.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.beb416c0.css b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.beb416c0.css new file mode 100644 index 0000000..f5d94e5 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.beb416c0.css @@ -0,0 +1 @@ +.tip-content[data-v-9c325ed7]{font-size:14px} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.bef31170.css b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.bef31170.css new file mode 100644 index 0000000..e1f0dd5 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.bef31170.css @@ -0,0 +1 @@ +.org-list[data-v-90d2d3fe] .el-input__inner,.org-list[data-v-90d2d3fe] .el-input__suffix{cursor:pointer} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.c251d9dd.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.c251d9dd.js new file mode 100644 index 0000000..2cef144 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.c251d9dd.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.2b4df6e3.js";import{_ as t}from"./add-or-update.vue_vue_type_script_setup_true_lang.2b4df6e3.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.c90de92f.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.c90de92f.js new file mode 100644 index 0000000..fd74c1e --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.c90de92f.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.10132297.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.10132297.js";import"./index.e3896b23.js";import"./app.22c193c2.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.d1a4ef1f.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.d1a4ef1f.js new file mode 100644 index 0000000..4b40396 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.d1a4ef1f.js @@ -0,0 +1 @@ +import{d as q,h as r,Y as I,r as n,o as d,f as p,w as o,b as u,aE as j,H as y,a as g,c as z,e as G,n as Y,F as J,a2 as Q,l as i,ar as W,aF as X,aG as Z,aH as ee,E as te,_ as le}from"./index.e3896b23.js";const 
oe=i("\u83DC\u5355"),ue=i("\u6309\u94AE"),ae=i("\u63A5\u53E3"),ne=i("\u5185\u90E8\u6253\u5F00"),se=i("\u5916\u90E8\u6253\u5F00"),re={class:"mod__menu-icon-inner"},de={class:"mod__menu-icon-list"},ie=i("\u53D6\u6D88"),pe=i("\u786E\u5B9A"),me=q({__name:"add-or-update",emits:["refreshDataList"],setup(_e,{expose:N,emit:U}){const m=r(!1),F=r([]),V=r([]),C=r(),f=r(),E=r(),k=r(),e=I({id:"",type:0,name:"",pid:"0",parentName:"",url:"",authority:"",sort:0,icon:"",openStyle:0}),x=a=>{m.value=!0,e.id="",f.value&&f.value.resetFields(),a?M(a):v(),A(),V.value=W()},w=()=>{A(),v()},A=()=>X(0).then(a=>{F.value=a.data}),M=a=>{Z(a).then(t=>{if(Object.assign(e,t.data),e.pid=="0")return v();C.value.setCurrentKey(e.pid)})},v=()=>{e.pid="0",e.parentName="\u4E00\u7EA7\u83DC\u5355"},S=a=>{e.pid=a.id,e.parentName=a.name,E.value.hide()},$=a=>{e.icon=a,k.value.hide()},H=r({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],parentName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),D=()=>{f.value.validate(a=>{if(!a)return!1;ee(e).then(()=>{te.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{m.value=!1,U("refreshDataList")}})})})};return N({init:x}),(a,t)=>{const _=n("el-radio"),h=n("el-radio-group"),s=n("el-form-item"),c=n("el-input"),L=n("svg-icon"),P=n("el-tree"),B=n("el-popover"),T=n("el-input-number"),b=n("el-button"),K=n("el-form"),O=n("el-dialog");return d(),p(O,{modelValue:m.value,"onUpdate:modelValue":t[13]||(t[13]=l=>m.value=l),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,draggable:""},{footer:o(()=>[u(b,{onClick:t[11]||(t[11]=l=>m.value=!1)},{default:o(()=>[ie]),_:1}),u(b,{type:"primary",onClick:t[12]||(t[12]=l=>D())},{default:o(()=>[pe]),_:1})]),default:o(()=>[u(K,{ref_key:"dataFormRef",ref:f,model:e,rules:H.value,"label-width":"120px",onKeyup:t[10]||(t[10]=Q(l=>D(),["enter"]))},{default:o(()=>[u(s,{prop:"type",label:"\u7C7B\u578B"},{default:o(()=>[u(h,{modelValue:e.type,"onUpdate:modelValue":t[0]||(t[0]=l=>e.type=l),disabled:!!e.id,onChange:t[1]||(t[1]=l=>w())},{default:o(()=>[u(_,{label:0},{default:o(()=>[oe]),_:1}),u(_,{label:1},{default:o(()=>[ue]),_:1}),u(_,{label:2},{default:o(()=>[ae]),_:1})]),_:1},8,["modelValue","disabled"])]),_:1}),u(s,{prop:"name",label:"\u540D\u79F0"},{default:o(()=>[u(c,{modelValue:e.name,"onUpdate:modelValue":t[2]||(t[2]=l=>e.name=l),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),u(s,{prop:"parentName",label:"\u4E0A\u7EA7\u83DC\u5355",class:"popover-list"},{default:o(()=>[u(B,{ref_key:"menuListPopover",ref:E,placement:"bottom-start",trigger:"click",width:400},{reference:o(()=>[u(c,{modelValue:e.parentName,"onUpdate:modelValue":t[4]||(t[4]=l=>e.parentName=l),readonly:!0,placeholder:"\u4E0A\u7EA7\u83DC\u5355"},{suffix:o(()=>[e.pid!=="0"?(d(),p(L,{key:0,icon:"icon-close-circle",onClick:t[3]||(t[3]=j(l=>v(),["stop"]))})):y("",!0)]),_:1},8,["modelValue"])]),default:o(()=>[g("div",null,[u(P,{ref_key:"menuListTree",ref:C,data:F.value,props:{label:"name",children:"children"},"node-key":"id","highlight-current":!0,"expand-on-click-node":!1,accordion:"",onCurrentChange:S},null,8,["data"])])]),_:1},512)]),_:1}),e.type===0?(d(),p(s,{key:0,prop:"url",label:"\u8DEF\u7531"},{default:o(()=>[u(c,{modelValue:e.url,"onUpdate:modelValue":t[5]||(t[5]=l=>e.url=l),placeholder:"\u8DEF\u7531"},null,8,["modelValue"])]),_:1})):y("",!0),u(s,{prop:"sort",label:"\u6392\u5E8F"},{default:o(()=>[u(T,{modelValue:e.sort,"onUpdate:modelValue":t[6]||(t[6]=l=>e.sort=l),"controls-position
":"right",min:0,label:"\u6392\u5E8F"},null,8,["modelValue"])]),_:1}),e.type===0?(d(),p(s,{key:1,prop:"openStyle",label:"\u6253\u5F00\u65B9\u5F0F"},{default:o(()=>[u(h,{modelValue:e.openStyle,"onUpdate:modelValue":t[7]||(t[7]=l=>e.openStyle=l)},{default:o(()=>[u(_,{label:0},{default:o(()=>[ne]),_:1}),u(_,{label:1},{default:o(()=>[se]),_:1})]),_:1},8,["modelValue"])]),_:1})):y("",!0),u(s,{prop:"authority",label:"\u6388\u6743\u6807\u8BC6"},{default:o(()=>[u(c,{modelValue:e.authority,"onUpdate:modelValue":t[8]||(t[8]=l=>e.authority=l),placeholder:"\u591A\u4E2A\u7528\u9017\u53F7\u5206\u9694\uFF0C\u5982\uFF1Asys:menu:save,sys:menu:update"},null,8,["modelValue"])]),_:1}),e.type===0?(d(),p(s,{key:2,prop:"icon",label:"\u56FE\u6807",class:"popover-list"},{default:o(()=>[u(B,{ref_key:"iconListPopover",ref:k,placement:"top-start",trigger:"click",width:470,"popper-class":"mod__menu-icon-popover"},{reference:o(()=>[u(c,{modelValue:e.icon,"onUpdate:modelValue":t[9]||(t[9]=l=>e.icon=l),readonly:!0,placeholder:"\u56FE\u6807"},null,8,["modelValue"])]),default:o(()=>[g("div",re,[g("div",de,[(d(!0),z(J,null,G(V.value,(l,R)=>(d(),p(b,{key:R,class:Y({"is-active":e.icon===l}),onClick:ce=>$(l)},{default:o(()=>[u(L,{icon:l},null,8,["icon"])]),_:2},1032,["class","onClick"]))),128))])])]),_:1},512)]),_:1})):y("",!0)]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});const ve=le(me,[["__scopeId","data-v-63b13f00"]]);export{ve as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.d43c0f41.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.d43c0f41.js new file mode 100644 index 0000000..be52d00 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.d43c0f41.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.536b089d.js";import{_ as v}from"./add-or-update.vue_vue_type_script_setup_true_lang.536b089d.js";import"./index.e3896b23.js";import"./database.32bfd96d.js";import"./apiConfig.09b7ec3b.js";import"./databases.vue_vue_type_style_index_0_lang.dceac0af.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";import"./toggleHighContrast.483b4227.js";import"./add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js";import"./middledb.vue_vue_type_style_index_0_lang.fa7bd4c1.js";import"./house.1ac0c09f.js";import"./sql-studio.a6fca977.js";import"./run.cf98bfe1.js";import"./ts.worker.921d436c.js";import"./sqlFormatter.e0a34ad5.js";import"./sql.4f48b9c1.js";import"./json-studio.vue_vue_type_style_index_0_lang.ef678733.js";import"./console-result.vue_vue_type_script_setup_true_lang.932f9a7d.js";import"./param-studio.vue_vue_type_script_setup_true_lang.e008bef0.js";export{v as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.e3aa9a91.css b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.e3aa9a91.css new file mode 100644 index 0000000..27f1006 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.e3aa9a91.css @@ -0,0 +1 @@ +.mod__menu[data-v-63b13f00] .el-popover.el-popper{overflow-x:hidden}.mod__menu .popover-list[data-v-63b13f00] .el-input__inner,.mod__menu .popover-list[data-v-63b13f00] 
.el-input__suffix{cursor:pointer}.mod__menu-icon-inner[data-v-63b13f00]{width:100%;max-height:260px;overflow-x:hidden;overflow-y:auto}.mod__menu-icon-inner[data-v-63b13f00]::-webkit-scrollbar{width:8px;height:8px;background:transparent}.mod__menu-icon-inner[data-v-63b13f00]::-webkit-scrollbar-thumb{background-color:#ddd;background-clip:padding-box;min-height:28px;border-radius:4px}.mod__menu-icon-inner[data-v-63b13f00]::-webkit-scrollbar-thumb:hover{background-color:#bbb}.mod__menu-icon-list[data-v-63b13f00]{width:458px!important;padding:0;margin:-8px 0 0 -8px}.mod__menu-icon-list>.el-button[data-v-63b13f00]{padding:8px;margin:18px 0 0 8px}.mod__menu-icon-list>.el-button>span[data-v-63b13f00]{display:inline-block;vertical-align:middle;width:18px;height:18px;font-size:18px} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.e3b0c442.css b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.e3b0c442.css new file mode 100644 index 0000000..e69de29 diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.eac06a95.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.eac06a95.js new file mode 100644 index 0000000..bcf3f08 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.eac06a95.js @@ -0,0 +1 @@ +import"./add-or-update.vue_vue_type_script_setup_true_lang.1662b337.js";import{_ as i}from"./add-or-update.vue_vue_type_script_setup_true_lang.1662b337.js";import"./index.e3896b23.js";import"./dataStandard.4ca18653.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.0eb89f0d.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.0eb89f0d.js new file mode 100644 index 0000000..bc22dc9 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.0eb89f0d.js @@ -0,0 +1 @@ +import{ad as m,d as F,h as i,Y as N,r as s,o as k,f as B,w as d,b as l,a2 as I,l as v,E as w}from"./index.e3896b23.js";const x=u=>m.get("/data-governance/standard-code/"+u),y=u=>u.id?m.put("/data-governance/standard-code",u):m.post("/data-governance/standard-code",u),S=v("\u53D6\u6D88"),R=v("\u786E\u5B9A"),$=F({__name:"add-or-update",emits:["refreshDataList"],setup(u,{expose:g,emit:C}){const n=i(!1),r=i(),a=N({dataId:"",dataName:""}),b=(o,e)=>{n.value=!0,a.id="",r.value&&r.value.resetFields(),a.standardId=e,o&&V(o)},V=o=>{x(o).then(e=>{Object.assign(a,e.data)})},E=i({dataId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dataName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),p=()=>{r.value.validate(o=>{if(!o)return!1;y(a).then(()=>{w.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,C("refreshDataList")}})})})};return g({init:b}),(o,e)=>{const f=s("el-input"),c=s("el-form-item"),A=s("el-form"),_=s("el-button"),D=s("el-dialog");return 
k(),B(D,{modelValue:n.value,"onUpdate:modelValue":e[5]||(e[5]=t=>n.value=t),title:a.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,width:"30%"},{footer:d(()=>[l(_,{onClick:e[3]||(e[3]=t=>n.value=!1)},{default:d(()=>[S]),_:1}),l(_,{type:"primary",onClick:e[4]||(e[4]=t=>p())},{default:d(()=>[R]),_:1})]),default:d(()=>[l(A,{ref_key:"dataFormRef",ref:r,model:a,rules:E.value,"label-width":"100px",onKeyup:e[2]||(e[2]=I(t=>p(),["enter"]))},{default:d(()=>[l(c,{label:"\u7801\u8868id",prop:"dataId"},{default:d(()=>[l(f,{modelValue:a.dataId,"onUpdate:modelValue":e[0]||(e[0]=t=>a.dataId=t),placeholder:"id"},null,8,["modelValue"])]),_:1}),l(c,{label:"\u7801\u8868name",prop:"dataName"},{default:d(()=>[l(f,{modelValue:a.dataName,"onUpdate:modelValue":e[1]||(e[1]=t=>a.dataName=t),placeholder:"name"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{$ as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.10132297.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.10132297.js new file mode 100644 index 0000000..98000c3 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.10132297.js @@ -0,0 +1 @@ +import{d as C,h as i,Y as E,r as s,o as K,f as S,w as o,b as a,a7 as c,a8 as _,a2 as k,l as F,E as w}from"./index.e3896b23.js";import{u as B,b as U}from"./app.22c193c2.js";const R=F("\u53D6\u6D88"),$=F("\u786E\u5B9A"),j=C({__name:"add-or-update",emits:["refreshDataList"],setup(q,{expose:V,emit:b}){const p=i(!1),n=i(),l=E({appKey:"",appSecret:"",name:"",note:"",expireDesc:""}),v=i({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],expireDesc:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),D=u=>{p.value=!0,l.id="",n.value&&n.value.resetFields(),u&&y(u)},y=u=>{B(u).then(e=>{Object.assign(l,e.data)})},m=()=>{n.value.validate(u=>{if(!u)return!1;U(l).then(()=>{w.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{p.value=!1,b("refreshDataList")}})})})};return V({init:D}),(u,e)=>{const d=s("el-input"),r=s("el-form-item"),g=s("fast-select"),x=s("el-form"),f=s("el-button"),A=s("el-dialog");return 
K(),S(A,{modelValue:p.value,"onUpdate:modelValue":e[8]||(e[8]=t=>p.value=t),title:l.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[a(f,{onClick:e[6]||(e[6]=t=>p.value=!1)},{default:o(()=>[R]),_:1}),a(f,{type:"primary",onClick:e[7]||(e[7]=t=>m())},{default:o(()=>[$]),_:1})]),default:o(()=>[a(x,{ref_key:"dataFormRef",ref:n,model:l,rules:v.value,"label-width":"100px",onKeyup:e[5]||(e[5]=k(t=>m(),["enter"]))},{default:o(()=>[c(a(r,{label:"appKey",prop:"appKey"},{default:o(()=>[a(d,{disabled:"",modelValue:l.appKey,"onUpdate:modelValue":e[0]||(e[0]=t=>l.appKey=t),placeholder:"appKey"},null,8,["modelValue"])]),_:1},512),[[_,!!l.id]]),c(a(r,{label:"appSecret",prop:"appSecret"},{default:o(()=>[a(d,{disabled:"",modelValue:l.appSecret,"onUpdate:modelValue":e[1]||(e[1]=t=>l.appSecret=t),placeholder:"appSecret"},null,8,["modelValue"])]),_:1},512),[[_,!!l.id]]),a(r,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[a(d,{modelValue:l.name,"onUpdate:modelValue":e[2]||(e[2]=t=>l.name=t),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),a(r,{label:"\u63CF\u8FF0",prop:"note"},{default:o(()=>[a(d,{type:"textarea",rows:"2",modelValue:l.note,"onUpdate:modelValue":e[3]||(e[3]=t=>l.note=t),placeholder:"\u63CF\u8FF0"},null,8,["modelValue"])]),_:1}),a(r,{label:"token\u6709\u6548\u671F",prop:"expireDesc"},{default:o(()=>[a(g,{modelValue:l.expireDesc,"onUpdate:modelValue":e[4]||(e[4]=t=>l.expireDesc=t),placeholder:"\u6709\u6548\u671F","dict-type":"api_expire_desc",clearable:""},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{j as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.150fc434.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.150fc434.js new file mode 100644 index 0000000..3feedc9 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.150fc434.js @@ -0,0 +1 @@ +import{d as D,h as m,Y as c,r as s,o as y,f as k,w as a,b as t,a2 as w,l as f,E as U}from"./index.e3896b23.js";import{u as x,a as q}from"./cluster.85454835.js";const R=f("\u53D6\u6D88"),h=f("\u786E\u5B9A"),L=D({__name:"add-or-update",emits:["refreshDataList"],setup($,{expose:_,emit:b}){const r=m(!1),d=m(),u=c({name:"",alias:"",type:"",hosts:"",enabled:!0,note:""}),B=o=>{r.value=!0,u.id="",d.value&&d.value.resetFields(),o&&V(o)},V=o=>{x(o).then(e=>{Object.assign(u,e.data)})},A=m({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],alias:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],hosts:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),p=()=>{d.value.validate(o=>{if(!o)return!1;q(u).then(()=>{U.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{r.value=!1,b("refreshDataList")}})})})};return _({init:B}),(o,e)=>{const i=s("el-input"),n=s("el-form-item"),C=s("fast-select"),E=s("el-switch"),v=s("el-form"),F=s("el-button"),g=s("el-dialog");return 
y(),k(g,{modelValue:r.value,"onUpdate:modelValue":e[9]||(e[9]=l=>r.value=l),title:u.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[t(F,{onClick:e[7]||(e[7]=l=>r.value=!1)},{default:a(()=>[R]),_:1}),t(F,{type:"primary",onClick:e[8]||(e[8]=l=>p())},{default:a(()=>[h]),_:1})]),default:a(()=>[t(v,{ref_key:"dataFormRef",ref:d,model:u,rules:A.value,"label-width":"100px",onKeyup:e[6]||(e[6]=w(l=>p(),["enter"]))},{default:a(()=>[t(n,{label:"\u5B9E\u4F8B\u540D\u79F0",prop:"name"},{default:a(()=>[t(i,{modelValue:u.name,"onUpdate:modelValue":e[0]||(e[0]=l=>u.name=l),placeholder:"\u5B9E\u4F8B\u540D\u79F0"},null,8,["modelValue"])]),_:1}),t(n,{label:"\u522B\u540D",prop:"alias"},{default:a(()=>[t(i,{modelValue:u.alias,"onUpdate:modelValue":e[1]||(e[1]=l=>u.alias=l),placeholder:"\u522B\u540D"},null,8,["modelValue"])]),_:1}),t(n,{label:"\u5B9E\u4F8B\u7C7B\u578B",prop:"type"},{default:a(()=>[t(C,{modelValue:u.type,"onUpdate:modelValue":e[2]||(e[2]=l=>u.type=l),"dict-type":"production_cluster_type",placeholder:"\u8BF7\u9009\u62E9",clearable:""},null,8,["modelValue"])]),_:1}),t(n,{label:"\u5B9E\u4F8B\u5730\u5740",prop:"hosts"},{default:a(()=>[t(i,{modelValue:u.hosts,"onUpdate:modelValue":e[3]||(e[3]=l=>u.hosts=l),rows:3,type:"textarea",placeholder:"\u6DFB\u52A0 Flink \u96C6\u7FA4\u7684 JobManager \u7684 RestApi \u5730\u5740\u3002\u5F53 HA \u6A21\u5F0F\u65F6\uFF0C\u5730\u5740\u95F4\u7528\u82F1\u6587\u9017\u53F7\u5206\u9694\uFF0C\u4F8B\u5982\uFF1A192.168.40.135:8081,192.168.40.136:8081,192.168.40.137:8081"},null,8,["modelValue"])]),_:1}),t(n,{label:"\u542F\u7528",prop:"enabled"},{default:a(()=>[t(E,{modelValue:u.enabled,"onUpdate:modelValue":e[4]||(e[4]=l=>u.enabled=l),"active-value":!0,"inactive-value":!1},null,8,["modelValue"])]),_:1}),t(n,{label:"\u5907\u6CE8",prop:"note"},{default:a(()=>[t(i,{modelValue:u.note,"onUpdate:modelValue":e[5]||(e[5]=l=>u.note=l),placeholder:"\u5907\u6CE8"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{L as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.1662b337.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.1662b337.js new file mode 100644 index 0000000..8eb89f2 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.1662b337.js @@ -0,0 +1 @@ +import{d as k,h as _,Y as N,r as a,o as p,f as m,w as o,b as l,H as x,a2 as U,l as E,E as P}from"./index.e3896b23.js";import{u as w,a as I}from"./dataStandard.4ca18653.js";const h=E("\u53D6\u6D88"),q=E("\u786E\u5B9A"),K=k({__name:"add-or-update",emits:["refreshDataList"],setup(R,{expose:V,emit:y}){const r=_(!1),i=_(),e=N({parentId:"",parentPath:"",path:"",name:"",type:0,orderNo:0,note:""}),C=(n,t,d)=>{r.value=!0,e.id="",i.value&&i.value.resetFields(),e.parentId=t,e.parentPath=d,n&&v(n)},v=n=>{w(n).then(t=>{Object.assign(e,t.data)})},g=_({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),F=()=>{i.value.validate(n=>{if(!n)return!1;e.type=e.parentId==0?0:e.type,I(e).then(()=>{P.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{r.value=!1,y("refreshDataList")}})})})};return V({init:C}),(n,t)=>{const 
d=a("el-input"),s=a("el-form-item"),f=a("el-option"),c=a("el-select"),A=a("el-input-number"),B=a("el-form"),b=a("el-button"),D=a("el-dialog");return p(),m(D,{modelValue:r.value,"onUpdate:modelValue":t[8]||(t[8]=u=>r.value=u),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[l(b,{onClick:t[6]||(t[6]=u=>r.value=!1)},{default:o(()=>[h]),_:1}),l(b,{type:"primary",onClick:t[7]||(t[7]=u=>F())},{default:o(()=>[q]),_:1})]),default:o(()=>[l(B,{ref_key:"dataFormRef",ref:i,model:e,rules:g.value,"label-width":"100px",onKeyup:t[5]||(t[5]=U(u=>F(),["enter"]))},{default:o(()=>[l(s,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:o(()=>[l(d,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":t[0]||(t[0]=u=>e.parentPath=u),placeholder:""},null,8,["modelValue"])]),_:1}),l(s,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[l(d,{modelValue:e.name,"onUpdate:modelValue":t[1]||(t[1]=u=>e.name=u),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e.parentId!=0?(p(),m(s,{key:0,label:"\u7C7B\u578B",prop:"type"},{default:o(()=>[l(c,{modelValue:e.type,"onUpdate:modelValue":t[2]||(t[2]=u=>e.type=u),placeholder:"\u7C7B\u578B",disabled:!!e.id},{default:o(()=>[(p(),m(f,{key:0,label:"\u666E\u901A\u76EE\u5F55",value:0})),(p(),m(f,{key:1,label:"\u6807\u51C6\u5B57\u6BB5\u76EE\u5F55",value:1})),(p(),m(f,{key:2,label:"\u6807\u51C6\u7801\u8868\u76EE\u5F55",value:2}))]),_:1},8,["modelValue","disabled"])]),_:1})):x("",!0),l(s,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:o(()=>[l(A,{modelValue:e.orderNo,"onUpdate:modelValue":t[3]||(t[3]=u=>e.orderNo=u),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),l(s,{label:"\u63CF\u8FF0",prop:"note"},{default:o(()=>[l(d,{type:"textarea",modelValue:e.note,"onUpdate:modelValue":t[4]||(t[4]=u=>e.note=u)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{K as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.2b4df6e3.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.2b4df6e3.js new file mode 100644 index 0000000..dce6972 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.2b4df6e3.js @@ -0,0 +1 @@ +import{d as y,h as p,Y as v,r,o as c,f as N,w as a,b as t,a2 as P,l as b,aw as q,ax as k,E as f,ay as x}from"./index.e3896b23.js";const T=b("\u53D6\u6D88"),j=b("\u6D4B\u8BD5\u6570\u4ED3\u8FDE\u901A\u6027"),h=b("\u786E\u5B9A"),R=y({__name:"add-or-update",emits:["refreshDataList"],setup($,{expose:E,emit:g}){const 
n=p(!1),m=p(),u=v({name:"",engName:"",dbType:"",dbName:"",dbUrl:"",dbUsername:"",dbPassword:"",description:"",status:"",dutyPerson:""}),F=s=>{n.value=!0,u.id="",m.value&&m.value.resetFields(),s&&V(s)},V=s=>{q(s).then(e=>{Object.assign(u,e.data)})},A=p({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],engName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dbType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dbName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dbUrl:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dbUsername:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dbPassword:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],status:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),D=()=>{m.value.validate(s=>{if(!s)return!1;k(u).then(()=>{f.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,g("refreshDataList")}})})})},_=()=>{x(u).then(()=>{f.success({message:"\u6D4B\u8BD5\u6210\u529F"})})};return E({init:F}),(s,e)=>{const d=r("el-input"),o=r("el-form-item"),B=r("fast-select"),C=r("fast-radio-group"),U=r("el-form"),i=r("el-button"),w=r("el-dialog");return c(),N(w,{modelValue:n.value,"onUpdate:modelValue":e[14]||(e[14]=l=>n.value=l),title:u.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[t(i,{onClick:e[11]||(e[11]=l=>n.value=!1)},{default:a(()=>[T]),_:1}),t(i,{type:"primary",onClick:e[12]||(e[12]=l=>_())},{default:a(()=>[j]),_:1}),t(i,{type:"success",onClick:e[13]||(e[13]=l=>D())},{default:a(()=>[h]),_:1})]),default:a(()=>[t(U,{ref_key:"dataFormRef",ref:m,model:u,rules:A.value,"label-width":"100px",onKeyup:e[10]||(e[10]=P(l=>D(),["enter"]))},{default:a(()=>[t(o,{label:"\u9879\u76EE\u540D\u79F0",prop:"name","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.name,"onUpdate:modelValue":e[0]||(e[0]=l=>u.name=l),placeholder:"\u9879\u76EE\u540D\u79F0"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u82F1\u6587\u540D\u79F0",prop:"engName","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.engName,"onUpdate:modelValue":e[1]||(e[1]=l=>u.engName=l),placeholder:"\u82F1\u6587\u540D\u79F0"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u6570\u4ED3\u7C7B\u578B",prop:"dbType","label-width":"auto"},{default:a(()=>[t(B,{modelValue:u.dbType,"onUpdate:modelValue":e[2]||(e[2]=l=>u.dbType=l),"dict-type":"data_house_type",placeholder:"\u8BF7\u9009\u62E9",clearable:""},null,8,["modelValue"])]),_:1}),t(o,{label:"\u6570\u4ED3\u5E93\u540D(schema)",prop:"dbName","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.dbName,"onUpdate:modelValue":e[3]||(e[3]=l=>u.dbName=l),placeholder:"\u6570\u4ED3\u5E93\u540D(schema)"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u6570\u4ED3url",prop:"dbUrl","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.dbUrl,"onUpdate:modelValue":e[4]||(e[4]=l=>u.dbUrl=l),placeholder:"\u6570\u4ED3url"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u6570\u4ED3\u7528\u6237\u540D",prop:"dbUsername","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.dbUsername,"onUpdate:modelValue":e[5]||(e[5]=l=>u.dbUsername=l),placeholder:"\u6570\u4ED3\u7528\u6237\u540D"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u6570\u4ED3\u5BC6\u7801",prop:"dbPassword","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.dbPassword,"onUpdate:modelValue":e[6]||(e[
6]=l=>u.dbPassword=l),placeholder:"\u6570\u4ED3\u5BC6\u7801"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u63CF\u8FF0",prop:"description","label-width":"auto"},{default:a(()=>[t(d,{type:"textarea",modelValue:u.description,"onUpdate:modelValue":e[7]||(e[7]=l=>u.description=l)},null,8,["modelValue"])]),_:1}),t(o,{label:"\u72B6\u6001",prop:"status","label-width":"auto"},{default:a(()=>[t(C,{modelValue:u.status,"onUpdate:modelValue":e[8]||(e[8]=l=>u.status=l),"dict-type":"project_status"},null,8,["modelValue"])]),_:1}),t(o,{label:"\u8D1F\u8D23\u4EBA",prop:"dutyPerson","label-width":"auto"},{default:a(()=>[t(d,{modelValue:u.dutyPerson,"onUpdate:modelValue":e[9]||(e[9]=l=>u.dutyPerson=l),placeholder:"\u8D1F\u8D23\u4EBA"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{R as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.52b8e47c.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.52b8e47c.js new file mode 100644 index 0000000..f3d5a30 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.52b8e47c.js @@ -0,0 +1 @@ +import{d as D,h as d,Y as L,r as n,o as A,f as R,w as u,b as a,a2 as x,l as v,E as w}from"./index.e3896b23.js";import{a as B,b as K,c as I}from"./role.44f4fe5e.js";const M=v("\u53D6\u6D88"),T=v("\u786E\u5B9A"),N=D({__name:"add-or-update",emits:["refreshDataList"],setup(U,{expose:k,emit:b}){const r=d(!1),p=d([]),s=d(),i=d(),t=L({id:"",name:"",menuIdList:[],orgIdList:[],remark:""}),C=l=>{r.value=!0,t.id="",i.value&&i.value.resetFields(),s.value&&s.value.setCheckedKeys([]),g(l)},g=l=>B().then(e=>{p.value=e.data,l&&h(l)}),h=l=>{K(l).then(e=>{Object.assign(t,e.data),t.menuIdList.forEach(m=>s.value.setChecked(m,!0))})},E=d({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),c=()=>{i.value.validate(l=>{if(!l)return!1;t.menuIdList=[...s.value.getHalfCheckedKeys(),...s.value.getCheckedKeys()],I(t).then(()=>{w.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{r.value=!1,b("refreshDataList")}})})})};return k({init:C}),(l,e)=>{const m=n("el-input"),f=n("el-form-item"),F=n("el-tree"),V=n("el-form"),_=n("el-button"),y=n("el-dialog");return A(),R(y,{modelValue:r.value,"onUpdate:modelValue":e[5]||(e[5]=o=>r.value=o),title:t.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,draggable:""},{footer:u(()=>[a(_,{onClick:e[3]||(e[3]=o=>r.value=!1)},{default:u(()=>[M]),_:1}),a(_,{type:"primary",onClick:e[4]||(e[4]=o=>c())},{default:u(()=>[T]),_:1})]),default:u(()=>[a(V,{ref_key:"dataFormRef",ref:i,model:t,rules:E.value,"label-width":"120px",onKeyup:e[2]||(e[2]=x(o=>c(),["enter"]))},{default:u(()=>[a(f,{prop:"name",label:"\u540D\u79F0"},{default:u(()=>[a(m,{modelValue:t.name,"onUpdate:modelValue":e[0]||(e[0]=o=>t.name=o),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),a(f,{prop:"remark",label:"\u5907\u6CE8"},{default:u(()=>[a(m,{modelValue:t.remark,"onUpdate:modelValue":e[1]||(e[1]=o=>t.remark=o),placeholder:"\u5907\u6CE8"},null,8,["modelValue"])]),_:1}),a(f,{label:"\u83DC\u5355\u6743\u9650"},{default:u(()=>[a(F,{ref_key:"menuListTree",ref:s,data:p.value,props:{label:"name",children:"children"},"node-key":"id",accordion:"","show-checkbox":""},null,8,["data"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{N as _}; diff --git 
a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.536b089d.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.536b089d.js new file mode 100644 index 0000000..379b7f5 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.536b089d.js @@ -0,0 +1 @@ +import{d as Z,L as ee,h as i,Y as y,a6 as ue,r as s,o as b,c as P,b as e,w as l,a as te,l as p,t as le,a7 as ae,a8 as oe,k as se,af as re,f as A,F as k,e as ne,H as B,E as C}from"./index.e3896b23.js";import{c as de}from"./database.32bfd96d.js";import{g as ie,u as pe,b as me}from"./apiConfig.09b7ec3b.js";import{_ as fe}from"./databases.vue_vue_type_style_index_0_lang.dceac0af.js";import{_ as be}from"./middledb.vue_vue_type_style_index_0_lang.fa7bd4c1.js";import _e from"./sql-studio.a6fca977.js";import{_ as h}from"./param-studio.vue_vue_type_script_setup_true_lang.e008bef0.js";const Fe=p("\u6570\u636E\u5E93"),Ee=p("\u4E2D\u53F0\u5E93"),ce=p("\u5E93\u8868\u4FE1\u606F"),ge=p("sql\u5206\u5272\u7B26:"),De=p("\u5F00\u653E"),qe=p("\u79C1\u6709"),Ae=p("\u53D6\u6D88"),ve=p("\u63D0\u4EA4"),Pe=Z({__name:"add-or-update",emits:["refreshDataList"],setup(Ve,{expose:I,emit:U}){ee(()=>{L(),M()});const L=()=>{de().then(n=>{u.databaseList=n.data})},v=i(!1),j=()=>{v.value=!0},_=i(!1),w=i(""),M=()=>{ie().then(n=>{w.value=n.data})},f=y({}),F=i(),r=y({name:"",path:"",type:"",note:"",status:0}),$=i({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],path:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),E=i(),u=y({sqlDbType:"",sqlParam:"",sqlText:"\\n",contentType:"application/json",openTrans:0,jsonParam:"{}",responseResult:"{}",sqlSeparator:";\\n",sqlMaxRow:100,databaseId:"",databaseList:[],previlege:""}),N=i({sqlSeparator:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],sqlMaxRow:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],contentType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],sqlDbType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],databaseId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],sqlText:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],jsonParam:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],responseResult:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],previlege:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]});ue("apiSqlForm",u);const m=i(),O=(n,t)=>{_.value=!0,f.id="",f.groupId=t,F.value&&F.value.resetFields(),E.value&&(E.value.resetFields(),m.value.setSqlParam(""),m.value.setEditorValue(""),m.value.closeDebug(),c.value.setEditorValue("{}"),g.value.setEditorValue('{"code":0,"msg":"success","data":[]}')),n&&H(n)},H=n=>{pe(n).then(t=>{const 
o=t.data;Object.assign(f,o),r.name=o.name,r.path=o.path,r.type=o.type,r.note=o.note,r.status=o.status,u.sqlDbType=o.sqlDbType,u.sqlParam=o.sqlParam,u.sqlText=o.sqlText,u.contentType=o.contentType,u.openTrans=o.openTrans,u.jsonParam=o.jsonParam,u.responseResult=o.responseResult,u.sqlSeparator=o.sqlSeparator,u.sqlMaxRow=o.sqlMaxRow,u.databaseId=o.databaseId,u.previlege=o.previlege,m.value.setSqlParam(u.sqlParam),m.value.setEditorValue(u.sqlText),c.value.setEditorValue(u.jsonParam),g.value.setEditorValue(u.responseResult)})},c=i(),g=i(),Q=async()=>{let n=!0;await F.value.validate(t=>{if(!t)return C.warning("\u8BF7\u628A\u57FA\u672C\u4FE1\u606F\u7684\u5FC5\u586B\u9879\u8865\u5145\u5B8C\u6574\uFF01"),n=!1,!1}),n&&(u.sqlParam=m.value.getSqlParam(),u.sqlText=m.value.getEditorValue(),u.jsonParam=c.value.getEditorValue(),u.responseResult=g.value.getEditorValue(),await E.value.validate(t=>{if(!t)return C.warning("\u8BF7\u628A API SQL \u914D\u7F6E\u7684\u5FC5\u586B\u9879\u8865\u5145\u5B8C\u6574\uFF01"),n=!1,!1}),n&&(Object.assign(f,r),Object.assign(f,u),await me(f),C.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{_.value=!1,U("refreshDataList")}})))};return I({init:O}),(n,t)=>{const o=s("el-input"),d=s("el-form-item"),T=s("fast-select"),R=s("el-form"),x=s("el-tab-pane"),D=s("el-radio"),S=s("el-radio-group"),V=s("el-button"),z=s("el-option"),Y=s("el-select"),q=s("el-tooltip"),G=s("el-switch"),J=s("el-input-number"),K=s("el-tabs"),W=s("el-drawer"),X=s("el-dialog");return b(),P(k,null,[e(W,{modelValue:_.value,"onUpdate:modelValue":t[14]||(t[14]=a=>_.value=a),title:f.id?"\u4FEE\u6539":"\u65B0\u589E",size:"100%","destroy-on-close":!0},{footer:l(()=>[e(V,{onClick:t[12]||(t[12]=a=>_.value=!1)},{default:l(()=>[Ae]),_:1}),e(V,{type:"primary",onClick:t[13]||(t[13]=a=>Q())},{default:l(()=>[ve]),_:1})]),default:l(()=>[te("div",null,[e(K,{"tab-position":"top"},{default:l(()=>[e(x,{label:"\u57FA\u672C\u4FE1\u606F"},{default:l(()=>[e(R,{ref_key:"basicDataFormRef",ref:F,rules:$.value,model:r},{default:l(()=>[e(d,{label:"\u540D\u79F0",prop:"name","label-width":"auto"},{default:l(()=>[e(o,{modelValue:r.name,"onUpdate:modelValue":t[0]||(t[0]=a=>r.name=a),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e(d,{label:"api\u8DEF\u5F84",prop:"path","label-width":"auto"},{default:l(()=>[e(o,{modelValue:r.path,"onUpdate:modelValue":t[1]||(t[1]=a=>r.path=a),placeholder:"api\u8DEF\u5F84"},{prepend:l(()=>[p(le(w.value),1)]),_:1},8,["modelValue"])]),_:1}),e(d,{label:"\u8BF7\u6C42\u65B9\u5F0F",prop:"type","label-width":"auto"},{default:l(()=>[e(T,{modelValue:r.type,"onUpdate:modelValue":t[2]||(t[2]=a=>r.type=a),"dict-type":"api_type",placeholder:"\u8BF7\u9009\u62E9",clearable:"",filterable:""},null,8,["modelValue"])]),_:1}),e(d,{label:"\u63CF\u8FF0",prop:"note","label-width":"auto"},{default:l(()=>[e(o,{modelValue:r.note,"onUpdate:modelValue":t[3]||(t[3]=a=>r.note=a),rows:2,type:"textarea",placeholder:"\u63CF\u8FF0"},null,8,["modelValue"])]),_:1})]),_:1},8,["rules","model"])]),_:1}),e(x,{label:"API SQL 
\u914D\u7F6E"},{default:l(()=>[e(R,{ref_key:"apiSqlFormRef",ref:E,rules:N.value,model:u},{default:l(()=>[e(d,{label:"\u9009\u62E9",prop:"sqlDbType","label-width":"auto"},{default:l(()=>[e(S,{modelValue:u.sqlDbType,"onUpdate:modelValue":t[4]||(t[4]=a=>u.sqlDbType=a)},{default:l(()=>[e(D,{label:1,border:""},{default:l(()=>[Fe]),_:1}),e(D,{label:2,border:""},{default:l(()=>[Ee]),_:1})]),_:1},8,["modelValue"]),ae(e(V,{style:{"margin-left":"20px"},icon:se(re),type:"primary",onClick:t[5]||(t[5]=a=>j())},{default:l(()=>[ce]),_:1},8,["icon"]),[[oe,!!u.sqlDbType]])]),_:1}),u.sqlDbType=="1"?(b(),A(d,{key:0,label:"\u9009\u62E9",prop:"databaseId","label-width":"auto"},{default:l(()=>[e(Y,{modelValue:u.databaseId,"onUpdate:modelValue":t[6]||(t[6]=a=>u.databaseId=a),clearable:"",filterable:"",placeholder:"\u8BF7\u9009\u62E9"},{default:l(()=>[(b(!0),P(k,null,ne(u.databaseList,(a,ye)=>(b(),A(z,{key:a.id,label:`[${a.id}]${a.name}`,value:a.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1})):B("",!0),e(d,{label:"sql\u5206\u9694\u7B26",prop:"sqlSeparator","label-width":"auto"},{default:l(()=>[e(q,{class:"box-item",effect:"dark",content:"sql \u8BED\u53E5\u5C06\u6309\u7167\u586B\u5199\u7684 sql \u5206\u5272\u7B26\u8FDB\u884C\u5206\u5272",placement:"top-end"},{default:l(()=>[e(o,{modelValue:u.sqlSeparator,"onUpdate:modelValue":t[7]||(t[7]=a=>u.sqlSeparator=a)},{prepend:l(()=>[ge]),_:1},8,["modelValue"])]),_:1})]),_:1}),e(d,{label:"\u5F00\u542F\u4E8B\u52A1","label-width":"auto",prop:"openTrans"},{default:l(()=>[e(q,{effect:"dark",content:"\u5982\u679C\u6570\u636E\u5E93\u672C\u8EAB\u4E0D\u652F\u6301\u4E8B\u52A1, \u5219\u4E0D\u8981\u5F00\u542F",placement:"top-end"},{default:l(()=>[e(G,{modelValue:u.openTrans,"onUpdate:modelValue":t[8]||(t[8]=a=>u.openTrans=a),"active-value":1,"inactive-value":0},null,8,["modelValue"])]),_:1})]),_:1}),e(d,{label:"\u67E5\u8BE2\u6700\u5927\u884C\u6570","label-width":"auto",prop:"sqlMaxRow"},{default:l(()=>[e(q,{effect:"dark",content:"select\u8BED\u53E5\u67E5\u8BE2\u8FD4\u56DE\u7684\u6700\u5927\u884C\u6570",placement:"top-end"},{default:l(()=>[e(J,{modelValue:u.sqlMaxRow,"onUpdate:modelValue":t[9]||(t[9]=a=>u.sqlMaxRow=a),min:1,max:1e3},null,8,["modelValue"])]),_:1})]),_:1}),e(d,{label:"sql\u8BED\u53E5","label-width":"auto",prop:"sqlText"},{default:l(()=>[e(_e,{ref_key:"sqlStudioRef",ref:m,style:{width:"100%"}},null,512)]),_:1}),e(d,{label:"Content-Type",prop:"contentType","label-width":"auto"},{default:l(()=>[e(T,{modelValue:u.contentType,"onUpdate:modelValue":t[10]||(t[10]=a=>u.contentType=a),"dict-type":"content_type"},null,8,["modelValue"])]),_:1}),e(d,{label:"\u8BF7\u6C42\u53C2\u6570\u793A\u4F8B","label-width":"auto",prop:"jsonParam"},{default:l(()=>[e(h,{id:"requestParamStudio",ref_key:"requestParamRef",ref:c,style:{height:"160px",width:"100%"}},null,512)]),_:1}),e(d,{label:"\u54CD\u5E94\u7ED3\u679C\u793A\u4F8B","label-width":"auto",prop:"responseResult"},{default:l(()=>[e(h,{id:"responseResultStudio",ref_key:"responseResultRef",ref:g,style:{height:"160px",width:"100%"}},null,512)]),_:1}),e(d,{label:"\u6743\u9650",prop:"previlege","label-width":"auto"},{default:l(()=>[e(q,{class:"box-item",effect:"dark",content:"\u5F00\u653E\u63A5\u53E3\u53EF\u4EE5\u76F4\u63A5\u8BBF\u95EE, \u79C1\u6709\u63A5\u53E3\u9700\u8981\u5728\u8BF7\u6C42\u5934\u643A\u5E26 token \u8BBF\u95EE, \u8BF7\u524D\u5F80 API 
\u6743\u9650\u83DC\u5355\u67E5\u770B",placement:"top-end"},{default:l(()=>[e(S,{modelValue:u.previlege,"onUpdate:modelValue":t[11]||(t[11]=a=>u.previlege=a)},{default:l(()=>[e(D,{label:0,border:""},{default:l(()=>[De]),_:1}),e(D,{label:1,border:""},{default:l(()=>[qe]),_:1})]),_:1},8,["modelValue"])]),_:1})]),_:1})]),_:1},8,["rules","model"])]),_:1})]),_:1})])]),_:1},8,["modelValue","title"]),e(X,{modelValue:v.value,"onUpdate:modelValue":t[15]||(t[15]=a=>v.value=a),title:"\u5E93\u8868\u4FE1\u606F"},{default:l(()=>[u.sqlDbType==1?(b(),A(fe,{key:0,ref:"databasesRef"},null,512)):B("",!0),u.sqlDbType==2?(b(),A(be,{key:1,ref:"middledbRef"},null,512)):B("",!0)]),_:1},8,["modelValue"])],64)}}});export{Pe as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.7504fbe3.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.7504fbe3.js new file mode 100644 index 0000000..e596213 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.7504fbe3.js @@ -0,0 +1 @@ +import{d as v,h as V,Y as K,r as d,o as m,f as p,w as a,b as t,H as i,a2 as k,l as g,E as U}from"./index.e3896b23.js";import{u as N,a as q}from"./sms.f04d455c.js";const S=g("\u53D6\u6D88"),w=g("\u786E\u5B9A"),$=v({__name:"add-or-update",emits:["refreshDataList"],setup(x,{expose:E,emit:b}){const n=V(!1),f=V(),e=K({id:"",platform:0,signName:"",templateId:"",appId:"",senderId:"",url:"",accessKey:"",secretKey:"",status:0,version:"",createTime:""}),y=s=>{n.value=!0,e.id="",f.value&&f.value.resetFields(),s&&D(s)},D=s=>{N(s).then(u=>{Object.assign(e,u.data)})},_=V({platform:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],appId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],signName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],templateId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],url:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],accessKey:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],secretKey:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),A=()=>{f.value.validate(s=>{if(!s)return!1;q(e).then(()=>{U.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,b("refreshDataList")}})})})};return E({init:y}),(s,u)=>{const c=d("fast-select"),o=d("el-form-item"),r=d("el-input"),B=d("fast-radio-group"),C=d("el-form"),F=d("el-button"),I=d("el-dialog");return 
m(),p(I,{modelValue:n.value,"onUpdate:modelValue":u[13]||(u[13]=l=>n.value=l),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[t(F,{onClick:u[11]||(u[11]=l=>n.value=!1)},{default:a(()=>[S]),_:1}),t(F,{type:"primary",onClick:u[12]||(u[12]=l=>A())},{default:a(()=>[w]),_:1})]),default:a(()=>[t(C,{ref_key:"dataFormRef",ref:f,model:e,rules:_.value,"label-width":"100px",onKeyup:u[10]||(u[10]=k(l=>A(),["enter"]))},{default:a(()=>[t(o,{label:"\u5E73\u53F0\u7C7B\u578B",prop:"platform"},{default:a(()=>[t(c,{modelValue:e.platform,"onUpdate:modelValue":u[0]||(u[0]=l=>e.platform=l),"dict-type":"sms_platform",placeholder:"\u5E73\u53F0\u7C7B\u578B",style:{width:"100%"}},null,8,["modelValue"])]),_:1}),e.platform==3?(m(),p(o,{key:0,label:"\u63A5\u5165\u5730\u5740",prop:"url"},{default:a(()=>[t(r,{modelValue:e.url,"onUpdate:modelValue":u[1]||(u[1]=l=>e.url=l),placeholder:"APP\u63A5\u5165\u5730\u5740"},null,8,["modelValue"])]),_:1})):i("",!0),e.platform==1?(m(),p(o,{key:1,label:"AppId",prop:"appId"},{default:a(()=>[t(r,{modelValue:e.appId,"onUpdate:modelValue":u[2]||(u[2]=l=>e.appId=l),placeholder:"AppId"},null,8,["modelValue"])]),_:1})):i("",!0),e.platform!=2?(m(),p(o,{key:2,label:"\u77ED\u4FE1\u7B7E\u540D",prop:"signName"},{default:a(()=>[t(r,{modelValue:e.signName,"onUpdate:modelValue":u[3]||(u[3]=l=>e.signName=l),placeholder:"\u77ED\u4FE1\u7B7E\u540D"},null,8,["modelValue"])]),_:1})):i("",!0),t(o,{label:"\u77ED\u4FE1\u6A21\u677F",prop:"templateId"},{default:a(()=>[t(r,{modelValue:e.templateId,"onUpdate:modelValue":u[4]||(u[4]=l=>e.templateId=l),placeholder:"\u77ED\u4FE1\u6A21\u677F"},null,8,["modelValue"])]),_:1}),t(o,{label:"AccessKey",prop:"accessKey"},{default:a(()=>[t(r,{modelValue:e.accessKey,"onUpdate:modelValue":u[5]||(u[5]=l=>e.accessKey=l),placeholder:"AccessKey"},null,8,["modelValue"])]),_:1}),t(o,{label:"SecretKey",prop:"secretKey"},{default:a(()=>[t(r,{modelValue:e.secretKey,"onUpdate:modelValue":u[6]||(u[6]=l=>e.secretKey=l),placeholder:"SecretKey"},null,8,["modelValue"])]),_:1}),e.platform==1?(m(),p(o,{key:3,label:"SenderId",prop:"senderId"},{default:a(()=>[t(r,{modelValue:e.senderId,"onUpdate:modelValue":u[7]||(u[7]=l=>e.senderId=l),placeholder:"\u56FD\u9645\u77ED\u4FE1\u5FC5\u586B"},null,8,["modelValue"])]),_:1})):i("",!0),e.platform==3?(m(),p(o,{key:4,label:"\u901A\u9053\u53F7",prop:"senderId"},{default:a(()=>[t(r,{modelValue:e.senderId,"onUpdate:modelValue":u[8]||(u[8]=l=>e.senderId=l),placeholder:"\u901A\u9053\u53F7\u5FC5\u586B"},null,8,["modelValue"])]),_:1})):i("",!0),t(o,{label:"\u72B6\u6001",prop:"status"},{default:a(()=>[t(B,{modelValue:e.status,"onUpdate:modelValue":u[9]||(u[9]=l=>e.status=l),"dict-type":"enable_disable"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{$ as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.763f60cc.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.763f60cc.js new file mode 100644 index 0000000..8876abe --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.763f60cc.js @@ -0,0 +1 @@ +import{d as H,L as K,h as f,Y as v,r as s,o as d,f as g,w as u,b as l,H as i,c as m,e as O,F as S,a as E,t as P,a2 as Y,l as p,E as z}from"./index.e3896b23.js";import{f as G}from"./folder.ea536bf2.js";import{_ as J}from"./database.235d7a89.js";import{_ as Q}from"./table.e1c1b00a.js";import{_ as 
W}from"./column.79595943.js";import{_ as X}from"./model.45425835.js";import{c as Z}from"./database.32bfd96d.js";import{u as ee,a as te}from"./metadataCollect.47d67406.js";import{a as ae}from"./metadata.0c954be9.js";const le=p("\u57FA\u672C\u914D\u7F6E"),ue=p("\u91C7\u96C6\u914D\u7F6E"),oe=p("\u6570\u636E\u5E93"),se=p("\u4E2D\u53F0\u5E93"),de={key:0,src:G},re={key:1,src:J},ne={key:2,src:Q},ie={key:3,src:W},me={key:4,src:X},pe={style:{"margin-left":"8px"}},_e=p("\u53D6\u6D88"),ce=p("\u786E\u5B9A"),ve=H({__name:"add-or-update",emits:["refreshDataList"],setup(be,{expose:k,emit:B}){K(()=>{h(),T()});const h=()=>{Z().then(o=>{e.databaseList=o.data})},T=()=>{ae().then(o=>{y.value=o.data})},w=o=>{for(var t in e.databaseList){var c=e.databaseList[t];c.id==o&&Object.assign(I,c)}},_=f(!1),b=f(),y=f([]),I=v({}),e=v({name:"",strategy:"",taskType:"",cron:"",description:"",dbType:"",databaseId:"",metadataId:"",databaseList:[]}),L=o=>{_.value=!0,e.id="",b.value&&b.value.resetFields(),o&&U(o)},U=o=>{ee(o).then(t=>{Object.assign(e,t.data)})},x=f({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],strategy:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],taskType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],cron:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],dbType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],databaseId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],metadataId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),V=()=>{b.value.validate(o=>{if(!o)return!1;e.databaseId=e.dbType==2?"":e.databaseId,e.cron=e.taskType==1?null:e.cron,te(e).then(()=>{z.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{_.value=!1,B("refreshDataList")}})})})};return k({init:L}),(o,t)=>{const c=s("el-divider"),F=s("el-input"),r=s("el-form-item"),C=s("fast-select"),D=s("el-radio"),q=s("el-radio-group"),M=s("el-option"),$=s("el-select"),N=s("el-tree-select"),R=s("el-form"),A=s("el-button"),j=s("el-dialog");return 
d(),g(j,{modelValue:_.value,"onUpdate:modelValue":t[11]||(t[11]=a=>_.value=a),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:u(()=>[l(A,{onClick:t[9]||(t[9]=a=>_.value=!1)},{default:u(()=>[_e]),_:1}),l(A,{type:"primary",onClick:t[10]||(t[10]=a=>V())},{default:u(()=>[ce]),_:1})]),default:u(()=>[l(R,{ref_key:"dataFormRef",ref:b,model:e,rules:x.value,"label-width":"100px",onKeyup:t[8]||(t[8]=Y(a=>V(),["enter"]))},{default:u(()=>[l(c,null,{default:u(()=>[le]),_:1}),l(r,{label:"\u540D\u79F0",prop:"name","label-width":"auto"},{default:u(()=>[l(F,{modelValue:e.name,"onUpdate:modelValue":t[0]||(t[0]=a=>e.name=a),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),l(r,{label:"\u5165\u5E93\u7B56\u7565",prop:"strategy","label-width":"auto"},{default:u(()=>[l(C,{modelValue:e.strategy,"onUpdate:modelValue":t[1]||(t[1]=a=>e.strategy=a),"dict-type":"metadata_collect_strategy",placeholder:"\u8BF7\u9009\u62E9",clearable:""},null,8,["modelValue"])]),_:1}),l(r,{label:"\u4EFB\u52A1\u7C7B\u578B",prop:"taskType","label-width":"auto"},{default:u(()=>[l(C,{modelValue:e.taskType,"onUpdate:modelValue":t[2]||(t[2]=a=>e.taskType=a),"dict-type":"metadata_collect_type",placeholder:"\u8BF7\u9009\u62E9",clearable:""},null,8,["modelValue"])]),_:1}),e.taskType=="2"?(d(),g(r,{key:0,label:"cron\u8868\u8FBE\u5F0F",prop:"cron","label-width":"auto"},{default:u(()=>[l(F,{modelValue:e.cron,"onUpdate:modelValue":t[3]||(t[3]=a=>e.cron=a),placeholder:"cron\u8868\u8FBE\u5F0F"},null,8,["modelValue"])]),_:1})):i("",!0),l(r,{label:"\u63CF\u8FF0",prop:"description","label-width":"auto"},{default:u(()=>[l(F,{type:"textarea",rows:2,modelValue:e.description,"onUpdate:modelValue":t[4]||(t[4]=a=>e.description=a),placeholder:"\u63CF\u8FF0"},null,8,["modelValue"])]),_:1}),l(c,null,{default:u(()=>[ue]),_:1}),l(r,{label:"\u6570\u636E\u6E90",prop:"dbType","label-width":"auto"},{default:u(()=>[l(q,{modelValue:e.dbType,"onUpdate:modelValue":t[5]||(t[5]=a=>e.dbType=a),disabled:!!e.id},{default:u(()=>[l(D,{label:1,border:""},{default:u(()=>[oe]),_:1}),l(D,{label:2,border:""},{default:u(()=>[se]),_:1})]),_:1},8,["modelValue","disabled"])]),_:1}),e.dbType=="1"?(d(),g(r,{key:1,label:"\u6570\u636E\u5E93",prop:"databaseId","label-width":"auto"},{default:u(()=>[l($,{modelValue:e.databaseId,"onUpdate:modelValue":t[6]||(t[6]=a=>e.databaseId=a),disabled:!!e.id,clearable:"",filterable:"",onChange:w,placeholder:"\u8BF7\u9009\u62E9"},{default:u(()=>[(d(!0),m(S,null,O(e.databaseList,(a,n)=>(d(),g(M,{key:a.id,label:`[${a.id}]${a.name}`,value:a.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue","disabled"])]),_:1})):i("",!0),l(r,{label:"\u5F52\u5C5E\u5143\u6570\u636E\u76EE\u5F55",prop:"metadataId","label-width":"auto"},{default:u(()=>[l(N,{disabled:!!e.id,modelValue:e.metadataId,"onUpdate:modelValue":t[7]||(t[7]=a=>e.metadataId=a),data:y.value,clearable:""},{default:u(({node:a,data:n})=>[E("div",null,[E("span",null,[n.icon=="/src/assets/folder.png"?(d(),m("img",de)):i("",!0),n.icon=="/src/assets/database.png"?(d(),m("img",re)):i("",!0),n.icon=="/src/assets/table.png"?(d(),m("img",ne)):i("",!0),n.icon=="/src/assets/column.png"?(d(),m("img",ie)):i("",!0),n.icon=="/src/assets/model.png"?(d(),m("img",me)):i("",!0),E("span",pe,P(n.name),1)])])]),_:1},8,["disabled","modelValue","data"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{ve as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.8494cd6a.js 
b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.8494cd6a.js new file mode 100644 index 0000000..6a30f70 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.8494cd6a.js @@ -0,0 +1 @@ +import{d as B,h as i,Y as N,r,o as _,f as b,w as o,b as u,H as k,a2 as x,l as F,E as U}from"./index.e3896b23.js";import{a as P,b as w}from"./apiGroup.d1155eaa.js";const I=F("\u53D6\u6D88"),h=F("\u786E\u5B9A"),G=B({__name:"add-or-update",emits:["refreshDataList"],setup(q,{expose:V,emit:c}){const n=i(!1),p=i(),e=N({parentId:"",parentPath:"",path:"",name:"",type:1,orderNo:0,description:""}),g=(a,t,d)=>{n.value=!0,e.id="",p.value&&p.value.resetFields(),e.parentId=t,e.parentPath=d,a&&E(a)},E=a=>{P(a).then(t=>{Object.assign(e,t.data)})},v=i({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),m=()=>{p.value.validate(a=>{if(!a)return!1;e.type=e.parentId==0?1:e.type,w(e).then(()=>{U.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,c("refreshDataList")}})})})};return V({init:g}),(a,t)=>{const d=r("el-input"),s=r("el-form-item"),y=r("fast-select"),C=r("el-input-number"),A=r("el-form"),f=r("el-button"),D=r("el-dialog");return _(),b(D,{modelValue:n.value,"onUpdate:modelValue":t[8]||(t[8]=l=>n.value=l),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[u(f,{onClick:t[6]||(t[6]=l=>n.value=!1)},{default:o(()=>[I]),_:1}),u(f,{type:"primary",onClick:t[7]||(t[7]=l=>m())},{default:o(()=>[h]),_:1})]),default:o(()=>[u(A,{ref_key:"dataFormRef",ref:p,model:e,rules:v.value,"label-width":"100px",onKeyup:t[5]||(t[5]=x(l=>m(),["enter"]))},{default:o(()=>[u(s,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:o(()=>[u(d,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":t[0]||(t[0]=l=>e.parentPath=l),placeholder:""},null,8,["modelValue"])]),_:1}),u(s,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[u(d,{modelValue:e.name,"onUpdate:modelValue":t[1]||(t[1]=l=>e.name=l),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e.parentId!=0?(_(),b(s,{key:0,label:"\u7C7B\u578B",prop:"type"},{default:o(()=>[u(y,{disabled:!!e.id,modelValue:e.type,"onUpdate:modelValue":t[2]||(t[2]=l=>e.type=l),placeholder:"\u7C7B\u578B","dict-type":"api_group_type",clearable:""},null,8,["disabled","modelValue"])]),_:1})):k("",!0),u(s,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:o(()=>[u(C,{modelValue:e.orderNo,"onUpdate:modelValue":t[3]||(t[3]=l=>e.orderNo=l),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),u(s,{label:"\u63CF\u8FF0",prop:"description"},{default:o(()=>[u(d,{type:"textarea",modelValue:e.description,"onUpdate:modelValue":t[4]||(t[4]=l=>e.description=l)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{G as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.8d00c1b3.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.8d00c1b3.js new file mode 100644 index 0000000..21761fc --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.8d00c1b3.js @@ -0,0 +1 @@ +import{d as k,h 
as f,Y as N,r as a,o as m,f as i,w as o,b as t,H as x,a2 as U,l as b,E as P}from"./index.e3896b23.js";import{u as q,a as w}from"./catalog.f6d809a5.js";const I=b("\u53D6\u6D88"),R=b("\u786E\u5B9A"),K=k({__name:"add-or-update",emits:["refreshDataList"],setup($,{expose:V,emit:c}){const s=f(!1),p=f(),e=N({parentId:"",parentPath:"",path:"",name:"",code:"",type:0,orderNo:0,description:""}),g=(r,l,n)=>{s.value=!0,e.id="",p.value&&p.value.resetFields(),e.parentId=l,e.parentPath=n,r&&A(r)},A=r=>{q(r).then(l=>{Object.assign(e,l.data)})},C=f({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],code:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),_=()=>{p.value.validate(r=>{if(!r)return!1;e.type=e.parentId==0?0:e.type,w(e).then(()=>{P.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{s.value=!1,c("refreshDataList")}})})})};return V({init:g}),(r,l)=>{const n=a("el-input"),d=a("el-form-item"),F=a("el-option"),v=a("el-select"),D=a("el-input-number"),y=a("el-form"),E=a("el-button"),B=a("el-dialog");return m(),i(B,{modelValue:s.value,"onUpdate:modelValue":l[9]||(l[9]=u=>s.value=u),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[t(E,{onClick:l[7]||(l[7]=u=>s.value=!1)},{default:o(()=>[I]),_:1}),t(E,{type:"primary",onClick:l[8]||(l[8]=u=>_())},{default:o(()=>[R]),_:1})]),default:o(()=>[t(y,{ref_key:"dataFormRef",ref:p,model:e,rules:C.value,"label-width":"100px",onKeyup:l[6]||(l[6]=U(u=>_(),["enter"]))},{default:o(()=>[t(d,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:o(()=>[t(n,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":l[0]||(l[0]=u=>e.parentPath=u),placeholder:""},null,8,["modelValue"])]),_:1}),t(d,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[t(n,{modelValue:e.name,"onUpdate:modelValue":l[1]||(l[1]=u=>e.name=u),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),t(d,{label:"\u76EE\u5F55\u7F16\u7801",prop:"code"},{default:o(()=>[t(n,{modelValue:e.code,"onUpdate:modelValue":l[2]||(l[2]=u=>e.code=u),placeholder:"\u76EE\u5F55\u7F16\u7801"},null,8,["modelValue"])]),_:1}),e.parentId!=0?(m(),i(d,{key:0,label:"\u7C7B\u578B",prop:"type"},{default:o(()=>[t(v,{modelValue:e.type,"onUpdate:modelValue":l[3]||(l[3]=u=>e.type=u),placeholder:"\u7C7B\u578B",disabled:!!e.id},{default:o(()=>[(m(),i(F,{key:0,label:"\u666E\u901A\u76EE\u5F55",value:0})),(m(),i(F,{key:1,label:"\u8D44\u4EA7\u76EE\u5F55",value:1}))]),_:1},8,["modelValue","disabled"])]),_:1})):x("",!0),t(d,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:o(()=>[t(D,{modelValue:e.orderNo,"onUpdate:modelValue":l[4]||(l[4]=u=>e.orderNo=u),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),t(d,{label:"\u63CF\u8FF0",prop:"description"},{default:o(()=>[t(n,{type:"textarea",modelValue:e.description,"onUpdate:modelValue":l[5]||(l[5]=u=>e.description=u)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{K as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.a947df21.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.a947df21.js new file mode 100644 index 0000000..14b985a --- /dev/null +++
b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.a947df21.js @@ -0,0 +1 @@ +import{ad as _,d as w,h as f,Y as B,r,o as z,f as I,w as a,b as u,k as j,a2 as R,l as g,E as m}from"./index.e3896b23.js";import{c as $}from"./constant.71632d98.js";const K=n=>_.get("/data-integrate/file/"+n),M=n=>n.id?_.put("/data-integrate/file",n):_.post("/data-integrate/file",n),N=g("\u9009\u62E9\u6587\u4EF6"),S=g("\u53D6\u6D88"),q=g("\u786E\u5B9A"),O=w({__name:"add-or-update",emits:["refreshDataList"],setup(n,{expose:C,emit:v}){const c=f(),i=f(!1),p=f(),t=B({name:"",fileCategoryId:"",path:"",type:"",fileUrl:"",size:"",description:"",projectId:""}),b=(l,e,d,s)=>{i.value=!0,t.id="",c.value&&c.value.clearFiles(),p.value&&p.value.resetFields(),l&&D(l),t.fileCategoryId=e,t.path=d,t.projectId=s},D=l=>{K(l).then(e=>{Object.assign(t,e.data),t.fileUrl="",t.name=""})},V=f({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),y=l=>l.size/1024/1024>100?(m.error("\u6587\u4EF6\u5927\u5C0F\u4E0D\u80FD\u8D85\u8FC7100M"),!1):!0,A=(l,e)=>{e.length>1&&e.splice(0,1)},h=(l,e)=>{if(l.code!==0)return m.error("\u4E0A\u4F20\u5931\u8D25\uFF1A"+l.message),!1;t.name=l.data.name,t.type=l.data.suffix,t.fileUrl=l.data.url,t.size=l.data.size},E=()=>{p.value.validate(l=>{if(!l)return!1;if(!t.fileUrl){m.warning({message:"\u8BF7\u9009\u62E9\u6587\u4EF6\u4E0A\u4F20\u540E\u518D\u63D0\u4EA4\uFF01"});return}M(t).then(()=>{m.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{i.value=!1,v("refreshDataList")}})})})};return C({init:b}),(l,e)=>{const d=r("el-input"),s=r("el-form-item"),F=r("el-button"),U=r("el-upload"),k=r("el-form"),x=r("el-dialog");return z(),I(x,{modelValue:i.value,"onUpdate:modelValue":e[6]||(e[6]=o=>i.value=o),title:t.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[u(F,{onClick:e[4]||(e[4]=o=>i.value=!1)},{default:a(()=>[S]),_:1}),u(F,{type:"primary",onClick:e[5]||(e[5]=o=>E())},{default:a(()=>[q]),_:1})]),default:a(()=>[u(k,{ref_key:"dataFormRef",ref:p,model:t,rules:V.value,"label-width":"100px",onKeyup:e[3]||(e[3]=R(o=>E(),["enter"]))},{default:a(()=>[u(s,{label:"\u540D\u79F0",prop:"name"},{default:a(()=>[u(d,{disabled:"",modelValue:t.name,"onUpdate:modelValue":e[0]||(e[0]=o=>t.name=o)},null,8,["modelValue"])]),_:1}),u(s,{label:"\u4E0A\u4F20\u6587\u4EF6"},{default:a(()=>[u(U,{ref_key:"upload",ref:c,class:"upload-demo",action:j($).uploadUrl,"before-upload":y,"on-success":h,multiple:"","on-change":A},{trigger:a(()=>[u(F,{type:"primary"},{default:a(()=>[N]),_:1})]),_:1},8,["action"])]),_:1}),u(s,{label:"\u6240\u5C5E\u5206\u7EC4",prop:"path"},{default:a(()=>[u(d,{disabled:"",modelValue:t.path,"onUpdate:modelValue":e[1]||(e[1]=o=>t.path=o),placeholder:"\u6240\u5C5E\u5206\u7EC4"},null,8,["modelValue"])]),_:1}),u(s,{label:"\u63CF\u8FF0",prop:"description"},{default:a(()=>[u(d,{rows:2,type:"textarea",modelValue:t.description,"onUpdate:modelValue":e[2]||(e[2]=o=>t.description=o),placeholder:"\u63CF\u8FF0"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{O as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.aceeddd9.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.aceeddd9.js new file mode 100644 index 0000000..7b25e8f --- /dev/null +++
b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.aceeddd9.js @@ -0,0 +1 @@ +import{ad as f,d as E,h as p,Y as k,r,o as B,f as N,w as a,b as u,a2 as U,l as b,E as A}from"./index.e3896b23.js";const L=n=>f.get("/data-integrate/layer/"+n),T=n=>n.id?f.put("/data-integrate/layer",n):f.post("/data-integrate/layer",n),w=b("\u53D6\u6D88"),P=b("\u786E\u5B9A"),$=E({__name:"add-or-update",emits:["refreshDataList"],setup(n,{expose:c,emit:V}){const d=p(!1),i=p(),l=k({name:"",cnName:"",note:"",tablePrefix:"",createTime:""}),v=o=>{d.value=!0,l.id="",i.value&&i.value.resetFields(),o&&D(o)},D=o=>{L(o).then(e=>{Object.assign(l,e.data)})},C=p({}),_=()=>{i.value.validate(o=>{if(!o)return!1;T(l).then(()=>{A.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{d.value=!1,V("refreshDataList")}})})})};return c({init:v}),(o,e)=>{const m=r("el-input"),s=r("el-form-item"),y=r("el-date-picker"),g=r("el-form"),F=r("el-button"),x=r("el-dialog");return B(),N(x,{modelValue:d.value,"onUpdate:modelValue":e[8]||(e[8]=t=>d.value=t),title:l.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[u(F,{onClick:e[6]||(e[6]=t=>d.value=!1)},{default:a(()=>[w]),_:1}),u(F,{type:"primary",onClick:e[7]||(e[7]=t=>_())},{default:a(()=>[P]),_:1})]),default:a(()=>[u(g,{ref_key:"dataFormRef",ref:i,model:l,rules:C.value,"label-width":"100px",onKeyup:e[5]||(e[5]=U(t=>_(),["enter"]))},{default:a(()=>[u(s,{label:"\u5206\u5C42\u82F1\u6587\u540D\u79F0",prop:"name"},{default:a(()=>[u(m,{modelValue:l.name,"onUpdate:modelValue":e[0]||(e[0]=t=>l.name=t),placeholder:"\u5206\u5C42\u82F1\u6587\u540D\u79F0",disabled:!0},null,8,["modelValue"])]),_:1}),u(s,{label:"\u5206\u5C42\u4E2D\u6587\u540D\u79F0",prop:"cnName"},{default:a(()=>[u(m,{modelValue:l.cnName,"onUpdate:modelValue":e[1]||(e[1]=t=>l.cnName=t),placeholder:"\u5206\u5C42\u4E2D\u6587\u540D\u79F0",disabled:!0},null,8,["modelValue"])]),_:1}),u(s,{label:"\u5206\u5C42\u63CF\u8FF0",prop:"note"},{default:a(()=>[u(m,{type:"textarea",modelValue:l.note,"onUpdate:modelValue":e[2]||(e[2]=t=>l.note=t)},null,8,["modelValue"])]),_:1}),u(s,{label:"\u8868\u540D\u524D\u7F00",prop:"tablePrefix"},{default:a(()=>[u(m,{modelValue:l.tablePrefix,"onUpdate:modelValue":e[3]||(e[3]=t=>l.tablePrefix=t),placeholder:"\u8868\u540D\u524D\u7F00",disabled:!0},null,8,["modelValue"])]),_:1}),u(s,{label:"\u521B\u5EFA\u65F6\u95F4",prop:"createTime"},{default:a(()=>[u(y,{type:"datetime",placeholder:"\u521B\u5EFA\u65F6\u95F4",modelValue:l.createTime,"onUpdate:modelValue":e[4]||(e[4]=t=>l.createTime=t),disabled:!0},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{$ as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.ae2729f2.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.ae2729f2.js new file mode 100644 index 0000000..82f0607 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.ae2729f2.js @@ -0,0 +1 @@ +import{d as y,h as f,Y as L,r as a,o as m,f as p,w as o,b as t,H as N,a2 as x,l as V,E as U}from"./index.e3896b23.js";import{u as P,a as q}from"./metamodel.a560a346.js";const w=V("\u53D6\u6D88"),I=V("\u786E\u5B9A"),h=y({__name:"add-or-update",emits:["refreshDataList"],setup(M,{expose:c,emit:E}){const 
s=f(!1),i=f(),e=L({parentId:"",parentPath:"",path:"",name:"",code:"",ifLeaf:1,orderNo:0,description:""}),g=(r,l,d)=>{s.value=!0,e.id="",i.value&&i.value.resetFields(),e.parentId=l,e.parentPath=d,r&&v(r)},v=r=>{P(r).then(l=>{Object.assign(e,l.data)})},A=f({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],code:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],ifLeaf:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),_=()=>{i.value.validate(r=>{if(!r)return!1;e.ifLeaf=e.parentId==0?1:e.ifLeaf,q(e).then(()=>{U.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{s.value=!1,E("refreshDataList")}})})})};return c({init:g}),(r,l)=>{const d=a("el-input"),n=a("el-form-item"),F=a("el-option"),C=a("el-select"),D=a("el-input-number"),B=a("el-form"),b=a("el-button"),k=a("el-dialog");return m(),p(k,{modelValue:s.value,"onUpdate:modelValue":l[9]||(l[9]=u=>s.value=u),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[t(b,{onClick:l[7]||(l[7]=u=>s.value=!1)},{default:o(()=>[w]),_:1}),t(b,{type:"primary",onClick:l[8]||(l[8]=u=>_())},{default:o(()=>[I]),_:1})]),default:o(()=>[t(B,{ref_key:"dataFormRef",ref:i,model:e,rules:A.value,"label-width":"100px",onKeyup:l[6]||(l[6]=x(u=>_(),["enter"]))},{default:o(()=>[t(n,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:o(()=>[t(d,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":l[0]||(l[0]=u=>e.parentPath=u),placeholder:""},null,8,["modelValue"])]),_:1}),t(n,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[t(d,{modelValue:e.name,"onUpdate:modelValue":l[1]||(l[1]=u=>e.name=u),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),t(n,{label:"\u7F16\u7801",prop:"code"},{default:o(()=>[t(d,{modelValue:e.code,"onUpdate:modelValue":l[2]||(l[2]=u=>e.code=u),placeholder:"\u7F16\u7801"},null,8,["modelValue"])]),_:1}),e.parentId!=0?(m(),p(n,{key:0,label:"\u7C7B\u578B",prop:"ifLeaf"},{default:o(()=>[t(C,{modelValue:e.ifLeaf,"onUpdate:modelValue":l[3]||(l[3]=u=>e.ifLeaf=u),placeholder:"\u7C7B\u578B",disabled:!!e.id},{default:o(()=>[(m(),p(F,{key:1,label:"\u76EE\u5F55",value:1})),(m(),p(F,{key:0,label:"\u5143\u6A21\u578B",value:0}))]),_:1},8,["modelValue","disabled"])]),_:1})):N("",!0),t(n,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:o(()=>[t(D,{modelValue:e.orderNo,"onUpdate:modelValue":l[4]||(l[4]=u=>e.orderNo=u),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),t(n,{label:"\u63CF\u8FF0",prop:"description"},{default:o(()=>[t(d,{type:"textarea",modelValue:e.description,"onUpdate:modelValue":l[5]||(l[5]=u=>e.description=u)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{h as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.bc99adba.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.bc99adba.js new file mode 100644 index 0000000..627e598 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.bc99adba.js @@ -0,0 +1 @@ +import{d as k,h as f,Y as N,r as a,o as i,f as m,w as o,b as u,H as x,a2 as I,l as E,E as U}from"./index.e3896b23.js";import{u as P,a as w}from"./fileCategory.cc511701.js";const 
h=E("\u53D6\u6D88"),j=E("\u786E\u5B9A"),H=k({__name:"add-or-update",emits:["refreshDataList"],setup(q,{expose:c,emit:V}){const d=f(!1),p=f(),e=N({parentId:"",parentPath:"",path:"",name:"",type:0,orderNo:0,description:"",projectId:""}),y=(r,t,s,n)=>{d.value=!0,p.value&&p.value.resetFields(),e.id="",e.parentId=t,e.parentPath=s,r&&g(r)},g=r=>{P(r).then(t=>{Object.assign(e,t.data)})},v=f({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),_=()=>{p.value.validate(r=>{if(!r)return!1;e.type=e.parentId==0?0:e.type,w(e).then(()=>{U.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{d.value=!1,V("refreshDataList")}})})})};return c({init:y}),(r,t)=>{const s=a("el-input"),n=a("el-form-item"),F=a("el-option"),C=a("el-select"),A=a("el-input-number"),D=a("el-form"),b=a("el-button"),B=a("el-dialog");return i(),m(B,{modelValue:d.value,"onUpdate:modelValue":t[8]||(t[8]=l=>d.value=l),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[u(b,{onClick:t[6]||(t[6]=l=>d.value=!1)},{default:o(()=>[h]),_:1}),u(b,{type:"primary",onClick:t[7]||(t[7]=l=>_())},{default:o(()=>[j]),_:1})]),default:o(()=>[u(D,{ref_key:"dataFormRef",ref:p,model:e,rules:v.value,"label-width":"100px",onKeyup:t[5]||(t[5]=I(l=>_(),["enter"]))},{default:o(()=>[u(n,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:o(()=>[u(s,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":t[0]||(t[0]=l=>e.parentPath=l),placeholder:""},null,8,["modelValue"])]),_:1}),u(n,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[u(s,{modelValue:e.name,"onUpdate:modelValue":t[1]||(t[1]=l=>e.name=l),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e.parentId!=0?(i(),m(n,{key:0,label:"\u7C7B\u578B",prop:"type"},{default:o(()=>[u(C,{modelValue:e.type,"onUpdate:modelValue":t[2]||(t[2]=l=>e.type=l),placeholder:"\u7C7B\u578B",disabled:!!e.id},{default:o(()=>[(i(),m(F,{key:0,label:"\u666E\u901A\u76EE\u5F55",value:0})),(i(),m(F,{key:1,label:"\u6587\u4EF6\u76EE\u5F55",value:1}))]),_:1},8,["modelValue","disabled"])]),_:1})):x("",!0),u(n,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:o(()=>[u(A,{modelValue:e.orderNo,"onUpdate:modelValue":t[3]||(t[3]=l=>e.orderNo=l),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),u(n,{label:"\u63CF\u8FF0",prop:"description"},{default:o(()=>[u(s,{type:"textarea",modelValue:e.description,"onUpdate:modelValue":t[4]||(t[4]=l=>e.description=l)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{H as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.c217bb4d.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.c217bb4d.js new file mode 100644 index 0000000..2c78ea4 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.c217bb4d.js @@ -0,0 +1 @@ +import{d as O,h as d,Y as $,r,o as i,f,w as a,b as t,c as C,e as F,F as L,a2 as P,l as B,aJ as j,aK as H,E as J}from"./index.e3896b23.js";import{u as M}from"./orgs.0a892ff5.js";import{b as S}from"./post.de075824.js";import{u as T}from"./role.44f4fe5e.js";const 
Y=B("\u53D6\u6D88"),z=B("\u786E\u5B9A"),h=O({__name:"add-or-update",emits:["refreshDataList"],setup(G,{expose:c,emit:y}){const n=d(!1),g=d([]),V=d([]),b=d([]),p=d(),u=$({id:"",username:"",realName:"",orgId:"",orgName:"",password:"",gender:0,email:"",mobile:"",roleIdList:[],postIdList:[],status:1}),U=o=>{n.value=!0,u.id="",p.value&&p.value.resetFields(),o&&N(o),k(),I(),w()},I=()=>S().then(o=>{g.value=o.data}),w=()=>T().then(o=>{V.value=o.data}),k=()=>M().then(o=>{b.value=o.data}),N=o=>{j(o).then(l=>{Object.assign(u,l.data)})},x=d({username:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],realName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],mobile:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orgId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),_=()=>{p.value.validate(o=>{if(!o)return!1;H(u).then(()=>{J.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,y("refreshDataList")}})})})};return c({init:U}),(o,l)=>{const m=r("el-input"),s=r("el-form-item"),R=r("el-tree-select"),v=r("fast-radio-group"),A=r("el-option"),D=r("el-select"),q=r("el-form"),E=r("el-button"),K=r("el-dialog");return i(),f(K,{modelValue:n.value,"onUpdate:modelValue":l[13]||(l[13]=e=>n.value=e),title:u.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,draggable:""},{footer:a(()=>[t(E,{onClick:l[11]||(l[11]=e=>n.value=!1)},{default:a(()=>[Y]),_:1}),t(E,{type:"primary",onClick:l[12]||(l[12]=e=>_())},{default:a(()=>[z]),_:1})]),default:a(()=>[t(q,{ref_key:"dataFormRef",ref:p,model:u,rules:x.value,"label-width":"120px",onKeyup:l[10]||(l[10]=P(e=>_(),["enter"]))},{default:a(()=>[t(s,{prop:"username",label:"\u7528\u6237\u540D"},{default:a(()=>[t(m,{modelValue:u.username,"onUpdate:modelValue":l[0]||(l[0]=e=>u.username=e),placeholder:"\u7528\u6237\u540D"},null,8,["modelValue"])]),_:1}),t(s,{prop:"realName",label:"\u59D3\u540D"},{default:a(()=>[t(m,{modelValue:u.realName,"onUpdate:modelValue":l[1]||(l[1]=e=>u.realName=e),placeholder:"\u59D3\u540D"},null,8,["modelValue"])]),_:1}),t(s,{prop:"orgId",label:"\u6240\u5C5E\u673A\u6784"},{default:a(()=>[t(R,{modelValue:u.orgId,"onUpdate:modelValue":l[2]||(l[2]=e=>u.orgId=e),data:b.value,"check-strictly":"","value-key":"id",props:{label:"name",children:"children"},style:{width:"100%"}},null,8,["modelValue","data"])]),_:1}),t(s,{prop:"gender",label:"\u6027\u522B"},{default:a(()=>[t(v,{modelValue:u.gender,"onUpdate:modelValue":l[3]||(l[3]=e=>u.gender=e),"dict-type":"user_gender"},null,8,["modelValue"])]),_:1}),t(s,{prop:"mobile",label:"\u624B\u673A\u53F7"},{default:a(()=>[t(m,{modelValue:u.mobile,"onUpdate:modelValue":l[4]||(l[4]=e=>u.mobile=e),placeholder:"\u624B\u673A\u53F7"},null,8,["modelValue"])]),_:1}),t(s,{prop:"email",label:"\u90AE\u7BB1"},{default:a(()=>[t(m,{modelValue:u.email,"onUpdate:modelValue":l[5]||(l[5]=e=>u.email=e),placeholder:"\u90AE\u7BB1"},null,8,["modelValue"])]),_:1}),t(s,{prop:"password",label:"\u5BC6\u7801"},{default:a(()=>[t(m,{modelValue:u.password,"onUpdate:modelValue":l[6]||(l[6]=e=>u.password=e),type:"password",placeholder:"\u5BC6\u7801"},null,8,["modelValue"])]),_:1}),t(s,{prop:"roleIdList",label:"\u6240\u5C5E\u89D2\u8272"},{default:a(()=>[t(D,{modelValue:u.roleIdList,"onUpdate:modelValue":l[7]||(l[7]=e=>u.roleIdList=e),multiple:"",placeholder:"\u6240\u5C5E\u89D2\u8272",style:{width:"100%"}},{default:a(()=>[(i(!0),C(L,null,F(V.value,e=>(i(),f(A,{key:e.id,label:e.name,value:e.id}
,null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1}),t(s,{prop:"postIdList",label:"\u6240\u5C5E\u5C97\u4F4D"},{default:a(()=>[t(D,{modelValue:u.postIdList,"onUpdate:modelValue":l[8]||(l[8]=e=>u.postIdList=e),multiple:"",placeholder:"\u6240\u5C5E\u5C97\u4F4D",style:{width:"100%"}},{default:a(()=>[(i(!0),C(L,null,F(g.value,e=>(i(),f(A,{key:e.id,label:e.postName,value:e.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1}),t(s,{prop:"status",label:"\u72B6\u6001"},{default:a(()=>[t(v,{modelValue:u.status,"onUpdate:modelValue":l[9]||(l[9]=e=>u.status=e),"dict-type":"user_status"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{h as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.c8657069.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.c8657069.js new file mode 100644 index 0000000..dff885c --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.c8657069.js @@ -0,0 +1 @@ +import{ad as f,d as j,L as O,h as B,Y as P,r as p,o as a,f as c,w as l,b as n,c as m,e as Y,F as G,a as g,H as r,t as _,a2 as J,l as q,E as W}from"./index.e3896b23.js";import{f as L}from"./folder.ea536bf2.js";import{_ as w}from"./database.235d7a89.js";import{_ as T}from"./table.e1c1b00a.js";import{_ as M}from"./column.79595943.js";import{_ as x}from"./model.45425835.js";import{e as X,f as Z}from"./metadata.0c954be9.js";const ee=d=>f.get("/data-governance/quality-config/"+d),ue=d=>d.id?f.put("/data-governance/quality-config",d):f.post("/data-governance/quality-config",d),te=()=>f.get("/data-governance/quality-rule/list"),Ve=d=>f.put("/data-governance/quality-config/online/"+d),Ie=d=>f.put("/data-governance/quality-config/offline/"+d),ke=d=>f.put("/data-governance/quality-config/hand-run/"+d),ae={key:0,src:L},le={key:1,src:w},re={key:2,src:T},oe={key:3,src:M},ne={key:4,src:x},se={style:{"margin-left":"8px"}},de={key:0,src:L},ie={key:1,src:w},me={key:2,src:T},pe={key:3,src:M},ce={key:4,src:x},ge={style:{"margin-left":"8px"}},fe=q("\u53D6\u6D88"),_e=q("\u786E\u5B9A"),ve=j({__name:"add-or-update",emits:["refreshDataList"],setup(d,{expose:U,emit:S}){O(()=>{te().then(o=>{h.value=o.data}),X("").then(o=>{A.value=o.data})});const 
y=B(!1),F=B(),e=P({name:"",categoryId:"",ruleId:"",metadataIds:[],metadataStrs:"",param:{uniqueType:"",columnLength:1,columnMetaId:"",timeLength:1},relMetadataStr:"",taskType:"",cron:"",note:""}),h=B([]),E={label:"name",children:"children",isLeaf:"leaf",disabled:"disabled"},A=B([]),N=(o,u)=>{y.value=!0,e.id="",F.value&&F.value.resetFields(),e.metadataIds=[],e.param={uniqueType:"",columnLength:1,columnMetaId:"",timeLength:1},e.categoryId=u,o&&R(o)},R=o=>{ee(o).then(u=>{Object.assign(e,u.data)})},$=B({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],ruleId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],metadataIds:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],metadataStrs:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],relMetadataStr:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],taskType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],cron:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],param:{uniqueType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],columnLength:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],columnMetaId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],timeLength:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}}),V=()=>{F.value.validate(o=>{if(!o)return!1;if(e.ruleId!=1&&e.ruleId!=7&&e.ruleId!=9&&e.ruleId!=10&&(e.param=null),e.ruleId==9&&!e.id){let u=[];u.push(e.metadataIds),e.metadataIds=u}e.cron=e.taskType==2?e.cron:null,ue(e).then(()=>{W.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{y.value=!1,S("refreshDataList")}})})})},I=async(o,u)=>{if(o.level==0)return u(A.value);if(o.level>=1)if(o.data.children)u(o.data.children);else{const{data:b}=await Z(o.data.id);u(b)}};return U({init:N}),(o,u)=>{const b=p("el-input"),i=p("el-form-item"),Q=p("el-option"),K=p("el-select"),k=p("el-tree-select"),v=p("fast-select"),D=p("el-input-number"),z=p("el-form"),C=p("el-button"),H=p("el-dialog");return 
a(),c(H,{modelValue:y.value,"onUpdate:modelValue":u[13]||(u[13]=t=>y.value=t),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:l(()=>[n(C,{onClick:u[11]||(u[11]=t=>y.value=!1)},{default:l(()=>[fe]),_:1}),n(C,{type:"primary",onClick:u[12]||(u[12]=t=>V())},{default:l(()=>[_e]),_:1})]),default:l(()=>[n(z,{ref_key:"dataFormRef",ref:F,model:e,rules:$.value,"label-width":"100px",onKeyup:u[10]||(u[10]=J(t=>V(),["enter"]))},{default:l(()=>[n(i,{label:"\u540D\u79F0",prop:"name","label-width":"auto"},{default:l(()=>[n(b,{modelValue:e.name,"onUpdate:modelValue":u[0]||(u[0]=t=>e.name=t),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),n(i,{label:"\u89C4\u5219\u7C7B\u578B",prop:"ruleId","label-width":"auto"},{default:l(()=>[n(K,{modelValue:e.ruleId,"onUpdate:modelValue":u[1]||(u[1]=t=>e.ruleId=t),clearable:"",filterable:"",disabled:!!e.id,placeholder:"\u8BF7\u9009\u62E9"},{default:l(()=>[(a(!0),m(G,null,Y(h.value,(t,s)=>(a(),c(Q,{key:t.id,label:`[${t.engName}]${t.name}`,value:t.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue","disabled"])]),_:1}),!!e.ruleId&&!e.id?(a(),c(i,{key:0,label:"\u68C0\u6D4B\u5B57\u6BB5",prop:"metadataIds","label-width":"auto"},{default:l(()=>[n(k,{modelValue:e.metadataIds,"onUpdate:modelValue":u[2]||(u[2]=t=>e.metadataIds=t),multiple:e.ruleId!=9,props:E,data:A.value,clearable:"",load:I,lazy:!0},{default:l(({node:t,data:s})=>[g("div",null,[g("span",null,[s.icon=="/src/assets/folder.png"?(a(),m("img",ae)):r("",!0),s.icon=="/src/assets/database.png"?(a(),m("img",le)):r("",!0),s.icon=="/src/assets/table.png"?(a(),m("img",re)):r("",!0),s.icon=="/src/assets/column.png"?(a(),m("img",oe)):r("",!0),s.icon=="/src/assets/model.png"?(a(),m("img",ne)):r("",!0),g("span",se,_(s.name)+"\u2003"+_(s.code),1)])])]),_:1},8,["modelValue","multiple","data"])]),_:1})):r("",!0),!!e.ruleId&&e.id?(a(),c(i,{key:1,label:"\u68C0\u6D4B\u5B57\u6BB5",prop:"metadataStrs","label-width":"auto"},{default:l(()=>[g("span",null,_(e.metadataStrs),1)]),_:1})):r("",!0),e.ruleId==1?(a(),c(i,{key:2,label:"\u5224\u65AD\u65B9\u5F0F",prop:"param.uniqueType","label-width":"auto"},{default:l(()=>[n(v,{modelValue:e.param.uniqueType,"onUpdate:modelValue":u[3]||(u[3]=t=>e.param.uniqueType=t),"dict-type":"quality_unique_type",placeholder:"\u5224\u65AD\u65B9\u5F0F",clearable:""},null,8,["modelValue"])]),_:1})):r("",!0),e.ruleId==7?(a(),c(i,{key:3,label:"\u5B57\u6BB5\u957F\u5EA6",prop:"param.columnLength","label-width":"auto"},{default:l(()=>[n(D,{modelValue:e.param.columnLength,"onUpdate:modelValue":u[4]||(u[4]=t=>e.param.columnLength=t),min:1,max:99999999,placeholder:"\u5B57\u6BB5\u957F\u5EA6"},null,8,["modelValue"])]),_:1})):r("",!0),e.ruleId==9&&!e.id?(a(),c(i,{key:4,label:"\u5173\u8054\u5B57\u6BB5",prop:"param.columnMetaId","label-width":"auto"},{default:l(()=>[n(k,{modelValue:e.param.columnMetaId,"onUpdate:modelValue":u[5]||(u[5]=t=>e.param.columnMetaId=t),props:E,data:A.value,clearable:"",load:I,lazy:!0},{default:l(({node:t,data:s})=>[g("div",null,[g("span",null,[s.icon=="/src/assets/folder.png"?(a(),m("img",de)):r("",!0),s.icon=="/src/assets/database.png"?(a(),m("img",ie)):r("",!0),s.icon=="/src/assets/table.png"?(a(),m("img",me)):r("",!0),s.icon=="/src/assets/column.png"?(a(),m("img",pe)):r("",!0),s.icon=="/src/assets/model.png"?(a(),m("img",ce)):r("",!0),g("span",ge,_(s.name)+"\u2003"+_(s.code),1)])])]),_:1},8,["modelValue","data"])]),_:1})):r("",!0),e.ruleId==9&&e.id?(a(),c(i,{key:5,label:"\u5173\u8054\u5B57\u6BB5",prop:"relMetadataStr","label-width":"auto"},{default:l(()=
>[g("span",null,_(e.relMetadataStr),1)]),_:1})):r("",!0),e.ruleId==10?(a(),c(i,{key:6,label:"\u66F4\u65B0\u65F6\u957F(s)",prop:"param.timeLength","label-width":"auto"},{default:l(()=>[n(D,{modelValue:e.param.timeLength,"onUpdate:modelValue":u[6]||(u[6]=t=>e.param.timeLength=t),min:1,max:99999999,placeholder:"\u66F4\u65B0\u65F6\u957F(s)"},null,8,["modelValue"])]),_:1})):r("",!0),n(i,{label:"\u4EFB\u52A1\u7C7B\u578B",prop:"taskType","label-width":"auto"},{default:l(()=>[n(v,{modelValue:e.taskType,"onUpdate:modelValue":u[7]||(u[7]=t=>e.taskType=t),"dict-type":"quality_config_task_type",placeholder:"\u4EFB\u52A1\u7C7B\u578B",clearable:""},null,8,["modelValue"])]),_:1}),e.taskType==2?(a(),c(i,{key:7,label:"cron\u8868\u8FBE\u5F0F",prop:"cron","label-width":"auto"},{default:l(()=>[n(b,{modelValue:e.cron,"onUpdate:modelValue":u[8]||(u[8]=t=>e.cron=t),placeholder:"cron\u8868\u8FBE\u5F0F",clearable:""},null,8,["modelValue"])]),_:1})):r("",!0),n(i,{label:"\u5907\u6CE8",prop:"note","label-width":"auto"},{default:l(()=>[n(b,{type:"textarea",modelValue:e.note,"onUpdate:modelValue":u[9]||(u[9]=t=>e.note=t)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{ve as _,Ie as a,ke as h,Ve as o}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.d627c8a1.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.d627c8a1.js new file mode 100644 index 0000000..4ccdf40 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.d627c8a1.js @@ -0,0 +1 @@ +import{d as w,h as F,Y as x,r as s,o as U,f as O,w as a,b as l,a2 as L,a as d,l as b,E as k}from"./index.e3896b23.js";import{u as S,t as J,a as T}from"./clusterConfiguration.e495cab8.js";const q=d("span",{style:{"font-size":"17px"}},[d("b",null,"Hadoop \u914D\u7F6E")],-1),N=d("span",{style:{"font-size":"17px"}},[d("b",null,"Flink \u914D\u7F6E")],-1),z=d("span",{style:{"font-size":"16px"}},[d("b",null,"\u81EA\u5B9A\u4E49\u914D\u7F6E")],-1),$=d("span",{style:{"font-size":"17px"}},[d("b",null,"\u57FA\u672C\u914D\u7F6E")],-1),H=b("\u53D6\u6D88"),R=b("\u6D4B\u8BD5"),j=b("\u786E\u5B9A"),G=w({__name:"add-or-update",emits:["refreshDataList"],setup(K,{expose:D,emit:g}){const 
r=F(!1),p=F(),e=x({type:"",hadoopConfigPath:"",flinkLibPath:"",flinkConfigPath:"","taskmanager.numberOfTaskSlots":1,"state.savepoints.dir":"","state.checkpoints.dir":"",name:"",alias:"",note:"",enabled:!0,configJson:""}),C=o=>{r.value=!0,e.id="",p.value&&p.value.resetFields(),o&&_(o)},_=o=>{S(o).then(u=>{Object.assign(e,JSON.parse(u.data.configJson)),Object.assign(e,u.data)})},c=F({type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],hadoopConfigPath:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],flinkLibPath:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],flinkConfigPath:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],alias:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],enabled:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),V=()=>{p.value.validate(o=>{if(!o)return!1;h(),J(e).then(()=>{k.success({message:"\u6D4B\u8BD5\u8FDE\u63A5\u6210\u529F",duration:500,onClose:()=>{g("refreshDataList")}})})})},E=()=>{p.value.validate(o=>{if(!o)return!1;h(),T(e).then(()=>{k.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{r.value=!1,g("refreshDataList")}})})})},h=()=>{let o={};o.hadoopConfigPath=e.hadoopConfigPath,o.flinkLibPath=e.flinkLibPath,o.flinkConfigPath=e.flinkConfigPath,o["taskmanager.numberOfTaskSlots"]=e["taskmanager.numberOfTaskSlots"],o["state.savepoints.dir"]=e["state.savepoints.dir"],o["state.checkpoints.dir"]=e["state.checkpoints.dir"],e.configJson=JSON.stringify(o)};return D({init:C}),(o,u)=>{const v=s("fast-select"),n=s("el-form-item"),f=s("el-divider"),i=s("el-input"),A=s("el-input-number"),B=s("el-switch"),y=s("el-form"),m=s("el-button"),P=s("el-dialog");return U(),O(P,{modelValue:r.value,"onUpdate:modelValue":u[15]||(u[15]=t=>r.value=t),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[l(m,{onClick:u[12]||(u[12]=t=>r.value=!1)},{default:a(()=>[H]),_:1}),l(m,{type:"primary",onClick:u[13]||(u[13]=t=>V())},{default:a(()=>[R]),_:1}),l(m,{type:"primary",onClick:u[14]||(u[14]=t=>E())},{default:a(()=>[j]),_:1})]),default:a(()=>[l(y,{ref_key:"dataFormRef",ref:p,model:e,rules:c.value,"label-width":"100px",onKeyup:u[11]||(u[11]=L(t=>E(),["enter"]))},{default:a(()=>[l(n,{label:"\u7C7B\u578B",prop:"type","label-width":"auto"},{default:a(()=>[l(v,{modelValue:e.type,"onUpdate:modelValue":u[0]||(u[0]=t=>e.type=t),"dict-type":"production_cluster_configuration_type",placeholder:"\u8BF7\u9009\u62E9",clearable:""},null,8,["modelValue"])]),_:1}),l(f,null,{default:a(()=>[q]),_:1}),l(n,{label:"Hadoop \u914D\u7F6E\u6587\u4EF6\u8DEF\u5F84",prop:"hadoopConfigPath","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e.hadoopConfigPath,"onUpdate:modelValue":u[1]||(u[1]=t=>e.hadoopConfigPath=t),placeholder:"\u6307\u5B9A\u914D\u7F6E\u6587\u4EF6\u8DEF\u5F84\uFF08\u672B\u5C3E\u65E0/\uFF09,\u9700\u8981\u5305\u542B\u4EE5\u4E0B\u6587\u4EF6\uFF1Acore-site.xml,hdfs-site.xml,yarn-site.xml !"},null,8,["modelValue"])]),_:1}),l(f,null,{default:a(()=>[N]),_:1}),l(n,{label:"lib \u8DEF\u5F84",prop:"flinkLibPath","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e.flinkLibPath,"onUpdate:modelValue":u[2]||(u[2]=t=>e.flinkLibPath=t),placeholder:"\u6307\u5B9A lib \u7684 hdfs \u8DEF\u5F84\uFF08\u672B\u5C3E\u65E0/\uFF09,\u9700\u8981\u5305\u542B Flink 
\u8FD0\u884C\u65F6\u7684\u4F9D\u8D56, \u5982 hdfs://192.168.40.135:9000/flinkJar"},null,8,["modelValue"])]),_:1}),l(n,{label:"Flink \u914D\u7F6E\u6587\u4EF6\u8DEF\u5F84",prop:"flinkConfigPath","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e.flinkConfigPath,"onUpdate:modelValue":u[3]||(u[3]=t=>e.flinkConfigPath=t),placeholder:"\u8BF7\u8F93\u5165 flink-conf.yaml \u6240\u5728\u7684\u8DEF\u5F84, \u5982 /server/deployment/flink-1.14.3/conf !"},null,8,["modelValue"])]),_:1}),l(f,{"content-position":"left"},{default:a(()=>[z]),_:1}),l(n,{label:"taskmanager.numberOfTaskSlots",prop:"taskmanager.numberOfTaskSlots","label-width":"auto"},{default:a(()=>[l(A,{modelValue:e["taskmanager.numberOfTaskSlots"],"onUpdate:modelValue":u[4]||(u[4]=t=>e["taskmanager.numberOfTaskSlots"]=t),min:1,max:10},null,8,["modelValue"])]),_:1}),l(n,{label:"state.savepoints.dir",prop:"state.savepoints.dir","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e["state.savepoints.dir"],"onUpdate:modelValue":u[5]||(u[5]=t=>e["state.savepoints.dir"]=t),placeholder:"\u5982 hdfs://192.168.40.135:9000/savepoints"},null,8,["modelValue"])]),_:1}),l(n,{label:"state.checkpoints.dir",prop:"state.checkpoints.dir","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e["state.checkpoints.dir"],"onUpdate:modelValue":u[6]||(u[6]=t=>e["state.checkpoints.dir"]=t),placeholder:"\u5982 hdfs://192.168.40.135:9000/checkpoints"},null,8,["modelValue"])]),_:1}),l(f,null,{default:a(()=>[$]),_:1}),l(n,{label:"\u96C6\u7FA4\u914D\u7F6E\u540D\u79F0",prop:"name","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e.name,"onUpdate:modelValue":u[7]||(u[7]=t=>e.name=t),placeholder:"\u96C6\u7FA4\u914D\u7F6E\u540D\u79F0"},null,8,["modelValue"])]),_:1}),l(n,{label:"\u522B\u540D",prop:"alias","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e.alias,"onUpdate:modelValue":u[8]||(u[8]=t=>e.alias=t),placeholder:"\u522B\u540D"},null,8,["modelValue"])]),_:1}),l(n,{label:"\u5907\u6CE8",prop:"note","label-width":"auto"},{default:a(()=>[l(i,{modelValue:e.note,"onUpdate:modelValue":u[9]||(u[9]=t=>e.note=t),placeholder:"\u5907\u6CE8"},null,8,["modelValue"])]),_:1}),l(n,{label:"\u542F\u7528",prop:"enabled","label-width":"auto"},{default:a(()=>[l(B,{modelValue:e.enabled,"onUpdate:modelValue":u[10]||(u[10]=t=>e.enabled=t),"active-value":!0,"inactive-value":!1},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{G as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.d8c4f684.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.d8c4f684.js new file mode 100644 index 0000000..605f8ee --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.d8c4f684.js @@ -0,0 +1 @@ +import{d as A,h as m,Y as B,r as a,o as N,f as k,w as l,b as u,a2 as y,l as _,E as x}from"./index.e3896b23.js";import{u as U,a as w}from"./post.de075824.js";const P=_("\u53D6\u6D88"),R=_("\u786E\u5B9A"),L=A({__name:"add-or-update",emits:["refreshDataList"],setup($,{expose:b,emit:C}){const 
n=m(!1),r=m(),t=B({id:"",postCode:"",postName:"",sort:0,status:1}),V=s=>{n.value=!0,t.id="",r.value&&r.value.resetFields(),s&&g(s)},g=s=>{U(s).then(e=>{Object.assign(t,e.data)})},v=m({postCode:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],postName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),p=()=>{r.value.validate(s=>{if(!s)return!1;w(t).then(()=>{x.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,C("refreshDataList")}})})})};return b({init:V}),(s,e)=>{const i=a("el-input"),d=a("el-form-item"),F=a("el-input-number"),D=a("fast-radio-group"),c=a("el-form"),f=a("el-button"),E=a("el-dialog");return N(),k(E,{modelValue:n.value,"onUpdate:modelValue":e[7]||(e[7]=o=>n.value=o),title:t.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,draggable:""},{footer:l(()=>[u(f,{onClick:e[5]||(e[5]=o=>n.value=!1)},{default:l(()=>[P]),_:1}),u(f,{type:"primary",onClick:e[6]||(e[6]=o=>p())},{default:l(()=>[R]),_:1})]),default:l(()=>[u(c,{ref_key:"dataFormRef",ref:r,model:t,rules:v.value,"label-width":"80px",onKeyup:e[4]||(e[4]=y(o=>p(),["enter"]))},{default:l(()=>[u(d,{label:"\u5C97\u4F4D\u7F16\u7801",prop:"postCode"},{default:l(()=>[u(i,{modelValue:t.postCode,"onUpdate:modelValue":e[0]||(e[0]=o=>t.postCode=o)},null,8,["modelValue"])]),_:1}),u(d,{label:"\u5C97\u4F4D\u540D\u79F0",prop:"postName"},{default:l(()=>[u(i,{modelValue:t.postName,"onUpdate:modelValue":e[1]||(e[1]=o=>t.postName=o)},null,8,["modelValue"])]),_:1}),u(d,{label:"\u6392\u5E8F",prop:"sort"},{default:l(()=>[u(F,{modelValue:t.sort,"onUpdate:modelValue":e[2]||(e[2]=o=>t.sort=o),min:0},null,8,["modelValue"])]),_:1}),u(d,{label:"\u72B6\u6001",prop:"status"},{default:l(()=>[u(D,{modelValue:t.status,"onUpdate:modelValue":e[3]||(e[3]=o=>t.status=o),"dict-type":"post_status"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{L as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.e5899b02.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.e5899b02.js new file mode 100644 index 0000000..cba08f6 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.e5899b02.js @@ -0,0 +1 @@ +import{ad as i,d as k,h as c,Y as x,r as n,o as U,f as w,w as u,b as e,a2 as y,l as b,E as S}from"./index.e3896b23.js";const q=a=>i.get("/schedule/"+a),G=a=>a.id?i.put("/schedule",a):i.post("/schedule",a),J=a=>i.put("/schedule/run",a),M=a=>i.put("/schedule/change-status",a),R=b("\u5141\u8BB8"),$=b(" \u7981\u6B62 "),K=b("\u53D6\u6D88"),L=b("\u786E\u5B9A"),O=k({__name:"add-or-update",emits:["refreshDataList"],setup(a,{expose:V,emit:D}){c();const 
m=c(!1),_=c(),o=x({id:"",jobName:"",jobGroup:"",beanName:"",method:"",params:"",cronExpression:"",status:0,concurrent:1,remark:""}),A=d=>{m.value=!0,o.id="",_.value&&_.value.resetFields(),d&&C(d)},C=d=>{q(d).then(l=>{Object.assign(o,l.data)})},B=c({jobName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],jobGroup:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],beanName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],method:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],cronExpression:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),E=()=>{_.value.validate(d=>{if(!d)return!1;G(o).then(()=>{S.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{m.value=!1,D("refreshDataList")}})})})};return V({init:A}),(d,l)=>{const p=n("el-input"),r=n("el-form-item"),s=n("el-col"),v=n("fast-select"),f=n("el-row"),F=n("el-radio-button"),h=n("el-radio-group"),N=n("el-form"),g=n("el-button"),j=n("el-dialog");return U(),w(j,{modelValue:m.value,"onUpdate:modelValue":l[11]||(l[11]=t=>m.value=t),title:o.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1,draggable:""},{footer:u(()=>[e(g,{onClick:l[9]||(l[9]=t=>m.value=!1)},{default:u(()=>[K]),_:1}),e(g,{type:"primary",onClick:l[10]||(l[10]=t=>E())},{default:u(()=>[L]),_:1})]),default:u(()=>[e(N,{ref_key:"dataFormRef",ref:_,model:o,rules:B.value,"label-width":"100px",onKeyup:l[8]||(l[8]=y(t=>E(),["enter"]))},{default:u(()=>[e(f,null,{default:u(()=>[e(s,{span:12},{default:u(()=>[e(r,{label:"\u4EFB\u52A1\u540D\u79F0",prop:"jobName"},{default:u(()=>[e(p,{modelValue:o.jobName,"onUpdate:modelValue":l[0]||(l[0]=t=>o.jobName=t),placeholder:"\u4EFB\u52A1\u540D\u79F0"},null,8,["modelValue"])]),_:1})]),_:1}),e(s,{span:12},{default:u(()=>[e(r,{label:"\u4EFB\u52A1\u7EC4\u540D",prop:"jobGroup"},{default:u(()=>[e(v,{modelValue:o.jobGroup,"onUpdate:modelValue":l[1]||(l[1]=t=>o.jobGroup=t),"dict-type":"schedule_group",placeholder:"\u4EFB\u52A1\u7EC4\u540D",style:{width:"100%"}},null,8,["modelValue"])]),_:1})]),_:1})]),_:1}),e(f,null,{default:u(()=>[e(s,{span:12},{default:u(()=>[e(r,{label:"bean\u540D\u79F0",prop:"beanName"},{default:u(()=>[e(p,{modelValue:o.beanName,"onUpdate:modelValue":l[2]||(l[2]=t=>o.beanName=t),placeholder:"spring 
bean\u540D\u79F0"},null,8,["modelValue"])]),_:1})]),_:1}),e(s,{span:12},{default:u(()=>[e(r,{label:"\u65B9\u6CD5\u540D\u79F0",prop:"method"},{default:u(()=>[e(p,{modelValue:o.method,"onUpdate:modelValue":l[3]||(l[3]=t=>o.method=t),placeholder:"\u65B9\u6CD5\u540D\u79F0"},null,8,["modelValue"])]),_:1})]),_:1})]),_:1}),e(f,null,{default:u(()=>[e(s,{span:12},{default:u(()=>[e(r,{label:"\u65B9\u6CD5\u53C2\u6570",prop:"params"},{default:u(()=>[e(p,{modelValue:o.params,"onUpdate:modelValue":l[4]||(l[4]=t=>o.params=t),placeholder:"\u65B9\u6CD5\u53C2\u6570"},null,8,["modelValue"])]),_:1})]),_:1}),e(s,{span:12},{default:u(()=>[e(r,{label:"cron\u8868\u8FBE\u5F0F",prop:"cronExpression"},{default:u(()=>[e(p,{modelValue:o.cronExpression,"onUpdate:modelValue":l[5]||(l[5]=t=>o.cronExpression=t),placeholder:"cron\u8868\u8FBE\u5F0F"},null,8,["modelValue"])]),_:1})]),_:1})]),_:1}),e(f,null,{default:u(()=>[e(s,{span:12},{default:u(()=>[e(r,{label:"\u662F\u5426\u5E76\u53D1",prop:"concurrent"},{default:u(()=>[e(h,{modelValue:o.concurrent,"onUpdate:modelValue":l[6]||(l[6]=t=>o.concurrent=t)},{default:u(()=>[e(F,{label:1},{default:u(()=>[R]),_:1}),e(F,{label:0},{default:u(()=>[$]),_:1})]),_:1},8,["modelValue"])]),_:1})]),_:1}),e(s,{span:12},{default:u(()=>[e(r,{label:"\u5907\u6CE8",prop:"remark"},{default:u(()=>[e(p,{modelValue:o.remark,"onUpdate:modelValue":l[7]||(l[7]=t=>o.remark=t),placeholder:"\u5907\u6CE8"},null,8,["modelValue"])]),_:1})]),_:1})]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{O as _,M as a,J as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js new file mode 100644 index 0000000..711a736 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js @@ -0,0 +1 @@ +import{d as y,h as i,Y as U,r as d,o as c,f as N,w as t,b as l,a2 as w,l as b,E as D}from"./index.e3896b23.js";import{u as j,f as k,t as q}from"./database.32bfd96d.js";const I=b("\u53D6\u6D88"),T=b("\u6D4B\u8BD5"),x=b("\u786E\u5B9A"),R=y({__name:"add-or-update",emits:["refreshDataList"],setup(P,{expose:A,emit:f}){const 
n=i(!1),m=i(),u=U({name:"",databaseType:"",databaseIp:"",databasePort:"",databaseName:"",userName:"",password:"",jdbcUrl:"",projectId:""}),F=o=>{n.value=!0,u.id="",m.value&&m.value.resetFields(),o&&V(o)},V=o=>{j(o).then(e=>{Object.assign(u,e.data)})},g=i({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],databaseType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],databaseIp:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],databasePort:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],databaseName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],userName:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],password:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),E=()=>{m.value.validate(o=>{if(!o)return!1;k(u).then(()=>{D.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{n.value=!1,f("refreshDataList")}})})})},B=()=>{m.value.validate(o=>{if(!o)return!1;q(u).then(()=>{D.success({message:"\u6D4B\u8BD5\u8FDE\u63A5\u6210\u529F",duration:500,onClose:()=>{f("refreshDataList")}})})})};return A({init:F}),(o,e)=>{const r=d("el-input"),s=d("el-form-item"),_=d("fast-select"),C=d("el-form"),p=d("el-button"),v=d("el-dialog");return c(),N(v,{modelValue:n.value,"onUpdate:modelValue":e[12]||(e[12]=a=>n.value=a),title:u.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:t(()=>[l(p,{onClick:e[9]||(e[9]=a=>n.value=!1)},{default:t(()=>[I]),_:1}),l(p,{type:"primary",onClick:e[10]||(e[10]=a=>B())},{default:t(()=>[T]),_:1}),l(p,{type:"primary",onClick:e[11]||(e[11]=a=>E())},{default:t(()=>[x]),_:1})]),default:t(()=>[l(C,{ref_key:"dataFormRef",ref:m,model:u,rules:g.value,"label-width":"120px",onKeyup:e[8]||(e[8]=w(a=>E(),["enter"]))},{default:t(()=>[l(s,{label:"\u540D\u79F0",prop:"name"},{default:t(()=>[l(r,{modelValue:u.name,"onUpdate:modelValue":e[0]||(e[0]=a=>u.name=a),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),l(s,{label:"\u6570\u636E\u5E93\u7C7B\u578B",prop:"databaseType"},{default:t(()=>[l(_,{modelValue:u.databaseType,"onUpdate:modelValue":e[1]||(e[1]=a=>u.databaseType=a),"dict-type":"database_type",placeholder:"\u8BF7\u9009\u62E9",clearable:""},null,8,["modelValue"])]),_:1}),l(s,{label:"\u4E3B\u673Aip",prop:"databaseIp"},{default:t(()=>[l(r,{modelValue:u.databaseIp,"onUpdate:modelValue":e[2]||(e[2]=a=>u.databaseIp=a),placeholder:"\u4E3B\u673Aip"},null,8,["modelValue"])]),_:1}),l(s,{label:"\u7AEF\u53E3",prop:"databasePort"},{default:t(()=>[l(r,{modelValue:u.databasePort,"onUpdate:modelValue":e[3]||(e[3]=a=>u.databasePort=a),placeholder:"\u7AEF\u53E3"},null,8,["modelValue"])]),_:1}),l(s,{label:"\u5E93\u540D(\u670D\u52A1\u540D)",prop:"databaseName"},{default:t(()=>[l(r,{modelValue:u.databaseName,"onUpdate:modelValue":e[4]||(e[4]=a=>u.databaseName=a),placeholder:"\u5E93\u540D(\u670D\u52A1\u540D)"},null,8,["modelValue"])]),_:1}),l(s,{label:"\u7528\u6237\u540D",prop:"userName"},{default:t(()=>[l(r,{modelValue:u.userName,"onUpdate:modelValue":e[5]||(e[5]=a=>u.userName=a),placeholder:"\u7528\u6237\u540D"},null,8,["modelValue"])]),_:1}),l(s,{label:"\u5BC6\u7801",prop:"password"},{default:t(()=>[l(r,{modelValue:u.password,"onUpdate:modelValue":e[6]||(e[6]=a=>u.password=a),placeholder:"\u5BC6\u7801"},null,8,["modelValue"])]),_:1}),l(s,{label:"jdbc\u8FDE\u63A5\u4E32",prop:"jdbcUrl"},{default:t(()=>[l(r,{modelValue:u.jdbcUrl,
"onUpdate:modelValue":e[7]||(e[7]=a=>u.jdbcUrl=a),placeholder:"jdbc\u8FDE\u63A5\u4E32(\u82E5\u586B\u5199\u5C06\u4EE5\u586B\u5199\u7684\u5185\u5BB9\u8FDE\u63A5,\u5426\u5219\u4F1A\u540E\u53F0\u81EA\u52A8\u6784\u5EFA\u8FDE\u63A5)"},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{R as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_style_index_0_lang.1148389a.js b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_style_index_0_lang.1148389a.js new file mode 100644 index 0000000..32f6db2 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-or-update.vue_vue_type_style_index_0_lang.1148389a.js @@ -0,0 +1 @@ +import{d as G,L as J,h as i,Y as I,r as d,o as s,f as _,w as o,b as u,H as p,a as F,c as v,t as Q,a2 as W,a7 as x,a8 as U,e as X,F as Z,l as A,E as ee}from"./index.e3896b23.js";import{f as le}from"./folder.ea536bf2.js";import{_ as ae}from"./database.235d7a89.js";import{_ as te}from"./table.e1c1b00a.js";import{_ as ue}from"./column.79595943.js";import{_ as oe}from"./model.45425835.js";import{l as re}from"./metamodel.a560a346.js";import{b as se,c as de,d as ne}from"./metadata.0c954be9.js";const ie={key:0,src:le},me={key:1,src:ae},pe={key:2,src:te},fe={key:3,src:ue},_e={key:4,src:oe},ce={style:{"margin-left":"8px"}},ve=A("\u5143\u6570\u636E\u5C5E\u6027"),ge=A("\u53D6\u6D88"),be=A("\u786E\u5B9A"),Be=G({__name:"add-or-update",emits:["refreshDataList"],setup(Fe,{expose:w,emit:N}){J(()=>{re().then(t=>{h.value=t.data})});const g=i(!1),E=i(),V=i(),h=i([]);i();const l=I({parentId:"",parentPath:"",path:"",name:"",code:"",ifLeaf:1,metamodelId:"",orderNo:0,description:""}),b=I({}),R=(t,e,n)=>{g.value=!0,l.id="",E.value&&E.value.resetFields(),l.metamodelId="",b.value&&b.value.resetFields(),l.ifLeaf=l.parentId==0?1:l.ifLeaf,l.parentId=e,l.parentPath=n,t&&M(t)},M=t=>{se(t).then(e=>{Object.assign(l,e.data),m.value=e.data.properties,m.value.length>0?c.value=!0:c.value=!1,D()})},q=i({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],code:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],ifLeaf:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],metamodelId:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),y=i({}),m=i([]),c=i(!1),S=t=>{t&&de(t).then(e=>{m.value=e.data,m.value.length>0?c.value=!0:c.value=!1,D()})},D=()=>{for(var t in m.value){var e=m.value[t];if(e.value=e.value?e.value:"",b[e.code]=e.value?e.value:"",e.nullable==0){let n=[],r={};r.required=!0,r.message="\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",r.trigger="blur",n.push(r),y.value[e.code]=n}}},$=(t,e)=>{b[e.code]=t},k=async()=>{let t=!0;await E.value.validate(e=>{if(!e)return t=!1,!1}),V.value&&await V.value.validate(e=>{if(!e)return t=!1,!1}),t&&H()},H=()=>{l.metamodelId=l.ifLeaf==1?"":l.metamodelId,l.properties=m.value,ne(l).then(()=>{ee.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{g.value=!1,N("refreshDataList")}})})};return w({init:R}),(t,e)=>{const n=d("el-input"),r=d("el-form-item"),B=d("el-option"),K=d("el-select"),O=d("el-tree-select"),T=d("el-input-number"),L=d("el-form"),j=d("el-divider"),Y=d("el-result"),P=d("el-button"),z=d("el-dialog");return 
s(),_(z,{modelValue:g.value,"onUpdate:modelValue":e[10]||(e[10]=a=>g.value=a),title:l.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:o(()=>[u(P,{onClick:e[8]||(e[8]=a=>g.value=!1)},{default:o(()=>[ge]),_:1}),u(P,{type:"primary",onClick:e[9]||(e[9]=a=>k())},{default:o(()=>[be]),_:1})]),default:o(()=>[u(L,{ref_key:"dataFormRef",ref:E,model:l,rules:q.value,"label-width":"100px",onKeyup:e[7]||(e[7]=W(a=>k(),["enter"]))},{default:o(()=>[u(r,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:o(()=>[u(n,{disabled:"",modelValue:l.parentPath,"onUpdate:modelValue":e[0]||(e[0]=a=>l.parentPath=a),placeholder:""},null,8,["modelValue"])]),_:1}),u(r,{label:"\u540D\u79F0",prop:"name"},{default:o(()=>[u(n,{modelValue:l.name,"onUpdate:modelValue":e[1]||(e[1]=a=>l.name=a),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),u(r,{label:"\u7F16\u7801",prop:"code"},{default:o(()=>[u(n,{modelValue:l.code,"onUpdate:modelValue":e[2]||(e[2]=a=>l.code=a),placeholder:"\u7F16\u7801"},null,8,["modelValue"])]),_:1}),l.parentId!=0?(s(),_(r,{key:0,label:"\u7C7B\u578B",prop:"ifLeaf"},{default:o(()=>[u(K,{disabled:!!l.id,modelValue:l.ifLeaf,"onUpdate:modelValue":e[3]||(e[3]=a=>l.ifLeaf=a),placeholder:"\u7C7B\u578B"},{default:o(()=>[(s(),_(B,{key:1,label:"\u76EE\u5F55",value:1})),(s(),_(B,{key:0,label:"\u5143\u6570\u636E",value:0}))]),_:1},8,["disabled","modelValue"])]),_:1})):p("",!0),l.ifLeaf==0?(s(),_(r,{key:1,label:"\u6240\u5C5E\u5143\u6A21\u578B",prop:"metamodelId"},{default:o(()=>[u(O,{disabled:!!l.id,modelValue:l.metamodelId,"onUpdate:modelValue":e[4]||(e[4]=a=>l.metamodelId=a),data:h.value,clearable:"",onChange:S},{default:o(({node:a,data:f})=>[F("div",null,[F("span",null,[f.icon=="/src/assets/folder.png"?(s(),v("img",ie)):p("",!0),f.icon=="/src/assets/database.png"?(s(),v("img",me)):p("",!0),f.icon=="/src/assets/table.png"?(s(),v("img",pe)):p("",!0),f.icon=="/src/assets/column.png"?(s(),v("img",fe)):p("",!0),f.icon=="/src/assets/model.png"?(s(),v("img",_e)):p("",!0),F("span",ce,Q(f.name),1)])])]),_:1},8,["disabled","modelValue","data"])]),_:1})):p("",!0),u(r,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:o(()=>[u(T,{modelValue:l.orderNo,"onUpdate:modelValue":e[5]||(e[5]=a=>l.orderNo=a),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),u(r,{label:"\u63CF\u8FF0",prop:"description"},{default:o(()=>[u(n,{type:"textarea",modelValue:l.description,"onUpdate:modelValue":e[6]||(e[6]=a=>l.description=a)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"]),!!l.metamodelId&&l.ifLeaf==0?(s(),_(L,{key:0,ref_key:"dataPropertyFormRef",ref:V,model:b,rules:y.value,"label-width":"100px"},{default:o(()=>[u(j,{"content-position":"left"},{default:o(()=>[ve]),_:1}),x(F("div",null,[(s(!0),v(Z,null,X(m.value,(a,f)=>(s(),_(r,{label:a.name,prop:a.code},{default:o(()=>[u(n,{onChange:C=>$(C,a),modelValue:a.value,"onUpdate:modelValue":C=>a.value=C,placeholder:a.name},null,8,["onChange","modelValue","onUpdate:modelValue","placeholder"])]),_:2},1032,["label","prop"]))),256))],512),[[U,c.value]]),x(F("div",null,[u(Y,{title:"\u65E0\u5C5E\u6027","sub-title":"\u6CA1\u6709\u9700\u8981\u6DFB\u52A0\u7684\u5C5E\u6027\u4FE1\u606F"})],512),[[U,!c.value]])]),_:1},8,["model","rules"])):p("",!0)]),_:1},8,["modelValue","title"])}}});export{Be as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-project-user.4a5a0a19.js b/srt-cloud-gateway/src/main/resources/static/assets/add-project-user.4a5a0a19.js new file mode 100644 index 0000000..ccc3436 --- /dev/null +++ 
b/srt-cloud-gateway/src/main/resources/static/assets/add-project-user.4a5a0a19.js @@ -0,0 +1 @@ +import"./add-project-user.vue_vue_type_script_setup_true_lang.75c78900.js";import{_ as t}from"./add-project-user.vue_vue_type_script_setup_true_lang.75c78900.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/add-project-user.vue_vue_type_script_setup_true_lang.75c78900.js b/srt-cloud-gateway/src/main/resources/static/assets/add-project-user.vue_vue_type_script_setup_true_lang.75c78900.js new file mode 100644 index 0000000..d6ab1bf --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/add-project-user.vue_vue_type_script_setup_true_lang.75c78900.js @@ -0,0 +1 @@ +import{d as q,h as x,Y as U,ai as L,r as n,aj as S,o as _,f,w as o,b as e,k as i,a2 as k,a7 as H,l as B,E as b,a9 as I,az as N}from"./index.e3896b23.js";const T=B("\u67E5\u8BE2"),$=B("\u6DFB\u52A0"),Y=q({__name:"add-project-user",setup(K,{expose:y}){const d=x(!1),t=U({dataListUrl:"/sys/user/page",deleteUrl:"/sys/user",queryForm:{username:"",mobile:"",gender:""},projectId:""}),F=r=>{d.value=!0,t.projectId=r,c()},h=()=>{let r=[];if(r=t.dataListSelections?t.dataListSelections:[],r.length===0){b.warning("\u8BF7\u9009\u62E9\u8BB0\u5F55\u6DFB\u52A0");return}I.confirm("\u786E\u5B9A\u6DFB\u52A0\u5417?","\u63D0\u793A",{confirmButtonText:"\u786E\u5B9A",cancelButtonText:"\u53D6\u6D88",type:"warning"}).then(()=>{t.projectId&&N(t.projectId,r).then(()=>{b.success("\u6DFB\u52A0\u6210\u529F")})}).catch(()=>{})};y({init:F});const{getDataList:c,selectionChangeHandle:C,sizeChangeHandle:D,currentChangeHandle:v,deleteBatchHandle:M}=L(t);return(r,l)=>{const p=n("el-input"),s=n("el-form-item"),A=n("fast-select"),m=n("el-button"),V=n("el-form"),u=n("el-table-column"),g=n("fast-table-column"),E=n("el-table"),z=n("el-pagination"),w=n("el-dialog"),j=S("loading");return 
_(),f(w,{modelValue:d.value,"onUpdate:modelValue":l[6]||(l[6]=a=>d.value=a),title:"\u6DFB\u52A0\u9879\u76EE\u6210\u5458","close-on-click-modal":!1},{default:o(()=>[e(V,{inline:!0,model:t.queryForm,onKeyup:l[5]||(l[5]=k(a=>i(c)(),["enter"]))},{default:o(()=>[e(s,null,{default:o(()=>[e(p,{modelValue:t.queryForm.username,"onUpdate:modelValue":l[0]||(l[0]=a=>t.queryForm.username=a),placeholder:"\u7528\u6237\u540D",clearable:""},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(p,{modelValue:t.queryForm.mobile,"onUpdate:modelValue":l[1]||(l[1]=a=>t.queryForm.mobile=a),placeholder:"\u624B\u673A\u53F7",clearable:""},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(A,{modelValue:t.queryForm.gender,"onUpdate:modelValue":l[2]||(l[2]=a=>t.queryForm.gender=a),"dict-type":"user_gender",clearable:"",placeholder:"\u6027\u522B"},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(m,{onClick:l[3]||(l[3]=a=>i(c)())},{default:o(()=>[T]),_:1})]),_:1}),e(s,null,{default:o(()=>[e(m,{type:"primary",onClick:l[4]||(l[4]=a=>h())},{default:o(()=>[$]),_:1})]),_:1})]),_:1},8,["model"]),H((_(),f(E,{data:t.dataList,border:"",style:{width:"100%"},onSelectionChange:i(C)},{default:o(()=>[e(u,{type:"selection","header-align":"center",align:"center",width:"50"}),e(u,{prop:"username",label:"\u7528\u6237\u540D","header-align":"center",align:"center"}),e(u,{prop:"realName",label:"\u59D3\u540D","header-align":"center",align:"center"}),e(g,{prop:"gender",label:"\u6027\u522B","dict-type":"user_gender"}),e(u,{prop:"mobile",label:"\u624B\u673A\u53F7","header-align":"center",align:"center"}),e(u,{prop:"email",label:"\u90AE\u7BB1","header-align":"center",align:"center"}),e(u,{prop:"orgName",label:"\u6240\u5C5E\u673A\u6784","header-align":"center",align:"center"}),e(g,{prop:"status",label:"\u72B6\u6001","dict-type":"user_status"}),e(u,{prop:"createTime",label:"\u521B\u5EFA\u65F6\u95F4","header-align":"center",align:"center",width:"180"})]),_:1},8,["data","onSelectionChange"])),[[j,t.dataListLoading]]),e(z,{"current-page":t.page,"page-sizes":t.pageSizes,"page-size":t.limit,total:t.total,layout:"total, sizes, prev, pager, next, jumper",onSizeChange:i(D),onCurrentChange:i(v)},null,8,["current-page","page-sizes","page-size","total","onSizeChange","onCurrentChange"])]),_:1},8,["modelValue"])}}});export{Y as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/apex.3097bfba.js b/srt-cloud-gateway/src/main/resources/static/assets/apex.3097bfba.js new file mode 100644 index 0000000..fce4246 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/apex.3097bfba.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. 
+ * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var n={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\#\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"<",close:">"}],folding:{markers:{start:new RegExp("^\\s*//\\s*(?:(?:#?region\\b)|(?:<editor-fold\\b))"),end:new RegExp("^\\s*//\\s*(?:(?:#?endregion\\b)|(?:</editor-fold>))")}}},s=["abstract","activate","and","any","array","as","asc","assert","autonomous","begin","bigdecimal","blob","boolean","break","bulk","by","case","cast","catch","char","class","collect","commit","const","continue","convertcurrency","decimal","default","delete","desc","do","double","else","end","enum","exception","exit","export","extends","false","final","finally","float","for","from","future","get","global","goto","group","having","hint","if","implements","import","in","inner","insert","instanceof","int","interface","into","join","last_90_days","last_month","last_n_days","last_week","like","limit","list","long","loop","map","merge","native","new","next_90_days","next_month","next_n_days","next_week","not","null","nulls","number","object","of","on","or","outer","override","package","parallel","pragma","private","protected","public","retrieve","return","returning","rollback","savepoint","search","select","set","short","sort","stat","static","strictfp","super","switch","synchronized","system","testmethod","then","this","this_month","this_week","throw","throws","today","tolabel","tomorrow","transaction","transient","trigger","true","try","type","undelete","update","upsert","using","virtual","void","volatile","webservice","when","where","while","yesterday"],o=e=>e.charAt(0).toUpperCase()+e.substr(1),t=[];s.forEach(e=>{t.push(e),t.push(e.toUpperCase()),t.push(o(e))});var i={defaultToken:"",tokenPostfix:".apex",keywords:t,operators:["=",">","<","!","~","?",":","==","<=",">=","!=","&&","||","++","--","+","-","*","/","&","|","^","%","<<",">>",">>>","+=","-=","*=","/=","&=","|=","^=","%=","<<=",">>=",">>>="],symbols:/[=><!~?:&|+\-*\/\^%]+/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,digits:/\d+(_+\d+)*/,tokenizer:{root:[[/[a-z_$][\w$]*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"identifier"}}],[/[A-Z][\w\$]*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"type.identifier"}}],{include:"@whitespace"},[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/@\s*[a-zA-Z_\$][\w\$]*/,"annotation"],[/(@digits)[eE]([\-+]?(@digits))?[fFdD]?/,"number.float"],[/(@digits)\.(@digits)([eE][\-+]?(@digits))?[fFdD]?/,"number.float"],[/(@digits)[fFdD]/,"number.float"],[/(@digits)[lL]?/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/'([^'\\]|\\.)*$/,"string.invalid"],[/"/,"string",'@string."'],[/'/,"string","@string.'"],[/'[^\\']'/,"string"],[/(')(@escapes)(')/,["string","string.escape","string"]],[/'/,"string.invalid"]],whitespace:[[/[ \t\r\n]+/,""],[/\/\*\*(?!\/)/,"comment.doc","@apexdoc"],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\*\//,"comment","@pop"],[/[\/*]/,"comment"]],apexdoc:[[/[^\/*]+/,"comment.doc"],[/\*\//,"comment.doc","@pop"],[/[\/*]/,"comment.doc"]],string:[[/[^\\"']+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/["']/,{cases:{"$#==$S2":{token:"string",next:"@pop"},"@default":"string"}}]]}};export{n as conf,i as language}; diff --git 
a/srt-cloud-gateway/src/main/resources/static/assets/api-auth-detail.6c770594.js b/srt-cloud-gateway/src/main/resources/static/assets/api-auth-detail.6c770594.js new file mode 100644 index 0000000..0ba7691 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-auth-detail.6c770594.js @@ -0,0 +1 @@ +import"./api-auth-detail.vue_vue_type_script_setup_true_lang.b0bb75fb.js";import{_ as f}from"./api-auth-detail.vue_vue_type_script_setup_true_lang.b0bb75fb.js";import"./app.22c193c2.js";import"./index.e3896b23.js";import"./apiConfig.09b7ec3b.js";export{f as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-auth-detail.vue_vue_type_script_setup_true_lang.b0bb75fb.js b/srt-cloud-gateway/src/main/resources/static/assets/api-auth-detail.vue_vue_type_script_setup_true_lang.b0bb75fb.js new file mode 100644 index 0000000..08612be --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-auth-detail.vue_vue_type_script_setup_true_lang.b0bb75fb.js @@ -0,0 +1 @@ +import{a as w}from"./app.22c193c2.js";import{c as k}from"./apiConfig.09b7ec3b.js";import{d as x,h as m,Y as y,r as a,o as F,f as B,w as u,b as s,H as R,a as p,t as _,l as n,a9 as N,E as C}from"./index.e3896b23.js";const S=n("\u4E0D\u9650\u6B21\u6570"),U=n("\u6307\u5B9A\u6B21\u6570"),$=n("\u53D6\u6D88"),z=n("\u91CD\u7F6E\u8C03\u7528\u6B21\u6570"),H=n("\u786E\u5B9A"),Y=x({__name:"api-auth-detail",emits:["refreshAuthList"],setup(L,{expose:E,emit:b}){const o=m(!1),f=m(),e=y({limited:!1,requestTimes:100,requestedTimes:0}),T=i=>{o.value=!0,Object.assign(e,i),e.limited=e.requestTimes!=-1},q=m({limited:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],requestTimes:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),D=()=>{N.confirm("\u786E\u5B9A\u8FDB\u884C\u91CD\u7F6E\u64CD\u4F5C\u5417\uFF0C\u91CD\u7F6E\u540E\u8BE5\u5E94\u7528\u4E0B\u7684 api \u7684\u5DF2\u8C03\u7528\u6B21\u6570\u5C06\u6E05\u96F6\uFF01","\u63D0\u793A",{confirmButtonText:"\u786E\u5B9A",cancelButtonText:"\u53D6\u6D88",type:"warning"}).then(()=>{k(e.id).then(i=>{C.success("\u91CD\u7F6E\u6210\u529F"),e.requestedTimes=0,e.requestedSuccessTimes=0,e.requestedFailedTimes=0})}).catch(()=>{})},g=()=>{f.value.validate(i=>{if(!i)return!1;e.requestTimes=e.limited?e.requestTimes:-1,w(e).then(()=>{C.success({message:"\u4FEE\u6539\u6388\u6743\u6210\u529F",duration:500,onClose:()=>{o.value=!1,b("refreshAuthList")}})})})};return E({init:T}),(i,t)=>{const c=a("el-radio"),A=a("el-radio-group"),r=a("el-form-item"),v=a("el-input-number"),h=a("el-form"),d=a("el-button"),V=a("el-dialog");return 
F(),B(V,{modelValue:o.value,"onUpdate:modelValue":t[5]||(t[5]=l=>o.value=l),title:"\u4FEE\u6539\u6388\u6743","close-on-click-modal":!1},{footer:u(()=>[s(d,{onClick:t[2]||(t[2]=l=>o.value=!1)},{default:u(()=>[$]),_:1}),s(d,{type:"warning",onClick:t[3]||(t[3]=l=>D())},{default:u(()=>[z]),_:1}),s(d,{type:"primary",onClick:t[4]||(t[4]=l=>g())},{default:u(()=>[H]),_:1})]),default:u(()=>[s(h,{ref_key:"dataFormRef",ref:f,model:e,rules:q.value,"label-width":"100px"},{default:u(()=>[s(r,{label:"\u8C03\u7528\u6B21\u6570",prop:"limited","label-width":"auto"},{default:u(()=>[s(A,{modelValue:e.limited,"onUpdate:modelValue":t[0]||(t[0]=l=>e.limited=l)},{default:u(()=>[s(c,{label:!1,size:"large"},{default:u(()=>[S]),_:1}),s(c,{label:!0,size:"large"},{default:u(()=>[U]),_:1})]),_:1},8,["modelValue"])]),_:1}),e.limited?(F(),B(r,{key:0,label:"\u6B21\u6570",prop:"requestTimes","label-width":"auto"},{default:u(()=>[s(v,{modelValue:e.requestTimes,"onUpdate:modelValue":t[1]||(t[1]=l=>e.requestTimes=l),placeholder:"\u6B21\u6570"},null,8,["modelValue"])]),_:1})):R("",!0),s(r,{label:"\u5DF2\u8C03\u7528\u6B21\u6570",prop:"requestedTimes","label-width":"auto"},{default:u(()=>[p("span",null,_(e.requestedTimes),1)]),_:1}),s(r,{label:"\u8C03\u7528\u6210\u529F\u6B21\u6570",prop:"requestedSuccessTimes","label-width":"auto"},{default:u(()=>[p("span",null,_(e.requestedSuccessTimes),1)]),_:1}),s(r,{label:"\u8C03\u7528\u5931\u8D25\u6B21\u6570",prop:"requestedFailedTimes","label-width":"auto"},{default:u(()=>[p("span",null,_(e.requestedFailedTimes),1)]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue"])}}});export{Y as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-auth.2c826cd7.js b/srt-cloud-gateway/src/main/resources/static/assets/api-auth.2c826cd7.js new file mode 100644 index 0000000..2a2c3e6 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-auth.2c826cd7.js @@ -0,0 +1 @@ +import"./api-auth.vue_vue_type_script_setup_true_lang.77949b40.js";import{_ as i}from"./api-auth.vue_vue_type_script_setup_true_lang.77949b40.js";import"./app.22c193c2.js";import"./index.e3896b23.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-auth.vue_vue_type_script_setup_true_lang.77949b40.js b/srt-cloud-gateway/src/main/resources/static/assets/api-auth.vue_vue_type_script_setup_true_lang.77949b40.js new file mode 100644 index 0000000..7345ca9 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-auth.vue_vue_type_script_setup_true_lang.77949b40.js @@ -0,0 +1 @@ +import{a as h}from"./app.22c193c2.js";import{d as E,h as n,Y as F,r as s,o as p,f as c,w as l,b as u,H as k,l as i,E as D}from"./index.e3896b23.js";const w=i("\u4E0D\u9650\u6B21\u6570"),x=i("\u6307\u5B9A\u6B21\u6570"),y=i("\u53D6\u6D88"),N=i("\u786E\u5B9A"),H=E({__name:"api-auth",emits:["refreshAuthList"],setup(R,{expose:b,emit:g}){const a=n(!1),m=n(),e=F({limited:!1,requestTimes:100,requestedTimes:0}),v=r=>{a.value=!0,Object.assign(e,r),e.limited=!1,e.requestTimes=100},q=n({limited:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],requestTimes:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),V=()=>{m.value.validate(r=>{if(!r)return!1;e.requestTimes=e.limited?e.requestTimes:-1,e.requestedTimes=null,h(e).then(()=>{D.success({message:"\u6388\u6743\u6210\u529F",duration:500,onClose:()=>{a.value=!1,g("refreshAuthList")}})})})};return b({init:v}),(r,t)=>{const 
d=s("el-radio"),A=s("el-radio-group"),_=s("el-form-item"),B=s("el-input-number"),T=s("el-form"),f=s("el-button"),C=s("el-dialog");return p(),c(C,{modelValue:a.value,"onUpdate:modelValue":t[4]||(t[4]=o=>a.value=o),title:"\u6388\u6743","close-on-click-modal":!1},{footer:l(()=>[u(f,{onClick:t[2]||(t[2]=o=>a.value=!1)},{default:l(()=>[y]),_:1}),u(f,{type:"primary",onClick:t[3]||(t[3]=o=>V())},{default:l(()=>[N]),_:1})]),default:l(()=>[u(T,{ref_key:"dataFormRef",ref:m,model:e,rules:q.value,"label-width":"100px"},{default:l(()=>[u(_,{label:"\u8C03\u7528\u6B21\u6570",prop:"limited","label-width":"auto"},{default:l(()=>[u(A,{modelValue:e.limited,"onUpdate:modelValue":t[0]||(t[0]=o=>e.limited=o)},{default:l(()=>[u(d,{label:!1,size:"large"},{default:l(()=>[w]),_:1}),u(d,{label:!0,size:"large"},{default:l(()=>[x]),_:1})]),_:1},8,["modelValue"])]),_:1}),e.limited?(p(),c(_,{key:0,label:"\u6B21\u6570",prop:"requestTimes","label-width":"auto"},{default:l(()=>[u(B,{modelValue:e.requestTimes,"onUpdate:modelValue":t[1]||(t[1]=o=>e.requestTimes=o),placeholder:"\u6B21\u6570"},null,8,["modelValue"])]),_:1})):k("",!0)]),_:1},8,["model","rules"])]),_:1},8,["modelValue"])}}});export{H as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-mount.9ab47dcf.js b/srt-cloud-gateway/src/main/resources/static/assets/api-mount.9ab47dcf.js new file mode 100644 index 0000000..a67da0e --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-mount.9ab47dcf.js @@ -0,0 +1 @@ +import"./api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js";import{_ as i}from"./api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js";import"./index.e3896b23.js";import"./database.32bfd96d.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js b/srt-cloud-gateway/src/main/resources/static/assets/api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js new file mode 100644 index 0000000..76ed31d --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-mount.vue_vue_type_script_setup_true_name_Data-serviceApi-configIndex_lang.42598b31.js @@ -0,0 +1 @@ +import{d as v,L as H,h as S,Y as U,ab as A,ai as $,r as o,aj as j,o as u,c as b,b as a,w as l,f as d,F as M,e as N,k as p,a2 as K,a7 as g,l as c,t as P,a8 as y}from"./index.e3896b23.js";import{c as Y}from"./database.32bfd96d.js";const G=c("\u67E5\u8BE2"),J=c("\u672A\u53D1\u5E03"),O=c("\u5DF2\u53D1\u5E03"),Q=v({name:"Data-serviceApi-configIndex"}),le=v({...Q,setup(R,{expose:C}){H(()=>{F()}),S([]);const F=()=>{Y().then(r=>{e.databaseList=r.data})},e=U({createdIsNeed:!1,databaseList:[],dataListUrl:"/data-service/api-config/page",deleteUrl:"/data-service/api-config",path:"",queryForm:{groupId:"",name:"",path:"",contentType:"",status:1,sqlDbType:"",databaseId:"",previlege:"",openTrans:""}}),w=(r,n)=>{e.queryForm.groupId=r,e.path=n,_()},D=A("apiMountInfo"),E=r=>{r.parentPath=e.path,console.log(r),D.value=r},{getDataList:_,selectionChangeHandle:W,sizeChangeHandle:q,currentChangeHandle:L,deleteBatchHandle:X,downloadHandle:Z}=$(e);return C({init:w}),(r,n)=>{const V=o("el-input"),i=o("el-form-item"),m=o("el-option"),f=o("el-select"),B=o("el-button"),z=o("el-form"),s=o("el-table-column"),h=o("el-tag"),I=o("fast-creator-column"),k=o("el-table"),x=o("el-pagination"),T=j("loading");return 
u(),b("div",null,[a(z,{inline:!0,model:e.queryForm,onKeyup:n[4]||(n[4]=K(t=>p(_)(),["enter"]))},{default:l(()=>[a(i,null,{default:l(()=>[a(V,{modelValue:e.queryForm.name,"onUpdate:modelValue":n[0]||(n[0]=t=>e.queryForm.name=t),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),a(i,null,{default:l(()=>[a(f,{modelValue:e.queryForm.sqlDbType,"onUpdate:modelValue":n[1]||(n[1]=t=>e.queryForm.sqlDbType=t),clearable:"",filterable:"",placeholder:"sql\u5E93\u7C7B\u578B"},{default:l(()=>[(u(),d(m,{key:1,label:"\u6570\u636E\u5E93",value:1})),(u(),d(m,{key:2,label:"\u4E2D\u53F0\u5E93",value:2}))]),_:1},8,["modelValue"])]),_:1}),a(i,null,{default:l(()=>[a(f,{modelValue:e.queryForm.databaseId,"onUpdate:modelValue":n[2]||(n[2]=t=>e.queryForm.databaseId=t),clearable:"",filterable:"",placeholder:"\u6570\u636E\u5E93"},{default:l(()=>[(u(!0),b(M,null,N(e.databaseList,(t,ee)=>(u(),d(m,{key:t.id,label:`[${t.id}]${t.name}`,value:t.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1}),a(i,null,{default:l(()=>[a(B,{onClick:n[3]||(n[3]=t=>p(_)()),type:"primary"},{default:l(()=>[G]),_:1})]),_:1})]),_:1},8,["model"]),g((u(),d(k,{data:e.dataList,border:"",style:{width:"100%"},"highlight-current-row":"",onCurrentChange:E},{default:l(()=>[a(s,{"show-overflow-tooltip":"",label:"\u5206\u7EC4","header-align":"center",align:"center",width:"150"},{default:l(()=>[c(P(e.path),1)]),_:1}),a(s,{prop:"path","show-overflow-tooltip":"",label:"api\u5730\u5740","header-align":"center",align:"center"}),a(s,{"show-overflow-tooltip":"",prop:"name",label:"\u540D\u79F0","header-align":"center",align:"center"}),a(s,{prop:"status",label:"\u72B6\u6001","header-align":"center",align:"center"},{default:l(t=>[g(a(h,{type:"info"},{default:l(()=>[J]),_:2},1536),[[y,t.row.status==0]]),g(a(h,{type:"success"},{default:l(()=>[O]),_:2},1536),[[y,t.row.status==1]])]),_:1}),a(I,{prop:"creator",label:"\u521B\u5EFA\u8005","header-align":"center",align:"center"}),a(s,{"show-overflow-tooltip":"",prop:"createTime",label:"\u521B\u5EFA\u65F6\u95F4","header-align":"center",align:"center"})]),_:1},8,["data"])),[[T,e.dataListLoading]]),a(x,{"current-page":e.page,"page-sizes":e.pageSizes,"page-size":e.limit,total:e.total,layout:"total, sizes, prev, pager, next, jumper",onSizeChange:p(q),onCurrentChange:p(L)},null,8,["current-page","page-sizes","page-size","total","onSizeChange","onCurrentChange"])])}}});export{le as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-resource.d03a9e07.css b/srt-cloud-gateway/src/main/resources/static/assets/api-resource.d03a9e07.css new file mode 100644 index 0000000..4f783ad --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-resource.d03a9e07.css @@ -0,0 +1 @@ +.apiConfigDivClass{height:calc(100vh - 170px);position:relative;overflow:hidden}.apiConfigDivClass>.drawerClass>div{height:100%;position:absolute!important;overflow:hidden} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-resource.ec6ff843.js b/srt-cloud-gateway/src/main/resources/static/assets/api-resource.ec6ff843.js new file mode 100644 index 0000000..b646b22 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-resource.ec6ff843.js @@ -0,0 +1 @@ +import{d as k,L as q,h as b,Y as K,ai as M,r as n,aj as O,o as i,f as p,w as o,a as F,b as e,c as Y,e as G,F as J,k as c,a2 as Q,a7 as m,a8 as v,l as d,E as W}from"./index.e3896b23.js";import{_ as X}from"./app-info.vue_vue_type_script_setup_true_lang.248d957f.js";import{_ as 
Z}from"./api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js";import{c as ee}from"./database.32bfd96d.js";import{i as te}from"./apiConfig.09b7ec3b.js";import"./databases.vue_vue_type_style_index_0_lang.dceac0af.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";import"./toggleHighContrast.483b4227.js";import"./add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js";import"./middledb.vue_vue_type_style_index_0_lang.fa7bd4c1.js";import"./house.1ac0c09f.js";import"./param-studio.vue_vue_type_script_setup_true_lang.e008bef0.js";import"./ts.worker.921d436c.js";const ae={class:"apiConfigDivClass"},le=d("\u67E5\u8BE2"),oe=d("\u5BFC\u51FA\u6587\u6863"),ne=d("\u672A\u53D1\u5E03"),re=d("\u5DF2\u53D1\u5E03"),ue=d("\u67E5\u770B"),se=d("\u6D4B\u8BD5"),ie={class:"drawerClass",style:{height:"100%"}},de={class:"drawerClass",style:{height:"100%"}},pe=k({name:"DataAssetsApiResource"}),ke=k({...pe,setup(ce){q(()=>{T()}),b([]);const T=()=>{ee().then(r=>{a.databaseList=r.data})},a=K({createdIsNeed:!1,databaseList:[],dataListUrl:"/data-service/api-config/page-resource",deleteUrl:"/data-service/api-config",path:"",queryForm:{groupId:"",name:"",path:"",contentType:"",status:"",sqlDbType:"",databaseId:"",previlege:"",openTrans:""}});q(()=>{const r=sessionStorage.getItem("apiResourceId");a.queryForm.resourceId=r,g()});const C=b(),I=(r,l)=>{C.value.init(r,l)},w=b(),L=r=>{w.value.init(r)},A=async()=>{let r=a.dataListSelections?a.dataListSelections:[];if(r.length===0){W.warning("\u8BF7\u52FE\u9009\u9700\u8981\u5BFC\u51FA\u7684api");return}const{data:l}=await te();await U("http://"+l+"/data-service/api-config/export-docs","API DOCS.md","POST",r)},{getDataList:g,selectionChangeHandle:S,sizeChangeHandle:x,currentChangeHandle:z,deleteBatchHandle:_e,downloadHandle:U}=M(a);return(r,l)=>{const $=n("el-input"),s=n("el-form-item"),f=n("fast-select"),y=n("el-option"),B=n("el-select"),_=n("el-button"),R=n("el-form"),u=n("el-table-column"),h=n("fast-table-column"),D=n("el-tag"),E=n("fast-creator-column"),H=n("el-table"),N=n("el-pagination"),P=n("el-card"),j=O("loading");return 
i(),p(P,null,{default:o(()=>[F("div",ae,[e(R,{inline:!0,model:a.queryForm,onKeyup:l[8]||(l[8]=Q(t=>c(g)(),["enter"]))},{default:o(()=>[e(s,null,{default:o(()=>[e($,{modelValue:a.queryForm.name,"onUpdate:modelValue":l[0]||(l[0]=t=>a.queryForm.name=t),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(f,{modelValue:a.queryForm.contentType,"onUpdate:modelValue":l[1]||(l[1]=t=>a.queryForm.contentType=t),"dict-type":"content_type",placeholder:"\u5185\u5BB9\u7C7B\u578B",clearable:"",filterable:""},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(B,{modelValue:a.queryForm.sqlDbType,"onUpdate:modelValue":l[2]||(l[2]=t=>a.queryForm.sqlDbType=t),clearable:"",filterable:"",placeholder:"sql\u5E93\u7C7B\u578B"},{default:o(()=>[(i(),p(y,{key:1,label:"\u6570\u636E\u5E93",value:1})),(i(),p(y,{key:2,label:"\u4E2D\u53F0\u5E93",value:2}))]),_:1},8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(B,{modelValue:a.queryForm.databaseId,"onUpdate:modelValue":l[3]||(l[3]=t=>a.queryForm.databaseId=t),clearable:"",filterable:"",placeholder:"\u6570\u636E\u5E93"},{default:o(()=>[(i(!0),Y(J,null,G(a.databaseList,(t,V)=>(i(),p(y,{key:t.id,label:`[${t.id}]${t.name}`,value:t.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(f,{modelValue:a.queryForm.previlege,"onUpdate:modelValue":l[4]||(l[4]=t=>a.queryForm.previlege=t),"dict-type":"yes_or_no",placeholder:"\u662F\u5426\u79C1\u6709",clearable:"",filterable:""},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(f,{modelValue:a.queryForm.openTrans,"onUpdate:modelValue":l[5]||(l[5]=t=>a.queryForm.openTrans=t),"dict-type":"yes_or_no",placeholder:"\u5F00\u542F\u4E8B\u52A1",clearable:"",filterable:""},null,8,["modelValue"])]),_:1}),e(s,null,{default:o(()=>[e(_,{onClick:l[6]||(l[6]=t=>c(g)())},{default:o(()=>[le]),_:1})]),_:1}),e(s,null,{default:o(()=>[e(_,{type:"warning",onClick:l[7]||(l[7]=t=>A())},{default:o(()=>[oe]),_:1})]),_:1})]),_:1},8,["model"]),m((i(),p(H,{data:a.dataList,border:"",style:{width:"100%"},onSelectionChange:c(S)},{default:o(()=>[e(u,{type:"selection","header-align":"center",align:"center",width:"50"}),e(u,{"show-overflow-tooltip":"",label:"\u5206\u7EC4","header-align":"center",align:"center",width:"150",prop:"group"}),e(u,{prop:"path","show-overflow-tooltip":"",label:"api\u5730\u5740","header-align":"center",align:"center"}),e(u,{"show-overflow-tooltip":"",prop:"name",label:"\u540D\u79F0","header-align":"center",align:"center"}),e(h,{prop:"contentType",label:"\u5185\u5BB9\u7C7B\u578B","dict-type":"content_type",width:"150","header-align":"center",align:"center"}),e(u,{prop:"status",label:"\u72B6\u6001","header-align":"center",align:"center"},{default:o(t=>[m(e(D,{type:"info"},{default:o(()=>[ne]),_:2},1536),[[v,t.row.status==0]]),m(e(D,{type:"success"},{default:o(()=>[re]),_:2},1536),[[v,t.row.status==1]])]),_:1}),e(h,{prop:"previlege",label:"\u79C1\u6709","dict-type":"yes_or_no","header-align":"center",align:"center"}),e(h,{prop:"openTrans",label:"\u5F00\u542F\u4E8B\u52A1","dict-type":"yes_or_no","header-align":"center",align:"center"}),e(E,{prop:"releaseUserId",label:"\u53D1\u5E03\u8005","header-align":"center",align:"center"}),e(u,{"show-overflow-tooltip":"",prop:"releaseTime",label:"\u53D1\u5E03\u65F6\u95F4","header-align":"center",align:"center"}),e(E,{prop:"creator",label:"\u521B\u5EFA\u8005","header-align":"center",align:"center"}),e(u,{"show-overflow-tooltip":"",prop:"createTime",label:"\u521B\u5EFA\u65F6\u95F4","header-align":"center",align:"center"}
),e(u,{"show-overflow-tooltip":"",prop:"updateTime",label:"\u66F4\u65B0\u65F6\u95F4","header-align":"center",align:"center"}),e(u,{label:"\u64CD\u4F5C",fixed:"right","header-align":"center",align:"center",width:"260"},{default:o(t=>[e(_,{type:"primary",link:"",onClick:V=>I(t.row.id,t.row.authId)},{default:o(()=>[ue]),_:2},1032,["onClick"]),m(e(_,{type:"primary",link:"",onClick:V=>L(t.row)},{default:o(()=>[se]),_:2},1032,["onClick"]),[[v,t.row.status==1]])]),_:1})]),_:1},8,["data","onSelectionChange"])),[[j,a.dataListLoading]]),e(N,{"current-page":a.page,"page-sizes":a.pageSizes,"page-size":a.limit,total:a.total,layout:"total, sizes, prev, pager, next, jumper",onSizeChange:c(x),onCurrentChange:c(z)},null,8,["current-page","page-sizes","page-size","total","onSizeChange","onCurrentChange"]),F("div",ie,[e(X,{ref_key:"appInfoRef",ref:C},null,512)]),F("div",de,[e(Z,{ref_key:"apiTestRef",ref:w},null,512)])])]),_:1})}}});export{ke as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-test.140c2439.js b/srt-cloud-gateway/src/main/resources/static/assets/api-test.140c2439.js new file mode 100644 index 0000000..a47f59b --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-test.140c2439.js @@ -0,0 +1 @@ +import"./api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js";import{_ as s}from"./api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js";import"./apiConfig.09b7ec3b.js";import"./index.e3896b23.js";import"./param-studio.vue_vue_type_script_setup_true_lang.e008bef0.js";import"./ts.worker.921d436c.js";import"./toggleHighContrast.483b4227.js";export{s as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js b/srt-cloud-gateway/src/main/resources/static/assets/api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js new file mode 100644 index 0000000..d004b9b --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js @@ -0,0 +1 @@ +import{g as N,r as P}from"./apiConfig.09b7ec3b.js";import{_ as k}from"./param-studio.vue_vue_type_script_setup_true_lang.e008bef0.js";import{d as S,h as m,Y as O,ak as J,r as p,o as c,f as T,w as s,a as o,b as n,c as V,H as b,l as A}from"./index.e3896b23.js";const j={key:0},$=o("b",null,"token \u7533\u8BF7\uFF1A",-1),I=[$],z=o("br",null,null,-1),G={key:2},H=A("\u83B7\u53D6token"),K=o("br",null,null,-1),L=o("br",null,null,-1),M=o("div",null,[o("b",null,"\u8BF7\u6C42\u5730\u5740\uFF1A")],-1),Y=o("br",null,null,-1),Q=o("div",null,[o("b",null,"\u8BF7\u6C42\u5934\uFF1A")],-1),W=o("br",null,null,-1),X=o("div",null,[o("b",null,"\u8BF7\u6C42\u53C2\u6570\uFF1A")],-1),Z=o("br",null,null,-1),ee=A("\u70B9\u51FB\u8BF7\u6C42"),te=o("br",null,null,-1),le=o("div",null,[o("b",null,"\u54CD\u5E94\u7ED3\u679C\uFF1A")],-1),ue=o("br",null,null,-1),re=S({__name:"api-test",setup(oe,{expose:E}){const 
v=m(!1),g=m(),e=O({ipPort:"",tokenUrl:"",path:"",url:"",type:"",contentType:"",apiToken:"",jsonParam:"",responseResult:""}),B=m({url:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],contentType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),_=m(),f=m(),r=m(),R=l=>{v.value=!0,N().then(t=>{e.ipPort=t.data,e.path=l.path,e.url="http://"+t.data+l.path,e.tokenUrl="http://"+t.data+"token/generate?appKey=xxx&appSecret=xxx"}),e.type=l.type,e.contentType=l.contentType,h(l.jsonParam),F(),_.value=l.previlege},h=l=>{if(f.value){f.value.setEditorValue(l);return}setTimeout(()=>{h(l)},500)},F=()=>{if(r.value){r.value.setEditorValue("");return}setTimeout(()=>{F()},500)},y=J.create({timeout:6e5}),C=()=>{let l=f.value.getEditorValue();l=l?JSON.parse(l):{};const t={headers:{apiToken:e.apiToken}};if(e.type=="GET"){let u="",d=0;for(let i in l)d==0?u+="?"+i+"="+l[i]:u+="&"+i+"="+l[i],d++;y.get(e.url+u,t).then(i=>{r.value.setEditorValueAndFormat(JSON.stringify(i.data))}).catch(i=>{r.value.setEditorValueAndFormat(JSON.stringify(i))})}else e.type=="POST"?y.post(e.url,l,t).then(u=>{r.value.setEditorValueAndFormat(JSON.stringify(u.data))}).catch(u=>{r.value.setEditorValueAndFormat(JSON.stringify(u))}):e.type=="PUT"?y.put(e.url,l,t).then(u=>{r.value.setEditorValueAndFormat(JSON.stringify(u.data))}).catch(u=>{r.value.setEditorValueAndFormat(JSON.stringify(u))}):e.type=="DELETE"&&(t.data=l,y.delete(e.url,t).then(u=>{r.value.setEditorValueAndFormat(JSON.stringify(u.data))}).catch(u=>{r.value.setEditorValueAndFormat(JSON.stringify(u))}))},D=()=>{P(e.tokenUrl).then(l=>{e.apiToken=l.data})};return E({init:R}),(l,t)=>{const u=p("el-input"),d=p("el-form-item"),i=p("el-button"),U=p("el-form"),q=p("el-tab-pane"),w=p("el-tabs"),x=p("el-drawer");return c(),T(x,{modelValue:v.value,"onUpdate:modelValue":t[7]||(t[7]=a=>v.value=a),title:"API \u6D4B\u8BD5",size:"100%","destroy-on-close":!0},{default:s(()=>[o("div",null,[n(w,{"tab-position":"top"},{default:s(()=>[n(q,{label:"\u8BF7\u6C42\u6D4B\u8BD5"},{default:s(()=>[n(U,{ref_key:"apiTestFormRef",ref:g,rules:B.value,model:e},{default:s(()=>[_.value==1?(c(),V("div",j,I)):b("",!0),z,_.value==1?(c(),T(d,{key:1,prop:"tokenUrl","label-width":"auto"},{default:s(()=>[n(u,{modelValue:e.tokenUrl,"onUpdate:modelValue":t[0]||(t[0]=a=>e.tokenUrl=a)},null,8,["modelValue"])]),_:1})):b("",!0),_.value==1?(c(),V("div",G,[n(i,{type:"primary",onClick:t[1]||(t[1]=a=>D())},{default:s(()=>[H]),_:1}),K,L])):b("",!0),M,Y,n(d,{label:"Request Url",prop:"url","label-width":"auto"},{default:s(()=>[n(u,{modelValue:e.url,"onUpdate:modelValue":t[2]||(t[2]=a=>e.url=a)},null,8,["modelValue"])]),_:1}),n(d,{label:"Request 
Method",prop:"type","label-width":"auto"},{default:s(()=>[n(u,{disabled:"",modelValue:e.type,"onUpdate:modelValue":t[3]||(t[3]=a=>e.type=a)},null,8,["modelValue"])]),_:1}),Q,W,n(d,{label:"Content-Type",prop:"contentType","label-width":"auto"},{default:s(()=>[n(u,{disabled:"",modelValue:e.contentType,"onUpdate:modelValue":t[4]||(t[4]=a=>e.contentType=a)},null,8,["modelValue"])]),_:1}),_.value==1?(c(),T(d,{key:3,label:"apiToken",prop:"apiToken","label-width":"auto"},{default:s(()=>[n(u,{modelValue:e.apiToken,"onUpdate:modelValue":t[5]||(t[5]=a=>e.apiToken=a)},null,8,["modelValue"])]),_:1})):b("",!0),X,Z,n(d,{"label-width":"auto",prop:"jsonParam"},{default:s(()=>[n(k,{id:"requestTestParam",ref_key:"requestTestParamRef",ref:f,style:{height:"160px",width:"100%"}},null,512)]),_:1}),o("div",null,[n(i,{type:"primary",onClick:t[6]||(t[6]=a=>C())},{default:s(()=>[ee]),_:1})]),te,le,ue,n(d,{"label-width":"auto",prop:"responseResult"},{default:s(()=>[n(k,{id:"responseTestResult",value:"",ref_key:"responseTestResultRef",ref:r,style:{height:"500px",width:"100%"}},null,512)]),_:1})]),_:1},8,["rules","model"])]),_:1})]),_:1})])]),_:1},8,["modelValue"])}}});export{re as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/api.55cd055b.js b/srt-cloud-gateway/src/main/resources/static/assets/api.55cd055b.js new file mode 100644 index 0000000..b5e9f3f --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/api.55cd055b.js @@ -0,0 +1 @@ +const A="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAQCAYAAAAWGF8bAAACFElEQVR4nIWUPW7bQBCF31CqLAPREZgbSEV6GrCDdNENTJ9AcplYASjkrxRzgrBKzTKICYQ3sIr02gAB0sqA3EmcvCW5ikwR8gdQnN2deTM7u5TgCP778wAexqoYcQg65/zJzU0247AV4dOK//EihGpMM4JIjpOewXo9qMZ4ht7pmblOVxw/olXQ//RyhKJI0OkOzJvvBg1YeQTBazPNhmggfA7wP1ws4cm1efsj5bAV+izoEzV9DgT9z6983W6Wv6fZwdo+tiWqOhdIik5n5nayC7JC2G7GNAMo7s27LMATlDGbTaiCsYicmZvbRSlos0E1YvNTFJpyKyuOAxD2KS575vDE0McHT5xJc5Cq5zpHrzcUm4VbvHMZuF5SinjS59zE2gyOQJg8ZvCKZs65HDXsaQLoQqyzC0RNmVGLS269z2fGqQAOT1IKjmg1BM8ngAyEBrfoJfunZatwCaxNgRWDI9SURTQFbdsKDStBuzjNYr5LbIUugbWxLaxgjhoKBuh2jTtZCwVjviA0Qh7AmIJDHIF+A1bQ3xd22DVeoTvpdJ8Lx9w/L6noEienV22fUx3wUzzvylXuKKsV+QrBjG1KKsH5qI+Hh1ihl5zIYfG8Lza4vgW/GPRXVP9wZYcCA/rf03difTkFjv9TCld/ANjvEXfwTYEXHX6OxbZYc6qC95VVLWjteCR4DIomIGZ6G+II/wAGJxJAfODbywAAAABJRU5ErkJggg==";export{A as a}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/apiConfig.09b7ec3b.js b/srt-cloud-gateway/src/main/resources/static/assets/apiConfig.09b7ec3b.js new file mode 100644 index 0000000..ef56171 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/apiConfig.09b7ec3b.js @@ -0,0 +1 @@ +import{ad as t}from"./index.e3896b23.js";const s=e=>t.get("/data-service/api-config/"+e),r=e=>e.id?t.put("/data-service/api-config",e):t.post("/data-service/api-config",e),n=()=>t.get("/data-service/api-config/getIpPort"),a=e=>t.post("/data-service/api-config/sql/execute",e),o=e=>t.put("/data-service/api-config/"+e+"/online"),p=e=>t.put("/data-service/api-config/"+e+"/offline"),c=e=>t.get(e),u=e=>t.get("/data-service/api-config/auth-info/"+e),g=()=>t.get("/data-service/api-config/ip-port/"),f=e=>t.put("/data-service/api-config/reset-requested/"+e);export{u as a,r as b,f as c,p as d,a as e,n as g,g as i,o,c as r,s as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/apiGroup.d1155eaa.js b/srt-cloud-gateway/src/main/resources/static/assets/apiGroup.d1155eaa.js new file mode 100644 index 0000000..30f9f3b --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/apiGroup.d1155eaa.js @@ -0,0 +1 
@@ +import{ad as r}from"./index.e3896b23.js";const a=e=>r.get("/data-service/api-group/"+e),i=e=>e.id?r.put("/data-service/api-group",e):r.post("/data-service/api-group",e),p=()=>r.get("/data-service/api-group"),s=e=>r.delete("/data-service/api-group/"+e);export{a,i as b,s as c,p as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/app-auth.055b54f9.css b/srt-cloud-gateway/src/main/resources/static/assets/app-auth.055b54f9.css new file mode 100644 index 0000000..0a025be --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/app-auth.055b54f9.css @@ -0,0 +1 @@ +.apiGroupBox{display:flex}.apiGroupBox .leftBox{height:100%;flex:1}.apiGroupBox .rightBox{height:100%;flex:6}.apiGroupTreeDiv .el-tree-node__content{height:35px}.api-group-tree-node{font-size:16px;-webkit-user-select:none;-khtml-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none} diff --git a/srt-cloud-gateway/src/main/resources/static/assets/app-auth.6106504e.js b/srt-cloud-gateway/src/main/resources/static/assets/app-auth.6106504e.js new file mode 100644 index 0000000..8a1e5eb --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/app-auth.6106504e.js @@ -0,0 +1 @@ +import"./app-auth.vue_vue_type_style_index_0_lang.9d1e4107.js";import{_ as G}from"./app-auth.vue_vue_type_style_index_0_lang.9d1e4107.js";import"./index.e3896b23.js";import"./apiGroup.d1155eaa.js";import"./folder.ea536bf2.js";import"./api.55cd055b.js";import"./index.vue_vue_type_style_index_0_lang.0a9da65b.js";import"./add-or-update.vue_vue_type_script_setup_true_lang.536b089d.js";import"./database.32bfd96d.js";import"./apiConfig.09b7ec3b.js";import"./databases.vue_vue_type_style_index_0_lang.dceac0af.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";import"./toggleHighContrast.483b4227.js";import"./add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js";import"./middledb.vue_vue_type_style_index_0_lang.fa7bd4c1.js";import"./house.1ac0c09f.js";import"./sql-studio.a6fca977.js";import"./run.cf98bfe1.js";import"./ts.worker.921d436c.js";import"./sqlFormatter.e0a34ad5.js";import"./sql.4f48b9c1.js";import"./json-studio.vue_vue_type_style_index_0_lang.ef678733.js";import"./console-result.vue_vue_type_script_setup_true_lang.932f9a7d.js";import"./param-studio.vue_vue_type_script_setup_true_lang.e008bef0.js";import"./app-info.vue_vue_type_script_setup_true_lang.248d957f.js";import"./api-test.vue_vue_type_script_setup_true_lang.4a9f24d4.js";import"./app.22c193c2.js";import"./api-auth.vue_vue_type_script_setup_true_lang.77949b40.js";import"./api-auth-detail.vue_vue_type_script_setup_true_lang.b0bb75fb.js";export{G as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/app-auth.vue_vue_type_style_index_0_lang.9d1e4107.js b/srt-cloud-gateway/src/main/resources/static/assets/app-auth.vue_vue_type_style_index_0_lang.9d1e4107.js new file mode 100644 index 0000000..2f85706 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/app-auth.vue_vue_type_style_index_0_lang.9d1e4107.js @@ -0,0 +1 @@ +import{d as k,ab as w,L as D,h as s,P as L,r as n,o as g,c as v,a as e,b as l,w as r,H as R,a7 as C,a8 as y,k as b,t as E,l as I}from"./index.e3896b23.js";import{u as P}from"./apiGroup.d1155eaa.js";import{f as F}from"./folder.ea536bf2.js";import{a as G}from"./api.55cd055b.js";import{_ as S}from"./index.vue_vue_type_style_index_0_lang.0a9da65b.js";const 
U={class:"apiGroupBox"},j={class:"leftBox"},q={style:{height:"100%"},class:"apiGroupTreeDiv"},H=e("br",null,null,-1),M=e("br",null,null,-1),$={key:0},z=I("\u6DFB\u52A0\u6839\u76EE\u5F55"),J=e("br",null,null,-1),K=e("br",null,null,-1),O={class:"api-group-tree-node"},Q=["src"],W=["src"],X={style:{"margin-left":"8px"}},Y={class:"rightBox"},Z=k({name:"Data-serviceApi-groupIndex"}),ne=k({...Z,props:{ifAuth:{type:Boolean,required:!1,default:()=>!1}},setup(x){const u=x,c=w("appId");D(()=>{B()});const _=s(),d=s([]),a=s(""),p=s(),B=()=>{P().then(t=>{d.value=t.data})};L(a,t=>{_.value.filter(t)});const T=(t,o)=>t?o.label.includes(t)||o.label.includes(t.toUpperCase())||o.label.includes(t.toLowerCase()):!0,A=(t,o,f,m)=>{console.log(o.data),o.data.type=="2"&&p.value.init(o.data.id,o.data.path,c?c.value:null,u.ifAuth)};return(t,o)=>{const f=n("el-input"),m=n("el-button"),N=n("el-tree"),V=n("el-scrollbar");return g(),v("div",U,[e("div",j,[l(V,null,{default:r(()=>[e("div",q,[e("div",null,[l(f,{modelValue:a.value,"onUpdate:modelValue":o[0]||(o[0]=h=>a.value=h),placeholder:"search"},null,8,["modelValue"]),H,M]),u.ifAuth?R("",!0):(g(),v("div",$,[l(m,{type:"primary",onClick:t.appendCatalogueRoot},{default:r(()=>[z]),_:1},8,["onClick"]),J,K])),l(N,{ref_key:"catalogueTreeRef",ref:_,data:d.value,onNodeClick:A,"default-expand-all":"","node-key":"id","filter-node-method":T},{default:r(({node:h,data:i})=>[e("div",O,[e("span",null,[C(e("img",{src:b(F)},null,8,Q),[[y,i.type=="1"]]),C(e("img",{src:b(G)},null,8,W),[[y,i.type=="2"]]),e("span",X,E(i.name),1)])])]),_:1},8,["data"])])]),_:1})]),e("div",Y,[l(S,{ref_key:"apiConfigRef",ref:p},null,512)])])}}});export{ne as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/app-info.071a714a.js b/srt-cloud-gateway/src/main/resources/static/assets/app-info.071a714a.js new file mode 100644 index 0000000..09edd28 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/app-info.071a714a.js @@ -0,0 +1 @@ +import"./app-info.vue_vue_type_script_setup_true_lang.248d957f.js";import{_ as u}from"./app-info.vue_vue_type_script_setup_true_lang.248d957f.js";import"./index.e3896b23.js";import"./database.32bfd96d.js";import"./apiConfig.09b7ec3b.js";import"./databases.vue_vue_type_style_index_0_lang.dceac0af.js";import"./database.235d7a89.js";import"./table.e1c1b00a.js";import"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";import"./toggleHighContrast.483b4227.js";import"./add-or-update.vue_vue_type_script_setup_true_lang.fd8fc5c7.js";import"./middledb.vue_vue_type_style_index_0_lang.fa7bd4c1.js";import"./house.1ac0c09f.js";export{u as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/app-info.vue_vue_type_script_setup_true_lang.248d957f.js b/srt-cloud-gateway/src/main/resources/static/assets/app-info.vue_vue_type_script_setup_true_lang.248d957f.js new file mode 100644 index 0000000..c514c25 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/app-info.vue_vue_type_script_setup_true_lang.248d957f.js @@ -0,0 +1 @@ +import{d as Q,L as Y,h as m,Y as q,r as i,o as _,c as k,b as l,w as e,a as o,t as s,a7 as G,a8 as J,k as K,af as W,f,F as S,e as X,H as c,l as b}from"./index.e3896b23.js";import{c as Z}from"./database.32bfd96d.js";import{g as ee,u as le,a as ae}from"./apiConfig.09b7ec3b.js";import{_ as te}from"./databases.vue_vue_type_style_index_0_lang.dceac0af.js";import{_ as ue}from"./middledb.vue_vue_type_style_index_0_lang.fa7bd4c1.js";import{_ as 
oe}from"./readonly-studio.vue_vue_type_script_setup_true_lang.0062e564.js";const se=b("\u6570\u636E\u5E93"),de=b("\u4E2D\u53F0\u5E93"),ne=b("\u5E93\u8868\u4FE1\u606F"),ie=b("\u4E0D\u9650\u6B21\u6570"),re=b("\u6307\u5B9A\u6B21\u6570"),pe=b("tips:\u8FD9\u91CC\u7684\u8C03\u7528\u6B21\u6570\u6307\u7684\u662F\u8BE5\u5E94\u7528\u4E0B\u88AB\u6388\u6743\u7684api\u7684\u8C03\u7528\u6B21\u6570\uFF0C\u5E76\u975E\u603B\u6B21\u6570"),ye=Q({__name:"app-info",emits:["refreshDataList"],setup(me,{expose:A,emit:_e}){Y(()=>{R(),L()});const R=()=>{Z().then(p=>{t.databaseList=p.data})},T=m(!1),x=()=>{T.value=!0},h=m(!1),v=m(""),L=()=>{ee().then(p=>{v.value="http://"+p.data})},g=q({}),y=m(),r=q({name:"",path:"",type:"",note:"",requestedTimes:""}),C=m(),t=q({sqlDbType:"",sqlParam:"",sqlText:"",contentType:"application/json",openTrans:0,jsonParam:"",responseResult:"",sqlSeparator:";\\n",databaseId:"",databaseList:[],previlege:""}),I=m(),n=q({limited:"",requestTimes:"",requestedTimes:""}),B=m(),P=(p,u)=>{h.value=!0,g.id="",y.value&&y.value.resetFields(),C.value&&B.value.setEditorValue(""),p&&$(p,u)},E=m(!1),$=(p,u)=>{le(p).then(a=>{Object.assign(g,a.data),Object.assign(r,a.data),Object.assign(t,a.data),B.value.setEditorValue(t.sqlText)}),u?ae(u).then(a=>{n.id=u,n.limited=a.data.requestTimes!=-1,n.requestTimes=a.data.requestTimes,n.requestedTimes=a.data.requestedTimes,n.requestedSuccessTimes=a.data.requestedSuccessTimes,n.requestedFailedTimes=a.data.requestedFailedTimes,E.value=!0}):E.value=!1};return A({init:P}),(p,u)=>{const a=i("el-form-item"),D=i("el-form"),w=i("el-tab-pane"),F=i("el-radio"),V=i("el-radio-group"),j=i("el-button"),U=i("el-option"),N=i("el-select"),z=i("el-tag"),M=i("el-tabs"),O=i("el-drawer"),H=i("el-dialog");return _(),k(S,null,[l(O,{modelValue:h.value,"onUpdate:modelValue":u[4]||(u[4]=d=>h.value=d),title:"\u8BE6\u60C5",size:"100%","destroy-on-close":!0},{default:e(()=>[o("div",null,[l(M,{"tab-position":"top"},{default:e(()=>[l(w,{label:"\u57FA\u672C\u4FE1\u606F"},{default:e(()=>[l(D,{ref_key:"basicDataFormRef",ref:y,model:r},{default:e(()=>[l(a,{label:"\u540D\u79F0",prop:"name","label-width":"auto"},{default:e(()=>[o("span",null,s(r.name),1)]),_:1}),l(a,{label:"api\u8DEF\u5F84",prop:"path","label-width":"auto"},{default:e(()=>[o("span",null,s(v.value)+s(r.path),1)]),_:1}),l(a,{label:"\u8BF7\u6C42\u65B9\u5F0F",prop:"type","label-width":"auto"},{default:e(()=>[o("span",null,s(r.type),1)]),_:1}),l(a,{label:"\u603B\u8C03\u7528\u6B21\u6570",prop:"type","label-width":"auto"},{default:e(()=>[o("span",null,s(r.requestedTimes),1)]),_:1}),l(a,{label:"\u8C03\u7528\u6210\u529F\u6B21\u6570",prop:"type","label-width":"auto"},{default:e(()=>[o("span",null,s(r.requestedSuccessTimes),1)]),_:1}),l(a,{label:"\u8C03\u7528\u5931\u8D25\u6B21\u6570",prop:"type","label-width":"auto"},{default:e(()=>[o("span",null,s(r.requestedFailedTimes),1)]),_:1}),l(a,{label:"\u63CF\u8FF0",prop:"note","label-width":"auto"},{default:e(()=>[o("p",null,s(r.note),1)]),_:1})]),_:1},8,["model"])]),_:1}),l(w,{label:"API SQL 
\u914D\u7F6E"},{default:e(()=>[l(D,{ref_key:"apiSqlFormRef",ref:C,model:t},{default:e(()=>[l(a,{label:"\u5E93\u7C7B\u578B",prop:"sqlDbType","label-width":"auto"},{default:e(()=>[l(V,{modelValue:t.sqlDbType,"onUpdate:modelValue":u[0]||(u[0]=d=>t.sqlDbType=d),disabled:""},{default:e(()=>[l(F,{label:1,border:""},{default:e(()=>[se]),_:1}),l(F,{label:2,border:""},{default:e(()=>[de]),_:1})]),_:1},8,["modelValue"]),G(l(j,{style:{"margin-left":"20px"},icon:K(W),type:"primary",onClick:u[1]||(u[1]=d=>x())},{default:e(()=>[ne]),_:1},8,["icon"]),[[J,!!t.sqlDbType]])]),_:1}),t.sqlDbType=="1"?(_(),f(a,{key:0,label:"\u6570\u636E\u5E93",prop:"databaseId","label-width":"auto"},{default:e(()=>[l(N,{disabled:"",modelValue:t.databaseId,"onUpdate:modelValue":u[2]||(u[2]=d=>t.databaseId=d),clearable:"",filterable:"",placeholder:"\u8BF7\u9009\u62E9"},{default:e(()=>[(_(!0),k(S,null,X(t.databaseList,(d,fe)=>(_(),f(U,{key:d.id,label:`[${d.id}]${d.name}`,value:d.id},null,8,["label","value"]))),128))]),_:1},8,["modelValue"])]),_:1})):c("",!0),l(a,{label:"sql\u5206\u9694\u7B26",prop:"sqlSeparator","label-width":"auto"},{default:e(()=>[o("span",null,s(t.sqlSeparator),1)]),_:1}),l(a,{label:"\u5F00\u542F\u4E8B\u52A1","label-width":"auto",prop:"openTrans"},{default:e(()=>[o("span",null,s(t.openTrans==1?"\u662F":"\u5426"),1)]),_:1}),l(a,{label:"\u67E5\u8BE2\u6700\u5927\u884C\u6570","label-width":"auto",prop:"sqlMaxRow"},{default:e(()=>[o("span",null,s(t.sqlMaxRow),1)]),_:1}),l(a,{label:"sql\u8BED\u53E5","label-width":"auto",prop:"sqlText"},{default:e(()=>[l(oe,{id:"apiSqlId",ref_key:"sqlStudioRef",ref:B,style:{height:"260px",width:"100%"}},null,512)]),_:1}),l(a,{label:"Content-Type",prop:"contentType","label-width":"auto"},{default:e(()=>[o("span",null,s(t.contentType),1)]),_:1}),l(a,{label:"\u6743\u9650",prop:"previlege","label-width":"auto"},{default:e(()=>[o("span",null,s(t.previlege==1?"\u79C1\u6709":"\u5F00\u653E"),1)]),_:1})]),_:1},8,["model"])]),_:1}),E.value?(_(),f(w,{key:0,label:"\u6388\u6743\u4FE1\u606F"},{default:e(()=>[l(D,{ref_key:"authDataFormRef",ref:I,model:n},{default:e(()=>[l(a,{label:"\u8C03\u7528\u6B21\u6570",prop:"limited","label-width":"auto"},{default:e(()=>[l(V,{modelValue:n.limited,"onUpdate:modelValue":u[3]||(u[3]=d=>n.limited=d),disabled:""},{default:e(()=>[l(F,{label:!1,size:"large"},{default:e(()=>[ie]),_:1}),l(F,{label:!0,size:"large"},{default:e(()=>[re]),_:1})]),_:1},8,["modelValue"])]),_:1}),n.limited?(_(),f(a,{key:0,label:"\u6388\u6743\u6B21\u6570",prop:"requestTimes","label-width":"auto"},{default:e(()=>[o("span",null,s(n.requestTimes),1)]),_:1})):c("",!0),l(a,null,{default:e(()=>[l(z,null,{default:e(()=>[pe]),_:1})]),_:1}),l(a,{label:"\u5DF2\u8C03\u7528\u6B21\u6570",prop:"requestedTimes","label-width":"auto"},{default:e(()=>[o("span",null,s(n.requestedTimes),1)]),_:1}),l(a,{label:"\u8C03\u7528\u6210\u529F\u6B21\u6570",prop:"type","label-width":"auto"},{default:e(()=>[o("span",null,s(n.requestedSuccessTimes),1)]),_:1}),l(a,{label:"\u8C03\u7528\u5931\u8D25\u6B21\u6570",prop:"type","label-width":"auto"},{default:e(()=>[o("span",null,s(n.requestedFailedTimes),1)]),_:1})]),_:1},8,["model"])]),_:1})):c("",!0)]),_:1})])]),_:1},8,["modelValue"]),l(H,{modelValue:T.value,"onUpdate:modelValue":u[5]||(u[5]=d=>T.value=d),title:"\u5E93\u8868\u4FE1\u606F"},{default:e(()=>[t.sqlDbType==1?(_(),f(te,{key:0,ref:"databasesRef"},null,512)):c("",!0),t.sqlDbType==2?(_(),f(ue,{key:1,ref:"middledbRef"},null,512)):c("",!0)]),_:1},8,["modelValue"])],64)}}});export{ye as _}; diff --git 
a/srt-cloud-gateway/src/main/resources/static/assets/app.22c193c2.js b/srt-cloud-gateway/src/main/resources/static/assets/app.22c193c2.js new file mode 100644 index 0000000..8c277bd --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/app.22c193c2.js @@ -0,0 +1 @@ +import{ad as t}from"./index.e3896b23.js";const p=e=>t.get("/data-service/app/"+e),s=e=>e.id?t.put("/data-service/app",e):t.post("/data-service/app",e),i=e=>e.id?t.put("/data-service/app/auth",e):t.post("/data-service/app/auth",e),r=e=>t.delete("/data-service/app/cancel-auth/"+e),u=e=>t.get("/data-service/api-config/auth-info/"+e);export{i as a,s as b,r as c,u as g,p as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/azcli.b70fb9b3.js b/srt-cloud-gateway/src/main/resources/static/assets/azcli.b70fb9b3.js new file mode 100644 index 0000000..146a775 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/azcli.b70fb9b3.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. + * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={comments:{lineComment:"#"}},t={defaultToken:"keyword",ignoreCase:!0,tokenPostfix:".azcli",str:/[^#\s]/,tokenizer:{root:[{include:"@comment"},[/\s-+@str*\s*/,{cases:{"@eos":{token:"key.identifier",next:"@popall"},"@default":{token:"key.identifier",next:"@type"}}}],[/^-+@str*\s*/,{cases:{"@eos":{token:"key.identifier",next:"@popall"},"@default":{token:"key.identifier",next:"@type"}}}]],type:[{include:"@comment"},[/-+@str*\s*/,{cases:{"@eos":{token:"key.identifier",next:"@popall"},"@default":"key.identifier"}}],[/@str+\s*/,{cases:{"@eos":{token:"string",next:"@popall"},"@default":"string"}}]],comment:[[/#.*$/,{cases:{"@eos":{token:"comment",next:"@popall"}}}]]}};export{e as conf,t as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/bat.4e83862e.js b/srt-cloud-gateway/src/main/resources/static/assets/bat.4e83862e.js new file mode 100644 index 0000000..0695ef5 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/bat.4e83862e.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. 
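
Of the asset chunks in this commit, the small API wrappers stay legible despite minification. For reference, here is a readable TypeScript reconstruction of the `app.22c193c2.js` chunk above; the endpoints and the id-based create/update branching are taken directly from the bundle, while the function names and the `service` import path are invented for readability:

```typescript
// Reconstruction of app.22c193c2.js ("t" in the bundle is the shared
// axios-style client exported by index.e3896b23.js).
import service from './index'

// fetch one data-service app by id
export const useAppApi = (id: number) => service.get('/data-service/app/' + id)

// the recurring save convention: PUT when the form carries an id, POST otherwise
export const useAppSubmitApi = (dataForm: { id?: number }) =>
  dataForm.id ? service.put('/data-service/app', dataForm) : service.post('/data-service/app', dataForm)

// grant or update an API authorization for an app
export const useAppAuthApi = (authForm: { id?: number }) =>
  authForm.id ? service.put('/data-service/app/auth', authForm) : service.post('/data-service/app/auth', authForm)

// revoke an authorization
export const useCancelAuthApi = (id: number) => service.delete('/data-service/app/cancel-auth/' + id)

// read the call-quota info of an authorized api-config
export const useAuthInfoApi = (id: number) => service.get('/data-service/api-config/auth-info/' + id)
```
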
+ * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={comments:{lineComment:"REM"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'}],surroundingPairs:[{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'}],folding:{markers:{start:new RegExp("^\\s*(::\\s*|REM\\s+)#region"),end:new RegExp("^\\s*(::\\s*|REM\\s+)#endregion")}}},s={defaultToken:"",ignoreCase:!0,tokenPostfix:".bat",brackets:[{token:"delimiter.bracket",open:"{",close:"}"},{token:"delimiter.parenthesis",open:"(",close:")"},{token:"delimiter.square",open:"[",close:"]"}],keywords:/call|defined|echo|errorlevel|exist|for|goto|if|pause|set|shift|start|title|not|pushd|popd/,symbols:/[=>`\\b${e}\\b`,t="[_a-zA-Z]",o="[_a-zA-Z0-9]",r=n(`${t}${o}*`),i=["targetScope","resource","module","param","var","output","for","in","if","existing"],a=["true","false","null"],s="[ \\t\\r\\n]",c="[0-9]+",g={comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"'",close:"'"},{open:"'''",close:"'''"}],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"'",close:"'",notIn:["string","comment"]},{open:"'''",close:"'''",notIn:["string","comment"]}],autoCloseBefore:`:.,=}])' + `,indentationRules:{increaseIndentPattern:new RegExp("^((?!\\/\\/).)*(\\{[^}\"'`]*|\\([^)\"'`]*|\\[[^\\]\"'`]*)$"),decreaseIndentPattern:new RegExp("^((?!.*?\\/\\*).*\\*/)?\\s*[\\}\\]].*$")}},l={defaultToken:"",tokenPostfix:".bicep",brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"}],symbols:/[=>"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">"},{open:"'",close:"'"},{open:'"',close:'"'},{open:"(*",close:"*)"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">"},{open:"'",close:"'"},{open:'"',close:'"'},{open:"(*",close:"*)"}]},o={defaultToken:"",tokenPostfix:".cameligo",ignoreCase:!0,brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"},{open:"<",close:">",token:"delimiter.angle"}],keywords:["abs","assert","block","Bytes","case","Crypto","Current","else","failwith","false","for","fun","if","in","let","let%entry","let%init","List","list","Map","map","match","match%nat","mod","not","operation","Operation","of","record","Set","set","sender","skip","source","String","then","to","true","type","with"],typeKeywords:["int","unit","string","tz","nat","bool"],operators:["=",">","<","<=",">=","<>",":",":=","and","mod","or","+","-","*","/","@","&","^","%","->","<-","&&","||"],symbols:/[=><:@\^&|+\-*\/\^%]+/,tokenizer:{root:[[/[a-zA-Z_][\w]*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"identifier"}}],{include:"@whitespace"},[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/\d*\.\d+([eE][\-+]?\d+)?/,"number.float"],[/\$[0-9a-fA-F]{1,16}/,"number.hex"],[/\d+/,"number"],[/[;,.]/,"delimiter"],[/'([^'\\]|\\.)*$/,"string.invalid"],[/'/,"string","@string"],[/'[^\\']'/,"str
ing"],[/'/,"string.invalid"],[/\#\d+/,"string"]],comment:[[/[^\(\*]+/,"comment"],[/\*\)/,"comment","@pop"],[/\(\*/,"comment"]],string:[[/[^\\']+/,"string"],[/\\./,"string.escape.invalid"],[/'/,{token:"string.quote",bracket:"@close",next:"@pop"}]],whitespace:[[/[ \t\r\n]+/,"white"],[/\(\*/,"comment","@comment"],[/\/\/.*$/,"comment"]]}};export{e as conf,o as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/catalog.f6d809a5.js b/srt-cloud-gateway/src/main/resources/static/assets/catalog.f6d809a5.js new file mode 100644 index 0000000..681057a --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/catalog.f6d809a5.js @@ -0,0 +1 @@ +import{ad as t}from"./index.e3896b23.js";const e=s=>t.get("/data-assets/catalog/"+s),r=s=>s.id?t.put("/data-assets/catalog/",s):t.post("/data-assets/catalog/",s),o=s=>t.get("/data-assets/catalog/list-tree"),l=s=>t.delete("/data-assets/catalog/"+s);export{r as a,l as d,o as l,e as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/catalogue-add-or-update.bd841364.js b/srt-cloud-gateway/src/main/resources/static/assets/catalogue-add-or-update.bd841364.js new file mode 100644 index 0000000..79a52fc --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/catalogue-add-or-update.bd841364.js @@ -0,0 +1 @@ +import"./catalogue-add-or-update.vue_vue_type_script_setup_true_lang.91df4468.js";import{_ as i}from"./catalogue-add-or-update.vue_vue_type_script_setup_true_lang.91df4468.js";import"./index.e3896b23.js";import"./catalogue.ebcb043b.js";export{i as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/catalogue-add-or-update.vue_vue_type_script_setup_true_lang.91df4468.js b/srt-cloud-gateway/src/main/resources/static/assets/catalogue-add-or-update.vue_vue_type_script_setup_true_lang.91df4468.js new file mode 100644 index 0000000..5cb71c6 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/catalogue-add-or-update.vue_vue_type_script_setup_true_lang.91df4468.js @@ -0,0 +1 @@ +import{d as x,h as m,ab as f,Y as U,r as s,o as V,f as v,w as u,b as l,H as L,a2 as P,l as g,E as w}from"./index.e3896b23.js";import{u as h,a as q}from"./catalogue.ebcb043b.js";const R=g("\u53D6\u6D88"),$=g("\u786E\u5B9A"),K=x({__name:"catalogue-add-or-update",emits:["refreshDataList"],setup(j,{expose:C,emit:E}){const i=m(!1),d=m(),b=f("editorValues"),D=f("editableTabs"),_=f("currentNodeData"),e=U({parentId:"",parentPath:"",path:"",name:"",taskType:"",orderNo:0,ifLeaf:1,description:""}),A=(n,t,r,o)=>{i.value=!0,d.value&&d.value.resetFields(),e.id="",d.value&&d.value.resetFields(),e.parentId=t,e.parentPath=r,e.ifLeaf=o,n&&k(n)},k=n=>{h(n).then(t=>{Object.assign(e,t.data)})},y=m({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],taskType:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),c=()=>{d.value.validate(n=>{if(!n)return!1;q(e).then(()=>{if(e.id){let t=e.id+"";if(b[t]){b[t].nodeData=e,_.value.id==e.id&&(_.value=e);const r=D.value;for(let o in r){let p=r[o];if(p.name==t){p.title=e.name;break}}}}w.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{i.value=!1,E("refreshDataList")}})})})};return C({init:A}),(n,t)=>{const r=s("el-input"),o=s("el-form-item"),p=s("fast-select"),B=s("el-input-number"),N=s("el-form"),F=s("el-button"),T=s("el-dialog");return 
V(),v(T,{modelValue:i.value,"onUpdate:modelValue":t[8]||(t[8]=a=>i.value=a),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:u(()=>[l(F,{onClick:t[6]||(t[6]=a=>i.value=!1)},{default:u(()=>[R]),_:1}),l(F,{type:"primary",onClick:t[7]||(t[7]=a=>c())},{default:u(()=>[$]),_:1})]),default:u(()=>[l(N,{ref_key:"dataFormRef",ref:d,model:e,rules:y.value,"label-width":"100px",onKeyup:t[5]||(t[5]=P(a=>c(),["enter"]))},{default:u(()=>[l(o,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:u(()=>[l(r,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":t[0]||(t[0]=a=>e.parentPath=a),placeholder:""},null,8,["modelValue"])]),_:1}),l(o,{label:"\u540D\u79F0",prop:"name"},{default:u(()=>[l(r,{modelValue:e.name,"onUpdate:modelValue":t[1]||(t[1]=a=>e.name=a),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e.ifLeaf?L("",!0):(V(),v(o,{key:0,label:"\u4F5C\u4E1A\u7C7B\u578B",prop:"taskType"},{default:u(()=>[l(p,{disabled:!!e.id,modelValue:e.taskType,"onUpdate:modelValue":t[2]||(t[2]=a=>e.taskType=a),placeholder:"\u4F5C\u4E1A\u7C7B\u578B","dict-type":"production_task_type",clearable:""},null,8,["disabled","modelValue"])]),_:1})),l(o,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:u(()=>[l(B,{modelValue:e.orderNo,"onUpdate:modelValue":t[3]||(t[3]=a=>e.orderNo=a),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),l(o,{label:"\u63CF\u8FF0",prop:"description"},{default:u(()=>[l(r,{type:"textarea",modelValue:e.description,"onUpdate:modelValue":t[4]||(t[4]=a=>e.description=a)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{K as _}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/catalogue.ebcb043b.js b/srt-cloud-gateway/src/main/resources/static/assets/catalogue.ebcb043b.js new file mode 100644 index 0000000..4ff2cf7 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/catalogue.ebcb043b.js @@ -0,0 +1 @@ +import{ad as t}from"./index.e3896b23.js";const u=e=>t.get("/data-development/catalogue/"+e),o=e=>e.id?t.put("/data-development/catalogue",e):t.post("/data-development/catalogue",e),l=()=>t.get("/data-development/catalogue"),s=e=>t.delete("/data-development/catalogue/"+e);export{o as a,l as b,s as c,u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/category-add-or-update.20e06c28.js b/srt-cloud-gateway/src/main/resources/static/assets/category-add-or-update.20e06c28.js new file mode 100644 index 0000000..34af556 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/category-add-or-update.20e06c28.js @@ -0,0 +1 @@ +import"./category-add-or-update.vue_vue_type_script_setup_true_lang.7e5963a5.js";import{_ as t}from"./category-add-or-update.vue_vue_type_script_setup_true_lang.7e5963a5.js";import"./index.e3896b23.js";export{t as default}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/category-add-or-update.vue_vue_type_script_setup_true_lang.7e5963a5.js b/srt-cloud-gateway/src/main/resources/static/assets/category-add-or-update.vue_vue_type_script_setup_true_lang.7e5963a5.js new file mode 100644 index 0000000..3158194 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/category-add-or-update.vue_vue_type_script_setup_true_lang.7e5963a5.js @@ -0,0 +1 @@ +import{ad as p,d as q,h as g,Y as x,r as n,o as f,f as c,w as a,b as o,H as U,a2 as P,l as F,E as w}from"./index.e3896b23.js";const 
I=u=>p.get("/data-governance/quality-config-category/"+u),h=u=>u.id?p.put("/data-governance/quality-config-category",u):p.post("/data-governance/quality-config-category",u),H=()=>p.get("/data-governance/quality-config-category/list-tree"),K=u=>p.delete("/data-governance/quality-config-category/"+u),R=F("\u53D6\u6D88"),T=F("\u786E\u5B9A"),L=q({__name:"category-add-or-update",emits:["refreshDataList"],setup(u,{expose:b,emit:E}){const d=g(!1),m=g(),e=x({parentId:"",parentPath:"",path:"",name:"",type:0,orderNo:0,note:""}),V=(r,t,s)=>{d.value=!0,e.id="",m.value&&m.value.resetFields(),e.parentId=t,e.parentPath=s,r&&C(r)},C=r=>{I(r).then(t=>{Object.assign(e,t.data)})},A=g({name:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],type:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}],orderNo:[{required:!0,message:"\u5FC5\u586B\u9879\u4E0D\u80FD\u4E3A\u7A7A",trigger:"blur"}]}),_=()=>{m.value.validate(r=>{if(!r)return!1;e.type=e.parentId==0?0:e.type,h(e).then(()=>{w.success({message:"\u64CD\u4F5C\u6210\u529F",duration:500,onClose:()=>{d.value=!1,E("refreshDataList")}})})})};return b({init:V}),(r,t)=>{const s=n("el-input"),i=n("el-form-item"),y=n("el-option"),D=n("el-select"),B=n("el-input-number"),k=n("el-form"),v=n("el-button"),N=n("el-dialog");return f(),c(N,{modelValue:d.value,"onUpdate:modelValue":t[8]||(t[8]=l=>d.value=l),title:e.id?"\u4FEE\u6539":"\u65B0\u589E","close-on-click-modal":!1},{footer:a(()=>[o(v,{onClick:t[6]||(t[6]=l=>d.value=!1)},{default:a(()=>[R]),_:1}),o(v,{type:"primary",onClick:t[7]||(t[7]=l=>_())},{default:a(()=>[T]),_:1})]),default:a(()=>[o(k,{ref_key:"dataFormRef",ref:m,model:e,rules:A.value,"label-width":"100px",onKeyup:t[5]||(t[5]=P(l=>_(),["enter"]))},{default:a(()=>[o(i,{label:"\u7236\u7EA7\u76EE\u5F55",prop:"parentPath"},{default:a(()=>[o(s,{disabled:"",modelValue:e.parentPath,"onUpdate:modelValue":t[0]||(t[0]=l=>e.parentPath=l),placeholder:""},null,8,["modelValue"])]),_:1}),o(i,{label:"\u540D\u79F0",prop:"name"},{default:a(()=>[o(s,{modelValue:e.name,"onUpdate:modelValue":t[1]||(t[1]=l=>e.name=l),placeholder:"\u540D\u79F0"},null,8,["modelValue"])]),_:1}),e.parentId!=0?(f(),c(i,{key:0,label:"\u7C7B\u578B",prop:"type"},{default:a(()=>[o(D,{modelValue:e.type,"onUpdate:modelValue":t[2]||(t[2]=l=>e.type=l),placeholder:"\u7C7B\u578B",disabled:!!e.id},{default:a(()=>[(f(),c(y,{key:0,label:"\u666E\u901A\u76EE\u5F55",value:0})),(f(),c(y,{key:1,label:"\u89C4\u5219\u914D\u7F6E\u76EE\u5F55",value:1}))]),_:1},8,["modelValue","disabled"])]),_:1})):U("",!0),o(i,{label:"\u5E8F\u53F7",prop:"orderNo"},{default:a(()=>[o(B,{modelValue:e.orderNo,"onUpdate:modelValue":t[3]||(t[3]=l=>e.orderNo=l),max:9999,placeholder:"\u5E8F\u53F7"},null,8,["modelValue"])]),_:1}),o(i,{label:"\u63CF\u8FF0",prop:"note"},{default:a(()=>[o(s,{type:"textarea",modelValue:e.note,"onUpdate:modelValue":t[4]||(t[4]=l=>e.note=l)},null,8,["modelValue"])]),_:1})]),_:1},8,["model","rules"])]),_:1},8,["modelValue","title"])}}});export{L as _,K as d,H as l}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/clear.9e45766d.js b/srt-cloud-gateway/src/main/resources/static/assets/clear.9e45766d.js new file mode 100644 index 0000000..68aa69a --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/clear.9e45766d.js @@ -0,0 +1 @@ +const 
A="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAACOUlEQVR4nJ2UMXraQBCF3ywNqWzfgHTwpcGVcZqQG1jmAMgnMD5BlBNYPkHkPkTcIKpiqEyTD0r5BMFV1Fibt2OJCCNiO//HfpJmR29m364QPMO3XntoLQIQEQSn0+U1b3ciHFvE/e5+nmXnsPABtCh0zSsoPOQlhSAyzeaVl8xXfN5AONb8FZIRXxfOhgaIvOkyBYl77VYOFrHgvFj+wqfCKlgr1GyG1cQqRf6oTlgFx0edXwze8zYYTJcRXsG41/bB99jH3mC2OCgE29YIPnJpCf4DWtHPLb4PZkvaTXYJFku7ZPUTwO5zbfOGwWfvZjHh9JoXCarY7+yWt/fGSMAtXuUiXeT2EkYuBjeLkHPKiwTHxx2abkc0u+vMZkjReI5Pzi8+KnWCc1aNqlVpdgIg4SYFeALzNxrQIizO3JYKfj3qTIzYFb8CHwWvEeTXFFmgxdy+CsbHnZM8t7F50zwol6dVc3vO2GEZc2i86AYFeuy4WW6FKuhg1RUELjECKTYlAbBHYwLOpfz0fLgBmdBDDyR+/66bPzzcsuO37DhdC2rbFh+qHTlRm2Uh40M+Ugd3EAnZecD7mMXPGNW88h3hUFyQHaUCiU5nixFDOym6SlARLRGONaWXjJ4xMcI/KEVNo9H3fvycM6RsCDrUdB5eAxl5s8UVQ1s4sapIlS1BB4+MM/8Lp91xunBmg6gt+inCZ2eHdaK1gg49/eDucqOYFcDx+Pd2RzG/TsyxU7BEu3XCjwTPefsH/VBkJJ09ueoAAAAASUVORK5CYII=",B="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABMElEQVR4nO2UQUrDQBSG/ze47NK22woewBuYI3iEulBbTMEj9AhCI6m6sEfwCPEGHkCw27Yuu5Q8/6dNnATSBml2fpB5//CYDzJ5RLBndgpXD6NTlpz25eSFpRLhs5VFHM4BzPBDvzuMetjCTuEyDpPOMApA/FxFM8JVHAYpX4ubOcoIXjuD6IYJy2l4C8UJYwEFeo7X0R5GCR25MOD9jPEHeM9jCpOGhSJPjC2BHrDWRiGfLGunep4LDZOqw6NA7ritjUKvJcWFyUByoVHnK5Ypn/kX/jZtiLOB/rgPz1hweBU9sxR6/hmjWuhlmzOQbE79np+NZoWL6ejdic5AVKUvm8xhC2AIEhC/lzJ3B5Mjxm+ET44NNzKcO0aavjERbXEhsuZS6vGnuxlqoyDcB1/zossVFiu6DAAAAABJRU5ErkJggg==";export{A as _,B as a}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/clojure.9b9ce362.js b/srt-cloud-gateway/src/main/resources/static/assets/clojure.9b9ce362.js new file mode 100644 index 0000000..5af2a06 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/clojure.9b9ce362.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. 
+ * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={comments:{lineComment:";;"},brackets:[["[","]"],["(",")"],["{","}"]],autoClosingPairs:[{open:"[",close:"]"},{open:'"',close:'"'},{open:"(",close:")"},{open:"{",close:"}"}],surroundingPairs:[{open:"[",close:"]"},{open:'"',close:'"'},{open:"(",close:")"},{open:"{",close:"}"}]},t={defaultToken:"",ignoreCase:!0,tokenPostfix:".clj",brackets:[{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"},{open:"{",close:"}",token:"delimiter.curly"}],constants:["true","false","nil"],numbers:/^(?:[+\-]?\d+(?:(?:N|(?:[eE][+\-]?\d+))|(?:\.?\d*(?:M|(?:[eE][+\-]?\d+))?)|\/\d+|[xX][0-9a-fA-F]+|r[0-9a-zA-Z]+)?(?=[\\\[\]\s"#'(),;@^`{}~]|$))/,characters:/^(?:\\(?:backspace|formfeed|newline|return|space|tab|o[0-7]{3}|u[0-9A-Fa-f]{4}|x[0-9A-Fa-f]{4}|.)?(?=[\\\[\]\s"(),;@^`{}~]|$))/,escapes:/^\\(?:["'\\bfnrt]|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,qualifiedSymbols:/^(?:(?:[^\\\/\[\]\d\s"#'(),;@^`{}~][^\\\[\]\s"(),;@^`{}~]*(?:\.[^\\\/\[\]\d\s"#'(),;@^`{}~][^\\\[\]\s"(),;@^`{}~]*)*\/)?(?:\/|[^\\\/\[\]\d\s"#'(),;@^`{}~][^\\\[\]\s"(),;@^`{}~]*)*(?=[\\\[\]\s"(),;@^`{}~]|$))/,specialForms:[".","catch","def","do","if","monitor-enter","monitor-exit","new","quote","recur","set!","throw","try","var"],coreSymbols:["*","*'","*1","*2","*3","*agent*","*allow-unresolved-vars*","*assert*","*clojure-version*","*command-line-args*","*compile-files*","*compile-path*","*compiler-options*","*data-readers*","*default-data-reader-fn*","*e","*err*","*file*","*flush-on-newline*","*fn-loader*","*in*","*math-context*","*ns*","*out*","*print-dup*","*print-length*","*print-level*","*print-meta*","*print-namespace-maps*","*print-readably*","*read-eval*","*reader-resolver*","*source-path*","*suppress-read*","*unchecked-math*","*use-context-classloader*","*verbose-defrecords*","*warn-on-reflection*","+","+'","-","-'","->","->>","->ArrayChunk","->Eduction","->Vec","->VecNode","->VecSeq","-cache-protocol-fn","-reset-methods","..","/","<","<=","=","==",">",">=","EMPTY-NODE","Inst","StackTraceElement->vec","Throwable->map","accessor","aclone","add-classpath","add-watch","agent","agent-error","agent-errors","aget","alength","alias","all-ns","alter","alter-meta!","alter-var-root","amap","ancestors","and","any?","apply","areduce","array-map","as->","aset","aset-boolean","aset-byte","aset-char","aset-double","aset-float","aset-int","aset-long","aset-short","assert","assoc","assoc!","assoc-in","associative?","atom","await","await-for","await1","bases","bean","bigdec","bigint","biginteger","binding","bit-and","bit-and-not","bit-clear","bit-flip","bit-not","bit-or","bit-set","bit-shift-left","bit-shift-right","bit-test","bit-xor","boolean","boolean-array","boolean?","booleans","bound-fn","bound-fn*","bound?","bounded-count","butlast","byte","byte-array","bytes","bytes?","case","cast","cat","char","char-array","char-escape-string","char-name-string","char?","chars","chunk","chunk-append","chunk-buffer","chunk-cons","chunk-first","chunk-next","chunk-rest","chunked-seq?","class","class?","clear-agent-errors","clojure-version","coll?","comment","commute","comp","comparator","compare","compare-and-set!","compile","complement","completing","concat","cond","cond->","cond->>","condp","conj","conj!","cons","constantly","construct-proxy","contains?","count","counte
d?","create-ns","create-struct","cycle","dec","dec'","decimal?","declare","dedupe","default-data-readers","definline","definterface","defmacro","defmethod","defmulti","defn","defn-","defonce","defprotocol","defrecord","defstruct","deftype","delay","delay?","deliver","denominator","deref","derive","descendants","destructure","disj","disj!","dissoc","dissoc!","distinct","distinct?","doall","dorun","doseq","dosync","dotimes","doto","double","double-array","double?","doubles","drop","drop-last","drop-while","eduction","empty","empty?","ensure","ensure-reduced","enumeration-seq","error-handler","error-mode","eval","even?","every-pred","every?","ex-data","ex-info","extend","extend-protocol","extend-type","extenders","extends?","false?","ffirst","file-seq","filter","filterv","find","find-keyword","find-ns","find-protocol-impl","find-protocol-method","find-var","first","flatten","float","float-array","float?","floats","flush","fn","fn?","fnext","fnil","for","force","format","frequencies","future","future-call","future-cancel","future-cancelled?","future-done?","future?","gen-class","gen-interface","gensym","get","get-in","get-method","get-proxy-class","get-thread-bindings","get-validator","group-by","halt-when","hash","hash-combine","hash-map","hash-ordered-coll","hash-set","hash-unordered-coll","ident?","identical?","identity","if-let","if-not","if-some","ifn?","import","in-ns","inc","inc'","indexed?","init-proxy","inst-ms","inst-ms*","inst?","instance?","int","int-array","int?","integer?","interleave","intern","interpose","into","into-array","ints","io!","isa?","iterate","iterator-seq","juxt","keep","keep-indexed","key","keys","keyword","keyword?","last","lazy-cat","lazy-seq","let","letfn","line-seq","list","list*","list?","load","load-file","load-reader","load-string","loaded-libs","locking","long","long-array","longs","loop","macroexpand","macroexpand-1","make-array","make-hierarchy","map","map-entry?","map-indexed","map?","mapcat","mapv","max","max-key","memfn","memoize","merge","merge-with","meta","method-sig","methods","min","min-key","mix-collection-hash","mod","munge","name","namespace","namespace-munge","nat-int?","neg-int?","neg?","newline","next","nfirst","nil?","nnext","not","not-any?","not-empty","not-every?","not=","ns","ns-aliases","ns-imports","ns-interns","ns-map","ns-name","ns-publics","ns-refers","ns-resolve","ns-unalias","ns-unmap","nth","nthnext","nthrest","num","number?","numerator","object-array","odd?","or","parents","partial","partition","partition-all","partition-by","pcalls","peek","persistent!","pmap","pop","pop!","pop-thread-bindings","pos-int?","pos?","pr","pr-str","prefer-method","prefers","primitives-classnames","print","print-ctor","print-dup","print-method","print-simple","print-str","printf","println","println-str","prn","prn-str","promise","proxy","proxy-call-with-super","proxy-mappings","proxy-name","proxy-super","push-thread-bindings","pvalues","qualified-ident?","qualified-keyword?","qualified-symbol?","quot","rand","rand-int","rand-nth","random-sample","range","ratio?","rational?","rationalize","re-find","re-groups","re-matcher","re-matches","re-pattern","re-seq","read","read-line","read-string","reader-conditional","reader-conditional?","realized?","record?","reduce","reduce-kv","reduced","reduced?","reductions","ref","ref-history-count","ref-max-history","ref-min-history","ref-set","refer","refer-clojure","reify","release-pending-sends","rem","remove","remove-all-methods","remove-method","remove-ns","remove-watch","repeat","repeatedly","replace","replicate
","require","reset!","reset-meta!","reset-vals!","resolve","rest","restart-agent","resultset-seq","reverse","reversible?","rseq","rsubseq","run!","satisfies?","second","select-keys","send","send-off","send-via","seq","seq?","seqable?","seque","sequence","sequential?","set","set-agent-send-executor!","set-agent-send-off-executor!","set-error-handler!","set-error-mode!","set-validator!","set?","short","short-array","shorts","shuffle","shutdown-agents","simple-ident?","simple-keyword?","simple-symbol?","slurp","some","some->","some->>","some-fn","some?","sort","sort-by","sorted-map","sorted-map-by","sorted-set","sorted-set-by","sorted?","special-symbol?","spit","split-at","split-with","str","string?","struct","struct-map","subs","subseq","subvec","supers","swap!","swap-vals!","symbol","symbol?","sync","tagged-literal","tagged-literal?","take","take-last","take-nth","take-while","test","the-ns","thread-bound?","time","to-array","to-array-2d","trampoline","transduce","transient","tree-seq","true?","type","unchecked-add","unchecked-add-int","unchecked-byte","unchecked-char","unchecked-dec","unchecked-dec-int","unchecked-divide-int","unchecked-double","unchecked-float","unchecked-inc","unchecked-inc-int","unchecked-int","unchecked-long","unchecked-multiply","unchecked-multiply-int","unchecked-negate","unchecked-negate-int","unchecked-remainder-int","unchecked-short","unchecked-subtract","unchecked-subtract-int","underive","unquote","unquote-splicing","unreduced","unsigned-bit-shift-right","update","update-in","update-proxy","uri?","use","uuid?","val","vals","var-get","var-set","var?","vary-meta","vec","vector","vector-of","vector?","volatile!","volatile?","vreset!","vswap!","when","when-first","when-let","when-not","when-some","while","with-bindings","with-bindings*","with-in-str","with-loading-context","with-local-vars","with-meta","with-open","with-out-str","with-precision","with-redefs","with-redefs-fn","xml-seq","zero?","zipmap"],tokenizer:{root:[{include:"@whitespace"},[/@numbers/,"number"],[/@characters/,"string"],{include:"@string"},[/[()\[\]{}]/,"@brackets"],[/\/#"(?:\.|(?:")|[^"\n])*"\/g/,"regexp"],[/[#'@^`~]/,"meta"],[/@qualifiedSymbols/,{cases:{"^:.+$":"constant","@specialForms":"keyword","@coreSymbols":"keyword","@constants":"constant","@default":"identifier"}}]],whitespace:[[/[\s,]+/,"white"],[/;.*$/,"comment"],[/\(comment\b/,"comment","@comment"]],comment:[[/\(/,"comment","@push"],[/\)/,"comment","@pop"],[/[^()]/,"comment"]],string:[[/"/,"string","@multiLineString"]],multiLineString:[[/"/,"string","@popall"],[/@escapes/,"string.escape"],[/./,"string"]]}};export{e as conf,t as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/cluster.85454835.js b/srt-cloud-gateway/src/main/resources/static/assets/cluster.85454835.js new file mode 100644 index 0000000..78270f8 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/cluster.85454835.js @@ -0,0 +1 @@ +import{ad as t}from"./index.e3896b23.js";const r=e=>t.get("/data-development/cluster/"+e),l=e=>e.id?t.put("/data-development/cluster",e):t.post("/data-development/cluster",e),a=e=>t.post("/data-development/cluster/heartbeat",e),u=()=>t.get("/data-development/cluster/clear"),n=()=>t.get("/data-development/cluster/list-all");export{l as a,u as c,a as h,n as l,r as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/clusterConfiguration.e495cab8.js b/srt-cloud-gateway/src/main/resources/static/assets/clusterConfiguration.e495cab8.js new file mode 100644 index 0000000..57637a8 --- 
/dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/clusterConfiguration.e495cab8.js @@ -0,0 +1 @@ +import{ad as e}from"./index.e3896b23.js";const o=t=>e.get("/data-development/cluster-configuration/"+t),i=t=>t.id?e.put("/data-development/cluster-configuration",t):e.post("/data-development/cluster-configuration",t),r=t=>e.post("/data-development/cluster-configuration/test-connect",t),s=()=>e.get("/data-development/cluster-configuration/list-all");export{i as a,s as l,r as t,o as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/codicon.71cccbf1.ttf b/srt-cloud-gateway/src/main/resources/static/assets/codicon.71cccbf1.ttf new file mode 100644 index 0000000..5abfa74 Binary files /dev/null and b/srt-cloud-gateway/src/main/resources/static/assets/codicon.71cccbf1.ttf differ diff --git a/srt-cloud-gateway/src/main/resources/static/assets/coffee.3343db4b.js b/srt-cloud-gateway/src/main/resources/static/assets/coffee.3343db4b.js new file mode 100644 index 0000000..2633e0b --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/coffee.3343db4b.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. + * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\@\#%\^\&\*\(\)\=\$\-\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{blockComment:["###","###"],lineComment:"#"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],folding:{markers:{start:new RegExp("^\\s*#region\\b"),end:new 
RegExp("^\\s*#endregion\\b")}}},r={defaultToken:"",ignoreCase:!0,tokenPostfix:".coffee",brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"}],regEx:/\/(?!\/\/)(?:[^\/\\]|\\.)*\/[igm]*/,keywords:["and","or","is","isnt","not","on","yes","@","no","off","true","false","null","this","new","delete","typeof","in","instanceof","return","throw","break","continue","debugger","if","else","switch","for","while","do","try","catch","finally","class","extends","super","undefined","then","unless","until","loop","of","by","when"],symbols:/[=>s.get("/data-development/task/"+t),fe=t=>t.id?s.put("/data-development/task",t):s.post("/data-development/task",t),ge=t=>s.post("/data-development/task/explain-sql",t),be=t=>s.post("/data-development/task/execute-sql",t),ke=t=>s.post("/data-development/task/just-execute-sql",t),Ae=t=>s.post("/data-development/task/submit/"+t),he=()=>s.get("/data-development/task/console-log"),De=()=>s.get("/data-development/task/clear-log"),Ce=()=>s.get("/data-development/task/clear-log-without-key"),Be=()=>s.get("/data-development/task/end-log"),G=t=>s.get("/data-development/task/job-data?jobId="+t),ye=t=>s.get("/data-development/task/history/instance-error?historyId="+t),we=t=>s.get("/data-development/task/history/cluster-info?historyId="+t),xe=(t,h,E)=>s.get("/data-development/task/savepoint?id="+t+"&historyId="+h+"&type="+E),Ie=()=>s.get("/data-development/task/env-list"),P={style:{padding:"20px"}},U={style:{"margin-bottom":"10px"}},X={key:0},Y=e("b",null,"> Affected rows:\xA0",-1),Z=e("b",null,"> time:\xA0",-1),ee={key:1},te=e("b",null,"> rows:\xA0",-1),ue=e("b",null,"> time:\xA0",-1),oe={key:2},le=e("p",null,[e("b",null,"> errorMsg: ")],-1),se=e("b",null,"> time:\xA0",-1),ae={style:{"margin-bottom":"10px"}},ne=A("\u6CE8\u610F\uFF1AflinkSql\u53EA\u6709\u5728\u540C\u6B65\u6267\u884C\u6A21\u5F0F\u4E0B\uFF0C\u5F53\u6267\u884C\u914D\u7F6E\u52FE\u9009\u4E86\u9884\u89C8\u7ED3\u679C\u5E76\u4E14\u6700\u540E\u4E00\u6761\u8BED\u53E5\u4E3A\u67E5\u8BE2\u8BED\u53E5\u65F6\u624D\u4F1A\u6709\u7ED3\u679C\u663E\u793A\uFF01"),re={style:{"margin-bottom":"10px"}},ce=A("\u83B7\u53D6\u6700\u8FD1\u4E00\u6B21\u4EFB\u52A1\u6267\u884C\u8FD4\u56DE\u7684\u6700\u65B0\u6570\u636E"),ie={style:{"margin-bottom":"10px"}},de=A("\u6CA1\u6709\u83B7\u53D6\u5230jobId\uFF0C\u8BF7\u81EA\u884C\u6392\u67E5\u539F\u56E0\uFF08local\u6A21\u5F0F\u4E0B\u7B2C\u4E00\u6B21\u53EF\u80FD\u83B7\u53D6\u4E0D\u5230\uFF0C\u521D\u59CB\u672C\u5730flink\u9700\u8981\u65F6\u95F4\uFF0C\u6709\u4E9B\u8BED\u53E5\u4E0D\u9700\u8981\u542F\u52A8flink\u5B9E\u4F8B\uFF0C\u4E5F\u4F1A\u5BFC\u81F4\u65E0jobId\uFF09"),pe=e("b",null,"> rows:\xA0",-1),_e={key:0},ve=A("\u8FC7\u7A0B\u4E2D\u7684\u9519\u8BEF\u65E5\u5FD7\uFF1A"),je=$({__name:"console-result",setup(t,{expose:h}){const 
E=F(!1),m=F(""),b=F([]),l=F({rowData:[],columns:[]}),L=()=>{b.value=[],l.value={rowData:[],columns:[]}},D=F(),J=(u,d,k)=>{k&&(E.value=!0),m.value=d,d=="1"?b.value=u.results:d=="2"&&(u?(u.rowData=u.rowData?u.rowData:[],u.columns=u.columns?u.columns:[],l.value=u,u.success||x(u.error),console.log(u)):l.value={rowData:[],columns:[]})},x=u=>{if(D.value){D.value.setEditorValue(u);return}setTimeout(()=>{x(u)},500)},V=()=>{G(l.value.jobId).then(u=>{u.data.rowData||(l.value.rowData=[]),u.data.columns||(l.value.columns=[]),l.value=u.data,u.data.success||z.error("\u83B7\u53D6\u4EFB\u52A1\u6570\u636E\u8FC7\u7A0B\u4E2D\u51FA\u9519\u4E86\uFF0C\u8BF7\u524D\u5F80\u4F5C\u4E1A\u8FD0\u7EF4\u67E5\u770B\u4EFB\u52A1\u5B9E\u4F8B\u662F\u5426\u6B63\u5E38\u6267\u884C\uFF01")})},I=()=>{m.value="",b.value=[],l.value={rowData:[],columns:[]}};return h({clear:I,reset:L,init:J}),(u,d)=>{const k=c("el-button"),N=c("el-tooltip"),j=c("el-table-column"),q=c("el-table"),R=c("el-collapse-item"),H=c("el-collapse"),C=c("el-tag"),O=c("el-scrollbar");return a(),f(O,null,{default:n(()=>[e("div",P,[e("div",U,[r(N,{effect:"dark",content:"\u6E05\u7A7A\u7ED3\u679C\u9875\u9762",placement:"top-start"},{default:n(()=>[p(r(k,{icon:S(K),onClick:I},null,8,["icon"]),[[_,!!m.value&&!E.value]])]),_:1})]),p(e("div",null,[r(H,null,{default:n(()=>[(a(!0),i(w,null,y(b.value,(o,B)=>(a(),f(R,{title:B+1+". "+o.sql},{default:n(()=>[!o.ifQuery&&o.success?(a(),i("div",X,[e("p",null,[Y,e("span",null,[e("b",null,v(o.count),1)])]),e("p",null,[Z,e("span",null,[e("b",null,v(o.time)+"ms",1)])])])):g("",!0),o.ifQuery&&o.success?(a(),i("div",ee,[e("p",null,[te,e("b",null,v(o.rowData.length),1)]),r(q,{data:o.rowData},{default:n(()=>[(a(!0),i(w,null,y(o.columns,(M,Q)=>(a(),f(j,{"show-overflow-tooltip":!0,prop:M,label:M,key:Q},null,8,["prop","label"]))),128))]),_:2},1032,["data"]),e("p",null,[ue,e("span",null,[e("b",null,v(o.time)+"ms",1)])])])):g("",!0),o.success?g("",!0):(a(),i("div",oe,[le,r(T,{id:"sqlItemErrorMsgId",value:o.errorMsg,style:{height:"500px"}},null,8,["value"]),e("p",null,[se,e("span",null,[e("b",null,v(o.time)+"ms",1)])])]))]),_:2},1032,["title"]))),256))]),_:1})],512),[[_,m.value=="1"]]),p(e("div",null,[p(e("div",null,[e("div",ae,[r(C,null,{default:n(()=>[ne]),_:1})]),e("div",re,[p(r(k,{icon:S(W),type:"primary",onClick:d[0]||(d[0]=o=>V())},{default:n(()=>[ce]),_:1},8,["icon"]),[[_,!!l.value.jobId]])]),p(e("p",ie,[r(C,{type:"warning"},{default:n(()=>[de]),_:1})],512),[[_,!l.value.jobId]])],512),[[_,!E.value]]),e("p",null,[pe,e("b",null,v(l.value.rowData.length),1)]),r(q,{data:l.value.rowData,style:{"margin-bottom":"10px"}},{default:n(()=>[(a(!0),i(w,null,y(l.value.columns,(o,B)=>(a(),f(j,{"show-overflow-tooltip":!0,prop:o,label:o,key:B},null,8,["prop","label"]))),128))]),_:1},8,["data"]),!l.value.success&&!!l.value.error?(a(),i("div",_e,[r(C,{style:{"margin-bottom":"10px"},type:"danger"},{default:n(()=>[ve]),_:1}),l.value.success?g("",!0):(a(),f(T,{key:0,ref_key:"flinkJobErrorMgRef",ref:D,id:"flinkJobErrorMgId",value:l.value.error,style:{height:"500px"}},null,8,["value"]))])):g("",!0)],512),[[_,m.value=="2"]])])]),_:1})}}});export{je as _,Be as a,Ce as b,De as c,Ie as d,ge as e,be as f,he as g,Fe as h,ye as i,ke as j,we as k,xe as l,Ae as s,fe as u}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/constant.71632d98.js b/srt-cloud-gateway/src/main/resources/static/assets/constant.71632d98.js new file mode 100644 index 0000000..267e0dd --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/constant.71632d98.js @@ -0,0 
+1 @@ +import{v as e}from"./index.e3896b23.js";const s="maku-admin",t="2.0.0",i={dev:"vite",build:"vite build",preview:"vite preview",lint:"eslint --fix --ext .vue,.jsx,.ts,.tsx ."},n={"@element-plus/icons-vue":"2.0.6","@logicflow/core":"^1.1.31","@logicflow/extension":"^1.1.31","@vueuse/core":"9.1.1","@wangeditor/editor":"5.1.1","@wangeditor/editor-for-vue":"5.1.12",axios:"0.27.2",cropperjs:"1.5.12","element-plus":"^2.2.28",mitt:"3.0.0","monaco-editor":"^0.34.1",nprogress:"0.2.0",pinia:"2.0.16","print-js":"1.6.0","qrcode.vue":"3.3.3",qs:"6.10.3","sql-formatter":"^4.0.2",vue:"3.2.37","vue-drag-resize":"^2.0.3","vue-i18n":"9.1.9","vue-router":"4.0.16"},o={"@babel/types":"7.17.0","@types/node":"17.0.41","@types/nprogress":"0.2.0","@types/qs":"6.9.7","@vitejs/plugin-vue":"3.0.3","@vue/compiler-sfc":"3.2.37","@vue/eslint-config-prettier":"7.0.0","@vue/eslint-config-typescript":"10.0.0","@vue/tsconfig":"0.1.3",eslint:"8.13.0","eslint-plugin-vue":"8.6.0",prettier:"2.6.2",sass:"1.50.1",typescript:"4.7.4",vite:"3.0.8","vite-plugin-svg-icons":"2.0.1","vite-plugin-vue-setup-extend":"0.4.0","vue-tsc":"0.37.3"},r=["vue","vue3","vuejs","vite","element-plus"],p={name:s,version:t,scripts:i,dependencies:n,devDependencies:o,keywords:r},c={version:p.version,apiUrl:"/",uploadUrl:"//sys/file/upload?access_token="+e.getToken()};export{c}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/cpp.5842f29e.js b/srt-cloud-gateway/src/main/resources/static/assets/cpp.5842f29e.js new file mode 100644 index 0000000..8d51636 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/cpp.5842f29e.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. 
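
`constant.71632d98.js` is worth a second look: Vite inlined the front end's entire package.json (which shows the UI is built on the maku-admin 2.0.0 template, with monaco-editor 0.34.1 and element-plus 2.2.x) and derives three runtime constants from it. De-minified, assuming `cache` is the token utility imported as `v` from index.e3896b23.js:

```typescript
import cache from './utils/cache' // assumed path for the "v" token/cache helper

// Vite replaced an `import pkg from '../package.json'` with this inlined object
const pkg = { name: 'maku-admin', version: '2.0.0' /* dependency maps omitted here */ }

export const constant = {
  version: pkg.version,
  apiUrl: '/',
  // the upload endpoint authenticates via an access_token query parameter
  uploadUrl: '//sys/file/upload?access_token=' + cache.getToken(),
}
```
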
+ * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"[",close:"]"},{open:"{",close:"}"},{open:"(",close:")"},{open:"'",close:"'",notIn:["string","comment"]},{open:'"',close:'"',notIn:["string"]}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],folding:{markers:{start:new RegExp("^\\s*#pragma\\s+region\\b"),end:new RegExp("^\\s*#pragma\\s+endregion\\b")}}},t={defaultToken:"",tokenPostfix:".cpp",brackets:[{token:"delimiter.curly",open:"{",close:"}"},{token:"delimiter.parenthesis",open:"(",close:")"},{token:"delimiter.square",open:"[",close:"]"},{token:"delimiter.angle",open:"<",close:">"}],keywords:["abstract","amp","array","auto","bool","break","case","catch","char","class","const","constexpr","const_cast","continue","cpu","decltype","default","delegate","delete","do","double","dynamic_cast","each","else","enum","event","explicit","export","extern","false","final","finally","float","for","friend","gcnew","generic","goto","if","in","initonly","inline","int","interface","interior_ptr","internal","literal","long","mutable","namespace","new","noexcept","nullptr","__nullptr","operator","override","partial","pascal","pin_ptr","private","property","protected","public","ref","register","reinterpret_cast","restrict","return","safe_cast","sealed","short","signed","sizeof","static","static_assert","static_cast","struct","switch","template","this","thread_local","throw","tile_static","true","try","typedef","typeid","typename","union","unsigned","using","virtual","void","volatile","wchar_t","where","while","_asm","_based","_cdecl","_declspec","_fastcall","_if_exists","_if_not_exists","_inline","_multiple_inheritance","_pascal","_single_inheritance","_stdcall","_virtual_inheritance","_w64","__abstract","__alignof","__asm","__assume","__based","__box","__builtin_alignof","__cdecl","__clrcall","__declspec","__delegate","__event","__except","__fastcall","__finally","__forceinline","__gc","__hook","__identifier","__if_exists","__if_not_exists","__inline","__int128","__int16","__int32","__int64","__int8","__interface","__leave","__m128","__m128d","__m128i","__m256","__m256d","__m256i","__m64","__multiple_inheritance","__newslot","__nogc","__noop","__nounwind","__novtordisp","__pascal","__pin","__pragma","__property","__ptr32","__ptr64","__raise","__restrict","__resume","__sealed","__single_inheritance","__stdcall","__super","__thiscall","__try","__try_cast","__typeof","__unaligned","__unhook","__uuidof","__value","__virtual_inheritance","__w64","__wchar_t"],operators:["=",">","<","!","~","?",":","==","<=",">=","!=","&&","||","++","--","+","-","*","/","&","|","^","%","<<",">>",">>>","+=","-=","*=","/=","&=","|=","^=","%=","<<=",">>=",">>>="],symbols:/[=>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/\d*\d+[eE]([\-+]?\d+)?(@floatsuffix)/,"number.float"],[/\d*\.\d+([eE][\-+]?\d+)?(@floatsuffix)/,"number.float"],[/0[xX][0-9a-fA-F']*[0-9a-fA-F](@integersuffix)/,"number.hex"],[/0[0-7']*[0-7](@integersuffix)/,"number.octal"],[/0[bB][0-1']*[0-1](@integersuffix)/,"number.binary"],[/\d[\d']*\d(@integersuffix)/,"number"],[/\d(@integersuffix)/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\
.)*$/,"string.invalid"],[/"/,"string","@string"],[/'[^\\']'/,"string"],[/(')(@escapes)(')/,["string","string.escape","string"]],[/'/,"string.invalid"]],whitespace:[[/[ \t\r\n]+/,""],[/\/\*\*(?!\/)/,"comment.doc","@doccomment"],[/\/\*/,"comment","@comment"],[/\/\/.*\\$/,"comment","@linecomment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\*\//,"comment","@pop"],[/[\/*]/,"comment"]],linecomment:[[/.*[^\\]$/,"comment","@pop"],[/[^]+/,"comment"]],doccomment:[[/[^\/*]+/,"comment.doc"],[/\*\//,"comment.doc","@pop"],[/[\/*]/,"comment.doc"]],string:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,"string","@pop"]],raw:[[/(.*)(\))(?:([^ ()\\\t"]*))(\")/,{cases:{"$3==$S2":["string.raw","string.raw.end","string.raw.end",{token:"string.raw.end",next:"@pop"}],"@default":["string.raw","string.raw","string.raw","string.raw"]}}],[/.*/,"string.raw"]],annotation:[{include:"@whitespace"},[/using|alignas/,"keyword"],[/[a-zA-Z0-9_]+/,"annotation"],[/[,:]/,"delimiter"],[/[()]/,"@brackets"],[/\]\s*\]/,{token:"annotation",next:"@pop"}]],include:[[/(\s*)(<)([^<>]*)(>)/,["","keyword.directive.include.begin","string.include.identifier",{token:"keyword.directive.include.end",next:"@pop"}]],[/(\s*)(")([^"]*)(")/,["","keyword.directive.include.begin","string.include.identifier",{token:"keyword.directive.include.end",next:"@pop"}]]]}};export{e as conf,t as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/csharp.711e6ef5.js b/srt-cloud-gateway/src/main/resources/static/assets/csharp.711e6ef5.js new file mode 100644 index 0000000..e5cc27b --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/csharp.711e6ef5.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. 
+ * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var e={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\#\$\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"'",close:"'",notIn:["string","comment"]},{open:'"',close:'"',notIn:["string","comment"]}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">"},{open:"'",close:"'"},{open:'"',close:'"'}],folding:{markers:{start:new RegExp("^\\s*#region\\b"),end:new RegExp("^\\s*#endregion\\b")}}},t={defaultToken:"",tokenPostfix:".cs",brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"},{open:"<",close:">",token:"delimiter.angle"}],keywords:["extern","alias","using","bool","decimal","sbyte","byte","short","ushort","int","uint","long","ulong","char","float","double","object","dynamic","string","assembly","is","as","ref","out","this","base","new","typeof","void","checked","unchecked","default","delegate","var","const","if","else","switch","case","while","do","for","foreach","in","break","continue","goto","return","throw","try","catch","finally","lock","yield","from","let","where","join","on","equals","into","orderby","ascending","descending","select","group","by","namespace","partial","class","field","event","method","param","public","protected","internal","private","abstract","sealed","static","struct","readonly","volatile","virtual","override","params","get","set","add","remove","operator","true","false","implicit","explicit","interface","enum","null","async","await","fixed","sizeof","stackalloc","unsafe","nameof","when"],namespaceFollows:["namespace","using"],parenFollows:["if","for","while","switch","foreach","using","catch","when"],operators:["=","??","||","&&","|","^","&","==","!=","<=",">=","<<","+","-","*","/","%","!","~","++","--","+=","-=","*=","/=","%=","&=","|=","^=","<<=",">>=",">>","=>"],symbols:/[=>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/[0-9_]*\.[0-9_]+([eE][\-+]?\d+)?[fFdD]?/,"number.float"],[/0[xX][0-9a-fA-F_]+/,"number.hex"],[/0[bB][01_]+/,"number.hex"],[/[0-9_]+/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/"/,{token:"string.quote",next:"@string"}],[/\$\@"/,{token:"string.quote",next:"@litinterpstring"}],[/\@"/,{token:"string.quote",next:"@litstring"}],[/\$"/,{token:"string.quote",next:"@interpolatedstring"}],[/'[^\\']'/,"string"],[/(')(@escapes)(')/,["string","string.escape","string"]],[/'/,"string.invalid"]],qualified:[[/[a-zA-Z_][\w]*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"identifier"}}],[/\./,"delimiter"],["","","@pop"]],namespace:[{include:"@whitespace"},[/[A-Z]\w*/,"namespace"],[/[\.=]/,"delimiter"],["","","@pop"]],comment:[[/[^\/*]+/,"comment"],["\\*/","comment","@pop"],[/[\/*]/,"comment"]],string:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,{token:"string.quote",next:"@pop"}]],litstring:[[/[^"]+/,"string"],[/""/,"string.escape"],[/"/,{token:"string.quote",next:"@pop"}]],litinterpstring:[[/[^"{]+/,"string"],[/""/,"string.escape"],[/{{/,"string.escape"],[/}}/,"string.escape"],[/{/,
{token:"string.quote",next:"root.litinterpstring"}],[/"/,{token:"string.quote",next:"@pop"}]],interpolatedstring:[[/[^\\"{]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/{{/,"string.escape"],[/}}/,"string.escape"],[/{/,{token:"string.quote",next:"root.interpolatedstring"}],[/"/,{token:"string.quote",next:"@pop"}]],whitespace:[[/^[ \t\v\f]*#((r)|(load))(?=\s)/,"directive.csx"],[/^[ \t\v\f]*#\w.*$/,"namespace.cpp"],[/[ \t\v\f\r\n]+/,""],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]]}};export{e as conf,t as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/csp.1454e635.js b/srt-cloud-gateway/src/main/resources/static/assets/csp.1454e635.js new file mode 100644 index 0000000..acd8524 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/csp.1454e635.js @@ -0,0 +1,6 @@ +/*!----------------------------------------------------------------------------- + * Copyright (c) Microsoft Corporation. All rights reserved. + * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f) + * Released under the MIT license + * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt + *-----------------------------------------------------------------------------*/var t={brackets:[],autoClosingPairs:[],surroundingPairs:[]},r={keywords:[],typeKeywords:[],tokenPostfix:".csp",operators:[],symbols:/[=>",token:"delimiter.angle"}],tokenizer:{root:[{include:"@selector"}],selector:[{include:"@comments"},{include:"@import"},{include:"@strings"},["[@](keyframes|-webkit-keyframes|-moz-keyframes|-o-keyframes)",{token:"keyword",next:"@keyframedeclaration"}],["[@](page|content|font-face|-moz-document)",{token:"keyword"}],["[@](charset|namespace)",{token:"keyword",next:"@declarationbody"}],["(url-prefix)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],["(url)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],{include:"@selectorname"},["[\\*]","tag"],["[>\\+,]","delimiter"],["\\[",{token:"delimiter.bracket",next:"@selectorattribute"}],["{",{token:"delimiter.bracket",next:"@selectorbody"}]],selectorbody:[{include:"@comments"},["[*_]?@identifier@ws:(?=(\\s|\\d|[^{;}]*[;}]))","attribute.name","@rulevalue"],["}",{token:"delimiter.bracket",next:"@pop"}]],selectorname:[["(\\.|#(?=[^{])|%|(@identifier)|:)+","tag"]],selectorattribute:[{include:"@term"},["]",{token:"delimiter.bracket",next:"@pop"}]],term:[{include:"@comments"},["(url-prefix)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],["(url)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],{include:"@functioninvocation"},{include:"@numbers"},{include:"@name"},{include:"@strings"},["([<>=\\+\\-\\*\\/\\^\\|\\~,])","delimiter"],[",","delimiter"]],rulevalue:[{include:"@comments"},{include:"@strings"},{include:"@term"},["!important","keyword"],[";","delimiter","@pop"],["(?=})",{token:"",next:"@pop"}]],warndebug:[["[@](warn|debug)",{token:"keyword",next:"@declarationbody"}]],import:[["[@](import)",{token:"keyword",next:"@declarationbody"}]],urldeclaration:[{include:"@strings"},[`[^)\r 
+]+`,"string"],["\\)",{token:"delimiter.parenthesis",next:"@pop"}]],parenthizedterm:[{include:"@term"},["\\)",{token:"delimiter.parenthesis",next:"@pop"}]],declarationbody:[{include:"@term"},[";","delimiter","@pop"],["(?=})",{token:"",next:"@pop"}]],comments:[["\\/\\*","comment","@comment"],["\\/\\/+.*","comment"]],comment:[["\\*\\/","comment","@pop"],[/[^*/]+/,"comment"],[/./,"comment"]],name:[["@identifier","attribute.value"]],numbers:[["-?(\\d*\\.)?\\d+([eE][\\-+]?\\d+)?",{token:"attribute.value.number",next:"@units"}],["#[0-9a-fA-F_]+(?!\\w)","attribute.value.hex"]],units:[["(em|ex|ch|rem|fr|vmin|vmax|vw|vh|vm|cm|mm|in|px|pt|pc|deg|grad|rad|turn|s|ms|Hz|kHz|%)?","attribute.value.unit","@pop"]],keyframedeclaration:[["@identifier","attribute.value"],["{",{token:"delimiter.bracket",switchTo:"@keyframebody"}]],keyframebody:[{include:"@term"},["{",{token:"delimiter.bracket",next:"@selectorbody"}],["}",{token:"delimiter.bracket",next:"@pop"}]],functioninvocation:[["@identifier\\(",{token:"attribute.value",next:"@functionarguments"}]],functionarguments:[["\\$@identifier@ws:","attribute.name"],["[,]","delimiter"],{include:"@term"},["\\)",{token:"attribute.value",next:"@pop"}]],strings:[['~?"',{token:"string",next:"@stringenddoublequote"}],["~?'",{token:"string",next:"@stringendquote"}]],stringenddoublequote:[["\\\\.","string"],['"',{token:"string",next:"@pop"}],[/[^\\"]+/,"string"],[".","string"]],stringendquote:[["\\\\.","string"],["'",{token:"string",next:"@pop"}],[/[^\\']+/,"string"],[".","string"]]}};export{e as conf,t as language}; diff --git a/srt-cloud-gateway/src/main/resources/static/assets/css.worker.91dbdef6.js b/srt-cloud-gateway/src/main/resources/static/assets/css.worker.91dbdef6.js new file mode 100644 index 0000000..a978f88 --- /dev/null +++ b/srt-cloud-gateway/src/main/resources/static/assets/css.worker.91dbdef6.js @@ -0,0 +1,84 @@ +var ku=Object.defineProperty;var _u=(at,je,ot)=>je in at?ku(at,je,{enumerable:!0,configurable:!0,writable:!0,value:ot}):at[je]=ot;var un=(at,je,ot)=>(_u(at,typeof je!="symbol"?je+"":je,ot),ot);(function(){"use strict";class at{constructor(){this.listeners=[],this.unexpectedErrorHandler=function(t){setTimeout(()=>{throw t.stack?wt.isErrorNoTelemetry(t)?new wt(t.message+` + +`+t.stack):new Error(t.message+` + +`+t.stack):t},0)}}emit(t){this.listeners.forEach(n=>{n(t)})}onUnexpectedError(t){this.unexpectedErrorHandler(t),this.emit(t)}onUnexpectedExternalError(t){this.unexpectedErrorHandler(t)}}const je=new at;function ot(e){Qo(e)||je.onUnexpectedError(e)}function Fi(e){if(e instanceof Error){const{name:t,message:n}=e,r=e.stacktrace||e.stack;return{$isError:!0,name:t,message:n,stack:r,noTelemetry:wt.isErrorNoTelemetry(e)}}return e}const Yn="Canceled";function Qo(e){return e instanceof el?!0:e instanceof Error&&e.name===Yn&&e.message===Yn}class el extends Error{constructor(){super(Yn),this.name=this.message}}class wt extends Error{constructor(t){super(t),this.name="ErrorNoTelemetry"}static fromError(t){if(t instanceof wt)return t;const n=new wt;return n.message=t.message,n.stack=t.stack,n}static isErrorNoTelemetry(t){return t.name==="ErrorNoTelemetry"}}function tl(e){const t=this;let n=!1,r;return function(){return n||(n=!0,r=e.apply(t,arguments)),r}}var pn;(function(e){function t(k){return k&&typeof k=="object"&&typeof k[Symbol.iterator]=="function"}e.is=t;const n=Object.freeze([]);function r(){return n}e.empty=r;function*i(k){yield k}e.single=i;function s(k){return k||n}e.from=s;function 
o(k){return!k||k[Symbol.iterator]().next().done===!0}e.isEmpty=o;function l(k){return k[Symbol.iterator]().next().value}e.first=l;function c(k,P){for(const F of k)if(P(F))return!0;return!1}e.some=c;function h(k,P){for(const F of k)if(P(F))return F}e.find=h;function*d(k,P){for(const F of k)P(F)&&(yield F)}e.filter=d;function*u(k,P){let F=0;for(const _ of k)yield P(_,F++)}e.map=u;function*f(...k){for(const P of k)for(const F of P)yield F}e.concat=f;function*m(k){for(const P of k)for(const F of P)yield F}e.concatNested=m;function v(k,P,F){let _=F;for(const C of k)_=P(_,C);return _}e.reduce=v;function b(k,P){let F=0;for(const _ of k)P(_,F++)}e.forEach=b;function*w(k,P,F=k.length){for(P<0&&(P+=k.length),F<0?F+=k.length:F>k.length&&(F=k.length);P_===C){const _=k[Symbol.iterator](),C=P[Symbol.iterator]();for(;;){const z=_.next(),W=C.next();if(z.done!==W.done)return!1;if(z.done)return!0;if(!F(z.value,W.value))return!1}}e.equals=M})(pn||(pn={}));function Fu(e){return e}function Eu(e,t){}class nl extends Error{constructor(t){super(`Encountered errors while disposing of store. Errors: [${t.join(", ")}]`),this.errors=t}}function Ei(e){if(pn.is(e)){const t=[];for(const n of e)if(n)try{n.dispose()}catch(r){t.push(r)}if(t.length===1)throw t[0];if(t.length>1)throw new nl(t);return Array.isArray(e)?[]:e}else if(e)return e.dispose(),e}function rl(...e){return fn(()=>Ei(e))}function fn(e){return{dispose:tl(()=>{e()})}}class lt{constructor(){this._toDispose=new Set,this._isDisposed=!1}dispose(){this._isDisposed||(this._isDisposed=!0,this.clear())}get isDisposed(){return this._isDisposed}clear(){try{Ei(this._toDispose.values())}finally{this._toDispose.clear()}}add(t){if(!t)return t;if(t===this)throw new Error("Cannot register a disposable on itself!");return this._isDisposed?lt.DISABLE_DISPOSED_WARNING||console.warn(new Error("Trying to add a disposable to a DisposableStore that has already been disposed of. 
The added object will be leaked!").stack):this._toDispose.add(t),t}}lt.DISABLE_DISPOSED_WARNING=!1;class Kn{constructor(){this._store=new lt,this._store}dispose(){this._store.dispose()}_register(t){if(t===this)throw new Error("Cannot register a disposable on itself!");return this._store.add(t)}}Kn.None=Object.freeze({dispose(){}});class il{constructor(){this.dispose=()=>{},this.unset=()=>{},this.isset=()=>!1}set(t){let n=t;return this.unset=()=>n=void 0,this.isset=()=>n!==void 0,this.dispose=()=>{n&&(n(),n=void 0)},this}}class Q{constructor(t){this.element=t,this.next=Q.Undefined,this.prev=Q.Undefined}}Q.Undefined=new Q(void 0);class mn{constructor(){this._first=Q.Undefined,this._last=Q.Undefined,this._size=0}get size(){return this._size}isEmpty(){return this._first===Q.Undefined}clear(){let t=this._first;for(;t!==Q.Undefined;){const n=t.next;t.prev=Q.Undefined,t.next=Q.Undefined,t=n}this._first=Q.Undefined,this._last=Q.Undefined,this._size=0}unshift(t){return this._insert(t,!1)}push(t){return this._insert(t,!0)}_insert(t,n){const r=new Q(t);if(this._first===Q.Undefined)this._first=r,this._last=r;else if(n){const s=this._last;this._last=r,r.prev=s,s.next=r}else{const s=this._first;this._first=r,r.next=s,s.prev=r}this._size+=1;let i=!1;return()=>{i||(i=!0,this._remove(r))}}shift(){if(this._first!==Q.Undefined){const t=this._first.element;return this._remove(this._first),t}}pop(){if(this._last!==Q.Undefined){const t=this._last.element;return this._remove(this._last),t}}_remove(t){if(t.prev!==Q.Undefined&&t.next!==Q.Undefined){const n=t.prev;n.next=t.next,t.next.prev=n}else t.prev===Q.Undefined&&t.next===Q.Undefined?(this._first=Q.Undefined,this._last=Q.Undefined):t.next===Q.Undefined?(this._last=this._last.prev,this._last.next=Q.Undefined):t.prev===Q.Undefined&&(this._first=this._first.next,this._first.prev=Q.Undefined);this._size-=1}*[Symbol.iterator](){let t=this._first;for(;t!==Q.Undefined;)yield t.element,t=t.next}}let sl=typeof document<"u"&&document.location&&document.location.hash.indexOf("pseudo=true")>=0;function al(e,t){let n;return t.length===0?n=e:n=e.replace(/\{(\d+)\}/g,(r,i)=>{const s=i[0],o=t[s];let l=r;return typeof o=="string"?l=o:(typeof o=="number"||typeof o=="boolean"||o===void 0||o===null)&&(l=String(o)),l}),sl&&(n="\uFF3B"+n.replace(/[aouei]/g,"$&$&")+"\uFF3D"),n}function ol(e,t,...n){return al(t,n)}function Du(e){}var Zn;const Lt="en";let Qn=!1,er=!1,tr=!1,Di=!1,gn,nr=Lt,ll,Ye;const ue=typeof self=="object"?self:typeof global=="object"?global:{};let pe;typeof ue.vscode<"u"&&typeof ue.vscode.process<"u"?pe=ue.vscode.process:typeof process<"u"&&(pe=process);const cl=typeof((Zn=pe==null?void 0:pe.versions)===null||Zn===void 0?void 0:Zn.electron)=="string"&&(pe==null?void 0:pe.type)==="renderer";if(typeof navigator=="object"&&!cl)Ye=navigator.userAgent,Qn=Ye.indexOf("Windows")>=0,er=Ye.indexOf("Macintosh")>=0,(Ye.indexOf("Macintosh")>=0||Ye.indexOf("iPad")>=0||Ye.indexOf("iPhone")>=0)&&!!navigator.maxTouchPoints&&navigator.maxTouchPoints>0,tr=Ye.indexOf("Linux")>=0,Di=!0,ol({key:"ensureLoaderPluginIsLoaded",comment:["{Locked}"]},"_"),gn=Lt,nr=gn;else if(typeof pe=="object"){Qn=pe.platform==="win32",er=pe.platform==="darwin",tr=pe.platform==="linux",tr&&!!pe.env.SNAP&&pe.env.SNAP_REVISION,pe.env.CI||pe.env.BUILD_ARTIFACTSTAGINGDIRECTORY,gn=Lt,nr=Lt;const e=pe.env.VSCODE_NLS_CONFIG;if(e)try{const t=JSON.parse(e),n=t.availableLanguages["*"];gn=t.locale,nr=n||Lt,ll=t._translationsConfigFile}catch{}}else console.error("Unable to resolve platform.");const 
Tt=Qn,hl=er;Di&&ue.importScripts;const Te=Ye,dl=typeof ue.postMessage=="function"&&!ue.importScripts;(()=>{if(dl){const e=[];ue.addEventListener("message",n=>{if(n.data&&n.data.vscodeScheduleAsyncWork)for(let r=0,i=e.length;r{const r=++t;e.push({id:r,callback:n}),ue.postMessage({vscodeScheduleAsyncWork:r},"*")}}return e=>setTimeout(e)})();const ul=!!(Te&&Te.indexOf("Chrome")>=0);Te&&Te.indexOf("Firefox")>=0,!ul&&Te&&Te.indexOf("Safari")>=0,Te&&Te.indexOf("Edg/")>=0,Te&&Te.indexOf("Android")>=0;const pl=ue.performance&&typeof ue.performance.now=="function";class bn{constructor(t){this._highResolution=pl&&t,this._startTime=this._now(),this._stopTime=-1}static create(t=!0){return new bn(t)}stop(){this._stopTime=this._now()}elapsed(){return this._stopTime!==-1?this._stopTime-this._startTime:this._now()-this._startTime}_now(){return this._highResolution?ue.performance.now():Date.now()}}var rr;(function(e){e.None=()=>Kn.None;function t(F){return(_,C=null,z)=>{let W=!1,T;return T=F(L=>{if(!W)return T?T.dispose():W=!0,_.call(C,L)},null,z),W&&T.dispose(),T}}e.once=t;function n(F,_,C){return c((z,W=null,T)=>F(L=>z.call(W,_(L)),null,T),C)}e.map=n;function r(F,_,C){return c((z,W=null,T)=>F(L=>{_(L),z.call(W,L)},null,T),C)}e.forEach=r;function i(F,_,C){return c((z,W=null,T)=>F(L=>_(L)&&z.call(W,L),null,T),C)}e.filter=i;function s(F){return F}e.signal=s;function o(...F){return(_,C=null,z)=>rl(...F.map(W=>W(T=>_.call(C,T),null,z)))}e.any=o;function l(F,_,C,z){let W=C;return n(F,T=>(W=_(W,T),W),z)}e.reduce=l;function c(F,_){let C;const z={onFirstListenerAdd(){C=F(W.fire,W)},onLastListenerRemove(){C==null||C.dispose()}},W=new We(z);return _==null||_.add(W),W.event}function h(F,_,C=100,z=!1,W,T){let L,$,te,Ee=0;const ye={leakWarningThreshold:W,onFirstListenerAdd(){L=F(x=>{Ee++,$=_($,x),z&&!te&&(D.fire($),$=void 0),clearTimeout(te),te=setTimeout(()=>{const R=$;$=void 0,te=void 0,(!z||Ee>1)&&D.fire(R),Ee=0},C)})},onLastListenerRemove(){L.dispose()}},D=new We(ye);return T==null||T.add(D),D.event}e.debounce=h;function d(F,_=(z,W)=>z===W,C){let z=!0,W;return i(F,T=>{const L=z||!_(T,W);return z=!1,W=T,L},C)}e.latch=d;function u(F,_,C){return[e.filter(F,_,C),e.filter(F,z=>!_(z),C)]}e.split=u;function f(F,_=!1,C=[]){let z=C.slice(),W=F($=>{z?z.push($):L.fire($)});const T=()=>{z==null||z.forEach($=>L.fire($)),z=null},L=new We({onFirstListenerAdd(){W||(W=F($=>L.fire($)))},onFirstListenerDidAdd(){z&&(_?setTimeout(T):T())},onLastListenerRemove(){W&&W.dispose(),W=null}});return L.event}e.buffer=f;class m{constructor(_){this.event=_,this.disposables=new lt}map(_){return new m(n(this.event,_,this.disposables))}forEach(_){return new m(r(this.event,_,this.disposables))}filter(_){return new m(i(this.event,_,this.disposables))}reduce(_,C){return new m(l(this.event,_,C,this.disposables))}latch(){return new m(d(this.event,void 0,this.disposables))}debounce(_,C=100,z=!1,W){return new m(h(this.event,_,C,z,W,this.disposables))}on(_,C,z){return this.event(_,C,z)}once(_,C,z){return t(this.event)(_,C,z)}dispose(){this.disposables.dispose()}}function v(F){return new m(F)}e.chain=v;function b(F,_,C=z=>z){const z=(...$)=>L.fire(C(...$)),W=()=>F.on(_,z),T=()=>F.removeListener(_,z),L=new We({onFirstListenerAdd:W,onLastListenerRemove:T});return L.event}e.fromNodeEventEmitter=b;function w(F,_,C=z=>z){const z=(...$)=>L.fire(C(...$)),W=()=>F.addEventListener(_,z),T=()=>F.removeEventListener(_,z),L=new We({onFirstListenerAdd:W,onLastListenerRemove:T});return L.event}e.fromDOMEventEmitter=w;function N(F){return new 
Promise(_=>t(F)(_))}e.toPromise=N;function E(F,_){return _(void 0),F(C=>_(C))}e.runAndSubscribe=E;function M(F,_){let C=null;function z(T){C==null||C.dispose(),C=new lt,_(T,C)}z(void 0);const W=F(T=>z(T));return fn(()=>{W.dispose(),C==null||C.dispose()})}e.runAndSubscribeWithStore=M;class k{constructor(_,C){this.obs=_,this._counter=0,this._hasChanged=!1;const z={onFirstListenerAdd:()=>{_.addObserver(this)},onLastListenerRemove:()=>{_.removeObserver(this)}};this.emitter=new We(z),C&&C.add(this.emitter)}beginUpdate(_){this._counter++}handleChange(_,C){this._hasChanged=!0}endUpdate(_){--this._counter===0&&this._hasChanged&&(this._hasChanged=!1,this.emitter.fire(this.obs.get()))}}function P(F,_){return new k(F,_).emitter.event}e.fromObservable=P})(rr||(rr={}));class vn{constructor(t){this._listenerCount=0,this._invocationCount=0,this._elapsedOverall=0,this._name=`${t}_${vn._idPool++}`}start(t){this._stopWatch=new bn(!0),this._listenerCount=t}stop(){if(this._stopWatch){const t=this._stopWatch.elapsed();this._elapsedOverall+=t,this._invocationCount+=1,console.info(`did FIRE ${this._name}: elapsed_ms: ${t.toFixed(5)}, listener: ${this._listenerCount} (elapsed_overall: ${this._elapsedOverall.toFixed(2)}, invocations: ${this._invocationCount})`),this._stopWatch=void 0}}}vn._idPool=0;class ir{constructor(t){this.value=t}static create(){var t;return new ir((t=new Error().stack)!==null&&t!==void 0?t:"")}print(){console.warn(this.value.split(` +`).slice(2).join(` +`))}}class fl{constructor(t,n,r){this.callback=t,this.callbackThis=n,this.stack=r,this.subscription=new il}invoke(t){this.callback.call(this.callbackThis,t)}}class We{constructor(t){var n,r;this._disposed=!1,this._options=t,this._leakageMon=void 0,this._perfMon=!((n=this._options)===null||n===void 0)&&n._profName?new vn(this._options._profName):void 0,this._deliveryQueue=(r=this._options)===null||r===void 0?void 0:r.deliveryQueue}dispose(){var t,n,r,i;this._disposed||(this._disposed=!0,this._listeners&&this._listeners.clear(),(t=this._deliveryQueue)===null||t===void 0||t.clear(this),(r=(n=this._options)===null||n===void 0?void 0:n.onLastListenerRemove)===null||r===void 0||r.call(n),(i=this._leakageMon)===null||i===void 0||i.dispose())}get event(){return this._event||(this._event=(t,n,r)=>{var i,s,o;this._listeners||(this._listeners=new mn);const l=this._listeners.isEmpty();l&&((i=this._options)===null||i===void 0?void 0:i.onFirstListenerAdd)&&this._options.onFirstListenerAdd(this);let c,h;this._leakageMon&&this._listeners.size>=30&&(h=ir.create(),c=this._leakageMon.check(h,this._listeners.size+1));const d=new fl(t,n,h),u=this._listeners.push(d);l&&((s=this._options)===null||s===void 0?void 0:s.onFirstListenerDidAdd)&&this._options.onFirstListenerDidAdd(this),!((o=this._options)===null||o===void 0)&&o.onListenerDidAdd&&this._options.onListenerDidAdd(this,t,n);const f=d.subscription.set(()=>{c==null||c(),this._disposed||(u(),this._options&&this._options.onLastListenerRemove&&(this._listeners&&!this._listeners.isEmpty()||this._options.onLastListenerRemove(this)))});return r instanceof lt?r.add(f):Array.isArray(r)&&r.push(f),f}),this._event}fire(t){var n,r;if(this._listeners){this._deliveryQueue||(this._deliveryQueue=new gl);for(const i of this._listeners)this._deliveryQueue.push(this,i,t);(n=this._perfMon)===null||n===void 0||n.start(this._deliveryQueue.size),this._deliveryQueue.deliver(),(r=this._perfMon)===null||r===void 0||r.stop()}}}class ml{constructor(){this._queue=new mn}get size(){return this._queue.size}push(t,n,r){this._queue.push(new 
bl(t,n,r))}clear(t){const n=new mn;for(const r of this._queue)r.emitter!==t&&n.push(r);this._queue=n}deliver(){for(;this._queue.size>0;){const t=this._queue.shift();try{t.listener.invoke(t.event)}catch(n){ot(n)}}}}class gl extends ml{clear(t){this._queue.clear()}}class bl{constructor(t,n,r){this.emitter=t,this.listener=n,this.event=r}}function vl(e){let t=[],n=Object.getPrototypeOf(e);for(;Object.prototype!==n;)t=t.concat(Object.getOwnPropertyNames(n)),n=Object.getPrototypeOf(n);return t}function sr(e){const t=[];for(const n of vl(e))typeof e[n]=="function"&&t.push(n);return t}function wl(e,t){const n=i=>function(){const s=Array.prototype.slice.call(arguments,0);return t(i,s)},r={};for(const i of e)r[i]=n(i);return r}function yl(e,t="Unreachable"){throw new Error(t)}class xl{constructor(t){this.fn=t,this.lastCache=void 0,this.lastArgKey=void 0}get(t){const n=JSON.stringify(t);return this.lastArgKey!==n&&(this.lastArgKey=n,this.lastCache=this.fn(t)),this.lastCache}}class Ri{constructor(t){this.executor=t,this._didRun=!1}hasValue(){return this._didRun}getValue(){if(!this._didRun)try{this._value=this.executor()}catch(t){this._error=t}finally{this._didRun=!0}if(this._error)throw this._error;return this._value}get rawValue(){return this._value}}var Ai;function Sl(e){return e.replace(/[\\\{\}\*\+\?\|\^\$\.\[\]\(\)]/g,"\\$&")}function Cl(e){return e.split(/\r\n|\r|\n/)}function kl(e){for(let t=0,n=e.length;t=0;n--){const r=e.charCodeAt(n);if(r!==32&&r!==9)return n}return-1}function zi(e){return e>=65&&e<=90}function ar(e){return 55296<=e&&e<=56319}function Fl(e){return 56320<=e&&e<=57343}function El(e,t){return(e-55296<<10)+(t-56320)+65536}function Dl(e,t,n){const r=e.charCodeAt(n);if(ar(r)&&n+1JSON.parse('{"_common":[8232,32,8233,32,5760,32,8192,32,8193,32,8194,32,8195,32,8196,32,8197,32,8198,32,8200,32,8201,32,8202,32,8287,32,8199,32,8239,32,2042,95,65101,95,65102,95,65103,95,8208,45,8209,45,8210,45,65112,45,1748,45,8259,45,727,45,8722,45,10134,45,11450,45,1549,44,1643,44,8218,44,184,44,42233,44,894,59,2307,58,2691,58,1417,58,1795,58,1796,58,5868,58,65072,58,6147,58,6153,58,8282,58,1475,58,760,58,42889,58,8758,58,720,58,42237,58,451,33,11601,33,660,63,577,63,2429,63,5038,63,42731,63,119149,46,8228,46,1793,46,1794,46,42510,46,68176,46,1632,46,1776,46,42232,46,1373,96,65287,96,8219,96,8242,96,1370,96,1523,96,8175,96,65344,96,900,96,8189,96,8125,96,8127,96,8190,96,697,96,884,96,712,96,714,96,715,96,756,96,699,96,701,96,700,96,702,96,42892,96,1497,96,2036,96,2037,96,5194,96,5836,96,94033,96,94034,96,65339,91,10088,40,10098,40,12308,40,64830,40,65341,93,10089,41,10099,41,12309,41,64831,41,10100,123,119060,123,10101,125,65342,94,8270,42,1645,42,8727,42,66335,42,5941,47,8257,47,8725,47,8260,47,9585,47,10187,47,10744,47,119354,47,12755,47,12339,47,11462,47,20031,47,12035,47,65340,92,65128,92,8726,92,10189,92,10741,92,10745,92,119311,92,119355,92,12756,92,20022,92,12034,92,42872,38,708,94,710,94,5869,43,10133,43,66203,43,8249,60,10094,60,706,60,119350,60,5176,60,5810,60,5120,61,11840,61,12448,61,42239,61,8250,62,10095,62,707,62,119351,62,5171,62,94015,62,8275,126,732,126,8128,126,8764,126,65372,124,65293,45,120784,50,120794,50,120804,50,120814,50,120824,50,130034,50,42842,50,423,50,1000,50,42564,50,5311,50,42735,50,119302,51,120785,51,120795,51,120805,51,120815,51,120825,51,130035,51,42923,51,540,51,439,51,42858,51,11468,51,1248,51,94011,51,71882,51,120786,52,120796,52,120806,52,120816,52,120826,52,130036,52,5070,52,71855,52,120787,53,120797,53,120807,53,120817,53,120827,53,130037,53,444,53,71867,53,1207
88,54,120798,54,120808,54,120818,54,120828,54,130038,54,11474,54,5102,54,71893,54,119314,55,120789,55,120799,55,120809,55,120819,55,120829,55,130039,55,66770,55,71878,55,2819,56,2538,56,2666,56,125131,56,120790,56,120800,56,120810,56,120820,56,120830,56,130040,56,547,56,546,56,66330,56,2663,57,2920,57,2541,57,3437,57,120791,57,120801,57,120811,57,120821,57,120831,57,130041,57,42862,57,11466,57,71884,57,71852,57,71894,57,9082,97,65345,97,119834,97,119886,97,119938,97,119990,97,120042,97,120094,97,120146,97,120198,97,120250,97,120302,97,120354,97,120406,97,120458,97,593,97,945,97,120514,97,120572,97,120630,97,120688,97,120746,97,65313,65,119808,65,119860,65,119912,65,119964,65,120016,65,120068,65,120120,65,120172,65,120224,65,120276,65,120328,65,120380,65,120432,65,913,65,120488,65,120546,65,120604,65,120662,65,120720,65,5034,65,5573,65,42222,65,94016,65,66208,65,119835,98,119887,98,119939,98,119991,98,120043,98,120095,98,120147,98,120199,98,120251,98,120303,98,120355,98,120407,98,120459,98,388,98,5071,98,5234,98,5551,98,65314,66,8492,66,119809,66,119861,66,119913,66,120017,66,120069,66,120121,66,120173,66,120225,66,120277,66,120329,66,120381,66,120433,66,42932,66,914,66,120489,66,120547,66,120605,66,120663,66,120721,66,5108,66,5623,66,42192,66,66178,66,66209,66,66305,66,65347,99,8573,99,119836,99,119888,99,119940,99,119992,99,120044,99,120096,99,120148,99,120200,99,120252,99,120304,99,120356,99,120408,99,120460,99,7428,99,1010,99,11429,99,43951,99,66621,99,128844,67,71922,67,71913,67,65315,67,8557,67,8450,67,8493,67,119810,67,119862,67,119914,67,119966,67,120018,67,120174,67,120226,67,120278,67,120330,67,120382,67,120434,67,1017,67,11428,67,5087,67,42202,67,66210,67,66306,67,66581,67,66844,67,8574,100,8518,100,119837,100,119889,100,119941,100,119993,100,120045,100,120097,100,120149,100,120201,100,120253,100,120305,100,120357,100,120409,100,120461,100,1281,100,5095,100,5231,100,42194,100,8558,68,8517,68,119811,68,119863,68,119915,68,119967,68,120019,68,120071,68,120123,68,120175,68,120227,68,120279,68,120331,68,120383,68,120435,68,5024,68,5598,68,5610,68,42195,68,8494,101,65349,101,8495,101,8519,101,119838,101,119890,101,119942,101,120046,101,120098,101,120150,101,120202,101,120254,101,120306,101,120358,101,120410,101,120462,101,43826,101,1213,101,8959,69,65317,69,8496,69,119812,69,119864,69,119916,69,120020,69,120072,69,120124,69,120176,69,120228,69,120280,69,120332,69,120384,69,120436,69,917,69,120492,69,120550,69,120608,69,120666,69,120724,69,11577,69,5036,69,42224,69,71846,69,71854,69,66182,69,119839,102,119891,102,119943,102,119995,102,120047,102,120099,102,120151,102,120203,102,120255,102,120307,102,120359,102,120411,102,120463,102,43829,102,42905,102,383,102,7837,102,1412,102,119315,70,8497,70,119813,70,119865,70,119917,70,120021,70,120073,70,120125,70,120177,70,120229,70,120281,70,120333,70,120385,70,120437,70,42904,70,988,70,120778,70,5556,70,42205,70,71874,70,71842,70,66183,70,66213,70,66853,70,65351,103,8458,103,119840,103,119892,103,119944,103,120048,103,120100,103,120152,103,120204,103,120256,103,120308,103,120360,103,120412,103,120464,103,609,103,7555,103,397,103,1409,103,119814,71,119866,71,119918,71,119970,71,120022,71,120074,71,120126,71,120178,71,120230,71,120282,71,120334,71,120386,71,120438,71,1292,71,5056,71,5107,71,42198,71,65352,104,8462,104,119841,104,119945,104,119997,104,120049,104,120101,104,120153,104,120205,104,120257,104,120309,104,120361,104,120413,104,120465,104,1211,104,1392,104,5058,104,65320,72,8459,72,8460,72,8461,72,119815,72,119867,72,119919,72,120023,72,
120179,72,120231,72,120283,72,120335,72,120387,72,120439,72,919,72,120494,72,120552,72,120610,72,120668,72,120726,72,11406,72,5051,72,5500,72,42215,72,66255,72,731,105,9075,105,65353,105,8560,105,8505,105,8520,105,119842,105,119894,105,119946,105,119998,105,120050,105,120102,105,120154,105,120206,105,120258,105,120310,105,120362,105,120414,105,120466,105,120484,105,618,105,617,105,953,105,8126,105,890,105,120522,105,120580,105,120638,105,120696,105,120754,105,1110,105,42567,105,1231,105,43893,105,5029,105,71875,105,65354,106,8521,106,119843,106,119895,106,119947,106,119999,106,120051,106,120103,106,120155,106,120207,106,120259,106,120311,106,120363,106,120415,106,120467,106,1011,106,1112,106,65322,74,119817,74,119869,74,119921,74,119973,74,120025,74,120077,74,120129,74,120181,74,120233,74,120285,74,120337,74,120389,74,120441,74,42930,74,895,74,1032,74,5035,74,5261,74,42201,74,119844,107,119896,107,119948,107,120000,107,120052,107,120104,107,120156,107,120208,107,120260,107,120312,107,120364,107,120416,107,120468,107,8490,75,65323,75,119818,75,119870,75,119922,75,119974,75,120026,75,120078,75,120130,75,120182,75,120234,75,120286,75,120338,75,120390,75,120442,75,922,75,120497,75,120555,75,120613,75,120671,75,120729,75,11412,75,5094,75,5845,75,42199,75,66840,75,1472,108,8739,73,9213,73,65512,73,1633,108,1777,73,66336,108,125127,108,120783,73,120793,73,120803,73,120813,73,120823,73,130033,73,65321,73,8544,73,8464,73,8465,73,119816,73,119868,73,119920,73,120024,73,120128,73,120180,73,120232,73,120284,73,120336,73,120388,73,120440,73,65356,108,8572,73,8467,108,119845,108,119897,108,119949,108,120001,108,120053,108,120105,73,120157,73,120209,73,120261,73,120313,73,120365,73,120417,73,120469,73,448,73,120496,73,120554,73,120612,73,120670,73,120728,73,11410,73,1030,73,1216,73,1493,108,1503,108,1575,108,126464,108,126592,108,65166,108,65165,108,1994,108,11599,73,5825,73,42226,73,93992,73,66186,124,66313,124,119338,76,8556,76,8466,76,119819,76,119871,76,119923,76,120027,76,120079,76,120131,76,120183,76,120235,76,120287,76,120339,76,120391,76,120443,76,11472,76,5086,76,5290,76,42209,76,93974,76,71843,76,71858,76,66587,76,66854,76,65325,77,8559,77,8499,77,119820,77,119872,77,119924,77,120028,77,120080,77,120132,77,120184,77,120236,77,120288,77,120340,77,120392,77,120444,77,924,77,120499,77,120557,77,120615,77,120673,77,120731,77,1018,77,11416,77,5047,77,5616,77,5846,77,42207,77,66224,77,66321,77,119847,110,119899,110,119951,110,120003,110,120055,110,120107,110,120159,110,120211,110,120263,110,120315,110,120367,110,120419,110,120471,110,1400,110,1404,110,65326,78,8469,78,119821,78,119873,78,119925,78,119977,78,120029,78,120081,78,120185,78,120237,78,120289,78,120341,78,120393,78,120445,78,925,78,120500,78,120558,78,120616,78,120674,78,120732,78,11418,78,42208,78,66835,78,3074,111,3202,111,3330,111,3458,111,2406,111,2662,111,2790,111,3046,111,3174,111,3302,111,3430,111,3664,111,3792,111,4160,111,1637,111,1781,111,65359,111,8500,111,119848,111,119900,111,119952,111,120056,111,120108,111,120160,111,120212,111,120264,111,120316,111,120368,111,120420,111,120472,111,7439,111,7441,111,43837,111,959,111,120528,111,120586,111,120644,111,120702,111,120760,111,963,111,120532,111,120590,111,120648,111,120706,111,120764,111,11423,111,4351,111,1413,111,1505,111,1607,111,126500,111,126564,111,126596,111,65259,111,65260,111,65258,111,65257,111,1726,111,64428,111,64429,111,64427,111,64426,111,1729,111,64424,111,64425,111,64423,111,64422,111,1749,111,3360,111,4125,111,66794,111,71880,111,71895,111,66604,111,1984,79,2534,7
9,2918,79,12295,79,70864,79,71904,79,120782,79,120792,79,120802,79,120812,79,120822,79,130032,79,65327,79,119822,79,119874,79,119926,79,119978,79,120030,79,120082,79,120134,79,120186,79,120238,79,120290,79,120342,79,120394,79,120446,79,927,79,120502,79,120560,79,120618,79,120676,79,120734,79,11422,79,1365,79,11604,79,4816,79,2848,79,66754,79,42227,79,71861,79,66194,79,66219,79,66564,79,66838,79,9076,112,65360,112,119849,112,119901,112,119953,112,120005,112,120057,112,120109,112,120161,112,120213,112,120265,112,120317,112,120369,112,120421,112,120473,112,961,112,120530,112,120544,112,120588,112,120602,112,120646,112,120660,112,120704,112,120718,112,120762,112,120776,112,11427,112,65328,80,8473,80,119823,80,119875,80,119927,80,119979,80,120031,80,120083,80,120187,80,120239,80,120291,80,120343,80,120395,80,120447,80,929,80,120504,80,120562,80,120620,80,120678,80,120736,80,11426,80,5090,80,5229,80,42193,80,66197,80,119850,113,119902,113,119954,113,120006,113,120058,113,120110,113,120162,113,120214,113,120266,113,120318,113,120370,113,120422,113,120474,113,1307,113,1379,113,1382,113,8474,81,119824,81,119876,81,119928,81,119980,81,120032,81,120084,81,120188,81,120240,81,120292,81,120344,81,120396,81,120448,81,11605,81,119851,114,119903,114,119955,114,120007,114,120059,114,120111,114,120163,114,120215,114,120267,114,120319,114,120371,114,120423,114,120475,114,43847,114,43848,114,7462,114,11397,114,43905,114,119318,82,8475,82,8476,82,8477,82,119825,82,119877,82,119929,82,120033,82,120189,82,120241,82,120293,82,120345,82,120397,82,120449,82,422,82,5025,82,5074,82,66740,82,5511,82,42211,82,94005,82,65363,115,119852,115,119904,115,119956,115,120008,115,120060,115,120112,115,120164,115,120216,115,120268,115,120320,115,120372,115,120424,115,120476,115,42801,115,445,115,1109,115,43946,115,71873,115,66632,115,65331,83,119826,83,119878,83,119930,83,119982,83,120034,83,120086,83,120138,83,120190,83,120242,83,120294,83,120346,83,120398,83,120450,83,1029,83,1359,83,5077,83,5082,83,42210,83,94010,83,66198,83,66592,83,119853,116,119905,116,119957,116,120009,116,120061,116,120113,116,120165,116,120217,116,120269,116,120321,116,120373,116,120425,116,120477,116,8868,84,10201,84,128872,84,65332,84,119827,84,119879,84,119931,84,119983,84,120035,84,120087,84,120139,84,120191,84,120243,84,120295,84,120347,84,120399,84,120451,84,932,84,120507,84,120565,84,120623,84,120681,84,120739,84,11430,84,5026,84,42196,84,93962,84,71868,84,66199,84,66225,84,66325,84,119854,117,119906,117,119958,117,120010,117,120062,117,120114,117,120166,117,120218,117,120270,117,120322,117,120374,117,120426,117,120478,117,42911,117,7452,117,43854,117,43858,117,651,117,965,117,120534,117,120592,117,120650,117,120708,117,120766,117,1405,117,66806,117,71896,117,8746,85,8899,85,119828,85,119880,85,119932,85,119984,85,120036,85,120088,85,120140,85,120192,85,120244,85,120296,85,120348,85,120400,85,120452,85,1357,85,4608,85,66766,85,5196,85,42228,85,94018,85,71864,85,8744,118,8897,118,65366,118,8564,118,119855,118,119907,118,119959,118,120011,118,120063,118,120115,118,120167,118,120219,118,120271,118,120323,118,120375,118,120427,118,120479,118,7456,118,957,118,120526,118,120584,118,120642,118,120700,118,120758,118,1141,118,1496,118,71430,118,43945,118,71872,118,119309,86,1639,86,1783,86,8548,86,119829,86,119881,86,119933,86,119985,86,120037,86,120089,86,120141,86,120193,86,120245,86,120297,86,120349,86,120401,86,120453,86,1140,86,11576,86,5081,86,5167,86,42719,86,42214,86,93960,86,71840,86,66845,86,623,119,119856,119,119908,119,119960,119,120012,119,120
064,119,120116,119,120168,119,120220,119,120272,119,120324,119,120376,119,120428,119,120480,119,7457,119,1121,119,1309,119,1377,119,71434,119,71438,119,71439,119,43907,119,71919,87,71910,87,119830,87,119882,87,119934,87,119986,87,120038,87,120090,87,120142,87,120194,87,120246,87,120298,87,120350,87,120402,87,120454,87,1308,87,5043,87,5076,87,42218,87,5742,120,10539,120,10540,120,10799,120,65368,120,8569,120,119857,120,119909,120,119961,120,120013,120,120065,120,120117,120,120169,120,120221,120,120273,120,120325,120,120377,120,120429,120,120481,120,5441,120,5501,120,5741,88,9587,88,66338,88,71916,88,65336,88,8553,88,119831,88,119883,88,119935,88,119987,88,120039,88,120091,88,120143,88,120195,88,120247,88,120299,88,120351,88,120403,88,120455,88,42931,88,935,88,120510,88,120568,88,120626,88,120684,88,120742,88,11436,88,11613,88,5815,88,42219,88,66192,88,66228,88,66327,88,66855,88,611,121,7564,121,65369,121,119858,121,119910,121,119962,121,120014,121,120066,121,120118,121,120170,121,120222,121,120274,121,120326,121,120378,121,120430,121,120482,121,655,121,7935,121,43866,121,947,121,8509,121,120516,121,120574,121,120632,121,120690,121,120748,121,1199,121,4327,121,71900,121,65337,89,119832,89,119884,89,119936,89,119988,89,120040,89,120092,89,120144,89,120196,89,120248,89,120300,89,120352,89,120404,89,120456,89,933,89,978,89,120508,89,120566,89,120624,89,120682,89,120740,89,11432,89,1198,89,5033,89,5053,89,42220,89,94019,89,71844,89,66226,89,119859,122,119911,122,119963,122,120015,122,120067,122,120119,122,120171,122,120223,122,120275,122,120327,122,120379,122,120431,122,120483,122,7458,122,43923,122,71876,122,66293,90,71909,90,65338,90,8484,90,8488,90,119833,90,119885,90,119937,90,119989,90,120041,90,120197,90,120249,90,120301,90,120353,90,120405,90,120457,90,918,90,120493,90,120551,90,120609,90,120667,90,120725,90,5059,90,42204,90,71849,90,65282,34,65284,36,65285,37,65286,38,65290,42,65291,43,65294,46,65295,47,65296,48,65297,49,65298,50,65299,51,65300,52,65301,53,65302,54,65303,55,65304,56,65305,57,65308,60,65309,61,65310,62,65312,64,65316,68,65318,70,65319,71,65324,76,65329,81,65330,82,65333,85,65334,86,65335,87,65343,95,65346,98,65348,100,65350,102,65355,107,65357,109,65358,110,65361,113,65362,114,65364,116,65365,117,65367,119,65370,122,65371,123,65373,125],"_default":[160,32,8211,45,65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"cs":[65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"de":[65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"es":[8211,45,65374,126,65306,58,65281,33,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,1052,77,1086,111,1054,79,1009,1
12,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"fr":[65374,126,65306,58,65281,33,8216,96,8245,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"it":[160,32,8211,45,65374,126,65306,58,65281,33,8216,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"ja":[8211,45,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65292,44,65307,59],"ko":[8211,45,65374,126,65306,58,65281,33,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"pl":[65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"pt-BR":[65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"qps-ploc":[160,32,8211,45,65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"ru":[65374,126,65306,58,65281,33,8216,96,8217,96,8245,96,180,96,12494,47,305,105,921,73,1009,112,215,120,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"tr":[160,32,8211,45,65374,126,65306,58,65281,33,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65288,40,65289,41,65292,44,65307,59,65311,63],"zh-hans":[65374,126,65306,58,65281,33,8245,96,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65288,40,65289,41],"zh-hant":[8211,45,65374,126,180,96,12494,47,1047,51,1073,54,1072,97,1040,65,1068,98,1042,66,1089,99,1057,67,1077,101,1045,69,1053,72,305,105,1050,75,921,73,1052,77,1086,111,1054,79,1009,112,1088,
112,1056,80,1075,114,1058,84,215,120,1093,120,1061,88,1091,121,1059,89,65283,35,65307,59]}')),Re.cache=new xl(e=>{function t(h){const d=new Map;for(let u=0;u!h.startsWith("_")&&h in i);s.length===0&&(s=["_default"]);let o;for(const h of s){const d=t(i[h]);o=r(o,d)}const l=t(i._common),c=n(l,o);return new Re(c)}),Re._locales=new Ri(()=>Object.keys(Re.ambiguousCharacterData.getValue()).filter(e=>!e.startsWith("_")));class Ke{static getRawData(){return JSON.parse("[9,10,11,12,13,32,127,160,173,847,1564,4447,4448,6068,6069,6155,6156,6157,6158,7355,7356,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8203,8204,8205,8206,8207,8234,8235,8236,8237,8238,8239,8287,8288,8289,8290,8291,8292,8293,8294,8295,8296,8297,8298,8299,8300,8301,8302,8303,10240,12288,12644,65024,65025,65026,65027,65028,65029,65030,65031,65032,65033,65034,65035,65036,65037,65038,65039,65279,65440,65520,65521,65522,65523,65524,65525,65526,65527,65528,65532,78844,119155,119156,119157,119158,119159,119160,119161,119162,917504,917505,917506,917507,917508,917509,917510,917511,917512,917513,917514,917515,917516,917517,917518,917519,917520,917521,917522,917523,917524,917525,917526,917527,917528,917529,917530,917531,917532,917533,917534,917535,917536,917537,917538,917539,917540,917541,917542,917543,917544,917545,917546,917547,917548,917549,917550,917551,917552,917553,917554,917555,917556,917557,917558,917559,917560,917561,917562,917563,917564,917565,917566,917567,917568,917569,917570,917571,917572,917573,917574,917575,917576,917577,917578,917579,917580,917581,917582,917583,917584,917585,917586,917587,917588,917589,917590,917591,917592,917593,917594,917595,917596,917597,917598,917599,917600,917601,917602,917603,917604,917605,917606,917607,917608,917609,917610,917611,917612,917613,917614,917615,917616,917617,917618,917619,917620,917621,917622,917623,917624,917625,917626,917627,917628,917629,917630,917631,917760,917761,917762,917763,917764,917765,917766,917767,917768,917769,917770,917771,917772,917773,917774,917775,917776,917777,917778,917779,917780,917781,917782,917783,917784,917785,917786,917787,917788,917789,917790,917791,917792,917793,917794,917795,917796,917797,917798,917799,917800,917801,917802,917803,917804,917805,917806,917807,917808,917809,917810,917811,917812,917813,917814,917815,917816,917817,917818,917819,917820,917821,917822,917823,917824,917825,917826,917827,917828,917829,917830,917831,917832,917833,917834,917835,917836,917837,917838,917839,917840,917841,917842,917843,917844,917845,917846,917847,917848,917849,917850,917851,917852,917853,917854,917855,917856,917857,917858,917859,917860,917861,917862,917863,917864,917865,917866,917867,917868,917869,917870,917871,917872,917873,917874,917875,917876,917877,917878,917879,917880,917881,917882,917883,917884,917885,917886,917887,917888,917889,917890,917891,917892,917893,917894,917895,917896,917897,917898,917899,917900,917901,917902,917903,917904,917905,917906,917907,917908,917909,917910,917911,917912,917913,917914,917915,917916,917917,917918,917919,917920,917921,917922,917923,917924,917925,917926,917927,917928,917929,917930,917931,917932,917933,917934,917935,917936,917937,917938,917939,917940,917941,917942,917943,917944,917945,917946,917947,917948,917949,917950,917951,917952,917953,917954,917955,917956,917957,917958,917959,917960,917961,917962,917963,917964,917965,917966,917967,917968,917969,917970,917971,917972,917973,917974,917975,917976,917977,917978,917979,917980,917981,917982,917983,917984,917985,917986,917987,917988,917989,917990,917991,917992,917993,917994,917995,917996,91
7997,917998,917999]")}static getData(){return this._data||(this._data=new Set(Ke.getRawData())),this._data}static isInvisibleCharacter(t){return Ke.getData().has(t)}static get codePoints(){return Ke.getData()}}Ke._data=void 0;const zl="$initialize";class Ml{constructor(t,n,r,i){this.vsWorker=t,this.req=n,this.method=r,this.args=i,this.type=0}}class Mi{constructor(t,n,r,i){this.vsWorker=t,this.seq=n,this.res=r,this.err=i,this.type=1}}class Nl{constructor(t,n,r,i){this.vsWorker=t,this.req=n,this.eventName=r,this.arg=i,this.type=2}}class Pl{constructor(t,n,r){this.vsWorker=t,this.req=n,this.event=r,this.type=3}}class Il{constructor(t,n){this.vsWorker=t,this.req=n,this.type=4}}class Ll{constructor(t){this._workerId=-1,this._handler=t,this._lastSentReq=0,this._pendingReplies=Object.create(null),this._pendingEmitters=new Map,this._pendingEvents=new Map}setWorkerId(t){this._workerId=t}sendMessage(t,n){const r=String(++this._lastSentReq);return new Promise((i,s)=>{this._pendingReplies[r]={resolve:i,reject:s},this._send(new Ml(this._workerId,r,t,n))})}listen(t,n){let r=null;const i=new We({onFirstListenerAdd:()=>{r=String(++this._lastSentReq),this._pendingEmitters.set(r,i),this._send(new Nl(this._workerId,r,t,n))},onLastListenerRemove:()=>{this._pendingEmitters.delete(r),this._send(new Il(this._workerId,r)),r=null}});return i.event}handleMessage(t){!t||!t.vsWorker||this._workerId!==-1&&t.vsWorker!==this._workerId||this._handleMessage(t)}_handleMessage(t){switch(t.type){case 1:return this._handleReplyMessage(t);case 0:return this._handleRequestMessage(t);case 2:return this._handleSubscribeEventMessage(t);case 3:return this._handleEventMessage(t);case 4:return this._handleUnsubscribeEventMessage(t)}}_handleReplyMessage(t){if(!this._pendingReplies[t.seq]){console.warn("Got reply to unknown seq");return}const n=this._pendingReplies[t.seq];if(delete this._pendingReplies[t.seq],t.err){let r=t.err;t.err.$isError&&(r=new Error,r.name=t.err.name,r.message=t.err.message,r.stack=t.err.stack),n.reject(r);return}n.resolve(t.res)}_handleRequestMessage(t){const n=t.req;this._handler.handleMessage(t.method,t.args).then(i=>{this._send(new Mi(this._workerId,n,i,void 0))},i=>{i.detail instanceof Error&&(i.detail=Fi(i.detail)),this._send(new Mi(this._workerId,n,void 0,Fi(i)))})}_handleSubscribeEventMessage(t){const n=t.req,r=this._handler.handleEvent(t.eventName,t.arg)(i=>{this._send(new Pl(this._workerId,n,i))});this._pendingEvents.set(n,r)}_handleEventMessage(t){if(!this._pendingEmitters.has(t.req)){console.warn("Got event for unknown req");return}this._pendingEmitters.get(t.req).fire(t.event)}_handleUnsubscribeEventMessage(t){if(!this._pendingEvents.has(t.req)){console.warn("Got unsubscribe for unknown req");return}this._pendingEvents.get(t.req).dispose(),this._pendingEvents.delete(t.req)}_send(t){const n=[];if(t.type===0)for(let r=0;rfunction(){const l=Array.prototype.slice.call(arguments,0);return t(o,l)},i=o=>function(l){return n(o,l)},s={};for(const o of e){if(Pi(o)){s[o]=i(o);continue}if(Ni(o)){s[o]=n(o,void 0);continue}s[o]=r(o)}return s}class Wl{constructor(t,n){this._requestHandlerFactory=n,this._requestHandler=null,this._protocol=new Ll({sendMessage:(r,i)=>{t(r,i)},handleMessage:(r,i)=>this._handleMessage(r,i),handleEvent:(r,i)=>this._handleEvent(r,i)})}onmessage(t){this._protocol.handleMessage(t)}_handleMessage(t,n){if(t===zl)return this.initialize(n[0],n[1],n[2],n[3]);if(!this._requestHandler||typeof this._requestHandler[t]!="function")return Promise.reject(new Error("Missing requestHandler or method: 
"+t));try{return Promise.resolve(this._requestHandler[t].apply(this._requestHandler,n))}catch(r){return Promise.reject(r)}}_handleEvent(t,n){if(!this._requestHandler)throw new Error("Missing requestHandler");if(Pi(t)){const r=this._requestHandler[t].call(this._requestHandler,n);if(typeof r!="function")throw new Error(`Missing dynamic event ${t} on request handler.`);return r}if(Ni(t)){const r=this._requestHandler[t];if(typeof r!="function")throw new Error(`Missing event ${t} on request handler.`);return r}throw new Error(`Malformed event name ${t}`)}initialize(t,n,r,i){this._protocol.setWorkerId(t);const l=Tl(i,(c,h)=>this._protocol.sendMessage(c,h),(c,h)=>this._protocol.listen(c,h));return this._requestHandlerFactory?(this._requestHandler=this._requestHandlerFactory(l),Promise.resolve(sr(this._requestHandler))):(n&&(typeof n.baseUrl<"u"&&delete n.baseUrl,typeof n.paths<"u"&&typeof n.paths.vs<"u"&&delete n.paths.vs,typeof n.trustedTypesPolicy!==void 0&&delete n.trustedTypesPolicy,n.catchError=!0,ue.require.config(n)),new Promise((c,h)=>{const d=ue.require;d([r],u=>{if(this._requestHandler=u.create(l),!this._requestHandler){h(new Error("No RequestHandler!"));return}c(sr(this._requestHandler))},h)}))}}class Ze{constructor(t,n,r,i){this.originalStart=t,this.originalLength=n,this.modifiedStart=r,this.modifiedLength=i}getOriginalEnd(){return this.originalStart+this.originalLength}getModifiedEnd(){return this.modifiedStart+this.modifiedLength}}function Ii(e,t){return(t<<5)-t+e|0}function Ol(e,t){t=Ii(149417,t);for(let n=0,r=e.length;n0||this.m_modifiedCount>0)&&this.m_changes.push(new Ze(this.m_originalStart,this.m_originalCount,this.m_modifiedStart,this.m_modifiedCount)),this.m_originalCount=0,this.m_modifiedCount=0,this.m_originalStart=1073741824,this.m_modifiedStart=1073741824}AddOriginalElement(t,n){this.m_originalStart=Math.min(this.m_originalStart,t),this.m_modifiedStart=Math.min(this.m_modifiedStart,n),this.m_originalCount++}AddModifiedElement(t,n){this.m_originalStart=Math.min(this.m_originalStart,t),this.m_modifiedStart=Math.min(this.m_modifiedStart,n),this.m_modifiedCount++}getChanges(){return(this.m_originalCount>0||this.m_modifiedCount>0)&&this.MarkNextChange(),this.m_changes}getReverseChanges(){return(this.m_originalCount>0||this.m_modifiedCount>0)&&this.MarkNextChange(),this.m_changes.reverse(),this.m_changes}}class Qe{constructor(t,n,r=null){this.ContinueProcessingPredicate=r,this._originalSequence=t,this._modifiedSequence=n;const[i,s,o]=Qe._getElements(t),[l,c,h]=Qe._getElements(n);this._hasStrings=o&&h,this._originalStringElements=i,this._originalElementsOrHash=s,this._modifiedStringElements=l,this._modifiedElementsOrHash=c,this.m_forwardHistory=[],this.m_reverseHistory=[]}static _isStringArray(t){return t.length>0&&typeof t[0]=="string"}static _getElements(t){const n=t.getElements();if(Qe._isStringArray(n)){const r=new Int32Array(n.length);for(let i=0,s=n.length;i=t&&i>=r&&this.ElementsAreEqual(n,i);)n--,i--;if(t>n||r>i){let u;return r<=i?(yt.Assert(t===n+1,"originalStart should only be one more than originalEnd"),u=[new Ze(t,0,r,i-r+1)]):t<=n?(yt.Assert(r===i+1,"modifiedStart should only be one more than modifiedEnd"),u=[new Ze(t,n-t+1,r,0)]):(yt.Assert(t===n+1,"originalStart should only be one more than originalEnd"),yt.Assert(r===i+1,"modifiedStart should only be one more than modifiedEnd"),u=[]),u}const o=[0],l=[0],c=this.ComputeRecursionPoint(t,n,r,i,o,l,s),h=o[0],d=l[0];if(c!==null)return c;if(!s[0]){const u=this.ComputeDiffRecursive(t,h,r,d,s);let f=[];return s[0]?f=[new 
Ze(h+1,n-(h+1)+1,d+1,i-(d+1)+1)]:f=this.ComputeDiffRecursive(h+1,n,d+1,i,s),this.ConcatenateChanges(u,f)}return[new Ze(t,n-t+1,r,i-r+1)]}WALKTRACE(t,n,r,i,s,o,l,c,h,d,u,f,m,v,b,w,N,E){let M=null,k=null,P=new Ti,F=n,_=r,C=m[0]-w[0]-i,z=-1073741824,W=this.m_forwardHistory.length-1;do{const T=C+t;T===F||T<_&&h[T-1]=0&&(h=this.m_forwardHistory[W],t=h[0],F=1,_=h.length-1)}while(--W>=-1);if(M=P.getReverseChanges(),E[0]){let T=m[0]+1,L=w[0]+1;if(M!==null&&M.length>0){const $=M[M.length-1];T=Math.max(T,$.getOriginalEnd()),L=Math.max(L,$.getModifiedEnd())}k=[new Ze(T,f-T+1,L,b-L+1)]}else{P=new Ti,F=o,_=l,C=m[0]-w[0]-c,z=1073741824,W=N?this.m_reverseHistory.length-1:this.m_reverseHistory.length-2;do{const T=C+s;T===F||T<_&&d[T-1]>=d[T+1]?(u=d[T+1]-1,v=u-C-c,u>z&&P.MarkNextChange(),z=u+1,P.AddOriginalElement(u+1,v+1),C=T+1-s):(u=d[T-1],v=u-C-c,u>z&&P.MarkNextChange(),z=u,P.AddModifiedElement(u+1,v+1),C=T-1-s),W>=0&&(d=this.m_reverseHistory[W],s=d[0],F=1,_=d.length-1)}while(--W>=-1);k=P.getChanges()}return this.ConcatenateChanges(M,k)}ComputeRecursionPoint(t,n,r,i,s,o,l){let c=0,h=0,d=0,u=0,f=0,m=0;t--,r--,s[0]=0,o[0]=0,this.m_forwardHistory=[],this.m_reverseHistory=[];const v=n-t+(i-r),b=v+1,w=new Int32Array(b),N=new Int32Array(b),E=i-r,M=n-t,k=t-r,P=n-i,_=(M-E)%2===0;w[E]=t,N[M]=n,l[0]=!1;for(let C=1;C<=v/2+1;C++){let z=0,W=0;d=this.ClipDiagonalBound(E-C,C,E,b),u=this.ClipDiagonalBound(E+C,C,E,b);for(let L=d;L<=u;L+=2){L===d||Lz+W&&(z=c,W=h),!_&&Math.abs(L-M)<=C-1&&c>=N[L])return s[0]=c,o[0]=h,$<=N[L]&&1447>0&&C<=1447+1?this.WALKTRACE(E,d,u,k,M,f,m,P,w,N,c,n,s,h,i,o,_,l):null}const T=(z-t+(W-r)-C)/2;if(this.ContinueProcessingPredicate!==null&&!this.ContinueProcessingPredicate(z,T))return l[0]=!0,s[0]=z,o[0]=W,T>0&&1447>0&&C<=1447+1?this.WALKTRACE(E,d,u,k,M,f,m,P,w,N,c,n,s,h,i,o,_,l):(t++,r++,[new Ze(t,n-t+1,r,i-r+1)]);f=this.ClipDiagonalBound(M-C,C,M,b),m=this.ClipDiagonalBound(M+C,C,M,b);for(let L=f;L<=m;L+=2){L===f||L=N[L+1]?c=N[L+1]-1:c=N[L-1],h=c-(L-M)-P;const $=c;for(;c>t&&h>r&&this.ElementsAreEqual(c,h);)c--,h--;if(N[L]=c,_&&Math.abs(L-E)<=C&&c<=w[L])return s[0]=c,o[0]=h,$>=w[L]&&1447>0&&C<=1447+1?this.WALKTRACE(E,d,u,k,M,f,m,P,w,N,c,n,s,h,i,o,_,l):null}if(C<=1447){let L=new Int32Array(u-d+2);L[0]=E-d+1,xt.Copy2(w,d,L,1,u-d+1),this.m_forwardHistory.push(L),L=new Int32Array(m-f+2),L[0]=M-f+1,xt.Copy2(N,f,L,1,m-f+1),this.m_reverseHistory.push(L)}}return this.WALKTRACE(E,d,u,k,M,f,m,P,w,N,c,n,s,h,i,o,_,l)}PrettifyChanges(t){for(let n=0;n0,l=r.modifiedLength>0;for(;r.originalStart+r.originalLength=0;n--){const r=t[n];let i=0,s=0;if(n>0){const u=t[n-1];i=u.originalStart+u.originalLength,s=u.modifiedStart+u.modifiedLength}const o=r.originalLength>0,l=r.modifiedLength>0;let c=0,h=this._boundaryScore(r.originalStart,r.originalLength,r.modifiedStart,r.modifiedLength);for(let u=1;;u++){const f=r.originalStart-u,m=r.modifiedStart-u;if(fh&&(h=b,c=u)}r.originalStart-=c,r.modifiedStart-=c;const d=[null];if(n>0&&this.ChangesOverlap(t[n-1],t[n],d)){t[n-1]=d[0],t.splice(n,1),n++;continue}}if(this._hasStrings)for(let n=1,r=t.length;n0&&m>c&&(c=m,h=u,d=f)}return c>0?[h,d]:null}_contiguousSequenceScore(t,n,r){let i=0;for(let s=0;s=this._originalElementsOrHash.length-1?!0:this._hasStrings&&/^\s*$/.test(this._originalStringElements[t])}_OriginalRegionIsBoundary(t,n){if(this._OriginalIsBoundary(t)||this._OriginalIsBoundary(t-1))return!0;if(n>0){const r=t+n;if(this._OriginalIsBoundary(r-1)||this._OriginalIsBoundary(r))return!0}return!1}_ModifiedIsBoundary(t){return 
t<=0||t>=this._modifiedElementsOrHash.length-1?!0:this._hasStrings&&/^\s*$/.test(this._modifiedStringElements[t])}_ModifiedRegionIsBoundary(t,n){if(this._ModifiedIsBoundary(t)||this._ModifiedIsBoundary(t-1))return!0;if(n>0){const r=t+n;if(this._ModifiedIsBoundary(r-1)||this._ModifiedIsBoundary(r))return!0}return!1}_boundaryScore(t,n,r,i){const s=this._OriginalRegionIsBoundary(t,n)?1:0,o=this._ModifiedRegionIsBoundary(r,i)?1:0;return s+o}ConcatenateChanges(t,n){const r=[];if(t.length===0||n.length===0)return n.length>0?n:t;if(this.ChangesOverlap(t[t.length-1],n[0],r)){const i=new Array(t.length+n.length-1);return xt.Copy(t,0,i,0,t.length-1),i[t.length-1]=r[0],xt.Copy(n,1,i,t.length,n.length-1),i}else{const i=new Array(t.length+n.length);return xt.Copy(t,0,i,0,t.length),xt.Copy(n,0,i,t.length,n.length),i}}ChangesOverlap(t,n,r){if(yt.Assert(t.originalStart<=n.originalStart,"Left change is not less than or equal to right change"),yt.Assert(t.modifiedStart<=n.modifiedStart,"Left change is not less than or equal to right change"),t.originalStart+t.originalLength>=n.originalStart||t.modifiedStart+t.modifiedLength>=n.modifiedStart){const i=t.originalStart;let s=t.originalLength;const o=t.modifiedStart;let l=t.modifiedLength;return t.originalStart+t.originalLength>=n.originalStart&&(s=n.originalStart+n.originalLength-t.originalStart),t.modifiedStart+t.modifiedLength>=n.modifiedStart&&(l=n.modifiedStart+n.modifiedLength-t.modifiedStart),r[0]=new Ze(i,s,o,l),!0}else return r[0]=null,!1}ClipDiagonalBound(t,n,r,i){if(t>=0&&t=Bl&&e<=ql||e>=jl&&e<=$l}function wn(e,t,n,r){let i="",s=0,o=-1,l=0,c=0;for(let h=0;h<=e.length;++h){if(h2){const d=i.lastIndexOf(n);d===-1?(i="",s=0):(i=i.slice(0,d),s=i.length-1-i.lastIndexOf(n)),o=h,l=0;continue}else if(i.length!==0){i="",s=0,o=h,l=0;continue}}t&&(i+=i.length>0?`${n}..`:"..",s=2)}else i.length>0?i+=`${n}${e.slice(o+1,h)}`:i=e.slice(o+1,h),s=h-o-1;o=h,l=0}else c===et&&l!==-1?++l:l=-1}return i}function Oi(e,t){if(t===null||typeof t!="object")throw new Wi("pathObject","Object",t);const n=t.dir||t.root,r=t.base||`${t.name||""}${t.ext||""}`;return n?n===t.root?`${n}${r}`:`${n}${e}${r}`:r}const ve={resolve(...e){let t="",n="",r=!1;for(let i=e.length-1;i>=-1;i--){let s;if(i>=0){if(s=e[i],se(s,"path"),s.length===0)continue}else t.length===0?s=or():(s=Vl[`=${t}`]||or(),(s===void 0||s.slice(0,2).toLowerCase()!==t.toLowerCase()&&s.charCodeAt(2)===xe)&&(s=`${t}\\`));const o=s.length;let l=0,c="",h=!1;const d=s.charCodeAt(0);if(o===1)q(d)&&(l=1,h=!0);else if(q(d))if(h=!0,q(s.charCodeAt(1))){let u=2,f=u;for(;u2&&q(s.charCodeAt(2))&&(h=!0,l=3));if(c.length>0)if(t.length>0){if(c.toLowerCase()!==t.toLowerCase())continue}else t=c;if(r){if(t.length>0)break}else if(n=`${s.slice(l)}\\${n}`,r=h,h&&t.length>0)break}return n=wn(n,!r,"\\",q),r?`${t}\\${n}`:`${t}${n}`||"."},normalize(e){se(e,"path");const t=e.length;if(t===0)return".";let n=0,r,i=!1;const s=e.charCodeAt(0);if(t===1)return lr(s)?"\\":e;if(q(s))if(i=!0,q(e.charCodeAt(1))){let l=2,c=l;for(;l2&&q(e.charCodeAt(2))&&(i=!0,n=3));let o=n0&&q(e.charCodeAt(t-1))&&(o+="\\"),r===void 0?i?`\\${o}`:o:i?`${r}\\${o}`:`${r}${o}`},isAbsolute(e){se(e,"path");const t=e.length;if(t===0)return!1;const n=e.charCodeAt(0);return q(n)||t>2&&nt(n)&&e.charCodeAt(1)===tt&&q(e.charCodeAt(2))},join(...e){if(e.length===0)return".";let t,n;for(let s=0;s0&&(t===void 0?t=n=o:t+=`\\${o}`)}if(t===void 0)return".";let r=!0,i=0;if(typeof n=="string"&&q(n.charCodeAt(0))){++i;const 
[Minified third-party build artifact: the bundled monaco-editor web worker included in this commit's diff (path/URI helpers, Position/Range/Selection classes, line- and character-level diff computation, key-code tables, and the CSS tokenizer). The bundle is third-party Microsoft code, not project source, and is reproduced here only by its identifying license banner:]

/*!-----------------------------------------------------------------------------
 * Copyright (c) Microsoft Corporation. All rights reserved.
 * Version: 0.34.1(547870b6881302c5b4ff32173c16d06009e3588f)
 * Released under the MIT license
 * https://github.com/microsoft/monaco-editor/blob/main/LICENSE.txt
 *-----------------------------------------------------------------------------*/
r)}}(),g;(function(e){e[e.Undefined=0]="Undefined",e[e.Identifier=1]="Identifier",e[e.Stylesheet=2]="Stylesheet",e[e.Ruleset=3]="Ruleset",e[e.Selector=4]="Selector",e[e.SimpleSelector=5]="SimpleSelector",e[e.SelectorInterpolation=6]="SelectorInterpolation",e[e.SelectorCombinator=7]="SelectorCombinator",e[e.SelectorCombinatorParent=8]="SelectorCombinatorParent",e[e.SelectorCombinatorSibling=9]="SelectorCombinatorSibling",e[e.SelectorCombinatorAllSiblings=10]="SelectorCombinatorAllSiblings",e[e.SelectorCombinatorShadowPiercingDescendant=11]="SelectorCombinatorShadowPiercingDescendant",e[e.Page=12]="Page",e[e.PageBoxMarginBox=13]="PageBoxMarginBox",e[e.ClassSelector=14]="ClassSelector",e[e.IdentifierSelector=15]="IdentifierSelector",e[e.ElementNameSelector=16]="ElementNameSelector",e[e.PseudoSelector=17]="PseudoSelector",e[e.AttributeSelector=18]="AttributeSelector",e[e.Declaration=19]="Declaration",e[e.Declarations=20]="Declarations",e[e.Property=21]="Property",e[e.Expression=22]="Expression",e[e.BinaryExpression=23]="BinaryExpression",e[e.Term=24]="Term",e[e.Operator=25]="Operator",e[e.Value=26]="Value",e[e.StringLiteral=27]="StringLiteral",e[e.URILiteral=28]="URILiteral",e[e.EscapedValue=29]="EscapedValue",e[e.Function=30]="Function",e[e.NumericValue=31]="NumericValue",e[e.HexColorValue=32]="HexColorValue",e[e.RatioValue=33]="RatioValue",e[e.MixinDeclaration=34]="MixinDeclaration",e[e.MixinReference=35]="MixinReference",e[e.VariableName=36]="VariableName",e[e.VariableDeclaration=37]="VariableDeclaration",e[e.Prio=38]="Prio",e[e.Interpolation=39]="Interpolation",e[e.NestedProperties=40]="NestedProperties",e[e.ExtendsReference=41]="ExtendsReference",e[e.SelectorPlaceholder=42]="SelectorPlaceholder",e[e.Debug=43]="Debug",e[e.If=44]="If",e[e.Else=45]="Else",e[e.For=46]="For",e[e.Each=47]="Each",e[e.While=48]="While",e[e.MixinContentReference=49]="MixinContentReference",e[e.MixinContentDeclaration=50]="MixinContentDeclaration",e[e.Media=51]="Media",e[e.Keyframe=52]="Keyframe",e[e.FontFace=53]="FontFace",e[e.Import=54]="Import",e[e.Namespace=55]="Namespace",e[e.Invocation=56]="Invocation",e[e.FunctionDeclaration=57]="FunctionDeclaration",e[e.ReturnStatement=58]="ReturnStatement",e[e.MediaQuery=59]="MediaQuery",e[e.MediaCondition=60]="MediaCondition",e[e.MediaFeature=61]="MediaFeature",e[e.FunctionParameter=62]="FunctionParameter",e[e.FunctionArgument=63]="FunctionArgument",e[e.KeyframeSelector=64]="KeyframeSelector",e[e.ViewPort=65]="ViewPort",e[e.Document=66]="Document",e[e.AtApplyRule=67]="AtApplyRule",e[e.CustomPropertyDeclaration=68]="CustomPropertyDeclaration",e[e.CustomPropertySet=69]="CustomPropertySet",e[e.ListEntry=70]="ListEntry",e[e.Supports=71]="Supports",e[e.SupportsCondition=72]="SupportsCondition",e[e.NamespacePrefix=73]="NamespacePrefix",e[e.GridLine=74]="GridLine",e[e.Plugin=75]="Plugin",e[e.UnknownAtRule=76]="UnknownAtRule",e[e.Use=77]="Use",e[e.ModuleConfiguration=78]="ModuleConfiguration",e[e.Forward=79]="Forward",e[e.ForwardVisibility=80]="ForwardVisibility",e[e.Module=81]="Module",e[e.UnicodeRange=82]="UnicodeRange"})(g||(g={}));var J;(function(e){e[e.Mixin=0]="Mixin",e[e.Rule=1]="Rule",e[e.Variable=2]="Variable",e[e.Function=3]="Function",e[e.Keyframe=4]="Keyframe",e[e.Unknown=5]="Unknown",e[e.Module=6]="Module",e[e.Forward=7]="Forward",e[e.ForwardVisibility=8]="ForwardVisibility"})(J||(J={}));function Ar(e,t){var n=null;return!e||te.end?null:(e.accept(function(r){return 
r.offset===-1&&r.length===-1?!0:r.offset<=t&&r.end>=t?(n?r.length<=n.length&&(n=r):n=r,!0):!1}),n)}function zr(e,t){for(var n=Ar(e,t),r=[];n;)r.unshift(n),n=n.parent;return r}function Zc(e){var t=e.findParent(g.Declaration),n=t&&t.getValue();return n&&n.encloses(e)?t:null}var U=function(){function e(t,n,r){t===void 0&&(t=-1),n===void 0&&(n=-1),this.parent=null,this.offset=t,this.length=n,r&&(this.nodeType=r)}return Object.defineProperty(e.prototype,"end",{get:function(){return this.offset+this.length},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"type",{get:function(){return this.nodeType||g.Undefined},set:function(t){this.nodeType=t},enumerable:!1,configurable:!0}),e.prototype.getTextProvider=function(){for(var t=this;t&&!t.textProvider;)t=t.parent;return t?t.textProvider:function(){return"unknown"}},e.prototype.getText=function(){return this.getTextProvider()(this.offset,this.length)},e.prototype.matches=function(t){return this.length===t.length&&this.getTextProvider()(this.offset,this.length)===t},e.prototype.startsWith=function(t){return this.length>=t.length&&this.getTextProvider()(this.offset,t.length)===t},e.prototype.endsWith=function(t){return this.length>=t.length&&this.getTextProvider()(this.end-t.length,t.length)===t},e.prototype.accept=function(t){if(t(this)&&this.children)for(var n=0,r=this.children;n=0&&t.parent.children.splice(r,1)}t.parent=this;var i=this.children;return i||(i=this.children=[]),n!==-1?i.splice(n,0,t):i.push(t),t},e.prototype.attachTo=function(t,n){return n===void 0&&(n=-1),t&&t.adoptChild(this,n),this},e.prototype.collectIssues=function(t){this.issues&&t.push.apply(t,this.issues)},e.prototype.addIssue=function(t){this.issues||(this.issues=[]),this.issues.push(t)},e.prototype.hasIssue=function(t){return Array.isArray(this.issues)&&this.issues.some(function(n){return n.getRule()===t})},e.prototype.isErroneous=function(t){return t===void 0&&(t=!1),this.issues&&this.issues.length>0?!0:t&&Array.isArray(this.children)&&this.children.some(function(n){return n.isErroneous(!0)})},e.prototype.setNode=function(t,n,r){return r===void 0&&(r=-1),n?(n.attachTo(this,r),this[t]=n,!0):!1},e.prototype.addChild=function(t){return t?(this.children||(this.children=[]),t.attachTo(this),this.updateOffsetAndLength(t),!0):!1},e.prototype.updateOffsetAndLength=function(t){(t.offsetthis.end||this.length===-1)&&(this.length=n-this.offset)},e.prototype.hasChildren=function(){return!!this.children&&this.children.length>0},e.prototype.getChildren=function(){return this.children?this.children.slice(0):[]},e.prototype.getChild=function(t){return this.children&&t=0;r--)if(n=this.children[r],n.offset<=t)return n}return null},e.prototype.findChildAtOffset=function(t,n){var r=this.findFirstChildBeforeOffset(t);return r&&r.end>=t?n&&r.findChildAtOffset(t,!0)||r:null},e.prototype.encloses=function(t){return this.offset<=t.offset&&this.offset+this.length>=t.offset+t.length},e.prototype.getParent=function(){for(var t=this.parent;t instanceof fe;)t=t.parent;return t},e.prototype.findParent=function(t){for(var n=this;n&&n.type!==t;)n=n.parent;return n},e.prototype.findAParent=function(){for(var t=[],n=0;n{let s=i[0];return typeof t[s]<"u"?t[s]:r}),n}function Nh(e,t,...n){return Mh(t,n)}function Ie(e){return Nh}var X=Ie(),Y=function(){function e(t,n){this.id=t,this.message=n}return e}(),y={NumberExpected:new Y("css-numberexpected",X("expected.number","number expected")),ConditionExpected:new Y("css-conditionexpected",X("expected.condt","condition 
expected")),RuleOrSelectorExpected:new Y("css-ruleorselectorexpected",X("expected.ruleorselector","at-rule or selector expected")),DotExpected:new Y("css-dotexpected",X("expected.dot","dot expected")),ColonExpected:new Y("css-colonexpected",X("expected.colon","colon expected")),SemiColonExpected:new Y("css-semicolonexpected",X("expected.semicolon","semi-colon expected")),TermExpected:new Y("css-termexpected",X("expected.term","term expected")),ExpressionExpected:new Y("css-expressionexpected",X("expected.expression","expression expected")),OperatorExpected:new Y("css-operatorexpected",X("expected.operator","operator expected")),IdentifierExpected:new Y("css-identifierexpected",X("expected.ident","identifier expected")),PercentageExpected:new Y("css-percentageexpected",X("expected.percentage","percentage expected")),URIOrStringExpected:new Y("css-uriorstringexpected",X("expected.uriorstring","uri or string expected")),URIExpected:new Y("css-uriexpected",X("expected.uri","URI expected")),VariableNameExpected:new Y("css-varnameexpected",X("expected.varname","variable name expected")),VariableValueExpected:new Y("css-varvalueexpected",X("expected.varvalue","variable value expected")),PropertyValueExpected:new Y("css-propertyvalueexpected",X("expected.propvalue","property value expected")),LeftCurlyExpected:new Y("css-lcurlyexpected",X("expected.lcurly","{ expected")),RightCurlyExpected:new Y("css-rcurlyexpected",X("expected.rcurly","} expected")),LeftSquareBracketExpected:new Y("css-rbracketexpected",X("expected.lsquare","[ expected")),RightSquareBracketExpected:new Y("css-lbracketexpected",X("expected.rsquare","] expected")),LeftParenthesisExpected:new Y("css-lparentexpected",X("expected.lparen","( expected")),RightParenthesisExpected:new Y("css-rparentexpected",X("expected.rparent",") expected")),CommaExpected:new Y("css-commaexpected",X("expected.comma","comma expected")),PageDirectiveOrDeclarationExpected:new Y("css-pagedirordeclexpected",X("expected.pagedirordecl","page directive or declaraton expected")),UnknownAtRule:new Y("css-unknownatrule",X("unknown.atrule","at-rule unknown")),UnknownKeyword:new Y("css-unknownkeyword",X("unknown.keyword","unknown keyword")),SelectorExpected:new Y("css-selectorexpected",X("expected.selector","selector expected")),StringLiteralExpected:new Y("css-stringliteralexpected",X("expected.stringliteral","string literal expected")),WhitespaceExpected:new Y("css-whitespaceexpected",X("expected.whitespace","whitespace expected")),MediaQueryExpected:new Y("css-mediaqueryexpected",X("expected.mediaquery","media query expected")),IdentifierOrWildcardExpected:new Y("css-idorwildcardexpected",X("expected.idorwildcard","identifier or wildcard expected")),WildcardExpected:new Y("css-wildcardexpected",X("expected.wildcard","wildcard expected")),IdentifierOrVariableExpected:new Y("css-idorvarexpected",X("expected.idorvar","identifier or variable expected"))},Sa;(function(e){e.MIN_VALUE=-2147483648,e.MAX_VALUE=2147483647})(Sa||(Sa={}));var zn;(function(e){e.MIN_VALUE=0,e.MAX_VALUE=2147483647})(zn||(zn={}));var we;(function(e){function t(r,i){return r===Number.MAX_VALUE&&(r=zn.MAX_VALUE),i===Number.MAX_VALUE&&(i=zn.MAX_VALUE),{line:r,character:i}}e.create=t;function n(r){var i=r;return S.objectLiteral(i)&&S.uinteger(i.line)&&S.uinteger(i.character)}e.is=n})(we||(we={}));var K;(function(e){function t(r,i,s,o){if(S.uinteger(r)&&S.uinteger(i)&&S.uinteger(s)&&S.uinteger(o))return{start:we.create(r,i),end:we.create(s,o)};if(we.is(r)&&we.is(i))return{start:r,end:i};throw new 
Error("Range#create called with invalid arguments["+r+", "+i+", "+s+", "+o+"]")}e.create=t;function n(r){var i=r;return S.objectLiteral(i)&&we.is(i.start)&&we.is(i.end)}e.is=n})(K||(K={}));var Zt;(function(e){function t(r,i){return{uri:r,range:i}}e.create=t;function n(r){var i=r;return S.defined(i)&&K.is(i.range)&&(S.string(i.uri)||S.undefined(i.uri))}e.is=n})(Zt||(Zt={}));var Ca;(function(e){function t(r,i,s,o){return{targetUri:r,targetRange:i,targetSelectionRange:s,originSelectionRange:o}}e.create=t;function n(r){var i=r;return S.defined(i)&&K.is(i.targetRange)&&S.string(i.targetUri)&&(K.is(i.targetSelectionRange)||S.undefined(i.targetSelectionRange))&&(K.is(i.originSelectionRange)||S.undefined(i.originSelectionRange))}e.is=n})(Ca||(Ca={}));var Br;(function(e){function t(r,i,s,o){return{red:r,green:i,blue:s,alpha:o}}e.create=t;function n(r){var i=r;return S.numberRange(i.red,0,1)&&S.numberRange(i.green,0,1)&&S.numberRange(i.blue,0,1)&&S.numberRange(i.alpha,0,1)}e.is=n})(Br||(Br={}));var ka;(function(e){function t(r,i){return{range:r,color:i}}e.create=t;function n(r){var i=r;return K.is(i.range)&&Br.is(i.color)}e.is=n})(ka||(ka={}));var _a;(function(e){function t(r,i,s){return{label:r,textEdit:i,additionalTextEdits:s}}e.create=t;function n(r){var i=r;return S.string(i.label)&&(S.undefined(i.textEdit)||j.is(i))&&(S.undefined(i.additionalTextEdits)||S.typedArray(i.additionalTextEdits,j.is))}e.is=n})(_a||(_a={}));var Fa;(function(e){e.Comment="comment",e.Imports="imports",e.Region="region"})(Fa||(Fa={}));var Ea;(function(e){function t(r,i,s,o,l){var c={startLine:r,endLine:i};return S.defined(s)&&(c.startCharacter=s),S.defined(o)&&(c.endCharacter=o),S.defined(l)&&(c.kind=l),c}e.create=t;function n(r){var i=r;return S.uinteger(i.startLine)&&S.uinteger(i.startLine)&&(S.undefined(i.startCharacter)||S.uinteger(i.startCharacter))&&(S.undefined(i.endCharacter)||S.uinteger(i.endCharacter))&&(S.undefined(i.kind)||S.string(i.kind))}e.is=n})(Ea||(Ea={}));var jr;(function(e){function t(r,i){return{location:r,message:i}}e.create=t;function n(r){var i=r;return S.defined(i)&&Zt.is(i.location)&&S.string(i.message)}e.is=n})(jr||(jr={}));var Mn;(function(e){e.Error=1,e.Warning=2,e.Information=3,e.Hint=4})(Mn||(Mn={}));var Da;(function(e){e.Unnecessary=1,e.Deprecated=2})(Da||(Da={}));var Ra;(function(e){function t(n){var r=n;return r!=null&&S.string(r.href)}e.is=t})(Ra||(Ra={}));var Nn;(function(e){function t(r,i,s,o,l,c){var h={range:r,message:i};return S.defined(s)&&(h.severity=s),S.defined(o)&&(h.code=o),S.defined(l)&&(h.source=l),S.defined(c)&&(h.relatedInformation=c),h}e.create=t;function n(r){var i,s=r;return S.defined(s)&&K.is(s.range)&&S.string(s.message)&&(S.number(s.severity)||S.undefined(s.severity))&&(S.integer(s.code)||S.string(s.code)||S.undefined(s.code))&&(S.undefined(s.codeDescription)||S.string((i=s.codeDescription)===null||i===void 0?void 0:i.href))&&(S.string(s.source)||S.undefined(s.source))&&(S.undefined(s.relatedInformation)||S.typedArray(s.relatedInformation,jr.is))}e.is=n})(Nn||(Nn={}));var At;(function(e){function t(r,i){for(var s=[],o=2;o0&&(l.arguments=s),l}e.create=t;function n(r){var i=r;return S.defined(i)&&S.string(i.title)&&S.string(i.command)}e.is=n})(At||(At={}));var j;(function(e){function t(s,o){return{range:s,newText:o}}e.replace=t;function n(s,o){return{range:{start:s,end:s},newText:o}}e.insert=n;function r(s){return{range:s,newText:""}}e.del=r;function i(s){var o=s;return S.objectLiteral(o)&&S.string(o.newText)&&K.is(o.range)}e.is=i})(j||(j={}));var 
zt;(function(e){function t(r,i,s){var o={label:r};return i!==void 0&&(o.needsConfirmation=i),s!==void 0&&(o.description=s),o}e.create=t;function n(r){var i=r;return i!==void 0&&S.objectLiteral(i)&&S.string(i.label)&&(S.boolean(i.needsConfirmation)||i.needsConfirmation===void 0)&&(S.string(i.description)||i.description===void 0)}e.is=n})(zt||(zt={}));var ge;(function(e){function t(n){var r=n;return typeof r=="string"}e.is=t})(ge||(ge={}));var rt;(function(e){function t(s,o,l){return{range:s,newText:o,annotationId:l}}e.replace=t;function n(s,o,l){return{range:{start:s,end:s},newText:o,annotationId:l}}e.insert=n;function r(s,o){return{range:s,newText:"",annotationId:o}}e.del=r;function i(s){var o=s;return j.is(o)&&(zt.is(o.annotationId)||ge.is(o.annotationId))}e.is=i})(rt||(rt={}));var Qt;(function(e){function t(r,i){return{textDocument:r,edits:i}}e.create=t;function n(r){var i=r;return S.defined(i)&&In.is(i.textDocument)&&Array.isArray(i.edits)}e.is=n})(Qt||(Qt={}));var en;(function(e){function t(r,i,s){var o={kind:"create",uri:r};return i!==void 0&&(i.overwrite!==void 0||i.ignoreIfExists!==void 0)&&(o.options=i),s!==void 0&&(o.annotationId=s),o}e.create=t;function n(r){var i=r;return i&&i.kind==="create"&&S.string(i.uri)&&(i.options===void 0||(i.options.overwrite===void 0||S.boolean(i.options.overwrite))&&(i.options.ignoreIfExists===void 0||S.boolean(i.options.ignoreIfExists)))&&(i.annotationId===void 0||ge.is(i.annotationId))}e.is=n})(en||(en={}));var tn;(function(e){function t(r,i,s,o){var l={kind:"rename",oldUri:r,newUri:i};return s!==void 0&&(s.overwrite!==void 0||s.ignoreIfExists!==void 0)&&(l.options=s),o!==void 0&&(l.annotationId=o),l}e.create=t;function n(r){var i=r;return i&&i.kind==="rename"&&S.string(i.oldUri)&&S.string(i.newUri)&&(i.options===void 0||(i.options.overwrite===void 0||S.boolean(i.options.overwrite))&&(i.options.ignoreIfExists===void 0||S.boolean(i.options.ignoreIfExists)))&&(i.annotationId===void 0||ge.is(i.annotationId))}e.is=n})(tn||(tn={}));var nn;(function(e){function t(r,i,s){var o={kind:"delete",uri:r};return i!==void 0&&(i.recursive!==void 0||i.ignoreIfNotExists!==void 0)&&(o.options=i),s!==void 0&&(o.annotationId=s),o}e.create=t;function n(r){var i=r;return i&&i.kind==="delete"&&S.string(i.uri)&&(i.options===void 0||(i.options.recursive===void 0||S.boolean(i.options.recursive))&&(i.options.ignoreIfNotExists===void 0||S.boolean(i.options.ignoreIfNotExists)))&&(i.annotationId===void 0||ge.is(i.annotationId))}e.is=n})(nn||(nn={}));var qr;(function(e){function t(n){var r=n;return r&&(r.changes!==void 0||r.documentChanges!==void 0)&&(r.documentChanges===void 0||r.documentChanges.every(function(i){return S.string(i.kind)?en.is(i)||tn.is(i)||nn.is(i):Qt.is(i)}))}e.is=t})(qr||(qr={}));var Pn=function(){function e(t,n){this.edits=t,this.changeAnnotations=n}return e.prototype.insert=function(t,n,r){var i,s;if(r===void 0?i=j.insert(t,n):ge.is(r)?(s=r,i=rt.insert(t,n,r)):(this.assertChangeAnnotations(this.changeAnnotations),s=this.changeAnnotations.manage(r),i=rt.insert(t,n,s)),this.edits.push(i),s!==void 0)return s},e.prototype.replace=function(t,n,r){var i,s;if(r===void 0?i=j.replace(t,n):ge.is(r)?(s=r,i=rt.replace(t,n,r)):(this.assertChangeAnnotations(this.changeAnnotations),s=this.changeAnnotations.manage(r),i=rt.replace(t,n,s)),this.edits.push(i),s!==void 0)return s},e.prototype.delete=function(t,n){var r,i;if(n===void 
0?r=j.del(t):ge.is(n)?(i=n,r=rt.del(t,n)):(this.assertChangeAnnotations(this.changeAnnotations),i=this.changeAnnotations.manage(n),r=rt.del(t,i)),this.edits.push(r),i!==void 0)return i},e.prototype.add=function(t){this.edits.push(t)},e.prototype.all=function(){return this.edits},e.prototype.clear=function(){this.edits.splice(0,this.edits.length)},e.prototype.assertChangeAnnotations=function(t){if(t===void 0)throw new Error("Text edit change is not configured to manage change annotations.")},e}(),Aa=function(){function e(t){this._annotations=t===void 0?Object.create(null):t,this._counter=0,this._size=0}return e.prototype.all=function(){return this._annotations},Object.defineProperty(e.prototype,"size",{get:function(){return this._size},enumerable:!1,configurable:!0}),e.prototype.manage=function(t,n){var r;if(ge.is(t)?r=t:(r=this.nextId(),n=t),this._annotations[r]!==void 0)throw new Error("Id "+r+" is already in use.");if(n===void 0)throw new Error("No annotation provided for id "+r);return this._annotations[r]=n,this._size++,r},e.prototype.nextId=function(){return this._counter++,this._counter.toString()},e}();(function(){function e(t){var n=this;this._textEditChanges=Object.create(null),t!==void 0?(this._workspaceEdit=t,t.documentChanges?(this._changeAnnotations=new Aa(t.changeAnnotations),t.changeAnnotations=this._changeAnnotations.all(),t.documentChanges.forEach(function(r){if(Qt.is(r)){var i=new Pn(r.edits,n._changeAnnotations);n._textEditChanges[r.textDocument.uri]=i}})):t.changes&&Object.keys(t.changes).forEach(function(r){var i=new Pn(t.changes[r]);n._textEditChanges[r]=i})):this._workspaceEdit={}}return Object.defineProperty(e.prototype,"edit",{get:function(){return this.initDocumentChanges(),this._changeAnnotations!==void 0&&(this._changeAnnotations.size===0?this._workspaceEdit.changeAnnotations=void 0:this._workspaceEdit.changeAnnotations=this._changeAnnotations.all()),this._workspaceEdit},enumerable:!1,configurable:!0}),e.prototype.getTextEditChange=function(t){if(In.is(t)){if(this.initDocumentChanges(),this._workspaceEdit.documentChanges===void 0)throw new Error("Workspace edit is not configured for document changes.");var n={uri:t.uri,version:t.version},r=this._textEditChanges[n.uri];if(!r){var i=[],s={textDocument:n,edits:i};this._workspaceEdit.documentChanges.push(s),r=new Pn(i,this._changeAnnotations),this._textEditChanges[n.uri]=r}return r}else{if(this.initChanges(),this._workspaceEdit.changes===void 0)throw new Error("Workspace edit is not configured for normal text edit changes.");var r=this._textEditChanges[t];if(!r){var i=[];this._workspaceEdit.changes[t]=i,r=new Pn(i),this._textEditChanges[t]=r}return r}},e.prototype.initDocumentChanges=function(){this._workspaceEdit.documentChanges===void 0&&this._workspaceEdit.changes===void 0&&(this._changeAnnotations=new Aa,this._workspaceEdit.documentChanges=[],this._workspaceEdit.changeAnnotations=this._changeAnnotations.all())},e.prototype.initChanges=function(){this._workspaceEdit.documentChanges===void 0&&this._workspaceEdit.changes===void 0&&(this._workspaceEdit.changes=Object.create(null))},e.prototype.createFile=function(t,n,r){if(this.initDocumentChanges(),this._workspaceEdit.documentChanges===void 0)throw new Error("Workspace edit is not configured for document changes.");var i;zt.is(n)||ge.is(n)?i=n:r=n;var s,o;if(i===void 0?s=en.create(t,r):(o=ge.is(i)?i:this._changeAnnotations.manage(i),s=en.create(t,r,o)),this._workspaceEdit.documentChanges.push(s),o!==void 0)return 
o},e.prototype.renameFile=function(t,n,r,i){if(this.initDocumentChanges(),this._workspaceEdit.documentChanges===void 0)throw new Error("Workspace edit is not configured for document changes.");var s;zt.is(r)||ge.is(r)?s=r:i=r;var o,l;if(s===void 0?o=tn.create(t,n,i):(l=ge.is(s)?s:this._changeAnnotations.manage(s),o=tn.create(t,n,i,l)),this._workspaceEdit.documentChanges.push(o),l!==void 0)return l},e.prototype.deleteFile=function(t,n,r){if(this.initDocumentChanges(),this._workspaceEdit.documentChanges===void 0)throw new Error("Workspace edit is not configured for document changes.");var i;zt.is(n)||ge.is(n)?i=n:r=n;var s,o;if(i===void 0?s=nn.create(t,r):(o=ge.is(i)?i:this._changeAnnotations.manage(i),s=nn.create(t,r,o)),this._workspaceEdit.documentChanges.push(s),o!==void 0)return o},e})();var za;(function(e){function t(r){return{uri:r}}e.create=t;function n(r){var i=r;return S.defined(i)&&S.string(i.uri)}e.is=n})(za||(za={}));var $r;(function(e){function t(r,i){return{uri:r,version:i}}e.create=t;function n(r){var i=r;return S.defined(i)&&S.string(i.uri)&&S.integer(i.version)}e.is=n})($r||($r={}));var In;(function(e){function t(r,i){return{uri:r,version:i}}e.create=t;function n(r){var i=r;return S.defined(i)&&S.string(i.uri)&&(i.version===null||S.integer(i.version))}e.is=n})(In||(In={}));var Ma;(function(e){function t(r,i,s,o){return{uri:r,languageId:i,version:s,text:o}}e.create=t;function n(r){var i=r;return S.defined(i)&&S.string(i.uri)&&S.string(i.languageId)&&S.integer(i.version)&&S.string(i.text)}e.is=n})(Ma||(Ma={}));var ze;(function(e){e.PlainText="plaintext",e.Markdown="markdown"})(ze||(ze={})),function(e){function t(n){var r=n;return r===e.PlainText||r===e.Markdown}e.is=t}(ze||(ze={}));var Hr;(function(e){function t(n){var r=n;return S.objectLiteral(n)&&ze.is(r.kind)&&S.string(r.value)}e.is=t})(Hr||(Hr={}));var B;(function(e){e.Text=1,e.Method=2,e.Function=3,e.Constructor=4,e.Field=5,e.Variable=6,e.Class=7,e.Interface=8,e.Module=9,e.Property=10,e.Unit=11,e.Value=12,e.Enum=13,e.Keyword=14,e.Snippet=15,e.Color=16,e.File=17,e.Reference=18,e.Folder=19,e.EnumMember=20,e.Constant=21,e.Struct=22,e.Event=23,e.Operator=24,e.TypeParameter=25})(B||(B={}));var Fe;(function(e){e.PlainText=1,e.Snippet=2})(Fe||(Fe={}));var ft;(function(e){e.Deprecated=1})(ft||(ft={}));var Na;(function(e){function t(r,i,s){return{newText:r,insert:i,replace:s}}e.create=t;function n(r){var i=r;return i&&S.string(i.newText)&&K.is(i.insert)&&K.is(i.replace)}e.is=n})(Na||(Na={}));var Pa;(function(e){e.asIs=1,e.adjustIndentation=2})(Pa||(Pa={}));var Ia;(function(e){function t(n){return{label:n}}e.create=t})(Ia||(Ia={}));var La;(function(e){function t(n,r){return{items:n||[],isIncomplete:!!r}}e.create=t})(La||(La={}));var Ln;(function(e){function t(r){return r.replace(/[\\`*_{}[\]()#+\-.!]/g,"\\$&")}e.fromPlainText=t;function n(r){var i=r;return S.string(i)||S.objectLiteral(i)&&S.string(i.language)&&S.string(i.value)}e.is=n})(Ln||(Ln={}));var Ta;(function(e){function t(n){var r=n;return!!r&&S.objectLiteral(r)&&(Hr.is(r.contents)||Ln.is(r.contents)||S.typedArray(r.contents,Ln.is))&&(n.range===void 0||K.is(n.range))}e.is=t})(Ta||(Ta={}));var Wa;(function(e){function t(n,r){return r?{label:n,documentation:r}:{label:n}}e.create=t})(Wa||(Wa={}));var Oa;(function(e){function t(n,r){for(var i=[],s=2;s=0;d--){var u=c[d],f=s.offsetAt(u.range.start),m=s.offsetAt(u.range.end);if(m<=h)l=l.substring(0,f)+u.newText+l.substring(m,l.length);else throw new Error("Overlapping edit");h=f}return l}e.applyEdits=r;function 
i(s,o){if(s.length<=1)return s;var l=s.length/2|0,c=s.slice(0,l),h=s.slice(l);i(c,o),i(h,o);for(var d=0,u=0,f=0;d0&&t.push(n.length),this._lineOffsets=t}return this._lineOffsets},e.prototype.positionAt=function(t){t=Math.max(Math.min(t,this._content.length),0);var n=this.getLineOffsets(),r=0,i=n.length;if(i===0)return we.create(0,t);for(;rt?i=s:r=s+1}var o=r-1;return we.create(o,t-n[o])},e.prototype.offsetAt=function(t){var n=this.getLineOffsets();if(t.line>=n.length)return this._content.length;if(t.line<0)return 0;var r=n[t.line],i=t.line+1"u"}e.undefined=r;function i(m){return m===!0||m===!1}e.boolean=i;function s(m){return t.call(m)==="[object String]"}e.string=s;function o(m){return t.call(m)==="[object Number]"}e.number=o;function l(m,v,b){return t.call(m)==="[object Number]"&&v<=m&&m<=b}e.numberRange=l;function c(m){return t.call(m)==="[object Number]"&&-2147483648<=m&&m<=2147483647}e.integer=c;function h(m){return t.call(m)==="[object Number]"&&0<=m&&m<=2147483647}e.uinteger=h;function d(m){return t.call(m)==="[object Function]"}e.func=d;function u(m){return m!==null&&typeof m=="object"}e.objectLiteral=u;function f(m,v){return Array.isArray(m)&&m.every(v)}e.typedArray=f})(S||(S={}));var Wn=class{constructor(e,t,n,r){this._uri=e,this._languageId=t,this._version=n,this._content=r,this._lineOffsets=void 0}get uri(){return this._uri}get languageId(){return this._languageId}get version(){return this._version}getText(e){if(e){const t=this.offsetAt(e.start),n=this.offsetAt(e.end);return this._content.substring(t,n)}return this._content}update(e,t){for(let n of e)if(Wn.isIncremental(n)){const r=Ya(n.range),i=this.offsetAt(r.start),s=this.offsetAt(r.end);this._content=this._content.substring(0,i)+n.text+this._content.substring(s,this._content.length);const o=Math.max(r.start.line,0),l=Math.max(r.end.line,0);let c=this._lineOffsets;const h=Xa(n.text,!1,i);if(l-o===h.length)for(let u=0,f=h.length;ue?r=s:n=s+1}let i=n-1;return{line:i,character:e-t[i]}}offsetAt(e){let t=this.getLineOffsets();if(e.line>=t.length)return this._content.length;if(e.line<0)return 0;let n=t[e.line],r=e.line+1{let f=d.range.start.line-u.range.start.line;return f===0?d.range.start.character-u.range.start.character:f}),c=0;const h=[];for(const d of l){let u=i.offsetAt(d.range.start);if(uc&&h.push(o.substring(c,u)),d.newText.length&&h.push(d.newText),c=i.offsetAt(d.range.end)}return h.push(o.substr(c)),h.join("")}e.applyEdits=r})(Xr||(Xr={}));function Yr(e,t){if(e.length<=1)return e;const n=e.length/2|0,r=e.slice(0,n),i=e.slice(n);Yr(r,t),Yr(i,t);let s=0,o=0,l=0;for(;sn.line||t.line===n.line&&t.character>n.character?{start:n,end:t}:e}function Ih(e){const t=Ya(e.range);return t!==e.range?{newText:e.newText,range:t}:e}var Ka;(function(e){e.LATEST={textDocument:{completion:{completionItem:{documentationFormat:[ze.Markdown,ze.PlainText]}},hover:{contentFormat:[ze.Markdown,ze.PlainText]}}}})(Ka||(Ka={}));var rn;(function(e){e[e.Unknown=0]="Unknown",e[e.File=1]="File",e[e.Directory=2]="Directory",e[e.SymbolicLink=64]="SymbolicLink"})(rn||(rn={}));var Za={E:"Edge",FF:"Firefox",S:"Safari",C:"Chrome",IE:"IE",O:"Opera"};function Qa(e){switch(e){case"experimental":return`\u26A0\uFE0F Property is experimental. Be cautious when using it.\uFE0F + +`;case"nonstandard":return`\u{1F6A8}\uFE0F Property is nonstandard. Avoid using it. + +`;case"obsolete":return`\u{1F6A8}\uFE0F\uFE0F\uFE0F Property is obsolete. Avoid using it. 
+ +`;default:return""}}function it(e,t,n){var r;if(t?r={kind:"markdown",value:Th(e,n)}:r={kind:"plaintext",value:Lh(e,n)},r.value!=="")return r}function On(e){return e=e.replace(/[\\`*_{}[\]()#+\-.!]/g,"\\$&"),e.replace(//g,">")}function Lh(e,t){if(!e.description||e.description==="")return"";if(typeof e.description!="string")return e.description.value;var n="";if((t==null?void 0:t.documentation)!==!1){e.status&&(n+=Qa(e.status)),n+=e.description;var r=eo(e.browsers);r&&(n+=` +(`+r+")"),"syntax"in e&&(n+=` + +Syntax: `.concat(e.syntax))}return e.references&&e.references.length>0&&(t==null?void 0:t.references)!==!1&&(n.length>0&&(n+=` + +`),n+=e.references.map(function(i){return"".concat(i.name,": ").concat(i.url)}).join(" | ")),n}function Th(e,t){if(!e.description||e.description==="")return"";var n="";if((t==null?void 0:t.documentation)!==!1){e.status&&(n+=Qa(e.status)),typeof e.description=="string"?n+=On(e.description):n+=e.description.kind===ze.Markdown?e.description.value:On(e.description.value);var r=eo(e.browsers);r&&(n+=` + +(`+On(r)+")"),"syntax"in e&&e.syntax&&(n+=` + +Syntax: `.concat(On(e.syntax)))}return e.references&&e.references.length>0&&(t==null?void 0:t.references)!==!1&&(n.length>0&&(n+=` + +`),n+=e.references.map(function(i){return"[".concat(i.name,"](").concat(i.url,")")}).join(" | ")),n}function eo(e){return e===void 0&&(e=[]),e.length===0?null:e.map(function(t){var n="",r=t.match(/([A-Z]+)(\d+)?/),i=r[1],s=r[2];return i in Za&&(n+=Za[i]),s&&(n+=" "+s),n}).join(", ")}var sn=Ie(),Wh=[{func:"rgb($red, $green, $blue)",desc:sn("css.builtin.rgb","Creates a Color from red, green, and blue values.")},{func:"rgba($red, $green, $blue, $alpha)",desc:sn("css.builtin.rgba","Creates a Color from red, green, blue, and alpha values.")},{func:"hsl($hue, $saturation, $lightness)",desc:sn("css.builtin.hsl","Creates a Color from hue, saturation, and lightness values.")},{func:"hsla($hue, $saturation, $lightness, $alpha)",desc:sn("css.builtin.hsla","Creates a Color from hue, saturation, lightness, and alpha values.")},{func:"hwb($hue $white $black)",desc:sn("css.builtin.hwb","Creates a Color from hue, white and 
black.")}],Un={aliceblue:"#f0f8ff",antiquewhite:"#faebd7",aqua:"#00ffff",aquamarine:"#7fffd4",azure:"#f0ffff",beige:"#f5f5dc",bisque:"#ffe4c4",black:"#000000",blanchedalmond:"#ffebcd",blue:"#0000ff",blueviolet:"#8a2be2",brown:"#a52a2a",burlywood:"#deb887",cadetblue:"#5f9ea0",chartreuse:"#7fff00",chocolate:"#d2691e",coral:"#ff7f50",cornflowerblue:"#6495ed",cornsilk:"#fff8dc",crimson:"#dc143c",cyan:"#00ffff",darkblue:"#00008b",darkcyan:"#008b8b",darkgoldenrod:"#b8860b",darkgray:"#a9a9a9",darkgrey:"#a9a9a9",darkgreen:"#006400",darkkhaki:"#bdb76b",darkmagenta:"#8b008b",darkolivegreen:"#556b2f",darkorange:"#ff8c00",darkorchid:"#9932cc",darkred:"#8b0000",darksalmon:"#e9967a",darkseagreen:"#8fbc8f",darkslateblue:"#483d8b",darkslategray:"#2f4f4f",darkslategrey:"#2f4f4f",darkturquoise:"#00ced1",darkviolet:"#9400d3",deeppink:"#ff1493",deepskyblue:"#00bfff",dimgray:"#696969",dimgrey:"#696969",dodgerblue:"#1e90ff",firebrick:"#b22222",floralwhite:"#fffaf0",forestgreen:"#228b22",fuchsia:"#ff00ff",gainsboro:"#dcdcdc",ghostwhite:"#f8f8ff",gold:"#ffd700",goldenrod:"#daa520",gray:"#808080",grey:"#808080",green:"#008000",greenyellow:"#adff2f",honeydew:"#f0fff0",hotpink:"#ff69b4",indianred:"#cd5c5c",indigo:"#4b0082",ivory:"#fffff0",khaki:"#f0e68c",lavender:"#e6e6fa",lavenderblush:"#fff0f5",lawngreen:"#7cfc00",lemonchiffon:"#fffacd",lightblue:"#add8e6",lightcoral:"#f08080",lightcyan:"#e0ffff",lightgoldenrodyellow:"#fafad2",lightgray:"#d3d3d3",lightgrey:"#d3d3d3",lightgreen:"#90ee90",lightpink:"#ffb6c1",lightsalmon:"#ffa07a",lightseagreen:"#20b2aa",lightskyblue:"#87cefa",lightslategray:"#778899",lightslategrey:"#778899",lightsteelblue:"#b0c4de",lightyellow:"#ffffe0",lime:"#00ff00",limegreen:"#32cd32",linen:"#faf0e6",magenta:"#ff00ff",maroon:"#800000",mediumaquamarine:"#66cdaa",mediumblue:"#0000cd",mediumorchid:"#ba55d3",mediumpurple:"#9370d8",mediumseagreen:"#3cb371",mediumslateblue:"#7b68ee",mediumspringgreen:"#00fa9a",mediumturquoise:"#48d1cc",mediumvioletred:"#c71585",midnightblue:"#191970",mintcream:"#f5fffa",mistyrose:"#ffe4e1",moccasin:"#ffe4b5",navajowhite:"#ffdead",navy:"#000080",oldlace:"#fdf5e6",olive:"#808000",olivedrab:"#6b8e23",orange:"#ffa500",orangered:"#ff4500",orchid:"#da70d6",palegoldenrod:"#eee8aa",palegreen:"#98fb98",paleturquoise:"#afeeee",palevioletred:"#d87093",papayawhip:"#ffefd5",peachpuff:"#ffdab9",peru:"#cd853f",pink:"#ffc0cb",plum:"#dda0dd",powderblue:"#b0e0e6",purple:"#800080",red:"#ff0000",rebeccapurple:"#663399",rosybrown:"#bc8f8f",royalblue:"#4169e1",saddlebrown:"#8b4513",salmon:"#fa8072",sandybrown:"#f4a460",seagreen:"#2e8b57",seashell:"#fff5ee",sienna:"#a0522d",silver:"#c0c0c0",skyblue:"#87ceeb",slateblue:"#6a5acd",slategray:"#708090",slategrey:"#708090",snow:"#fffafa",springgreen:"#00ff7f",steelblue:"#4682b4",tan:"#d2b48c",teal:"#008080",thistle:"#d8bfd8",tomato:"#ff6347",turquoise:"#40e0d0",violet:"#ee82ee",wheat:"#f5deb3",white:"#ffffff",whitesmoke:"#f5f5f5",yellow:"#ffff00",yellowgreen:"#9acd32"},to={currentColor:"The value of the 'color' property. The computed value of the 'currentColor' keyword is the computed value of the 'color' property. If the 'currentColor' keyword is set on the 'color' property itself, it is treated as 'color:inherit' at parse time.",transparent:"Fully transparent. 
This keyword can be considered a shorthand for rgba(0,0,0,0) which is its computed value."};function st(e,t){var n=e.getText(),r=n.match(/^([-+]?[0-9]*\.?[0-9]+)(%?)$/);if(r){r[2]&&(t=100);var i=parseFloat(r[1])/t;if(i>=0&&i<=1)return i}throw new Error}function no(e){var t=e.getText(),n=t.match(/^([-+]?[0-9]*\.?[0-9]+)(deg|rad|grad|turn)?$/);if(n)switch(n[2]){case"deg":return parseFloat(t)%360;case"rad":return parseFloat(t)*180/Math.PI%360;case"grad":return parseFloat(t)*.9%360;case"turn":return parseFloat(t)*360%360;default:if(typeof n[2]>"u")return parseFloat(t)%360}throw new Error}function Oh(e){var t=e.getName();return t?/^(rgb|rgba|hsl|hsla|hwb)$/gi.test(t):!1}var ro=48,Uh=57,Vh=65,Vn=97,Bh=102;function ae(e){return e=Vn&&e<=Bh?e-Vn+10:0)}function io(e){if(e[0]!=="#")return null;switch(e.length){case 4:return{red:ae(e.charCodeAt(1))*17/255,green:ae(e.charCodeAt(2))*17/255,blue:ae(e.charCodeAt(3))*17/255,alpha:1};case 5:return{red:ae(e.charCodeAt(1))*17/255,green:ae(e.charCodeAt(2))*17/255,blue:ae(e.charCodeAt(3))*17/255,alpha:ae(e.charCodeAt(4))*17/255};case 7:return{red:(ae(e.charCodeAt(1))*16+ae(e.charCodeAt(2)))/255,green:(ae(e.charCodeAt(3))*16+ae(e.charCodeAt(4)))/255,blue:(ae(e.charCodeAt(5))*16+ae(e.charCodeAt(6)))/255,alpha:1};case 9:return{red:(ae(e.charCodeAt(1))*16+ae(e.charCodeAt(2)))/255,green:(ae(e.charCodeAt(3))*16+ae(e.charCodeAt(4)))/255,blue:(ae(e.charCodeAt(5))*16+ae(e.charCodeAt(6)))/255,alpha:(ae(e.charCodeAt(7))*16+ae(e.charCodeAt(8)))/255}}return null}function so(e,t,n,r){if(r===void 0&&(r=1),e=e/60,t===0)return{red:n,green:n,blue:n,alpha:r};var i=function(l,c,h){for(;h<0;)h+=6;for(;h>=6;)h-=6;return h<1?(c-l)*h+l:h<3?c:h<4?(c-l)*(4-h)+l:l},s=n<=.5?n*(t+1):n+t-n*t,o=n*2-s;return{red:i(o,s,e+2),green:i(o,s,e),blue:i(o,s,e-2),alpha:r}}function ao(e){var t=e.red,n=e.green,r=e.blue,i=e.alpha,s=Math.max(t,n,r),o=Math.min(t,n,r),l=0,c=0,h=(o+s)/2,d=s-o;if(d>0){switch(c=Math.min(h<=.5?d/(2*h):d/(2-2*h),1),s){case t:l=(n-r)/d+(n=1){var i=t/(t+n);return{red:i,green:i,blue:i,alpha:r}}var s=so(e,1,.5,r),o=s.red;o*=1-t-n,o+=t;var l=s.green;l*=1-t-n,l+=t;var c=s.blue;return c*=1-t-n,c+=t,{red:o,green:l,blue:c,alpha:r}}function qh(e){var t=ao(e),n=Math.min(e.red,e.green,e.blue),r=1-Math.max(e.red,e.green,e.blue);return{h:t.h,w:n,b:r,a:t.a}}function $h(e){if(e.type===g.HexColorValue){var t=e.getText();return io(t)}else if(e.type===g.Function){var n=e,r=n.getName(),i=n.getArguments().getChildren();if(i.length===1){var s=i[0].getChildren();if(s.length===1&&s[0].type===g.Expression&&(i=s[0].getChildren(),i.length===3)){var o=i[2];if(o instanceof Tr){var l=o.getLeft(),c=o.getRight(),h=o.getOperator();l&&c&&h&&h.matches("/")&&(i=[i[0],i[1],l,c])}}}if(!r||i.length<3||i.length>4)return null;try{var d=i.length===4?st(i[3],1):1;if(r==="rgb"||r==="rgba")return{red:st(i[0],255),green:st(i[1],255),blue:st(i[2],255),alpha:d};if(r==="hsl"||r==="hsla"){var u=no(i[0]),f=st(i[1],100),m=st(i[2],100);return so(u,f,m,d)}else if(r==="hwb"){var u=no(i[0]),v=st(i[1],100),b=st(i[2],100);return jh(u,v,b,d)}}catch{return null}}else if(e.type===g.Identifier){if(e.parent&&e.parent.type!==g.Term)return null;var w=e.parent;if(w&&w.parent&&w.parent.type===g.BinaryExpression){var N=w.parent;if(N.parent&&N.parent.type===g.ListEntry&&N.parent.key===N)return null}var E=e.getText().toLowerCase();if(E==="none")return null;var M=Un[E];if(M)return io(M)}return null}var oo={bottom:"Computes to \u2018100%\u2019 for the vertical position if one or two values are given, otherwise specifies the bottom edge as the origin 
for the next offset.",center:"Computes to \u201850%\u2019 (\u2018left 50%\u2019) for the horizontal position if the horizontal position is not otherwise specified, or \u201850%\u2019 (\u2018top 50%\u2019) for the vertical position if it is.",left:"Computes to \u20180%\u2019 for the horizontal position if one or two values are given, otherwise specifies the left edge as the origin for the next offset.",right:"Computes to \u2018100%\u2019 for the horizontal position if one or two values are given, otherwise specifies the right edge as the origin for the next offset.",top:"Computes to \u20180%\u2019 for the vertical position if one or two values are given, otherwise specifies the top edge as the origin for the next offset."},lo={"no-repeat":"Placed once and not repeated in this direction.",repeat:"Repeated in this direction as often as needed to cover the background painting area.","repeat-x":"Computes to \u2018repeat no-repeat\u2019.","repeat-y":"Computes to \u2018no-repeat repeat\u2019.",round:"Repeated as often as will fit within the background positioning area. If it doesn\u2019t fit a whole number of times, it is rescaled so that it does.",space:"Repeated as often as will fit within the background positioning area without being clipped and then the images are spaced out to fill the area."},co={dashed:"A series of square-ended dashes.",dotted:"A series of round dots.",double:"Two parallel solid lines with some space between them.",groove:"Looks as if it were carved in the canvas.",hidden:"Same as \u2018none\u2019, but has different behavior in the border conflict resolution rules for border-collapsed tables.",inset:"Looks as if the content on the inside of the border is sunken into the canvas.",none:"No border. Color and width are ignored.",outset:"Looks as if the content on the inside of the border is coming out of the canvas.",ridge:"Looks as if it were coming out of the canvas.",solid:"A single line segment."},Hh=["medium","thick","thin"],ho={"border-box":"The background is painted within (clipped to) the border box.","content-box":"The background is painted within (clipped to) the content box.","padding-box":"The background is painted within (clipped to) the padding box."},uo={"margin-box":"Uses the margin box as reference box.","fill-box":"Uses the object bounding box as reference box.","stroke-box":"Uses the stroke bounding box as reference box.","view-box":"Uses the nearest SVG viewport as reference box."},po={initial:"Represents the value specified as the property\u2019s initial value.",inherit:"Represents the computed value of the property on the element\u2019s parent.",unset:"Acts as either `inherit` or `initial`, depending on whether the property is inherited or not."},fo={"var()":"Evaluates the value of a custom variable.","calc()":"Evaluates an mathematical expression. The following operators can be used: + - * /."},mo={"url()":"Reference an image file by URL","image()":"Provide image fallbacks and annotations.","-webkit-image-set()":"Provide multiple resolutions. Remember to use unprefixed image-set() in addition.","image-set()":"Provide multiple resolutions of an image and const the UA decide which is most appropriate in a given situation.","-moz-element()":"Use an element in the document as an image. Remember to use unprefixed element() in addition.","element()":"Use an element in the document as an image.","cross-fade()":"Indicates the two images to be combined and how far along in the transition the combination is.","-webkit-gradient()":"Deprecated. 
Use modern linear-gradient() or radial-gradient() instead.","-webkit-linear-gradient()":"Linear gradient. Remember to use unprefixed version in addition.","-moz-linear-gradient()":"Linear gradient. Remember to use unprefixed version in addition.","-o-linear-gradient()":"Linear gradient. Remember to use unprefixed version in addition.","linear-gradient()":"A linear gradient is created by specifying a straight gradient line, and then several colors placed along that line.","-webkit-repeating-linear-gradient()":"Repeating Linear gradient. Remember to use unprefixed version in addition.","-moz-repeating-linear-gradient()":"Repeating Linear gradient. Remember to use unprefixed version in addition.","-o-repeating-linear-gradient()":"Repeating Linear gradient. Remember to use unprefixed version in addition.","repeating-linear-gradient()":"Same as linear-gradient, except the color-stops are repeated infinitely in both directions, with their positions shifted by multiples of the difference between the last specified color-stop\u2019s position and the first specified color-stop\u2019s position.","-webkit-radial-gradient()":"Radial gradient. Remember to use unprefixed version in addition.","-moz-radial-gradient()":"Radial gradient. Remember to use unprefixed version in addition.","radial-gradient()":"Colors emerge from a single point and smoothly spread outward in a circular or elliptical shape.","-webkit-repeating-radial-gradient()":"Repeating radial gradient. Remember to use unprefixed version in addition.","-moz-repeating-radial-gradient()":"Repeating radial gradient. Remember to use unprefixed version in addition.","repeating-radial-gradient()":"Same as radial-gradient, except the color-stops are repeated infinitely in both directions, with their positions shifted by multiples of the difference between the last specified color-stop\u2019s position and the first specified color-stop\u2019s position."},go={ease:"Equivalent to cubic-bezier(0.25, 0.1, 0.25, 1.0).","ease-in":"Equivalent to cubic-bezier(0.42, 0, 1.0, 1.0).","ease-in-out":"Equivalent to cubic-bezier(0.42, 0, 0.58, 1.0).","ease-out":"Equivalent to cubic-bezier(0, 0, 0.58, 1.0).",linear:"Equivalent to cubic-bezier(0.0, 0.0, 1.0, 1.0).","step-end":"Equivalent to steps(1, end).","step-start":"Equivalent to steps(1, start).","steps()":"The first parameter specifies the number of intervals in the function. The second parameter, which is optional, is either the value \u201Cstart\u201D or \u201Cend\u201D.","cubic-bezier()":"Specifies a cubic-bezier curve. The four values specify points P1 and P2 of the curve as (x1, y1, x2, y2).","cubic-bezier(0.6, -0.28, 0.735, 0.045)":"Ease-in Back. Overshoots.","cubic-bezier(0.68, -0.55, 0.265, 1.55)":"Ease-in-out Back. Overshoots.","cubic-bezier(0.175, 0.885, 0.32, 1.275)":"Ease-out Back. Overshoots.","cubic-bezier(0.6, 0.04, 0.98, 0.335)":"Ease-in Circular. Based on half circle.","cubic-bezier(0.785, 0.135, 0.15, 0.86)":"Ease-in-out Circular. Based on half circle.","cubic-bezier(0.075, 0.82, 0.165, 1)":"Ease-out Circular. Based on half circle.","cubic-bezier(0.55, 0.055, 0.675, 0.19)":"Ease-in Cubic. Based on power of three.","cubic-bezier(0.645, 0.045, 0.355, 1)":"Ease-in-out Cubic. Based on power of three.","cubic-bezier(0.215, 0.610, 0.355, 1)":"Ease-out Cubic. Based on power of three.","cubic-bezier(0.95, 0.05, 0.795, 0.035)":"Ease-in Exponential. Based on two to the power ten.","cubic-bezier(1, 0, 0, 1)":"Ease-in-out Exponential. 
Based on two to the power ten.","cubic-bezier(0.19, 1, 0.22, 1)":"Ease-out Exponential. Based on two to the power ten.","cubic-bezier(0.47, 0, 0.745, 0.715)":"Ease-in Sine.","cubic-bezier(0.445, 0.05, 0.55, 0.95)":"Ease-in-out Sine.","cubic-bezier(0.39, 0.575, 0.565, 1)":"Ease-out Sine.","cubic-bezier(0.55, 0.085, 0.68, 0.53)":"Ease-in Quadratic. Based on power of two.","cubic-bezier(0.455, 0.03, 0.515, 0.955)":"Ease-in-out Quadratic. Based on power of two.","cubic-bezier(0.25, 0.46, 0.45, 0.94)":"Ease-out Quadratic. Based on power of two.","cubic-bezier(0.895, 0.03, 0.685, 0.22)":"Ease-in Quartic. Based on power of four.","cubic-bezier(0.77, 0, 0.175, 1)":"Ease-in-out Quartic. Based on power of four.","cubic-bezier(0.165, 0.84, 0.44, 1)":"Ease-out Quartic. Based on power of four.","cubic-bezier(0.755, 0.05, 0.855, 0.06)":"Ease-in Quintic. Based on power of five.","cubic-bezier(0.86, 0, 0.07, 1)":"Ease-in-out Quintic. Based on power of five.","cubic-bezier(0.23, 1, 0.320, 1)":"Ease-out Quintic. Based on power of five."},bo={"circle()":"Defines a circle.","ellipse()":"Defines an ellipse.","inset()":"Defines an inset rectangle.","polygon()":"Defines a polygon."},vo={length:["em","rem","ex","px","cm","mm","in","pt","pc","ch","vw","vh","vmin","vmax"],angle:["deg","rad","grad","turn"],time:["ms","s"],frequency:["Hz","kHz"],resolution:["dpi","dpcm","dppx"],percentage:["%","fr"]},Gh=["a","abbr","address","area","article","aside","audio","b","base","bdi","bdo","blockquote","body","br","button","canvas","caption","cite","code","col","colgroup","data","datalist","dd","del","details","dfn","dialog","div","dl","dt","em","embed","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","head","header","hgroup","hr","html","i","iframe","img","input","ins","kbd","keygen","label","legend","li","link","main","map","mark","menu","menuitem","meta","meter","nav","noscript","object","ol","optgroup","option","output","p","param","picture","pre","progress","q","rb","rp","rt","rtc","ruby","s","samp","script","section","select","small","source","span","strong","style","sub","summary","sup","table","tbody","td","template","textarea","tfoot","th","thead","time","title","tr","track","u","ul","const","video","wbr"],Jh=["circle","clipPath","cursor","defs","desc","ellipse","feBlend","feColorMatrix","feComponentTransfer","feComposite","feConvolveMatrix","feDiffuseLighting","feDisplacementMap","feDistantLight","feDropShadow","feFlood","feFuncA","feFuncB","feFuncG","feFuncR","feGaussianBlur","feImage","feMerge","feMergeNode","feMorphology","feOffset","fePointLight","feSpecularLighting","feSpotLight","feTile","feTurbulence","filter","foreignObject","g","hatch","hatchpath","image","line","linearGradient","marker","mask","mesh","meshpatch","meshrow","metadata","mpath","path","pattern","polygon","polyline","radialGradient","rect","set","solidcolor","stop","svg","switch","symbol","text","textPath","tspan","use","view"],Xh=["@bottom-center","@bottom-left","@bottom-left-corner","@bottom-right","@bottom-right-corner","@left-bottom","@left-middle","@left-top","@right-bottom","@right-middle","@right-top","@top-center","@top-left","@top-left-corner","@top-right","@top-right-corner"];function Bn(e){return Object.keys(e).map(function(t){return e[t]})}function Me(e){return typeof e<"u"}var wo=function(e,t,n){if(n||arguments.length===2)for(var r=0,i=t.length,s;rt.offset?s-t.offset:0}return t},e.prototype.markError=function(t,n,r,i){this.token!==this.lastErrorToken&&(t.addIssue(new xa(t,n,_e.Error,void 
0,this.token.offset,this.token.len)),this.lastErrorToken=this.token),(r||i)&&this.resync(r,i)},e.prototype.parseStylesheet=function(t){var n=t.version,r=t.getText(),i=function(s,o){if(t.version!==n)throw new Error("Underlying model has changed, AST is no longer valid");return r.substr(s,o)};return this.internalParse(r,this._parseStylesheet,i)},e.prototype.internalParse=function(t,n,r){this.scanner.setSource(t),this.token=this.scanner.scan();var i=n.bind(this)();return i&&(r?i.textProvider=r:i.textProvider=function(s,o){return t.substr(s,o)}),i},e.prototype._parseStylesheet=function(){for(var t=this.create(eh);t.addChild(this._parseStylesheetStart()););var n=!1;do{var r=!1;do{r=!1;var i=this._parseStylesheetStatement();for(i&&(t.addChild(i),r=!0,n=!1,!this.peek(p.EOF)&&this._needsSemicolonAfter(i)&&!this.accept(p.SemiColon)&&this.markError(t,y.SemiColonExpected));this.accept(p.SemiColon)||this.accept(p.CDO)||this.accept(p.CDC);)r=!0,n=!1}while(r);if(this.peek(p.EOF))break;n||(this.peek(p.AtKeyword)?this.markError(t,y.UnknownAtRule):this.markError(t,y.RuleOrSelectorExpected),n=!0),this.consumeToken()}while(!this.peek(p.EOF));return this.finish(t)},e.prototype._parseStylesheetStart=function(){return this._parseCharset()},e.prototype._parseStylesheetStatement=function(t){return t===void 0&&(t=!1),this.peek(p.AtKeyword)?this._parseStylesheetAtStatement(t):this._parseRuleset(t)},e.prototype._parseStylesheetAtStatement=function(t){return t===void 0&&(t=!1),this._parseImport()||this._parseMedia(t)||this._parsePage()||this._parseFontFace()||this._parseKeyframe()||this._parseSupports(t)||this._parseViewPort()||this._parseNamespace()||this._parseDocument()||this._parseUnknownAtRule()},e.prototype._tryParseRuleset=function(t){var n=this.mark();if(this._parseSelector(t)){for(;this.accept(p.Comma)&&this._parseSelector(t););if(this.accept(p.CurlyL))return this.restoreAtMark(n),this._parseRuleset(t)}return this.restoreAtMark(n),null},e.prototype._parseRuleset=function(t){t===void 0&&(t=!1);var n=this.create(Et),r=n.getSelectors();if(!r.addChild(this._parseSelector(t)))return null;for(;this.accept(p.Comma);)if(!r.addChild(this._parseSelector(t)))return this.finish(n,y.SelectorExpected);return this._parseBody(n,this._parseRuleSetDeclaration.bind(this))},e.prototype._parseRuleSetDeclarationAtStatement=function(){return this._parseUnknownAtRule()},e.prototype._parseRuleSetDeclaration=function(){return this.peek(p.AtKeyword)?this._parseRuleSetDeclarationAtStatement():this._parseDeclaration()},e.prototype._needsSemicolonAfter=function(t){switch(t.type){case g.Keyframe:case g.ViewPort:case g.Media:case g.Ruleset:case g.Namespace:case g.If:case g.For:case g.Each:case g.While:case g.MixinDeclaration:case g.FunctionDeclaration:case g.MixinContentDeclaration:return!1;case g.ExtendsReference:case g.MixinContentReference:case g.ReturnStatement:case g.MediaQuery:case g.Debug:case g.Import:case g.AtApplyRule:case g.CustomPropertyDeclaration:return!0;case g.VariableDeclaration:return t.needsSemicolon;case g.MixinReference:return!t.getContent();case g.Declaration:return!t.getNestedProperties()}return!1},e.prototype._parseDeclarations=function(t){var n=this.create(Mr);if(!this.accept(p.CurlyL))return null;for(var r=t();n.addChild(r)&&!this.peek(p.CurlyR);){if(this._needsSemicolonAfter(r)&&!this.accept(p.SemiColon))return this.finish(n,y.SemiColonExpected,[p.SemiColon,p.CurlyR]);for(r&&this.prevToken&&this.prevToken.type===p.SemiColon&&(r.semicolonPosition=this.prevToken.offset);this.accept(p.SemiColon););r=t()}return 
this.accept(p.CurlyR)?this.finish(n):this.finish(n,y.RightCurlyExpected,[p.CurlyR,p.SemiColon])},e.prototype._parseBody=function(t,n){return t.setDeclarations(this._parseDeclarations(n))?this.finish(t):this.finish(t,y.LeftCurlyExpected,[p.CurlyR,p.SemiColon])},e.prototype._parseSelector=function(t){var n=this.create(Gt),r=!1;for(t&&(r=n.addChild(this._parseCombinator()));n.addChild(this._parseSimpleSelector());)r=!0,n.addChild(this._parseCombinator());return r?this.finish(n):null},e.prototype._parseDeclaration=function(t){var n=this._tryParseCustomPropertyDeclaration(t);if(n)return n;var r=this.create(Ue);return r.setProperty(this._parseProperty())?this.accept(p.Colon)?(this.prevToken&&(r.colonPosition=this.prevToken.offset),r.setValue(this._parseExpr())?(r.addChild(this._parsePrio()),this.peek(p.SemiColon)&&(r.semicolonPosition=this.token.offset),this.finish(r)):this.finish(r,y.PropertyValueExpected)):this.finish(r,y.ColonExpected,[p.Colon],t||[p.SemiColon]):null},e.prototype._tryParseCustomPropertyDeclaration=function(t){if(!this.peekRegExp(p.Ident,/^--/))return null;var n=this.create(nh);if(!n.setProperty(this._parseProperty()))return null;if(!this.accept(p.Colon))return this.finish(n,y.ColonExpected,[p.Colon]);this.prevToken&&(n.colonPosition=this.prevToken.offset);var r=this.mark();if(this.peek(p.CurlyL)){var i=this.create(th),s=this._parseDeclarations(this._parseRuleSetDeclaration.bind(this));if(i.setDeclarations(s)&&!s.isErroneous(!0)&&(i.addChild(this._parsePrio()),this.peek(p.SemiColon)))return this.finish(i),n.setPropertySet(i),n.semicolonPosition=this.token.offset,this.finish(n);this.restoreAtMark(r)}var o=this._parseExpr();return o&&!o.isErroneous(!0)&&(this._parsePrio(),this.peekOne.apply(this,wo(wo([],t||[],!1),[p.SemiColon,p.EOF],!1)))?(n.setValue(o),this.peek(p.SemiColon)&&(n.semicolonPosition=this.token.offset),this.finish(n)):(this.restoreAtMark(r),n.addChild(this._parseCustomPropertyValue(t)),n.addChild(this._parsePrio()),Me(n.colonPosition)&&this.token.offset===n.colonPosition+1?this.finish(n,y.PropertyValueExpected):this.finish(n))},e.prototype._parseCustomPropertyValue=function(t){var n=this;t===void 0&&(t=[p.CurlyR]);var r=this.create(U),i=function(){return o===0&&l===0&&c===0},s=function(){return t.indexOf(n.token.type)!==-1},o=0,l=0,c=0;e:for(;;){switch(this.token.type){case p.SemiColon:if(i())break e;break;case p.Exclamation:if(i())break e;break;case p.CurlyL:o++;break;case p.CurlyR:if(o--,o<0){if(s()&&l===0&&c===0)break e;return this.finish(r,y.LeftCurlyExpected)}break;case p.ParenthesisL:l++;break;case p.ParenthesisR:if(l--,l<0){if(s()&&c===0&&o===0)break e;return this.finish(r,y.LeftParenthesisExpected)}break;case p.BracketL:c++;break;case p.BracketR:if(c--,c<0)return this.finish(r,y.LeftSquareBracketExpected);break;case p.BadString:break e;case p.EOF:var h=y.RightCurlyExpected;return c>0?h=y.RightSquareBracketExpected:l>0&&(h=y.RightParenthesisExpected),this.finish(r,h)}this.consumeToken()}return this.finish(r)},e.prototype._tryToParseDeclaration=function(t){var n=this.mark();return this._parseProperty()&&this.accept(p.Colon)?(this.restoreAtMark(n),this._parseDeclaration(t)):(this.restoreAtMark(n),null)},e.prototype._parseProperty=function(){var t=this.create(Pr),n=this.mark();return(this.acceptDelim("*")||this.acceptDelim("_"))&&this.hasWhitespace()?(this.restoreAtMark(n),null):t.setIdentifier(this._parsePropertyIdentifier())?this.finish(t):null},e.prototype._parsePropertyIdentifier=function(){return 
this._parseIdent()},e.prototype._parseCharset=function(){if(!this.peek(p.Charset))return null;var t=this.create(U);return this.consumeToken(),this.accept(p.String)?this.accept(p.SemiColon)?this.finish(t):this.finish(t,y.SemiColonExpected):this.finish(t,y.IdentifierExpected)},e.prototype._parseImport=function(){if(!this.peekKeyword("@import"))return null;var t=this.create(Ir);return this.consumeToken(),!t.addChild(this._parseURILiteral())&&!t.addChild(this._parseStringLiteral())?this.finish(t,y.URIOrStringExpected):(!this.peek(p.SemiColon)&&!this.peek(p.EOF)&&t.setMedialist(this._parseMediaQueryList()),this.finish(t))},e.prototype._parseNamespace=function(){if(!this.peekKeyword("@namespace"))return null;var t=this.create(fh);return this.consumeToken(),!t.addChild(this._parseURILiteral())&&(t.addChild(this._parseIdent()),!t.addChild(this._parseURILiteral())&&!t.addChild(this._parseStringLiteral()))?this.finish(t,y.URIExpected,[p.SemiColon]):this.accept(p.SemiColon)?this.finish(t):this.finish(t,y.SemiColonExpected)},e.prototype._parseFontFace=function(){if(!this.peekKeyword("@font-face"))return null;var t=this.create(da);return this.consumeToken(),this._parseBody(t,this._parseRuleSetDeclaration.bind(this))},e.prototype._parseViewPort=function(){if(!this.peekKeyword("@-ms-viewport")&&!this.peekKeyword("@-o-viewport")&&!this.peekKeyword("@viewport"))return null;var t=this.create(ch);return this.consumeToken(),this._parseBody(t,this._parseRuleSetDeclaration.bind(this))},e.prototype._parseKeyframe=function(){if(!this.peekRegExp(p.AtKeyword,this.keyframeRegex))return null;var t=this.create(pa),n=this.create(U);return this.consumeToken(),t.setKeyword(this.finish(n)),n.matches("@-ms-keyframes")&&this.markError(n,y.UnknownKeyword),t.setIdentifier(this._parseKeyframeIdent())?this._parseBody(t,this._parseKeyframeSelector.bind(this)):this.finish(t,y.IdentifierExpected,[p.CurlyR])},e.prototype._parseKeyframeIdent=function(){return this._parseIdent([J.Keyframe])},e.prototype._parseKeyframeSelector=function(){var t=this.create(fa);if(!t.addChild(this._parseIdent())&&!this.accept(p.Percentage))return null;for(;this.accept(p.Comma);)if(!t.addChild(this._parseIdent())&&!this.accept(p.Percentage))return this.finish(t,y.PercentageExpected);return this._parseBody(t,this._parseRuleSetDeclaration.bind(this))},e.prototype._tryParseKeyframeSelector=function(){var t=this.create(fa),n=this.mark();if(!t.addChild(this._parseIdent())&&!this.accept(p.Percentage))return null;for(;this.accept(p.Comma);)if(!t.addChild(this._parseIdent())&&!this.accept(p.Percentage))return this.restoreAtMark(n),null;return this.peek(p.CurlyL)?this._parseBody(t,this._parseRuleSetDeclaration.bind(this)):(this.restoreAtMark(n),null)},e.prototype._parseSupports=function(t){if(t===void 0&&(t=!1),!this.peekKeyword("@supports"))return null;var n=this.create(Lr);return this.consumeToken(),n.addChild(this._parseSupportsCondition()),this._parseBody(n,this._parseSupportsDeclaration.bind(this,t))},e.prototype._parseSupportsDeclaration=function(t){return t===void 0&&(t=!1),t?this._tryParseRuleset(!0)||this._tryToParseDeclaration()||this._parseStylesheetStatement(!0):this._parseStylesheetStatement(!1)},e.prototype._parseSupportsCondition=function(){var t=this.create(Xt);if(this.acceptIdent("not"))t.addChild(this._parseSupportsConditionInParens());else if(t.addChild(this._parseSupportsConditionInParens()),this.peekRegExp(p.Ident,/^(and|or)$/i))for(var 
n=this.token.text.toLowerCase();this.acceptIdent(n);)t.addChild(this._parseSupportsConditionInParens());return this.finish(t)},e.prototype._parseSupportsConditionInParens=function(){var t=this.create(Xt);if(this.accept(p.ParenthesisL))return this.prevToken&&(t.lParent=this.prevToken.offset),!t.addChild(this._tryToParseDeclaration([p.ParenthesisR]))&&!this._parseSupportsCondition()?this.finish(t,y.ConditionExpected):this.accept(p.ParenthesisR)?(this.prevToken&&(t.rParent=this.prevToken.offset),this.finish(t)):this.finish(t,y.RightParenthesisExpected,[p.ParenthesisR],[]);if(this.peek(p.Ident)){var n=this.mark();if(this.consumeToken(),!this.hasWhitespace()&&this.accept(p.ParenthesisL)){for(var r=1;this.token.type!==p.EOF&&r!==0;)this.token.type===p.ParenthesisL?r++:this.token.type===p.ParenthesisR&&r--,this.consumeToken();return this.finish(t)}else this.restoreAtMark(n)}return this.finish(t,y.LeftParenthesisExpected,[],[p.ParenthesisL])},e.prototype._parseMediaDeclaration=function(t){return t===void 0&&(t=!1),t?this._tryParseRuleset(!0)||this._tryToParseDeclaration()||this._parseStylesheetStatement(!0):this._parseStylesheetStatement(!1)},e.prototype._parseMedia=function(t){if(t===void 0&&(t=!1),!this.peekKeyword("@media"))return null;var n=this.create(ma);return this.consumeToken(),n.addChild(this._parseMediaQueryList())?this._parseBody(n,this._parseMediaDeclaration.bind(this,t)):this.finish(n,y.MediaQueryExpected)},e.prototype._parseMediaQueryList=function(){var t=this.create(ga);if(!t.addChild(this._parseMediaQuery()))return this.finish(t,y.MediaQueryExpected);for(;this.accept(p.Comma);)if(!t.addChild(this._parseMediaQuery()))return this.finish(t,y.MediaQueryExpected);return this.finish(t)},e.prototype._parseMediaQuery=function(){var t=this.create(ba),n=this.mark();if(this.acceptIdent("not"),this.peek(p.ParenthesisL))this.restoreAtMark(n),t.addChild(this._parseMediaCondition());else{if(this.acceptIdent("only"),!t.addChild(this._parseIdent()))return null;this.acceptIdent("and")&&t.addChild(this._parseMediaCondition())}return this.finish(t)},e.prototype._parseRatio=function(){var t=this.mark(),n=this.create(Sh);return this._parseNumeric()?this.acceptDelim("/")?this._parseNumeric()?this.finish(n):this.finish(n,y.NumberExpected):(this.restoreAtMark(t),null):null},e.prototype._parseMediaCondition=function(){var t=this.create(gh);this.acceptIdent("not");for(var n=!0;n;){if(!this.accept(p.ParenthesisL))return this.finish(t,y.LeftParenthesisExpected,[],[p.CurlyL]);if(this.peek(p.ParenthesisL)||this.peekIdent("not")?t.addChild(this._parseMediaCondition()):t.addChild(this._parseMediaFeature()),!this.accept(p.ParenthesisR))return this.finish(t,y.RightParenthesisExpected,[],[p.CurlyL]);n=this.acceptIdent("and")||this.acceptIdent("or")}return this.finish(t)},e.prototype._parseMediaFeature=function(){var t=this,n=[p.ParenthesisR],r=this.create(bh),i=function(){return t.acceptDelim("<")||t.acceptDelim(">")?(t.hasWhitespace()||t.acceptDelim("="),!0):!!t.acceptDelim("=")};if(r.addChild(this._parseMediaFeatureName())){if(this.accept(p.Colon)){if(!r.addChild(this._parseMediaFeatureValue()))return this.finish(r,y.TermExpected,[],n)}else if(i()){if(!r.addChild(this._parseMediaFeatureValue()))return this.finish(r,y.TermExpected,[],n);if(i()&&!r.addChild(this._parseMediaFeatureValue()))return this.finish(r,y.TermExpected,[],n)}}else if(r.addChild(this._parseMediaFeatureValue())){if(!i())return this.finish(r,y.OperatorExpected,[],n);if(!r.addChild(this._parseMediaFeatureName()))return 
this.finish(r,y.IdentifierExpected,[],n);if(i()&&!r.addChild(this._parseMediaFeatureValue()))return this.finish(r,y.TermExpected,[],n)}else return this.finish(r,y.IdentifierExpected,[],n);return this.finish(r)},e.prototype._parseMediaFeatureName=function(){return this._parseIdent()},e.prototype._parseMediaFeatureValue=function(){return this._parseRatio()||this._parseTermExpression()},e.prototype._parseMedium=function(){var t=this.create(U);return t.addChild(this._parseIdent())?this.finish(t):null},e.prototype._parsePageDeclaration=function(){return this._parsePageMarginBox()||this._parseRuleSetDeclaration()},e.prototype._parsePage=function(){if(!this.peekKeyword("@page"))return null;var t=this.create(vh);if(this.consumeToken(),t.addChild(this._parsePageSelector())){for(;this.accept(p.Comma);)if(!t.addChild(this._parsePageSelector()))return this.finish(t,y.IdentifierExpected)}return this._parseBody(t,this._parsePageDeclaration.bind(this))},e.prototype._parsePageMarginBox=function(){if(!this.peek(p.AtKeyword))return null;var t=this.create(wh);return this.acceptOneKeyword(Xh)||this.markError(t,y.UnknownAtRule,[],[p.CurlyL]),this._parseBody(t,this._parseRuleSetDeclaration.bind(this))},e.prototype._parsePageSelector=function(){if(!this.peek(p.Ident)&&!this.peek(p.Colon))return null;var t=this.create(U);return t.addChild(this._parseIdent()),this.accept(p.Colon)&&!t.addChild(this._parseIdent())?this.finish(t,y.IdentifierExpected):this.finish(t)},e.prototype._parseDocument=function(){if(!this.peekKeyword("@-moz-document"))return null;var t=this.create(mh);return this.consumeToken(),this.resync([],[p.CurlyL]),this._parseBody(t,this._parseStylesheetStatement.bind(this))},e.prototype._parseUnknownAtRule=function(){if(!this.peek(p.AtKeyword))return null;var t=this.create(wa);t.addChild(this._parseUnknownAtRuleName());var n=function(){return i===0&&s===0&&o===0},r=0,i=0,s=0,o=0;e:for(;;){switch(this.token.type){case p.SemiColon:if(n())break e;break;case p.EOF:return i>0?this.finish(t,y.RightCurlyExpected):o>0?this.finish(t,y.RightSquareBracketExpected):s>0?this.finish(t,y.RightParenthesisExpected):this.finish(t);case p.CurlyL:r++,i++;break;case p.CurlyR:if(i--,r>0&&i===0){if(this.consumeToken(),o>0)return this.finish(t,y.RightSquareBracketExpected);if(s>0)return this.finish(t,y.RightParenthesisExpected);break e}if(i<0){if(s===0&&o===0)break e;return this.finish(t,y.LeftCurlyExpected)}break;case p.ParenthesisL:s++;break;case p.ParenthesisR:if(s--,s<0)return this.finish(t,y.LeftParenthesisExpected);break;case p.BracketL:o++;break;case p.BracketR:if(o--,o<0)return this.finish(t,y.LeftSquareBracketExpected);break}this.consumeToken()}return t},e.prototype._parseUnknownAtRuleName=function(){var t=this.create(U);return this.accept(p.AtKeyword)?this.finish(t):t},e.prototype._parseOperator=function(){if(this.peekDelim("/")||this.peekDelim("*")||this.peekDelim("+")||this.peekDelim("-")||this.peek(p.Dashmatch)||this.peek(p.Includes)||this.peek(p.SubstringOperator)||this.peek(p.PrefixOperator)||this.peek(p.SuffixOperator)||this.peekDelim("=")){var t=this.createNode(g.Operator);return this.consumeToken(),this.finish(t)}else return null},e.prototype._parseUnaryOperator=function(){if(!this.peekDelim("+")&&!this.peekDelim("-"))return null;var t=this.create(U);return this.consumeToken(),this.finish(t)},e.prototype._parseCombinator=function(){if(this.peekDelim(">")){var t=this.create(U);this.consumeToken();var 
n=this.mark();if(!this.hasWhitespace()&&this.acceptDelim(">")){if(!this.hasWhitespace()&&this.acceptDelim(">"))return t.type=g.SelectorCombinatorShadowPiercingDescendant,this.finish(t);this.restoreAtMark(n)}return t.type=g.SelectorCombinatorParent,this.finish(t)}else if(this.peekDelim("+")){var t=this.create(U);return this.consumeToken(),t.type=g.SelectorCombinatorSibling,this.finish(t)}else if(this.peekDelim("~")){var t=this.create(U);return this.consumeToken(),t.type=g.SelectorCombinatorAllSiblings,this.finish(t)}else if(this.peekDelim("/")){var t=this.create(U);this.consumeToken();var n=this.mark();if(!this.hasWhitespace()&&this.acceptIdent("deep")&&!this.hasWhitespace()&&this.acceptDelim("/"))return t.type=g.SelectorCombinatorShadowPiercingDescendant,this.finish(t);this.restoreAtMark(n)}return null},e.prototype._parseSimpleSelector=function(){var t=this.create(Dt),n=0;for(t.addChild(this._parseElementName())&&n++;(n===0||!this.hasWhitespace())&&t.addChild(this._parseSimpleSelectorBody());)n++;return n>0?this.finish(t):null},e.prototype._parseSimpleSelectorBody=function(){return this._parsePseudo()||this._parseHash()||this._parseClass()||this._parseAttrib()},e.prototype._parseSelectorIdent=function(){return this._parseIdent()},e.prototype._parseHash=function(){if(!this.peek(p.Hash)&&!this.peekDelim("#"))return null;var t=this.createNode(g.IdentifierSelector);if(this.acceptDelim("#")){if(this.hasWhitespace()||!t.addChild(this._parseSelectorIdent()))return this.finish(t,y.IdentifierExpected)}else this.consumeToken();return this.finish(t)},e.prototype._parseClass=function(){if(!this.peekDelim("."))return null;var t=this.createNode(g.ClassSelector);return this.consumeToken(),this.hasWhitespace()||!t.addChild(this._parseSelectorIdent())?this.finish(t,y.IdentifierExpected):this.finish(t)},e.prototype._parseElementName=function(){var t=this.mark(),n=this.createNode(g.ElementNameSelector);return n.addChild(this._parseNamespacePrefix()),!n.addChild(this._parseSelectorIdent())&&!this.acceptDelim("*")?(this.restoreAtMark(t),null):this.finish(n)},e.prototype._parseNamespacePrefix=function(){var t=this.mark(),n=this.createNode(g.NamespacePrefix);return!n.addChild(this._parseIdent())&&this.acceptDelim("*"),this.acceptDelim("|")?this.finish(n):(this.restoreAtMark(t),null)},e.prototype._parseAttrib=function(){if(!this.peek(p.BracketL))return null;var t=this.create(xh);return this.consumeToken(),t.setNamespacePrefix(this._parseNamespacePrefix()),t.setIdentifier(this._parseIdent())?(t.setOperator(this._parseOperator())&&(t.setValue(this._parseBinaryExpr()),this.acceptIdent("i"),this.acceptIdent("s")),this.accept(p.BracketR)?this.finish(t):this.finish(t,y.RightSquareBracketExpected)):this.finish(t,y.IdentifierExpected)},e.prototype._parsePseudo=function(){var t=this,n=this._tryParsePseudoIdentifier();if(n){if(!this.hasWhitespace()&&this.accept(p.ParenthesisL)){var r=function(){var i=t.create(U);if(!i.addChild(t._parseSelector(!1)))return null;for(;t.accept(p.Comma)&&i.addChild(t._parseSelector(!1)););return t.peek(p.ParenthesisR)?t.finish(i):null};if(n.addChild(this.try(r)||this._parseBinaryExpr()),!this.accept(p.ParenthesisR))return this.finish(n,y.RightParenthesisExpected)}return this.finish(n)}return null},e.prototype._tryParsePseudoIdentifier=function(){if(!this.peek(p.Colon))return null;var t=this.mark(),n=this.createNode(g.PseudoSelector);return 
this.consumeToken(),this.hasWhitespace()?(this.restoreAtMark(t),null):(this.accept(p.Colon),this.hasWhitespace()||!n.addChild(this._parseIdent())?this.finish(n,y.IdentifierExpected):this.finish(n))},e.prototype._tryParsePrio=function(){var t=this.mark(),n=this._parsePrio();return n||(this.restoreAtMark(t),null)},e.prototype._parsePrio=function(){if(!this.peek(p.Exclamation))return null;var t=this.createNode(g.Prio);return this.accept(p.Exclamation)&&this.acceptIdent("important")?this.finish(t):null},e.prototype._parseExpr=function(t){t===void 0&&(t=!1);var n=this.create(va);if(!n.addChild(this._parseBinaryExpr()))return null;for(;;){if(this.peek(p.Comma)){if(t)return this.finish(n);this.consumeToken()}else if(!this.hasWhitespace())break;if(!n.addChild(this._parseBinaryExpr()))break}return this.finish(n)},e.prototype._parseUnicodeRange=function(){if(!this.peekIdent("u"))return null;var t=this.create(Qc);return this.acceptUnicodeRange()?this.finish(t):null},e.prototype._parseNamedLine=function(){if(!this.peek(p.BracketL))return null;var t=this.createNode(g.GridLine);for(this.consumeToken();t.addChild(this._parseIdent()););return this.accept(p.BracketR)?this.finish(t):this.finish(t,y.RightSquareBracketExpected)},e.prototype._parseBinaryExpr=function(t,n){var r=this.create(Tr);if(!r.setLeft(t||this._parseTerm()))return null;if(!r.setOperator(n||this._parseOperator()))return this.finish(r);if(!r.setRight(this._parseTerm()))return this.finish(r,y.TermExpected);r=this.finish(r);var i=this._parseOperator();return i&&(r=this._parseBinaryExpr(r,i)),this.finish(r)},e.prototype._parseTerm=function(){var t=this.create(yh);return t.setOperator(this._parseUnaryOperator()),t.setExpression(this._parseTermExpression())?this.finish(t):null},e.prototype._parseTermExpression=function(){return this._parseURILiteral()||this._parseUnicodeRange()||this._parseFunction()||this._parseIdent()||this._parseStringLiteral()||this._parseNumeric()||this._parseHexColor()||this._parseOperation()||this._parseNamedLine()},e.prototype._parseOperation=function(){if(!this.peek(p.ParenthesisL))return null;var t=this.create(U);return this.consumeToken(),t.addChild(this._parseExpr()),this.accept(p.ParenthesisR)?this.finish(t):this.finish(t,y.RightParenthesisExpected)},e.prototype._parseNumeric=function(){if(this.peek(p.Num)||this.peek(p.Percentage)||this.peek(p.Resolution)||this.peek(p.Length)||this.peek(p.EMS)||this.peek(p.EXS)||this.peek(p.Angle)||this.peek(p.Time)||this.peek(p.Dimension)||this.peek(p.Freq)){var t=this.create(Or);return this.consumeToken(),this.finish(t)}return null},e.prototype._parseStringLiteral=function(){if(!this.peek(p.String)&&!this.peek(p.BadString))return null;var t=this.createNode(g.StringLiteral);return this.consumeToken(),this.finish(t)},e.prototype._parseURILiteral=function(){if(!this.peekRegExp(p.Ident,/^url(-prefix)?$/i))return null;var t=this.mark(),n=this.createNode(g.URILiteral);return this.accept(p.Ident),this.hasWhitespace()||!this.peek(p.ParenthesisL)?(this.restoreAtMark(t),null):(this.scanner.inURL=!0,this.consumeToken(),n.addChild(this._parseURLArgument()),this.scanner.inURL=!1,this.accept(p.ParenthesisR)?this.finish(n):this.finish(n,y.RightParenthesisExpected))},e.prototype._parseURLArgument=function(){var t=this.create(U);return!this.accept(p.String)&&!this.accept(p.BadString)&&!this.acceptUnquotedString()?null:this.finish(t)},e.prototype._parseIdent=function(t){if(!this.peek(p.Ident))return null;var n=this.create(Ae);return 
t&&(n.referenceTypes=t),n.isCustomProperty=this.peekRegExp(p.Ident,/^--/),this.consumeToken(),this.finish(n)},e.prototype._parseFunction=function(){var t=this.mark(),n=this.create(Jt);if(!n.setIdentifier(this._parseFunctionIdentifier()))return null;if(this.hasWhitespace()||!this.accept(p.ParenthesisL))return this.restoreAtMark(t),null;if(n.getArguments().addChild(this._parseFunctionArgument()))for(;this.accept(p.Comma)&&!this.peek(p.ParenthesisR);)n.getArguments().addChild(this._parseFunctionArgument())||this.markError(n,y.ExpressionExpected);return this.accept(p.ParenthesisR)?this.finish(n):this.finish(n,y.RightParenthesisExpected)},e.prototype._parseFunctionIdentifier=function(){if(!this.peek(p.Ident))return null;var t=this.create(Ae);if(t.referenceTypes=[J.Function],this.acceptIdent("progid")){if(this.accept(p.Colon))for(;this.accept(p.Ident)&&this.acceptDelim("."););return this.finish(t)}return this.consumeToken(),this.finish(t)},e.prototype._parseFunctionArgument=function(){var t=this.create(Rt);return t.setValue(this._parseExpr(!0))?this.finish(t):null},e.prototype._parseHexColor=function(){if(this.peekRegExp(p.Hash,/^#([A-Fa-f0-9]{3}|[A-Fa-f0-9]{4}|[A-Fa-f0-9]{6}|[A-Fa-f0-9]{8})$/g)){var t=this.create(Wr);return this.consumeToken(),this.finish(t)}else return null},e}();function Yh(e,t){var n=0,r=e.length;if(r===0)return 0;for(;nt+n||this.offset===t&&this.length===n?this.findInScope(t,n):null},e.prototype.findInScope=function(t,n){n===void 0&&(n=0);var r=t+n,i=Yh(this.children,function(o){return o.offset>r});if(i===0)return this;var s=this.children[i-1];return s.offset<=t&&s.offset+s.length>=t+n?s.findInScope(t,n):this},e.prototype.addSymbol=function(t){this.symbols.push(t)},e.prototype.getSymbol=function(t,n){for(var r=0;r{var e={470:r=>{function i(l){if(typeof l!="string")throw new TypeError("Path must be a string. 
Received "+JSON.stringify(l))}function s(l,c){for(var h,d="",u=0,f=-1,m=0,v=0;v<=l.length;++v){if(v2){var b=d.lastIndexOf("/");if(b!==d.length-1){b===-1?(d="",u=0):u=(d=d.slice(0,b)).length-1-d.lastIndexOf("/"),f=v,m=0;continue}}else if(d.length===2||d.length===1){d="",u=0,f=v,m=0;continue}}c&&(d.length>0?d+="/..":d="..",u=2)}else d.length>0?d+="/"+l.slice(f+1,v):d=l.slice(f+1,v),u=v-f-1;f=v,m=0}else h===46&&m!==-1?++m:m=-1}return d}var o={resolve:function(){for(var l,c="",h=!1,d=arguments.length-1;d>=-1&&!h;d--){var u;d>=0?u=arguments[d]:(l===void 0&&(l=process.cwd()),u=l),i(u),u.length!==0&&(c=u+"/"+c,h=u.charCodeAt(0)===47)}return c=s(c,!h),h?c.length>0?"/"+c:"/":c.length>0?c:"."},normalize:function(l){if(i(l),l.length===0)return".";var c=l.charCodeAt(0)===47,h=l.charCodeAt(l.length-1)===47;return(l=s(l,!c)).length!==0||c||(l="."),l.length>0&&h&&(l+="/"),c?"/"+l:l},isAbsolute:function(l){return i(l),l.length>0&&l.charCodeAt(0)===47},join:function(){if(arguments.length===0)return".";for(var l,c=0;c0&&(l===void 0?l=h:l+="/"+h)}return l===void 0?".":o.normalize(l)},relative:function(l,c){if(i(l),i(c),l===c||(l=o.resolve(l))===(c=o.resolve(c)))return"";for(var h=1;hv){if(c.charCodeAt(f+w)===47)return c.slice(f+w+1);if(w===0)return c.slice(f+w)}else u>v&&(l.charCodeAt(h+w)===47?b=w:w===0&&(b=0));break}var N=l.charCodeAt(h+w);if(N!==c.charCodeAt(f+w))break;N===47&&(b=w)}var E="";for(w=h+b+1;w<=d;++w)w!==d&&l.charCodeAt(w)!==47||(E.length===0?E+="..":E+="/..");return E.length>0?E+c.slice(f+b):(f+=b,c.charCodeAt(f)===47&&++f,c.slice(f))},_makeLong:function(l){return l},dirname:function(l){if(i(l),l.length===0)return".";for(var c=l.charCodeAt(0),h=c===47,d=-1,u=!0,f=l.length-1;f>=1;--f)if((c=l.charCodeAt(f))===47){if(!u){d=f;break}}else u=!1;return d===-1?h?"/":".":h&&d===1?"//":l.slice(0,d)},basename:function(l,c){if(c!==void 0&&typeof c!="string")throw new TypeError('"ext" argument must be a string');i(l);var h,d=0,u=-1,f=!0;if(c!==void 0&&c.length>0&&c.length<=l.length){if(c.length===l.length&&c===l)return"";var m=c.length-1,v=-1;for(h=l.length-1;h>=0;--h){var b=l.charCodeAt(h);if(b===47){if(!f){d=h+1;break}}else v===-1&&(f=!1,v=h+1),m>=0&&(b===c.charCodeAt(m)?--m==-1&&(u=h):(m=-1,u=v))}return d===u?u=v:u===-1&&(u=l.length),l.slice(d,u)}for(h=l.length-1;h>=0;--h)if(l.charCodeAt(h)===47){if(!f){d=h+1;break}}else u===-1&&(f=!1,u=h+1);return u===-1?"":l.slice(d,u)},extname:function(l){i(l);for(var c=-1,h=0,d=-1,u=!0,f=0,m=l.length-1;m>=0;--m){var v=l.charCodeAt(m);if(v!==47)d===-1&&(u=!1,d=m+1),v===46?c===-1?c=m:f!==1&&(f=1):c!==-1&&(f=-1);else if(!u){h=m+1;break}}return c===-1||d===-1||f===0||f===1&&c===d-1&&c===h+1?"":l.slice(c,d)},format:function(l){if(l===null||typeof l!="object")throw new TypeError('The "pathObject" argument must be of type Object. 
Received type '+typeof l);return function(c,h){var d=h.dir||h.root,u=h.base||(h.name||"")+(h.ext||"");return d?d===h.root?d+u:d+"/"+u:u}(0,l)},parse:function(l){i(l);var c={root:"",dir:"",base:"",ext:"",name:""};if(l.length===0)return c;var h,d=l.charCodeAt(0),u=d===47;u?(c.root="/",h=1):h=0;for(var f=-1,m=0,v=-1,b=!0,w=l.length-1,N=0;w>=h;--w)if((d=l.charCodeAt(w))!==47)v===-1&&(b=!1,v=w+1),d===46?f===-1?f=w:N!==1&&(N=1):f!==-1&&(N=-1);else if(!b){m=w+1;break}return f===-1||v===-1||N===0||N===1&&f===v-1&&f===m+1?v!==-1&&(c.base=c.name=m===0&&u?l.slice(1,v):l.slice(m,v)):(m===0&&u?(c.name=l.slice(1,f),c.base=l.slice(1,v)):(c.name=l.slice(m,f),c.base=l.slice(m,v)),c.ext=l.slice(f,v)),m>0?c.dir=l.slice(0,m-1):u&&(c.dir="/"),c},sep:"/",delimiter:":",win32:null,posix:null};o.posix=o,r.exports=o},447:(r,i,s)=>{var o;if(s.r(i),s.d(i,{URI:()=>E,Utils:()=>$}),typeof process=="object")o=process.platform==="win32";else if(typeof navigator=="object"){var l=navigator.userAgent;o=l.indexOf("Windows")>=0}var c,h,d=(c=function(D,x){return(c=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(R,I){R.__proto__=I}||function(R,I){for(var H in I)Object.prototype.hasOwnProperty.call(I,H)&&(R[H]=I[H])})(D,x)},function(D,x){if(typeof x!="function"&&x!==null)throw new TypeError("Class extends value "+String(x)+" is not a constructor or null");function R(){this.constructor=D}c(D,x),D.prototype=x===null?Object.create(x):(R.prototype=x.prototype,new R)}),u=/^\w[\w\d+.-]*$/,f=/^\//,m=/^\/\//;function v(D,x){if(!D.scheme&&x)throw new Error('[UriError]: Scheme is missing: {scheme: "", authority: "'.concat(D.authority,'", path: "').concat(D.path,'", query: "').concat(D.query,'", fragment: "').concat(D.fragment,'"}'));if(D.scheme&&!u.test(D.scheme))throw new Error("[UriError]: Scheme contains illegal characters.");if(D.path){if(D.authority){if(!f.test(D.path))throw new Error('[UriError]: If a URI contains an authority component, then the path component must either be empty or begin with a slash ("/") character')}else if(m.test(D.path))throw new Error('[UriError]: If a URI does not contain an authority component, then the path cannot begin with two slash characters ("//")')}}var b="",w="/",N=/^(([^:/?#]+?):)?(\/\/([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?/,E=function(){function D(x,R,I,H,G,Z){Z===void 0&&(Z=!1),typeof x=="object"?(this.scheme=x.scheme||b,this.authority=x.authority||b,this.path=x.path||b,this.query=x.query||b,this.fragment=x.fragment||b):(this.scheme=function(Be,be){return Be||be?Be:"file"}(x,Z),this.authority=R||b,this.path=function(Be,be){switch(Be){case"https":case"http":case"file":be?be[0]!==w&&(be=w+be):be=w}return be}(this.scheme,I||b),this.query=H||b,this.fragment=G||b,v(this,Z))}return D.isUri=function(x){return x instanceof D||!!x&&typeof x.authority=="string"&&typeof x.fragment=="string"&&typeof x.path=="string"&&typeof x.query=="string"&&typeof x.scheme=="string"&&typeof x.fsPath=="string"&&typeof x.with=="function"&&typeof x.toString=="function"},Object.defineProperty(D.prototype,"fsPath",{get:function(){return C(this,!1)},enumerable:!1,configurable:!0}),D.prototype.with=function(x){if(!x)return this;var R=x.scheme,I=x.authority,H=x.path,G=x.query,Z=x.fragment;return R===void 0?R=this.scheme:R===null&&(R=b),I===void 0?I=this.authority:I===null&&(I=b),H===void 0?H=this.path:H===null&&(H=b),G===void 0?G=this.query:G===null&&(G=b),Z===void 0?Z=this.fragment:Z===null&&(Z=b),R===this.scheme&&I===this.authority&&H===this.path&&G===this.query&&Z===this.fragment?this:new 
k(R,I,H,G,Z)},D.parse=function(x,R){R===void 0&&(R=!1);var I=N.exec(x);return I?new k(I[2]||b,L(I[4]||b),L(I[5]||b),L(I[7]||b),L(I[9]||b),R):new k(b,b,b,b,b)},D.file=function(x){var R=b;if(o&&(x=x.replace(/\\/g,w)),x[0]===w&&x[1]===w){var I=x.indexOf(w,2);I===-1?(R=x.substring(2),x=w):(R=x.substring(2,I),x=x.substring(I)||w)}return new k("file",R,x,b,b)},D.from=function(x){var R=new k(x.scheme,x.authority,x.path,x.query,x.fragment);return v(R,!0),R},D.prototype.toString=function(x){return x===void 0&&(x=!1),z(this,x)},D.prototype.toJSON=function(){return this},D.revive=function(x){if(x){if(x instanceof D)return x;var R=new k(x);return R._formatted=x.external,R._fsPath=x._sep===M?x.fsPath:null,R}return x},D}(),M=o?1:void 0,k=function(D){function x(){var R=D!==null&&D.apply(this,arguments)||this;return R._formatted=null,R._fsPath=null,R}return d(x,D),Object.defineProperty(x.prototype,"fsPath",{get:function(){return this._fsPath||(this._fsPath=C(this,!1)),this._fsPath},enumerable:!1,configurable:!0}),x.prototype.toString=function(R){return R===void 0&&(R=!1),R?z(this,!0):(this._formatted||(this._formatted=z(this,!1)),this._formatted)},x.prototype.toJSON=function(){var R={$mid:1};return this._fsPath&&(R.fsPath=this._fsPath,R._sep=M),this._formatted&&(R.external=this._formatted),this.path&&(R.path=this.path),this.scheme&&(R.scheme=this.scheme),this.authority&&(R.authority=this.authority),this.query&&(R.query=this.query),this.fragment&&(R.fragment=this.fragment),R},x}(E),P=((h={})[58]="%3A",h[47]="%2F",h[63]="%3F",h[35]="%23",h[91]="%5B",h[93]="%5D",h[64]="%40",h[33]="%21",h[36]="%24",h[38]="%26",h[39]="%27",h[40]="%28",h[41]="%29",h[42]="%2A",h[43]="%2B",h[44]="%2C",h[59]="%3B",h[61]="%3D",h[32]="%20",h);function F(D,x){for(var R=void 0,I=-1,H=0;H<D.length;H++){var G=D.charCodeAt(H);if(G>=97&&G<=122||G>=65&&G<=90||G>=48&&G<=57||G===45||G===46||G===95||G===126||x&&G===47)I!==-1&&(R+=encodeURIComponent(D.substring(I,H)),I=-1),R!==void 0&&(R+=D.charAt(H));else{R===void 0&&(R=D.substr(0,H));var Z=P[G];Z!==void 0?(I!==-1&&(R+=encodeURIComponent(D.substring(I,H)),I=-1),R+=Z):I===-1&&(I=H)}}return I!==-1&&(R+=encodeURIComponent(D.substring(I))),R!==void 0?R:D}function _(D){for(var x=void 0,R=0;R<D.length;R++){var I=D.charCodeAt(R);I===35||I===63?(x===void 0&&(x=D.substr(0,R)),x+=P[I]):x!==void 0&&(x+=D.charAt(R))}return x!==void 0?x:D}function C(D,x){var R;return R=D.authority&&D.path.length>1&&D.scheme==="file"?"//".concat(D.authority).concat(D.path):D.path.charCodeAt(0)===47&&(D.path.charCodeAt(1)>=65&&D.path.charCodeAt(1)<=90||D.path.charCodeAt(1)>=97&&D.path.charCodeAt(1)<=122)&&D.path.charCodeAt(2)===58?x?D.path.substr(1):D.path[1].toLowerCase()+D.path.substr(2):D.path,o&&(R=R.replace(/\//g,"\\")),R}function z(D,x){var R=x?_:F,I="",H=D.scheme,G=D.authority,Z=D.path,Be=D.query,be=D.fragment;if(H&&(I+=H,I+=":"),(G||H==="file")&&(I+=w,I+=w),G){var De=G.indexOf("@");if(De!==-1){var vt=G.substr(0,De);G=G.substr(De+1),(De=vt.indexOf(":"))===-1?I+=R(vt,!1):(I+=R(vt.substr(0,De),!1),I+=":",I+=R(vt.substr(De+1),!1)),I+="@"}(De=(G=G.toLowerCase()).indexOf(":"))===-1?I+=R(G,!1):(I+=R(G.substr(0,De),!1),I+=G.substr(De))}if(Z){if(Z.length>=3&&Z.charCodeAt(0)===47&&Z.charCodeAt(2)===58)(Xe=Z.charCodeAt(1))>=65&&Xe<=90&&(Z="/".concat(String.fromCharCode(Xe+32),":").concat(Z.substr(3)));else if(Z.length>=2&&Z.charCodeAt(1)===58){var Xe;(Xe=Z.charCodeAt(0))>=65&&Xe<=90&&(Z="".concat(String.fromCharCode(Xe+32),":").concat(Z.substr(2)))}I+=R(Z,!0)}return Be&&(I+="?",I+=R(Be,!1)),be&&(I+="#",I+=x?be:F(be,!1)),I}function W(D){try{return decodeURIComponent(D)}catch{return D.length>3?D.substr(0,3)+W(D.substr(3)):D}}var T=/(%[0-9A-Za-z][0-9A-Za-z])+/g;function L(D){return D.match(T)?D.replace(T,function(x){return W(x)}):D}var 
$,te=s(470),Ee=function(D,x,R){if(R||arguments.length===2)for(var I,H=0,G=x.length;H{for(var s in i)n.o(i,s)&&!n.o(r,s)&&Object.defineProperty(r,s,{enumerable:!0,get:i[s]})},n.o=(r,i)=>Object.prototype.hasOwnProperty.call(r,i),n.r=r=>{typeof Symbol<"u"&&Symbol.toStringTag&&Object.defineProperty(r,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(r,"__esModule",{value:!0})},n(447)})();var{URI:Qr,Utils:ei}=So,ed=function(e,t,n){if(n||arguments.length===2)for(var r=0,i=t.length,s;r0&&s[s.length-1])&&(h[0]===6||h[0]===2)){n=0;continue}if(h[0]===3&&(!s||h[1]>s[0]&&h[1]0&&s[s.length-1])&&(h[0]===6||h[0]===2)){n=0;continue}if(h[0]===3&&(!s||h[1]>s[0]&&h[1]=0;o--){var l=this.nodePath[o];if(l instanceof Pr)this.getCompletionsForDeclarationProperty(l.getParent(),s);else if(l instanceof va)l.parent instanceof Ur?this.getVariableProposals(null,s):this.getCompletionsForExpression(l,s);else if(l instanceof Dt){var c=l.findAParent(g.ExtendsReference,g.Ruleset);if(c)if(c.type===g.ExtendsReference)this.getCompletionsForExtendsReference(c,l,s);else{var h=c;this.getCompletionsForSelector(h,h&&h.isNested(),s)}}else if(l instanceof Rt)this.getCompletionsForFunctionArgument(l,l.getParent(),s);else if(l instanceof Mr)this.getCompletionsForDeclarations(l,s);else if(l instanceof Rn)this.getCompletionsForVariableDeclaration(l,s);else if(l instanceof Et)this.getCompletionsForRuleSet(l,s);else if(l instanceof Ur)this.getCompletionsForInterpolation(l,s);else if(l instanceof Dn)this.getCompletionsForFunctionDeclaration(l,s);else if(l instanceof An)this.getCompletionsForMixinReference(l,s);else if(l instanceof Jt)this.getCompletionsForFunctionArgument(null,l,s);else if(l instanceof Lr)this.getCompletionsForSupports(l,s);else if(l instanceof Xt)this.getCompletionsForSupportsCondition(l,s);else if(l instanceof Yt)this.getCompletionsForExtendsReference(l,null,s);else if(l.type===g.URILiteral)this.getCompletionForUriLiteralValue(l,s);else if(l.parent===null)this.getCompletionForTopLevel(s);else if(l.type===g.StringLiteral&&this.isImportPathParent(l.parent.type))this.getCompletionForImportPath(l,s);else continue;if(s.items.length>0||this.offset>l.offset)return this.finalize(s)}return this.getCompletionsForStylesheet(s),s.items.length===0&&this.variablePrefix&&this.currentWord.indexOf(this.variablePrefix)===0&&this.getVariableProposals(null,s),this.finalize(s)}finally{this.position=null,this.currentWord=null,this.textDocument=null,this.styleSheet=null,this.symbolContext=null,this.defaultReplaceRange=null,this.nodePath=null}},e.prototype.isImportPathParent=function(t){return t===g.Import},e.prototype.finalize=function(t){return t},e.prototype.findInNodePath=function(){for(var t=[],n=0;n=0;r--){var i=this.nodePath[r];if(t.indexOf(i.type)!==-1)return i}return null},e.prototype.getCompletionsForDeclarationProperty=function(t,n){return this.getPropertyProposals(t,n)},e.prototype.getPropertyProposals=function(t,n){var r=this,i=this.isTriggerPropertyValueCompletionEnabled,s=this.isCompletePropertyWithSemicolonEnabled,o=this.cssDataManager.getProperties();return o.forEach(function(l){var c,h,d=!1;t?(c=r.getCompletionRange(t.getProperty()),h=l.name,Me(t.colonPosition)||(h+=": ",d=!0)):(c=r.getCompletionRange(null),h=l.name+": ",d=!0),!t&&s&&(h+="$0;"),t&&!t.semicolonPosition&&s&&r.offset>=r.textDocument.offsetAt(c.end)&&(h+="$0;");var 
u={label:l.name,documentation:it(l,r.doesSupportMarkdown()),tags:an(l)?[ft.Deprecated]:[],textEdit:j.replace(c,h),insertTextFormat:Fe.Snippet,kind:B.Property};l.restrictions||(d=!1),i&&d&&(u.command=_o);var f=typeof l.relevance=="number"?Math.min(Math.max(l.relevance,0),99):50,m=(255-f).toString(16),v=le(l.name,"-")?Ve.VendorPrefixed:Ve.Normal;u.sortText=v+"_"+m,n.items.push(u)}),this.completionParticipants.forEach(function(l){l.onCssProperty&&l.onCssProperty({propertyName:r.currentWord,range:r.defaultReplaceRange})}),n},Object.defineProperty(e.prototype,"isTriggerPropertyValueCompletionEnabled",{get:function(){var t,n;return(n=(t=this.documentSettings)===null||t===void 0?void 0:t.triggerPropertyValueCompletion)!==null&&n!==void 0?n:!0},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"isCompletePropertyWithSemicolonEnabled",{get:function(){var t,n;return(n=(t=this.documentSettings)===null||t===void 0?void 0:t.completePropertyWithSemicolon)!==null&&n!==void 0?n:!0},enumerable:!1,configurable:!0}),e.prototype.getCompletionsForDeclarationValue=function(t,n){for(var r=this,i=t.getFullPropertyName(),s=this.cssDataManager.getProperty(i),o=t.getValue()||null;o&&o.hasChildren();)o=o.findChildAtOffset(this.offset,!1);if(this.completionParticipants.forEach(function(v){v.onCssPropertyValue&&v.onCssPropertyValue({propertyName:i,propertyValue:r.currentWord,range:r.getCompletionRange(o)})}),s){if(s.restrictions)for(var l=0,c=s.restrictions;l=t.offset+2&&this.getVariableProposals(null,n),n},e.prototype.getVariableProposals=function(t,n){for(var r=this.getSymbolContext().findSymbolsAtOffset(this.offset,J.Variable),i=0,s=r;i0){var s=this.currentWord.match(/^-?\d[\.\d+]*/);s&&(i=s[0],r.isIncomplete=i.length===this.currentWord.length)}else this.currentWord.length===0&&(r.isIncomplete=!0);if(n&&n.parent&&n.parent.type===g.Term&&(n=n.getParent()),t.restrictions)for(var o=0,l=t.restrictions;o=r.end;if(i)return this.getCompletionForTopLevel(n);var s=!r||this.offset<=r.offset;return s?this.getCompletionsForSelector(t,t.isNested(),n):this.getCompletionsForDeclarations(t.getDeclarations(),n)},e.prototype.getCompletionsForSelector=function(t,n,r){var i=this,s=this.findInNodePath(g.PseudoSelector,g.IdentifierSelector,g.ClassSelector,g.ElementNameSelector);!s&&this.hasCharacterAtPosition(this.offset-this.currentWord.length-1,":")&&(this.currentWord=":"+this.currentWord,this.hasCharacterAtPosition(this.offset-this.currentWord.length-1,":")&&(this.currentWord=":"+this.currentWord),this.defaultReplaceRange=K.create(we.create(this.position.line,this.position.character-this.currentWord.length),this.position));var o=this.cssDataManager.getPseudoClasses();o.forEach(function(w){var N=Nt(w.name),E={label:w.name,textEdit:j.replace(i.getCompletionRange(s),N),documentation:it(w,i.doesSupportMarkdown()),tags:an(w)?[ft.Deprecated]:[],kind:B.Function,insertTextFormat:w.name!==N?qe:void 0};le(w.name,":-")&&(E.sortText=Ve.VendorPrefixed),r.items.push(E)});var l=this.cssDataManager.getPseudoElements();if(l.forEach(function(w){var N=Nt(w.name),E={label:w.name,textEdit:j.replace(i.getCompletionRange(s),N),documentation:it(w,i.doesSupportMarkdown()),tags:an(w)?[ft.Deprecated]:[],kind:B.Function,insertTextFormat:w.name!==N?qe:void 0};le(w.name,"::-")&&(E.sortText=Ve.VendorPrefixed),r.items.push(E)}),!n){for(var c=0,h=Gh;c0){var N=v.substr(w.offset,w.length);return N.charAt(0)==="."&&!m[N]&&(m[N]=!0,r.items.push({label:N,textEdit:j.replace(i.getCompletionRange(s),N),kind:B.Keyword})),!1}return!0}),t&&t.isNested()){var 
b=t.getSelectors().findFirstChildBeforeOffset(this.offset);b&&t.getSelectors().getChildren().indexOf(b)===0&&this.getPropertyProposals(null,r)}return r},e.prototype.getCompletionsForDeclarations=function(t,n){if(!t||this.offset===t.offset)return n;var r=t.findFirstChildBeforeOffset(this.offset);if(!r)return this.getCompletionsForDeclarationProperty(null,n);if(r instanceof Nr){var i=r;if(!Me(i.colonPosition)||this.offset<=i.colonPosition)return this.getCompletionsForDeclarationProperty(i,n);if(Me(i.semicolonPosition)&&i.semicolonPositiont.colonPosition&&this.getVariableProposals(t.getValue(),n),n},e.prototype.getCompletionsForExpression=function(t,n){var r=t.getParent();if(r instanceof Rt)return this.getCompletionsForFunctionArgument(r,r.getParent(),n),n;var i=t.findParent(g.Declaration);if(!i)return this.getTermProposals(void 0,null,n),n;var s=t.findChildAtOffset(this.offset,!0);return s?s instanceof Or||s instanceof Ae?this.getCompletionsForDeclarationValue(i,n):n:this.getCompletionsForDeclarationValue(i,n)},e.prototype.getCompletionsForFunctionArgument=function(t,n,r){var i=n.getIdentifier();return i&&i.matches("var")&&(!n.getArguments().hasChildren()||n.getArguments().getChild(0)===t)&&this.getVariableProposalsForCSSVarFunction(r),r},e.prototype.getCompletionsForFunctionDeclaration=function(t,n){var r=t.getDeclarations();return r&&this.offset>r.offset&&this.offsett.lParent&&(!Me(t.rParent)||this.offset<=t.rParent)?this.getCompletionsForDeclarationProperty(null,n):n},e.prototype.getCompletionsForSupports=function(t,n){var r=t.getDeclarations(),i=!r||this.offset<=r.offset;if(i){var s=t.findFirstChildBeforeOffset(this.offset);return s instanceof Xt?this.getCompletionsForSupportsCondition(s,n):n}return this.getCompletionForTopLevel(n)},e.prototype.getCompletionsForExtendsReference=function(t,n,r){return r},e.prototype.getCompletionForUriLiteralValue=function(t,n){var r,i,s;if(t.hasChildren()){var l=t.getChild(0);r=l.getText(),i=this.position,s=this.getCompletionRange(l)}else{r="",i=this.position;var o=this.textDocument.positionAt(t.offset+4);s=K.create(o,o)}return this.completionParticipants.forEach(function(c){c.onCssURILiteralValue&&c.onCssURILiteralValue({uriValue:r,position:i,range:s})}),n},e.prototype.getCompletionForImportPath=function(t,n){var r=this;return this.completionParticipants.forEach(function(i){i.onCssImportPath&&i.onCssImportPath({pathValue:t.getText(),position:r.position,range:r.getCompletionRange(t)})}),n},e.prototype.hasCharacterAtPosition=function(t,n){var r=this.textDocument.getText();return t>=0&&t=0&&` +\r":{[()]},*>+`.indexOf(r.charAt(n))===-1;)n--;return r.substring(n+1,t)}function Fo(e){return e.toLowerCase()in Un||/(^#[0-9A-F]{6}$)|(^#[0-9A-F]{3}$)/i.test(e)}var Eo=function(){var e=function(t,n){return e=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(r,i){r.__proto__=i}||function(r,i){for(var s in i)Object.prototype.hasOwnProperty.call(i,s)&&(r[s]=i[s])},e(t,n)};return function(t,n){if(typeof n!="function"&&n!==null)throw new TypeError("Class extends value "+String(n)+" is not a constructor or null");e(t,n);function r(){this.constructor=t}t.prototype=n===null?Object.create(n):(r.prototype=n.prototype,new r)}}(),pd=Ie(),ai=function(){function e(){this.parent=null,this.children=null,this.attributes=null}return e.prototype.findAttribute=function(t){if(this.attributes)for(var n=0,r=this.attributes;n"),this.writeLine(n,i.join(""))},e}(),$e;(function(e){function t(r,i){return i+n(r)+i}e.ensure=t;function n(r){var i=r.match(/^['"](.*)["']$/);return 
i?i[1]:r}e.remove=n})($e||($e={}));var Ro=function(){function e(){this.id=0,this.attr=0,this.tag=0}return e}();function Ao(e,t){for(var n=new ai,r=0,i=e.getChildren();r1){var h=t.cloneWithParent();n.addChild(h.findRoot()),n=h}n.append(o[c])}}break;case g.SelectorPlaceholder:if(s.matches("@at-root"))return n;case g.ElementNameSelector:var d=s.getText();n.addAttr("name",d==="*"?"element":Ne(d));break;case g.ClassSelector:n.addAttr("class",Ne(s.getText().substring(1)));break;case g.IdentifierSelector:n.addAttr("id",Ne(s.getText().substring(1)));break;case g.MixinDeclaration:n.addAttr("class",s.getName());break;case g.PseudoSelector:n.addAttr(Ne(s.getText()),"");break;case g.AttributeSelector:var u=s,f=u.getIdentifier();if(f){var m=u.getValue(),v=u.getOperator(),b=void 0;if(m&&v)switch(Ne(v.getText())){case"|=":b="".concat($e.remove(Ne(m.getText())),"-\u2026");break;case"^=":b="".concat($e.remove(Ne(m.getText())),"\u2026");break;case"$=":b="\u2026".concat($e.remove(Ne(m.getText())));break;case"~=":b=" \u2026 ".concat($e.remove(Ne(m.getText()))," \u2026 ");break;case"*=":b="\u2026".concat($e.remove(Ne(m.getText())),"\u2026");break;default:b=$e.remove(Ne(m.getText()));break}n.addAttr(Ne(f.getText()),b)}break}}return n}function Ne(e){var t=new Ht;t.setSource(e);var n=t.scanUnquotedString();return n?n.text:e}var fd=function(){function e(t){this.cssDataManager=t}return e.prototype.selectorToMarkedString=function(t){var n=bd(t);if(n){var r=new Do('"').print(n);return r.push(this.selectorToSpecificityMarkedString(t)),r}else return[]},e.prototype.simpleSelectorToMarkedString=function(t){var n=Ao(t),r=new Do('"').print(n);return r.push(this.selectorToSpecificityMarkedString(t)),r},e.prototype.isPseudoElementIdentifier=function(t){var n=t.match(/^::?([\w-]+)/);return n?!!this.cssDataManager.getPseudoElement("::"+n[1]):!1},e.prototype.selectorToSpecificityMarkedString=function(t){var n=this,r=function(s){var o=new Ro;e:for(var l=0,c=s.getChildren();l0){for(var u=new Ro,f=0,m=h.getChildren();fu.id){u=E;continue}else if(E.idu.attr){u=E;continue}else if(E.attru.tag){u=E;continue}}}o.id+=u.id,o.attr+=u.attr,o.tag+=u.tag;continue e}o.attr++;break}if(h.getChildren().length>0){var E=r(h);o.id+=E.id,o.attr+=E.attr,o.tag+=E.tag}}return o},i=r(t);return pd("specificity","[Selector Specificity](https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity): ({0}, {1}, {2})",i.id,i.attr,i.tag)},e}(),md=function(){function e(t){this.prev=null,this.element=t}return e.prototype.processSelector=function(t){var n=null;if(!(this.element instanceof Pt)&&t.getChildren().some(function(d){return d.hasChildren()&&d.getChild(0).type===g.SelectorCombinator})){var r=this.element.findRoot();r.parent instanceof Pt&&(n=this.element,this.element=r.parent,this.element.removeChild(r),this.prev=null)}for(var i=0,s=t.getChildren();i=0;o--){var l=n[o].getSelectors().getChild(0);l&&s.processSelector(l)}return s.processSelector(e),t}var li=function(){function e(t,n){this.clientCapabilities=t,this.cssDataManager=n,this.selectorPrinting=new fd(n)}return e.prototype.configure=function(t){this.defaultSettings=t},e.prototype.doHover=function(t,n,r,i){i===void 0&&(i=this.defaultSettings);function s(w){return K.create(t.positionAt(w.offset),t.positionAt(w.end))}for(var o=t.offsetAt(n),l=zr(r,o),c=null,h=0;h0&&s[s.length-1])&&(h[0]===6||h[0]===2)){n=0;continue}if(h[0]===3&&(!s||h[1]>s[0]&&h[1]=s.length/2&&o.push({property:N.name,score:E})}),o.sort(function(N,E){return E.score-N.score||N.property.localeCompare(E.property)});for(var 
l=3,c=0,h=o;c=0;c--){var h=l[c];if(h instanceof Ue){var d=h.getProperty();if(d&&d.offset===s&&d.end===o){this.getFixesForUnknownProperty(t,d,r,i);return}}}},e}(),_d=function(){function e(t){this.fullPropertyName=t.getFullPropertyName().toLowerCase(),this.node=t}return e}();function cn(e,t,n,r){var i=e[t];i.value=n,n&&(yo(i.properties,r)||i.properties.push(r))}function Fd(e,t,n){cn(e,"top",t,n),cn(e,"right",t,n),cn(e,"bottom",t,n),cn(e,"left",t,n)}function me(e,t,n,r){t==="top"||t==="right"||t==="bottom"||t==="left"?cn(e,t,n,r):Fd(e,n,r)}function di(e,t,n){switch(t.length){case 1:me(e,void 0,t[0],n);break;case 2:me(e,"top",t[0],n),me(e,"bottom",t[0],n),me(e,"right",t[1],n),me(e,"left",t[1],n);break;case 3:me(e,"top",t[0],n),me(e,"right",t[1],n),me(e,"left",t[1],n),me(e,"bottom",t[2],n);break;case 4:me(e,"top",t[0],n),me(e,"right",t[1],n),me(e,"bottom",t[2],n),me(e,"left",t[3],n);break}}function ui(e,t){for(var n=0,r=t;n"u"))switch(i.fullPropertyName){case"box-sizing":return{top:{value:!1,properties:[]},right:{value:!1,properties:[]},bottom:{value:!1,properties:[]},left:{value:!1,properties:[]}};case"width":t.width=i;break;case"height":t.height=i;break;default:var o=i.fullPropertyName.split("-");switch(o[0]){case"border":switch(o[1]){case void 0:case"top":case"right":case"bottom":case"left":switch(o[2]){case void 0:me(t,o[1],Dd(s),i);break;case"width":me(t,o[1],hn(s,!1),i);break;case"style":me(t,o[1],Gn(s,!0),i);break}break;case"width":di(t,Lo(s.getChildren(),!1),i);break;case"style":di(t,Ed(s.getChildren(),!0),i);break}break;case"padding":o.length===1?di(t,Lo(s.getChildren(),!0),i):me(t,o[1],hn(s,!0),i);break}break}}return t}var He=Ie(),To=function(){function e(){this.data={}}return e.prototype.add=function(t,n,r){var i=this.data[t];i||(i={nodes:[],names:[]},this.data[t]=i),i.names.push(n),r&&i.nodes.push(r)},e}(),Ad=function(){function e(t,n,r){var i=this;this.cssDataManager=r,this.warnings=[],this.settings=n,this.documentText=t.getText(),this.keyframes=new To,this.validProperties={};var s=n.getSetting(xd.ValidProperties);Array.isArray(s)&&s.forEach(function(o){if(typeof o=="string"){var l=o.trim().toLowerCase();l.length&&(i.validProperties[l]=!0)}})}return e.entries=function(t,n,r,i,s){var o=new e(n,r,i);return t.acceptVisitor(o),o.completeValidations(),o.getEntries(s)},e.prototype.isValidPropertyDeclaration=function(t){var n=t.fullPropertyName;return this.validProperties[n]},e.prototype.fetch=function(t,n){for(var r=[],i=0,s=t;i0)for(var b=this.fetch(r,"float"),w=0;w0)for(var b=this.fetch(r,"vertical-align"),w=0;w1)for(var F=0;F")||this.peekDelim("<")||this.peekIdent("and")||this.peekIdent("or")||this.peekDelim("%")){var n=this.createNode(g.Operator);return this.consumeToken(),this.finish(n)}return e.prototype._parseOperator.call(this)},t.prototype._parseUnaryOperator=function(){if(this.peekIdent("not")){var n=this.create(U);return this.consumeToken(),this.finish(n)}return e.prototype._parseUnaryOperator.call(this)},t.prototype._parseRuleSetDeclaration=function(){return 
this.peek(p.AtKeyword)?this._parseKeyframe()||this._parseImport()||this._parseMedia(!0)||this._parseFontFace()||this._parseWarnAndDebug()||this._parseControlStatement()||this._parseFunctionDeclaration()||this._parseExtends()||this._parseMixinReference()||this._parseMixinContent()||this._parseMixinDeclaration()||this._parseRuleset(!0)||this._parseSupports(!0)||e.prototype._parseRuleSetDeclarationAtStatement.call(this):this._parseVariableDeclaration()||this._tryParseRuleset(!0)||e.prototype._parseRuleSetDeclaration.call(this)},t.prototype._parseDeclaration=function(n){var r=this._tryParseCustomPropertyDeclaration(n);if(r)return r;var i=this.create(Ue);if(!i.setProperty(this._parseProperty()))return null;if(!this.accept(p.Colon))return this.finish(i,y.ColonExpected,[p.Colon],n||[p.SemiColon]);this.prevToken&&(i.colonPosition=this.prevToken.offset);var s=!1;if(i.setValue(this._parseExpr())&&(s=!0,i.addChild(this._parsePrio())),this.peek(p.CurlyL))i.setNestedProperties(this._parseNestedProperties());else if(!s)return this.finish(i,y.PropertyValueExpected);return this.peek(p.SemiColon)&&(i.semicolonPosition=this.token.offset),this.finish(i)},t.prototype._parseNestedProperties=function(){var n=this.create(ua);return this._parseBody(n,this._parseDeclaration.bind(this))},t.prototype._parseExtends=function(){if(this.peekKeyword("@extend")){var n=this.create(Yt);if(this.consumeToken(),!n.getSelectors().addChild(this._parseSimpleSelector()))return this.finish(n,y.SelectorExpected);for(;this.accept(p.Comma);)n.getSelectors().addChild(this._parseSimpleSelector());return this.accept(p.Exclamation)&&!this.acceptIdent("optional")?this.finish(n,y.UnknownKeyword):this.finish(n)}return null},t.prototype._parseSimpleSelectorBody=function(){return this._parseSelectorCombinator()||this._parseSelectorPlaceholder()||e.prototype._parseSimpleSelectorBody.call(this)},t.prototype._parseSelectorCombinator=function(){if(this.peekDelim("&")){var n=this.createNode(g.SelectorCombinator);for(this.consumeToken();!this.hasWhitespace()&&(this.acceptDelim("-")||this.accept(p.Num)||this.accept(p.Dimension)||n.addChild(this._parseIdent())||this.acceptDelim("&")););return this.finish(n)}return null},t.prototype._parseSelectorPlaceholder=function(){if(this.peekDelim("%")){var n=this.createNode(g.SelectorPlaceholder);return this.consumeToken(),this._parseIdent(),this.finish(n)}else if(this.peekKeyword("@at-root")){var n=this.createNode(g.SelectorPlaceholder);return this.consumeToken(),this.finish(n)}return null},t.prototype._parseElementName=function(){var n=this.mark(),r=e.prototype._parseElementName.call(this);return r&&!this.hasWhitespace()&&this.peek(p.ParenthesisL)?(this.restoreAtMark(n),null):r},t.prototype._tryParsePseudoIdentifier=function(){return this._parseInterpolation()||e.prototype._tryParsePseudoIdentifier.call(this)},t.prototype._parseWarnAndDebug=function(){if(!this.peekKeyword("@debug")&&!this.peekKeyword("@warn")&&!this.peekKeyword("@error"))return null;var n=this.createNode(g.Debug);return this.consumeToken(),n.addChild(this._parseExpr()),this.finish(n)},t.prototype._parseControlStatement=function(n){return n===void 0&&(n=this._parseRuleSetDeclaration.bind(this)),this.peek(p.AtKeyword)?this._parseIfStatement(n)||this._parseForStatement(n)||this._parseEachStatement(n)||this._parseWhileStatement(n):null},t.prototype._parseIfStatement=function(n){return this.peekKeyword("@if")?this._internalParseIfStatement(n):null},t.prototype._internalParseIfStatement=function(n){var 
[minified vendor bundle, content elided: the remainder of this committed dist file bundles the CSS language tooling for the web editor — the vscode-css-languageservice SCSS and LESS parsers (@if/@for/@each/@while, @function, @mixin/@include, @use/@forward, detached rulesets, guards), the LESS built-in function and color completion data, folding-range computation, the js-beautify CSS formatter (Options/Output/InputScanner/Directives/Beautifier), and generated CSS property metadata (align-content, align-items, justify-items, justify-self, align-self, all, alt, animation, animation-delay, …). The angle-bracket tokens inside the bundle (value placeholders such as <time># and code comparisons such as i < n.length) were stripped when the diff was rendered through HTML, so the surviving text is no longer valid JavaScript.]