Apache DolphinScheduler (海豚调度) - An Analysis of the Core Table Structures in the 1.3 Series

Apache DolphinScheduler is a distributed, decentralized, easily extensible visual DAG workflow task scheduling system. It is dedicated to untangling the complex dependencies of data processing pipelines so that the scheduler works out of the box.

Recently, community member Boyi (伯毅) contributed this very thorough analysis of the core workflow table structures to the community; if you find it useful, please share it.

1. Overall workflow storage structure

Every workflow definition (template) created in the dolphinscheduler database is stored in the t_ds_process_definition table.

The structure of this table is shown below:

| No. | Field | Type | Description |
| --- | --- | --- | --- |
| 1 | id | int(11) | primary key |
| 2 | name | varchar(255) | process definition name |
| 3 | version | int(11) | process definition version |
| 4 | release_state | tinyint(4) | release state of the process definition: 0 not online, 1 online |
| 5 | project_id | int(11) | project id |
| 6 | user_id | int(11) | id of the user the process definition belongs to |
| 7 | process_definition_json | longtext | process definition JSON |
| 8 | description | text | process definition description |
| 9 | global_params | text | global parameters |
| 10 | flag | tinyint(4) | whether the process is available: 0 not available, 1 available |
| 11 | locations | text | node coordinate information |
| 12 | connects | text | node connection information |
| 13 | receivers | text | recipients |
| 14 | receivers_cc | text | CC recipients |
| 15 | create_time | datetime | creation time |
| 16 | timeout | int(11) | timeout |
| 17 | tenant_id | int(11) | tenant id |
| 18 | update_time | datetime | update time |
| 19 | modify_by | varchar(36) | user who last modified it |
| 20 | resource_ids | varchar(255) | resource ids |

The core field is process_definition_json, which defines the task information of the DAG and is stored as JSON.
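For orientation, here is a minimal sketch of reading one definition row back out of MySQL and decoding this field. This is not project code: the host, credentials, and row id are placeholders, and pymysql is just one possible client.

```python
import json

import pymysql

# Placeholders: point these at your own dolphinscheduler database.
conn = pymysql.connect(host="localhost", user="root",
                       password="***", database="dolphinscheduler")
try:
    with conn.cursor() as cursor:
        cursor.execute(
            "SELECT name, process_definition_json "
            "FROM t_ds_process_definition WHERE id = %s", (1,))
        name, definition_json = cursor.fetchone()
        definition = json.loads(definition_json)  # the DAG, as described below
        print(name, [task["type"] for task in definition["tasks"]])
finally:
    conn.close()
```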

The common data structure is as follows:

| No. | Field | Type | Description |
| --- | --- | --- | --- |
| 1 | globalParams | Array | global parameters |
| 2 | tasks | Array | the tasks in the process [see the following sections for the per-type structures] |
| 3 | tenantId | int | tenant id |
| 4 | timeout | int | timeout |

Sample data (the tasks array is collapsed here; the per-type structures are covered in the sections below):

```json
{
    "globalParams": [
        { "prop": "global_bizdate", "direct": "IN", "type": "VARCHAR", "value": "${system.biz.date}" }
    ],
    "tasks": [ ... ],
    "tenantId": 0,
    "timeout": 0
}
```
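Since every task type shares this envelope, a consumer of process_definition_json can decode it once and dispatch on each task's type field. A small illustrative sketch (not DolphinScheduler code; the printed fields are the common ones listed in the per-type tables below):

```python
import json

def walk_tasks(process_definition_json: str) -> None:
    """Decode process_definition_json and dispatch on each task's type."""
    definition = json.loads(process_definition_json)
    print("tenantId:", definition["tenantId"], "timeout:", definition["timeout"])
    for task in definition["tasks"]:
        # Fields common to every task type (see the per-type tables below).
        print(task["id"], task["type"], task["name"], "after:", task["preTasks"])
        if task["type"] == "SHELL":
            print("  script:", task["params"]["rawScript"])
        elif task["type"] == "SQL":
            print("  sql:", task["params"]["sql"])
        # ...the remaining types follow the same pattern.
```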

2. Storage structure of each task type in detail

2.1 Shell node

**The Shell node data structure is as follows:**

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | SHELL |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | rawScript | String | Shell script | |
| 6 | localParams | Array | custom parameters | |
| 7 | resourceList | Array | resource files | |
| 8 | description | String | description | |
| 9 | runFlag | String | run flag | |
| 10 | conditionResult | Object | condition branch | |
| 11 | successNode | Array | nodes to jump to on success | |
| 12 | failedNode | Array | nodes to jump to on failure | |
| 13 | dependence | Object | task dependence | mutually exclusive with params |
| 14 | maxRetryTimes | String | maximum number of retries | |
| 15 | retryInterval | String | retry interval | |
| 16 | timeout | Object | timeout control | |
| 17 | taskInstancePriority | String | task priority | |
| 18 | workerGroup | String | worker group | |
| 19 | preTasks | Array | upstream tasks | |

Shell node sample data:

```json
{
    "type": "SHELL",
    "id": "tasks-80760",
    "name": "Shell Task",
    "params": {
        "resourceList": [ { "id": 3, "name": "run.sh", "res": "run.sh" } ],
        "localParams": [],
        "rawScript": "echo \"This is a shell script\""
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.2 SQL node

Runs query and update operations against a specified data source via SQL.

**The SQL node data structure is as follows:**

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | SQL |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | type | String | database type | |
| 6 | datasource | Int | data source id | |
| 7 | sql | String | SQL statement | |
| 8 | udfs | String | UDF functions | UDF function ids, comma-separated |
| 9 | sqlType | String | SQL node type | 0 query, 1 non-query |
| 10 | title | String | email subject | |
| 11 | receivers | String | recipients | |
| 12 | receiversCc | String | CC recipients | |
| 13 | showType | String | email display type | TABLE, ATTACHMENT |
| 14 | connParams | String | connection parameters | |
| 15 | preStatements | Array | pre-SQL | |
| 16 | postStatements | Array | post-SQL | |
| 17 | localParams | Array | custom parameters | |
| 18 | description | String | description | |
| 19 | runFlag | String | run flag | |
| 20 | conditionResult | Object | condition branch | |
| 21 | successNode | Array | nodes to jump to on success | |
| 22 | failedNode | Array | nodes to jump to on failure | |
| 23 | dependence | Object | task dependence | mutually exclusive with params |
| 24 | maxRetryTimes | String | maximum number of retries | |
| 25 | retryInterval | String | retry interval | |
| 26 | timeout | Object | timeout control | |
| 27 | taskInstancePriority | String | task priority | |
| 28 | workerGroup | String | worker group | |
| 29 | preTasks | Array | upstream tasks | |

**SQL node sample data:**

```json
{
    "type": "SQL",
    "id": "tasks-95648",
    "name": "SqlTask-Query",
    "params": {
        "type": "MYSQL",
        "datasource": 1,
        "sql": "select id , name , age from emp where id = ${id}",
        "udfs": "",
        "sqlType": "0",
        "title": "xxxx@xxx.com",
        "receivers": "xxxx@xxx.com",
        "receiversCc": "",
        "showType": "TABLE",
        "localParams": [
            { "prop": "id", "direct": "IN", "type": "INTEGER", "value": "1" }
        ],
        "connParams": "",
        "preStatements": [ "insert into emp ( id,name ) value (1,'Li' )" ],
        "postStatements": []
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```
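Note how the ${id} placeholder in sql is paired with an entry in localParams: the engine substitutes the parameter value before execution. A rough, simplified illustration of that substitution (the real logic lives in DolphinScheduler's Java code and also handles global and system parameters):

```python
import re

def substitute_params(sql: str, local_params: list) -> str:
    """Replace ${name} placeholders with the matching localParams values."""
    values = {p["prop"]: p["value"] for p in local_params}
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: values.get(m.group(1), m.group(0)), sql)

params = [ { "prop": "id", "direct": "IN", "type": "INTEGER", "value": "1" } ]
print(substitute_params("select id , name , age from emp where id = ${id}", params))
# -> select id , name , age from emp where id = 1
```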

2.3 Spark node

**The Spark node data structure is as follows:**

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | SPARK |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | mainClass | String | main class | |
| 6 | mainArgs | String | runtime arguments | |
| 7 | others | String | other arguments | |
| 8 | mainJar | Object | program jar package | |
| 9 | deployMode | String | deploy mode | local, client, cluster |
| 10 | driverCores | String | driver cores | |
| 11 | driverMemory | String | driver memory | |
| 12 | numExecutors | String | number of executors | |
| 13 | executorMemory | String | executor memory | |
| 14 | executorCores | String | executor cores | |
| 15 | programType | String | program type | JAVA, SCALA, PYTHON |
| 16 | sparkVersion | String | Spark version | SPARK1, SPARK2 |
| 17 | localParams | Array | custom parameters | |
| 18 | resourceList | Array | resource files | |
| 19 | description | String | description | |
| 20 | runFlag | String | run flag | |
| 21 | conditionResult | Object | condition branch | |
| 22 | successNode | Array | nodes to jump to on success | |
| 23 | failedNode | Array | nodes to jump to on failure | |
| 24 | dependence | Object | task dependence | mutually exclusive with params |
| 25 | maxRetryTimes | String | maximum number of retries | |
| 26 | retryInterval | String | retry interval | |
| 27 | timeout | Object | timeout control | |
| 28 | taskInstancePriority | String | task priority | |
| 29 | workerGroup | String | worker group | |
| 30 | preTasks | Array | upstream tasks | |

**Spark node sample data:**

```json
{
    "type": "SPARK",
    "id": "tasks-87430",
    "name": "SparkTask",
    "params": {
        "mainClass": "org.apache.spark.examples.SparkPi",
        "mainJar": { "id": 4 },
        "deployMode": "cluster",
        "resourceList": [ { "id": 3, "name": "run.sh", "res": "run.sh" } ],
        "localParams": [],
        "driverCores": 1,
        "driverMemory": "512M",
        "numExecutors": 2,
        "executorMemory": "2G",
        "executorCores": 2,
        "mainArgs": "10",
        "others": "",
        "programType": "SCALA",
        "sparkVersion": "SPARK2"
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.4 MapReduce (MR) node

**The MapReduce (MR) node data structure is as follows:**

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | MR |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | mainClass | String | main class | |
| 6 | mainArgs | String | runtime arguments | |
| 7 | others | String | other arguments | |
| 8 | mainJar | Object | program jar package | |
| 9 | programType | String | program type | JAVA, PYTHON |
| 10 | localParams | Array | custom parameters | |
| 11 | resourceList | Array | resource files | |
| 12 | description | String | description | |
| 13 | runFlag | String | run flag | |
| 14 | conditionResult | Object | condition branch | |
| 15 | successNode | Array | nodes to jump to on success | |
| 16 | failedNode | Array | nodes to jump to on failure | |
| 17 | dependence | Object | task dependence | mutually exclusive with params |
| 18 | maxRetryTimes | String | maximum number of retries | |
| 19 | retryInterval | String | retry interval | |
| 20 | timeout | Object | timeout control | |
| 21 | taskInstancePriority | String | task priority | |
| 22 | workerGroup | String | worker group | |
| 23 | preTasks | Array | upstream tasks | |

**MapReduce (MR) node sample data:**

```json
{
    "type": "MR",
    "id": "tasks-28997",
    "name": "MRTask",
    "params": {
        "mainClass": "wordcount",
        "mainJar": { "id": 5 },
        "resourceList": [ { "id": 3, "name": "run.sh", "res": "run.sh" } ],
        "localParams": [],
        "mainArgs": "/tmp/wordcount/input /tmp/wordcount/output/",
        "others": "",
        "programType": "JAVA"
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.5 Python node

**The Python node data structure is as follows:**

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | PYTHON |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | rawScript | String | Python script | |
| 6 | localParams | Array | custom parameters | |
| 7 | resourceList | Array | resource files | |
| 8 | description | String | description | |
| 9 | runFlag | String | run flag | |
| 10 | conditionResult | Object | condition branch | |
| 11 | successNode | Array | nodes to jump to on success | |
| 12 | failedNode | Array | nodes to jump to on failure | |
| 13 | dependence | Object | task dependence | mutually exclusive with params |
| 14 | maxRetryTimes | String | maximum number of retries | |
| 15 | retryInterval | String | retry interval | |
| 16 | timeout | Object | timeout control | |
| 17 | taskInstancePriority | String | task priority | |
| 18 | workerGroup | String | worker group | |
| 19 | preTasks | Array | upstream tasks | |

Python node sample data:

```json
{
    "type": "PYTHON",
    "id": "tasks-5463",
    "name": "Python Task",
    "params": {
        "resourceList": [ { "id": 3, "name": "run.sh", "res": "run.sh" } ],
        "localParams": [],
        "rawScript": "print(\"This is a python script\")"
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.6 Flink node

The Flink node data structure is as follows:

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | FLINK |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | mainClass | String | main class | |
| 6 | mainArgs | String | runtime arguments | |
| 7 | others | String | other arguments | |
| 8 | mainJar | Object | program jar package | |
| 9 | deployMode | String | deploy mode | local, client, cluster |
| 10 | slot | String | number of slots | |
| 11 | taskManager | String | number of taskManagers | |
| 12 | taskManagerMemory | String | taskManager memory | |
| 13 | jobManagerMemory | String | jobManager memory | |
| 14 | programType | String | program type | JAVA, SCALA, PYTHON |
| 15 | localParams | Array | custom parameters | |
| 16 | resourceList | Array | resource files | |
| 17 | description | String | description | |
| 18 | runFlag | String | run flag | |
| 19 | conditionResult | Object | condition branch | |
| 20 | successNode | Array | nodes to jump to on success | |
| 21 | failedNode | Array | nodes to jump to on failure | |
| 22 | dependence | Object | task dependence | mutually exclusive with params |
| 23 | maxRetryTimes | String | maximum number of retries | |
| 24 | retryInterval | String | retry interval | |
| 25 | timeout | Object | timeout control | |
| 26 | taskInstancePriority | String | task priority | |
| 27 | workerGroup | String | worker group | |
| 28 | preTasks | Array | upstream tasks | |

**Flink node sample data:**

```json
{
    "type": "FLINK",
    "id": "tasks-17135",
    "name": "FlinkTask",
    "params": {
        "mainClass": "com.flink.demo",
        "mainJar": { "id": 6 },
        "deployMode": "cluster",
        "resourceList": [ { "id": 3, "name": "run.sh", "res": "run.sh" } ],
        "localParams": [],
        "slot": 1,
        "taskManager": "2",
        "jobManagerMemory": "1G",
        "taskManagerMemory": "2G",
        "executorCores": 2,
        "mainArgs": "100",
        "others": "",
        "programType": "SCALA"
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.7 HTTP node

The HTTP node data structure is as follows:

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | HTTP |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | url | String | request URL | |
| 6 | httpMethod | String | request method | GET, POST, HEAD, PUT, DELETE |
| 7 | httpParams | Array | request parameters | |
| 8 | httpCheckCondition | String | check condition | defaults to response code 200 |
| 9 | condition | String | check content | |
| 10 | localParams | Array | custom parameters | |
| 11 | description | String | description | |
| 12 | runFlag | String | run flag | |
| 13 | conditionResult | Object | condition branch | |
| 14 | successNode | Array | nodes to jump to on success | |
| 15 | failedNode | Array | nodes to jump to on failure | |
| 16 | dependence | Object | task dependence | mutually exclusive with params |
| 17 | maxRetryTimes | String | maximum number of retries | |
| 18 | retryInterval | String | retry interval | |
| 19 | timeout | Object | timeout control | |
| 20 | taskInstancePriority | String | task priority | |
| 21 | workerGroup | String | worker group | |
| 22 | preTasks | Array | upstream tasks | |

**HTTP node sample data:**

```json
{
    "type": "HTTP",
    "id": "tasks-60499",
    "name": "HttpTask",
    "params": {
        "localParams": [],
        "httpParams": [
            { "prop": "id", "httpParametersType": "PARAMETER", "value": "1" },
            { "prop": "name", "httpParametersType": "PARAMETER", "value": "Bo" }
        ],
        "url": "https://www.xxxxx.com:9012",
        "httpMethod": "POST",
        "httpCheckCondition": "STATUS_CODE_DEFAULT",
        "condition": ""
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```
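Semantically, the task sends httpParams with the request and then applies httpCheckCondition to decide success; per the table above, STATUS_CODE_DEFAULT expects response code 200. A rough approximation using the Python requests library (illustrative only; the body-check branch is a simplification, not the engine's exact behavior):

```python
import requests

def run_http_task(params: dict) -> bool:
    """Send the configured request and apply the configured check."""
    data = {p["prop"]: p["value"]
            for p in params["httpParams"]
            if p["httpParametersType"] == "PARAMETER"}
    resp = requests.request(params["httpMethod"], params["url"],
                            data=data, timeout=10)
    if params["httpCheckCondition"] == "STATUS_CODE_DEFAULT":
        return resp.status_code == 200  # default check: response code 200
    # Other check conditions compare `condition` against the response (simplified).
    return params["condition"] in resp.text
```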

2.8 DataX node

**The DataX node data structure is as follows:**

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | DATAX |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | customConfig | Int | custom-config switch | 0 form-based config, 1 custom JSON config |
| 6 | dsType | String | source database type | |
| 7 | dataSource | Int | source database id | |
| 8 | dtType | String | target database type | |
| 9 | dataTarget | Int | target database id | |
| 10 | sql | String | SQL statement | |
| 11 | targetTable | String | target table | |
| 12 | jobSpeedByte | Int | rate limit (bytes) | |
| 13 | jobSpeedRecord | Int | rate limit (records) | |
| 14 | preStatements | Array | pre-SQL | |
| 15 | postStatements | Array | post-SQL | |
| 16 | json | String | custom configuration | effective when customConfig=1 |
| 17 | localParams | Array | custom parameters | effective when customConfig=1 |
| 18 | description | String | description | |
| 19 | runFlag | String | run flag | |
| 20 | conditionResult | Object | condition branch | |
| 21 | successNode | Array | nodes to jump to on success | |
| 22 | failedNode | Array | nodes to jump to on failure | |
| 23 | dependence | Object | task dependence | mutually exclusive with params |
| 24 | maxRetryTimes | String | maximum number of retries | |
| 25 | retryInterval | String | retry interval | |
| 26 | timeout | Object | timeout control | |
| 27 | taskInstancePriority | String | task priority | |
| 28 | workerGroup | String | worker group | |
| 29 | preTasks | Array | upstream tasks | |

DataX node sample data:

```json
{
    "type": "DATAX",
    "id": "tasks-91196",
    "name": "DataxTask-DB",
    "params": {
        "customConfig": 0,
        "dsType": "MYSQL",
        "dataSource": 1,
        "dtType": "MYSQL",
        "dataTarget": 1,
        "sql": "select id, name, age from user",
        "targetTable": "emp",
        "jobSpeedByte": 524288,
        "jobSpeedRecord": 500,
        "preStatements": [ "truncate table emp" ],
        "postStatements": [ "truncate table user" ]
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.9 Sqoop node

The Sqoop node data structure is as follows:

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | SQOOP |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | concurrency | Int | concurrency | |
| 6 | modelType | String | direction | import, export |
| 7 | sourceType | String | data source type | |
| 8 | sourceParams | String | source parameters | JSON format |
| 9 | targetType | String | target data type | |
| 10 | targetParams | String | target parameters | JSON format |
| 11 | localParams | Array | custom parameters | |
| 12 | description | String | description | |
| 13 | runFlag | String | run flag | |
| 14 | conditionResult | Object | condition branch | |
| 15 | successNode | Array | nodes to jump to on success | |
| 16 | failedNode | Array | nodes to jump to on failure | |
| 17 | dependence | Object | task dependence | mutually exclusive with params |
| 18 | maxRetryTimes | String | maximum number of retries | |
| 19 | retryInterval | String | retry interval | |
| 20 | timeout | Object | timeout control | |
| 21 | taskInstancePriority | String | task priority | |
| 22 | workerGroup | String | worker group | |
| 23 | preTasks | Array | upstream tasks | |

Sqoop node sample data:

```json
{
    "type": "SQOOP",
    "id": "tasks-82041",
    "name": "Sqoop Task",
    "params": {
        "concurrency": 1,
        "modelType": "import",
        "sourceType": "MYSQL",
        "targetType": "HDFS",
        "sourceParams": "{\"srcType\":\"MYSQL\",\"srcDatasource\":1,\"srcTable\":\"\",\"srcQueryType\":\"1\",\"srcQuerySql\":\"select id , name from user\",\"srcColumnType\":\"0\",\"srcColumns\":\"\",\"srcConditionList\":[],\"mapColumnHive\":[{\"prop\":\"hivetype-key\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"hivetype-value\"}],\"mapColumnJava\":[{\"prop\":\"javatype-key\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"javatype-value\"}]}",
        "targetParams": "{\"targetPath\":\"/user/hive/warehouse/ods.db/user\",\"deleteTargetDir\":false,\"fileType\":\"--as-avrodatafile\",\"compressionCodec\":\"snappy\",\"fieldsTerminated\":\",\",\"linesTerminated\":\"@\"}",
        "localParams": []
    },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```
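One thing to watch: sourceParams and targetParams are themselves JSON, stored as strings inside the outer JSON, so they have to be decoded twice. A minimal sketch:

```python
import json

# A trimmed-down stand-in for the sample above; note the escaped inner quotes.
sqoop_task_json = ('{"type":"SQOOP","params":{'
                   '"sourceParams":"{\\"srcType\\":\\"MYSQL\\"}",'
                   '"targetParams":"{\\"targetPath\\":\\"/tmp/out\\"}"}}')

task = json.loads(sqoop_task_json)                   # first decode: the task object
source = json.loads(task["params"]["sourceParams"])  # second decode: nested JSON string
target = json.loads(task["params"]["targetParams"])
print(source["srcType"], "->", target["targetPath"])  # MYSQL -> /tmp/out
```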

2.10 Conditions (branch) node

The conditions node data structure is as follows:

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | CONDITIONS |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | null |
| 5 | description | String | description | |
| 6 | runFlag | String | run flag | |
| 7 | conditionResult | Object | condition branch | |
| 8 | successNode | Array | nodes to jump to on success | |
| 9 | failedNode | Array | nodes to jump to on failure | |
| 10 | dependence | Object | task dependence | mutually exclusive with params |
| 11 | maxRetryTimes | String | maximum number of retries | |
| 12 | retryInterval | String | retry interval | |
| 13 | timeout | Object | timeout control | |
| 14 | taskInstancePriority | String | task priority | |
| 15 | workerGroup | String | worker group | |
| 16 | preTasks | Array | upstream tasks | |

Conditions node sample data:

```json
{
    "type": "CONDITIONS",
    "id": "tasks-96189",
    "name": "Conditions",
    "params": {},
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "test04" ], "failedNode": [ "test05" ] },
    "dependence": { "relation": "AND", "dependTaskList": [] },
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": [ "test01", "test02" ]
}
```
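In other words, the downstream route is chosen by the branch result: successNode on success, failedNode on failure, with dependence describing how upstream results are combined. A toy illustration of the routing rule (hypothetical helper, not the scheduler's actual code):

```python
def next_nodes(task: dict, succeeded: bool) -> list:
    """Pick downstream nodes from a CONDITIONS task's conditionResult."""
    branch = "successNode" if succeeded else "failedNode"
    return task["conditionResult"][branch]

conditions_task = {
    "type": "CONDITIONS",
    "conditionResult": { "successNode": [ "test04" ], "failedNode": [ "test05" ] },
}
print(next_nodes(conditions_task, succeeded=True))   # ['test04']
print(next_nodes(conditions_task, succeeded=False))  # ['test05']
```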

2.11 Sub-process node

The sub-process node data structure is as follows:

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | SUB_PROCESS |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | processDefinitionId | Int | process definition id | |
| 6 | description | String | description | |
| 7 | runFlag | String | run flag | |
| 8 | conditionResult | Object | condition branch | |
| 9 | successNode | Array | nodes to jump to on success | |
| 10 | failedNode | Array | nodes to jump to on failure | |
| 11 | dependence | Object | task dependence | mutually exclusive with params |
| 12 | maxRetryTimes | String | maximum number of retries | |
| 13 | retryInterval | String | retry interval | |
| 14 | timeout | Object | timeout control | |
| 15 | taskInstancePriority | String | task priority | |
| 16 | workerGroup | String | worker group | |
| 17 | preTasks | Array | upstream tasks | |

Sub-process node sample data:

```json
{
    "type": "SUB_PROCESS",
    "id": "tasks-14806",
    "name": "SubProcessTask",
    "params": { "processDefinitionId": 2 },
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {},
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```

2.12 Dependent (DEPENDENT) node

The DEPENDENT node data structure is as follows:

| No. | Parameter | Type | Description | Notes |
| --- | --- | --- | --- | --- |
| 1 | id | String | task code | |
| 2 | type | String | type | DEPENDENT |
| 3 | name | String | name | |
| 4 | params | Object | custom parameters | JSON format |
| 5 | description | String | description | |
| 6 | runFlag | String | run flag | |
| 7 | conditionResult | Object | condition branch | |
| 8 | successNode | Array | nodes to jump to on success | |
| 9 | failedNode | Array | nodes to jump to on failure | |
| 10 | dependence | Object | task dependence | mutually exclusive with params |
| 11 | relation | String | relation | AND, OR |
| 12 | dependTaskList | Array | list of dependent task groups | |
| 13 | maxRetryTimes | String | maximum number of retries | |
| 14 | retryInterval | String | retry interval | |
| 15 | timeout | Object | timeout control | |
| 16 | taskInstancePriority | String | task priority | |
| 17 | workerGroup | String | worker group | |
| 18 | preTasks | Array | upstream tasks | |

DEPENDENT node sample data:

```json
{
    "type": "DEPENDENT",
    "id": "tasks-57057",
    "name": "DependentTask",
    "params": {},
    "description": "",
    "runFlag": "NORMAL",
    "conditionResult": { "successNode": [ "" ], "failedNode": [ "" ] },
    "dependence": {
        "relation": "AND",
        "dependTaskList": [
            {
                "relation": "AND",
                "dependItemList": [
                    {
                        "projectId": 1,
                        "definitionId": 7,
                        "definitionList": [
                            { "value": 8, "label": "MRTask" },
                            { "value": 7, "label": "FlinkTask" },
                            { "value": 6, "label": "SparkTask" },
                            { "value": 5, "label": "SqlTask-Update" },
                            { "value": 4, "label": "SqlTask-Query" },
                            { "value": 3, "label": "SubProcessTask" },
                            { "value": 2, "label": "Python Task" },
                            { "value": 1, "label": "Shell Task" }
                        ],
                        "depTasks": "ALL",
                        "cycle": "day",
                        "dateValue": "today"
                    }
                ]
            },
            {
                "relation": "AND",
                "dependItemList": [
                    {
                        "projectId": 1,
                        "definitionId": 5,
                        "definitionList": [
                            { "value": 8, "label": "MRTask" },
                            { "value": 7, "label": "FlinkTask" },
                            { "value": 6, "label": "SparkTask" },
                            { "value": 5, "label": "SqlTask-Update" },
                            { "value": 4, "label": "SqlTask-Query" },
                            { "value": 3, "label": "SubProcessTask" },
                            { "value": 2, "label": "Python Task" },
                            { "value": 1, "label": "Shell Task" }
                        ],
                        "depTasks": "SqlTask-Update",
                        "cycle": "day",
                        "dateValue": "today"
                    }
                ]
            }
        ]
    },
    "maxRetryTimes": "0",
    "retryInterval": "1",
    "timeout": { "strategy": "", "interval": null, "enable": false },
    "taskInstancePriority": "MEDIUM",
    "workerGroup": "default",
    "preTasks": []
}
```
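The dependence block is a two-level tree: the outer relation combines the groups in dependTaskList, and each group's relation combines its dependItemList. A toy evaluator, under the assumption that each depend item has already been resolved to a boolean (for example, "did the referenced task succeed in the given cycle"); this is illustrative, not the project's implementation:

```python
def evaluate(dependence: dict, item_result) -> bool:
    """Fold a dependence tree (relation + dependTaskList/dependItemList) to a bool.

    `item_result` maps a depend item to True/False, e.g. by checking whether
    the referenced task instance succeeded in the configured cycle.
    """
    def combine(relation: str, values) -> bool:
        return all(values) if relation == "AND" else any(values)

    return combine(dependence["relation"],
                   (combine(group["relation"],
                            (item_result(item) for item in group["dependItemList"]))
                    for group in dependence["dependTaskList"]))

# Example: pretend every referenced task succeeded today.
dep = { "relation": "AND", "dependTaskList": [
    { "relation": "AND", "dependItemList": [ { "depTasks": "ALL" } ] },
    { "relation": "AND", "dependItemList": [ { "depTasks": "SqlTask-Update" } ] },
] }
print(evaluate(dep, lambda item: True))  # True
```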

About the DolphinScheduler community:

Apache DolphinScheduler is a very diverse community: to date it has nearly 100 contributors from more than 30 different companies, and its WeChat group has 3,000 members.

Some Apache DolphinScheduler user cases (in no particular order)

More than 300 companies and research institutions already use DolphinScheduler to handle all kinds of scheduling and timed tasks, and over 500 more companies have started trials of the platform.

Origin of the Apache DolphinScheduler project - driven by real-world needs

The four key features of Apache DolphinScheduler

Apache DolphinScheduler capabilities:

Associates tasks by their dependencies in a DAG and visualizes the running state of tasks in real time

Supports a rich set of task types: Shell, MR, Spark, Flink, SQL (mysql, postgresql, hive, sparksql), Python, Http, Sub_Process, Procedure, and more

Supports scheduled, dependency-based, and manual workflow scheduling; manual pause/stop/resume; failure retry/alerting; recovery from a specified failed node; and killing tasks

Supports workflow priorities, task priorities, task failover, and task timeout alerting/failure

Supports workflow-level global parameters and node-level custom parameters

Supports online upload/download and management of resource files, as well as online file creation and editing

Supports online viewing and tailing of task logs, and online log download

Implements cluster HA, with decentralized Master and Worker clusters coordinated through Zookeeper

Supports online viewing of Master/Worker CPU load, memory, and other metrics

Supports tree and Gantt views of workflow run history, plus task-state and process-state statistics

Supports backfilling (data complement)

Supports multi-tenancy

Supports internationalization

New features in Apache DolphinScheduler 1.3

* Reworked Worker implementation for better Worker performance
* Netty-based communication between Master and Worker
* Removed the Zookeeper task queue
* Three Worker selection strategies: random, round-robin, and linearly weighted load balancing on CPU and memory
* Removed direct database access from the Worker
* Resource center supports multiple directories
* Added if/else condition tasks
* Added sqoop/datax tasks
* Support for k8s deployment
* Added one-click layout (formatting) of the DAG
* Prettier workflow graphs
* Support for installation via the ambari plugin
* Batch export and import of workflows
* Process definitions can be copied
* Greatly simplified configuration items and deployment

Try the online demo:

http://106.75.43.194:8888/

DolphinScheduler slogan

Join Apache DolphinScheduler

If you run into any questions, or have ideas and suggestions while using DolphinScheduler, you can take part in building the DolphinScheduler community through the Apache mailing lists.

Welcome to join the contributor team. Getting into an open source community starts with submitting your first PR: find an issue labeled "easy to fix" or something very simple (a typo, for instance), get familiar with the submission process through that first PR, and feel free to reach out if you have any questions.