[canal source code walkthrough] The esAdapter ETL feature
While writing the previous post on using the canal client-adapter to synchronize data from MySQL to Elasticsearch (both full and incremental sync), I came across the ETL code in the esAdapter. Since I have implemented a similar feature myself before, I wanted to see how the canal authors wrote their full-sync code and use it as a reference to learn from.
CommonRest
The entry point for ETL requests is the controller class com.alibaba.otter.canal.adapter.launcher.rest.CommonRest:
package com.alibaba.otter.canal.adapter.launcher.rest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import com.alibaba.otter.canal.adapter.launcher.common.EtlLock;
import com.alibaba.otter.canal.adapter.launcher.common.SyncSwitch;
import com.alibaba.otter.canal.adapter.launcher.config.AdapterCanalConfig;
import com.alibaba.otter.canal.client.adapter.OuterAdapter;
import com.alibaba.otter.canal.client.adapter.support.EtlResult;
import com.alibaba.otter.canal.client.adapter.support.ExtensionLoader;
import com.alibaba.otter.canal.client.adapter.support.Result;
/**
 * Adapter operation REST endpoints
 *
 * @author rewerma @ 2018-10-20
 * @version 1.0.0
 */
@RestController
public class CommonRest {
private static Logger logger = LoggerFactory.getLogger(CommonRest.class);
private static final String ETL_LOCK_ZK_NODE = "/sync-etl/";
private ExtensionLoader<OuterAdapter> loader;
@Resource
private SyncSwitch syncSwitch;
@Resource
private EtlLock etlLock;
@Resource
private AdapterCanalConfig adapterCanalConfig;
@PostConstruct
public void init() {
loader = ExtensionLoader.getExtensionLoader(OuterAdapter.class);
}
/**
 * ETL: curl http://127.0.0.1:8081/etl/rdb/oracle1/mytest_user.yml -X POST
 *
 * @param type adapter type, e.g. hbase, es
 * @param key adapter key
 * @param task task name, i.e. the mapping config file name, e.g. mytest_user.yml
 * @param params etl where-condition parameters; if empty, all rows are imported
 */
@PostMapping("/etl/{type}/{key}/{task}")
public EtlResult etl(@PathVariable String type, @PathVariable String key, @PathVariable String task,
@RequestParam(name = "params", required = false) String params) {
OuterAdapter adapter = loader.getExtension(type, key);
String destination = adapter.getDestination(task);
String lockKey = destination == null ? task : destination;
boolean locked = etlLock.tryLock(ETL_LOCK_ZK_NODE + type + "-" + lockKey);
if (!locked) {
EtlResult result = new EtlResult();
result.setSucceeded(false);
result.setErrorMessage(task + " 有其他进程正在导入中, 请稍后再试");
return result;
}
try {
boolean oriSwitchStatus;
if (destination != null) {
oriSwitchStatus = syncSwitch.status(destination);
if (oriSwitchStatus) {
syncSwitch.off(destination);
}
} else {
// task may itself be a destination name, so operate on the task directly
oriSwitchStatus = syncSwitch.status(task);
if (oriSwitchStatus) {
syncSwitch.off(task);
}
}
try {
List<String> paramArray = null;
if (params != null) {
paramArray = Arrays.asList(params.trim().split(";"));
}
return adapter.etl(task, paramArray);
} finally {
if (destination != null && oriSwitchStatus) {
syncSwitch.on(destination);
} else if (destination == null && oriSwitchStatus) {
syncSwitch.on(task);
}
}
} finally {
etlLock.unlock(ETL_LOCK_ZK_NODE + type + "-" + lockKey);
}
}
/**
 * ETL: curl http://127.0.0.1:8081/etl/hbase/mytest_person2.yml -X POST
 *
 * @param type adapter type, e.g. hbase, es
 * @param task task name, i.e. the mapping config file name, e.g. mytest_person2.yml
 * @param params etl where-condition parameters; if empty, all rows are imported
 */
@PostMapping("/etl/{type}/{task}")
public EtlResult etl(@PathVariable String type, @PathVariable String task,
@RequestParam(name = "params", required = false) String params) {
return etl(type, null, task, params);
}
/**
 * Row count: curl http://127.0.0.1:8081/count/rdb/oracle1/mytest_user.yml
 *
 * @param type adapter type, e.g. hbase, es
 * @param key adapter key
 * @param task task name, i.e. the mapping config file name, e.g. mytest_person2.yml
 * @return
 */
@GetMapping("/count/{type}/{key}/{task}")
public Map<String, Object> count(@PathVariable String type, @PathVariable String key, @PathVariable String task) {
OuterAdapter adapter = loader.getExtension(type, key);
return adapter.count(task);
}
/**
 * Row count: curl http://127.0.0.1:8081/count/hbase/mytest_person2.yml
 *
 * @param type adapter type, e.g. hbase, es
 * @param task task name, i.e. the mapping config file name, e.g. mytest_person2.yml
 * @return
 */
@GetMapping("/count/{type}/{task}")
public Map<String, Object> count(@PathVariable String type, @PathVariable String task) {
return count(type, null, task);
}
/**
 * List all destinations: curl http://127.0.0.1:8081/destinations
 */
@GetMapping("/destinations")
public List<Map<String, String>> destinations() {
List<Map<String, String>> result = new ArrayList<>();
Set<String> destinations = adapterCanalConfig.DESTINATIONS;
for (String destination : destinations) {
Map<String, String> resMap = new LinkedHashMap<>();
boolean status = syncSwitch.status(destination);
String resStatus;
if (status) {
resStatus = "on";
} else {
resStatus = "off";
}
resMap.put("destination", destination);
resMap.put("status", resStatus);
result.add(resMap);
}
return result;
}
/**
 * Per-destination sync switch: curl http://127.0.0.1:8081/syncSwitch/example/off -X PUT
 *
 * @param destination instance name
 * @param status switch status: off or on
 * @return
 */
@PutMapping("/syncSwitch/{destination}/{status}")
public Result etl(@PathVariable String destination, @PathVariable String status) {
if (status.equals("on")) {
syncSwitch.on(destination);
logger.info("#Destination: {} sync on", destination);
return Result.createSuccess("实例: " + destination + " 开启同步成功");
} else if (status.equals("off")) {
syncSwitch.off(destination);
logger.info("#Destination: {} sync off", destination);
return Result.createSuccess("实例: " + destination + " 关闭同步成功");
} else {
Result result = new Result();
result.setCode(50000);
result.setMessage("实例: " + destination + " 操作失败");
return result;
}
}
/**
 * Get the sync switch status of an instance: curl http://127.0.0.1:8081/syncSwitch/example
 *
 * @param destination instance name
 * @return
 */
@GetMapping("/syncSwitch/{destination}")
public Map<String, String> etl(@PathVariable String destination) {
boolean status = syncSwitch.status(destination);
String resStatus;
if (status) {
resStatus = "on";
} else {
resStatus = "off";
}
Map<String, String> res = new LinkedHashMap<>();
res.put("stauts", resStatus);
return res;
}
}
Here we only need to focus on the following method, to which I have added a few comments of my own:
/**
 * ETL: curl http://127.0.0.1:8081/etl/rdb/oracle1/mytest_user.yml -X POST
 *
 * @param type adapter type, e.g. hbase, es
 * @param key adapter key
 * @param task task name, i.e. the mapping config file name, e.g. mytest_user.yml
 * @param params etl where-condition parameters; if empty, all rows are imported
 */
@PostMapping("/etl/{type}/{key}/{task}")
public EtlResult etl(@PathVariable String type, @PathVariable String key, @PathVariable String task,
@RequestParam(name = "params", required = false) String params) {
//obtain the outer adapter instance (here the ES adapter) by type and key
OuterAdapter adapter = loader.getExtension(type, key);
String destination = adapter.getDestination(task);
//read the destination from the task's mapping config, i.e. the name of the canal instance
String lockKey = destination == null ? task : destination;
//try to acquire the ETL lock, named /sync-etl/<type>-<lockKey>
boolean locked = etlLock.tryLock(ETL_LOCK_ZK_NODE + type + "-" + lockKey);
if (!locked) {
//failed to acquire the lock: another import is already running, so return a failure
EtlResult result = new EtlResult();
result.setSucceeded(false);
result.setErrorMessage(task + " 有其他进程正在导入中, 请稍后再试");
return result;
}
try {
boolean oriSwitchStatus;
if (destination != null) {
//read the current sync switch of this destination; on first load it is true, meaning the destination may be synced (the gate is open)
oriSwitchStatus = syncSwitch.status(destination);
if (oriSwitchStatus) {
//set it to false so that incremental sync is paused while the full import runs (the gate is closed)
syncSwitch.off(destination);
}
} else {
// task may itself be a destination name, so operate on the task directly
oriSwitchStatus = syncSwitch.status(task);
if (oriSwitchStatus) {
syncSwitch.off(task);
}
}
try {
List<String> paramArray = null;
if (params != null) {
//multiple parameters are separated by ";"
paramArray = Arrays.asList(params.trim().split(";"));
}
return adapter.etl(task, paramArray);
} finally {
if (destination != null && oriSwitchStatus) {
//if the switch was on before this request, turn it back on here (reopen the gate) now that the ETL is done
syncSwitch.on(destination);
} else if (destination == null && oriSwitchStatus) {
syncSwitch.on(task);
}
}
} finally {
//release the ETL lock
etlLock.unlock(ETL_LOCK_ZK_NODE + type + "-" + lockKey);
}
}
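For example, a hypothetical call (the adapter type es and the key exampleKey are only placeholders for whatever is configured in your application.yml) that restricts the import with two parameters, separated by a semicolon as the code above expects, would look like this:
curl http://127.0.0.1:8081/etl/es/exampleKey/mytest_user.yml -X POST -d "params=2018-10-21;2018-10-22"
The two values are bound, in order, to the placeholders of the task's etlCondition (more on that further below).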
etlLock (a ReentrantLock, or a Curator-based distributed lock)
The etlLock used in this method is a custom EtlLock component. It chooses its lock implementation automatically depending on whether ZooKeeper is available: if a ZooKeeper address is configured it uses a Curator-based distributed lock, otherwise (a standalone deployment) it falls back to a plain ReentrantLock.
The mode is decided once when EtlLock is initialized:
@PostConstruct
public void init() {
CuratorFramework curator = curatorClient.getCurator();
if (curator != null) {
mode = Mode.DISTRIBUTED;
} else {
mode = Mode.LOCAL;
}
}
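The actual EtlLock class keeps one lock per resource key. A minimal sketch of the idea, written from scratch here (this is my own simplified rewrite, not canal's code; the class and field names are made up), could look like this:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;

public class DualModeEtlLock {
    private enum Mode { LOCAL, DISTRIBUTED }
    private final Mode mode;
    private final CuratorFramework curator; // null when no zookeeper is configured
    private final Map<String, ReentrantLock> localLocks = new ConcurrentHashMap<>();
    private final Map<String, InterProcessMutex> distributedLocks = new ConcurrentHashMap<>();

    public DualModeEtlLock(CuratorFramework curator) {
        this.curator = curator;
        this.mode = curator != null ? Mode.DISTRIBUTED : Mode.LOCAL;
    }

    public boolean tryLock(String key) {
        if (mode == Mode.LOCAL) {
            // single-node deployment: a plain JVM ReentrantLock per key
            return localLocks.computeIfAbsent(key, k -> new ReentrantLock()).tryLock();
        }
        try {
            // cluster deployment: a Curator InterProcessMutex backed by a zk node per key
            return distributedLocks.computeIfAbsent(key, k -> new InterProcessMutex(curator, k))
                .acquire(1, TimeUnit.SECONDS);
        } catch (Exception e) {
            return false;
        }
    }

    public void unlock(String key) {
        try {
            if (mode == Mode.LOCAL) {
                localLocks.get(key).unlock();
            } else {
                distributedLocks.get(key).release();
            }
        } catch (Exception e) {
            // InterProcessMutex.release() declares a checked exception; ignored in this sketch
        }
    }
}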
Back in CommonRest there is one more field worth noticing: syncSwitch.
Besides the etl method, this controller also uses syncSwitch in the /destinations endpoint, which returns the current status of every destination:
/**
 * List all destinations: curl http://127.0.0.1:8081/destinations
 */
@GetMapping("/destinations")
public List<Map<String, String>> destinations() {
List<Map<String, String>> result = new ArrayList<>();
Set<String> destinations = adapterCanalConfig.DESTINATIONS;
for (String destination : destinations) {
Map<String, String> resMap = new LinkedHashMap<>();
boolean status = syncSwitch.status(destination);
String resStatus;
if (status) {
resStatus = "on";
} else {
resStatus = "off";
}
resMap.put("destination", destination);
resMap.put("status", resStatus);
result.add(resMap);
}
return result;
}
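Calling this endpoint returns a JSON array such as the following (the destination name is just an illustrative value):
[
  {"destination": "example", "status": "on"}
]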
syncSwitch represents the current state of each destination, i.e. whether that destination may currently be synchronized: on means sync is allowed (the gate is open), off means it is not (the gate is closed).
SyncSwitch is implemented along the same lines as EtlLock: it also chooses its implementation based on the environment:
@PostConstruct
public void init() {
CuratorFramework curator = curatorClient.getCurator();
if (curator != null) {
mode = Mode.DISTRIBUTED;
DISTRIBUTED_LOCK.clear();
for (String destination : adapterCanalConfig.DESTINATIONS) {
// register a mutex for each destination
BooleanMutex mutex = new BooleanMutex(true);
initMutex(curator, destination, mutex);
DISTRIBUTED_LOCK.put(destination, mutex);
startListen(destination, mutex);
}
} else {
mode = Mode.LOCAL;
LOCAL_LOCK.clear();
for (String destination : adapterCanalConfig.DESTINATIONS) {
// register a mutex for each destination
LOCAL_LOCK.put(destination, new BooleanMutex(true));
}
}
}
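With the mutexes registered, on/off/status essentially just read and flip the BooleanMutex of the given destination. The fragment below is my own simplified sketch of the LOCAL-mode branch only (the DISTRIBUTED mode additionally keeps the state in a ZooKeeper node, as the initMutex/startListen calls above suggest, so that all adapter instances see it):
public void off(String destination) {
    BooleanMutex mutex = LOCAL_LOCK.get(destination);
    if (mutex != null) {
        mutex.set(false); // close the gate: threads calling get() will now block
    }
}

public void on(String destination) {
    BooleanMutex mutex = LOCAL_LOCK.get(destination);
    if (mutex != null) {
        mutex.set(true); // reopen the gate and wake up any blocked threads
    }
}

public Boolean status(String destination) {
    BooleanMutex mutex = LOCAL_LOCK.get(destination);
    return mutex == null ? null : mutex.state(); // true = on, false = off
}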
BooleanMutex is implemented on top of AQS. The innerSetTrue and innerSetFalse calls inside its set method are defined in the Sync class shown further below:
/**
* Reset the boolean mutex value
*
* @param mutex
*/
public void set(Boolean mutex) {
if (mutex) {
sync.innerSetTrue();
} else {
sync.innerSetFalse();
}
}
BooleanMutex.Sync (an AQS-based synchronizer)
The Sync code:
/**
* Synchronization control for BooleanMutex. Uses AQS sync state to
* represent run status
*/
private final class Sync extends AbstractQueuedSynchronizer {
private static final long serialVersionUID = 2559471934544126329L;
/** State value representing that TRUE */
private static final int TRUE = 1;
/** State value representing that FALSE */
private static final int FALSE = 2;
private boolean isTrue(int state) {
return (state & TRUE) != 0;
}
/**
* Implements the AQS hook that decides whether a shared acquire may proceed
*/
protected int tryAcquireShared(int state) {
// if the state is TRUE, the acquire succeeds immediately
// if FALSE, the thread enters the wait queue and blocks until it is woken up
return isTrue(getState()) ? 1 : -1;
}
/**
* Implements the AQS hook for releasing the shared lock
*/
protected boolean tryReleaseShared(int ignore) {
// always returns true: a release is always allowed
return true;
}
boolean innerState() {
return isTrue(getState());
}
void innerGet() throws InterruptedException {
acquireSharedInterruptibly(0);
}
void innerGet(long nanosTimeout) throws InterruptedException, TimeoutException {
if (!tryAcquireSharedNanos(0, nanosTimeout)) throw new TimeoutException();
}
void innerSetTrue() {
for (;;) {
int s = getState();
if (s == TRUE) {
return; // already TRUE, nothing to do
}
if (compareAndSetState(s, TRUE)) {// CAS the state to avoid concurrent updates to true
releaseShared(0);// release the shared lock, waking up any blocked threads
return;
}
}
}
void innerSetFalse() {
for (;;) {
int s = getState();
if (s == FALSE) {
return; // already FALSE, nothing to do
}
if (compareAndSetState(s, FALSE)) {// CAS the state to avoid concurrent updates to false
return;
}
}
}
}
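To make the gate semantics concrete, here is a small self-contained demo (my own test code; it only assumes that canal's BooleanMutex class is on the classpath): get() returns immediately while the mutex is true, blocks while it is false, and resumes once another thread sets it back to true.
public class BooleanMutexDemo {
    public static void main(String[] args) throws Exception {
        BooleanMutex mutex = new BooleanMutex(true); // gate starts open, like a freshly registered destination

        mutex.get();      // state is true, returns immediately
        mutex.set(false); // the ETL starts: close the gate

        Thread syncThread = new Thread(() -> {
            try {
                mutex.get(); // a sync thread would block here while the full import is running
                System.out.println("gate reopened, incremental sync resumes");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        syncThread.start();

        Thread.sleep(1000); // the full import is doing its work...
        mutex.set(true);    // ETL finished: reopen the gate, waking the blocked thread
        syncThread.join();
    }
}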
Multi-threaded data import
The code above covers the controller side of the ETL request; next let's look at how the data import itself is implemented.
The method to focus on is AbstractEtlService's protected EtlResult importData(String sql, List<String> params):
protected EtlResult importData(String sql, List<String> params) {
EtlResult etlResult = new EtlResult();
AtomicLong impCount = new AtomicLong();
List<String> errMsg = new ArrayList<>();
if (config == null) {
logger.warn("{} mapping config is null, etl go end ", type);
etlResult.setErrorMessage(type + "mapping config is null, etl go end ");
return etlResult;
}
long start = System.currentTimeMillis();
try {
DruidDataSource dataSource = DatasourceConfig.DATA_SOURCES.get(config.getDataSourceKey());
List<Object> values = new ArrayList<>();
// append the etl condition
if (config.getMapping().getEtlCondition() != null && params != null) {
String etlCondition = config.getMapping().getEtlCondition();
for (String param : params) {
etlCondition = etlCondition.replace("{}", "?");
values.add(param);
}
sql += " " + etlCondition;
}
if (logger.isDebugEnabled()) {
logger.debug("etl sql : {}", sql);
}
// count the total number of rows
String countSql = "SELECT COUNT(1) FROM ( " + sql + ") _CNT ";
long cnt = (Long) Util.sqlRS(dataSource, countSql, values, rs -> {
Long count = null;
try {
if (rs.next()) {
count = ((Number) rs.getObject(1)).longValue();
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
return count == null ? 0L : count;
});
// use multiple threads when there are 10,000 rows or more
if (cnt >= 10000) {
int threadCount = Runtime.getRuntime().availableProcessors();
long offset;
long size = CNT_PER_TASK;
long workerCnt = cnt / size + (cnt % size == 0 ? 0 : 1);
if (logger.isDebugEnabled()) {
logger.debug("workerCnt {} for cnt {} threadCount {}", workerCnt, cnt, threadCount);
}
ExecutorService executor = Util.newFixedThreadPool(threadCount, 5000L);
List<Future<Boolean>> futures = new ArrayList<>();
for (long i = 0; i < workerCnt; i++) {
offset = size * i;
String sqlFinal = sql + " LIMIT " + offset + "," + size;
Future<Boolean> future = executor.submit(() -> executeSqlImport(dataSource,
sqlFinal,
values,
config.getMapping(),
impCount,
errMsg));
futures.add(future);
}
for (Future<Boolean> future : futures) {
future.get();
}
executor.shutdown();
} else {
executeSqlImport(dataSource, sql, values, config.getMapping(), impCount, errMsg);
}
logger.info("数据全量导入完成, 一共导入 {} 条数据, 耗时: {}", impCount.get(), System.currentTimeMillis() - start);
etlResult.setResultMessage("导入" + type + " 数据:" + impCount.get() + " 条");
} catch (Exception e) {
logger.error(e.getMessage(), e);
errMsg.add(type + " 数据导入异常 =>" + e.getMessage());
}
if (errMsg.isEmpty()) {
etlResult.setSucceeded(true);
} else {
etlResult.setErrorMessage(Joiner.on("\n").join(errMsg));
}
return etlResult;
}
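To make the condition handling at the top of importData concrete: suppose, as a made-up example, a mapping file declares etlCondition: "where a.c_time >= {}" and the request carries params=2018-10-21. The loop replaces the {} placeholder with ?, adds 2018-10-21 to values as a bound parameter, and appends the condition, so the executed statement becomes the original select followed by "where a.c_time >= ?" with the value bound through the PreparedStatement.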
As the code shows, before importing, the method first counts the rows to be synchronized. If the total is 10,000 or more, it creates a fixed-size thread pool sized by the number of available processors and splits the query into batches, as in the following snippet:
// use multiple threads when there are 10,000 rows or more
if (cnt >= 10000) {
int threadCount = Runtime.getRuntime().availableProcessors();
//starting offset of each batch
long offset;
//rows per batch: CNT_PER_TASK, i.e. 10000
long size = CNT_PER_TASK;
//number of workers: total rows / batch size, plus one more if there is a remainder
long workerCnt = cnt / size + (cnt % size == 0 ? 0 : 1);
if (logger.isDebugEnabled()) {
logger.debug("workerCnt {} for cnt {} threadCount {}", workerCnt, cnt, threadCount);
}
ExecutorService executor = Util.newFixedThreadPool(threadCount, 5000L);
List<Future<Boolean>> futures = new ArrayList<>();
for (long i = 0; i < workerCnt; i++) {
//advance the offset for each batch
offset = size * i;
//fetch one batch of CNT_PER_TASK (10000) rows via LIMIT offset,size
String sqlFinal = sql + " LIMIT " + offset + "," + size;
Future<Boolean> future = executor.submit(() -> executeSqlImport(dataSource,
sqlFinal,
values,
config.getMapping(),
impCount,
errMsg));
futures.add(future);
}
for (Future<Boolean> future : futures) {
future.get();
}
executor.shutdown();
}
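A quick worked example of the splitting: with cnt = 25,000 and CNT_PER_TASK = 10,000, workerCnt = 25000 / 10000 + 1 = 3, so three tasks are submitted, running the original query with LIMIT 0,10000, LIMIT 10000,10000 and LIMIT 20000,10000 respectively; the main thread then blocks on future.get() for all of them before shutting the pool down.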
Now let's look at the executeSqlImport method (the ES adapter's implementation):
protected boolean executeSqlImport(DataSource ds, String sql, List<Object> values,
AdapterConfig.AdapterMapping adapterMapping, AtomicLong impCount,
List<String> errMsg) {
try {
ESMapping mapping = (ESMapping) adapterMapping;
Util.sqlRS(ds, sql, values, rs -> {
int count = 0;
try {
ESBulkRequest esBulkRequest = this.esConnection.new ESBulkRequest();
long batchBegin = System.currentTimeMillis();
while (rs.next()) {
Map<String, Object> esFieldData = new LinkedHashMap<>();
Object idVal = null;
for (FieldItem fieldItem : mapping.getSchemaItem().getSelectFields().values()) {
String fieldName = fieldItem.getFieldName();
if (mapping.getSkips().contains(fieldName)) {
continue;
}
// the _id field is used as the document id and is not written as a regular field
if (fieldItem.getFieldName().equals(mapping.get_id())) {
idVal = esTemplate.getValFromRS(mapping, rs, fieldName, fieldName);
} else {
Object val = esTemplate.getValFromRS(mapping, rs, fieldName, fieldName);
esFieldData.put(Util.cleanColumn(fieldName), val);
}
}
if (!mapping.getRelations().isEmpty()) {
mapping.getRelations().forEach((relationField, relationMapping) -> {
Map<String, Object> relations = new HashMap<>();
relations.put("name", relationMapping.getName());
if (StringUtils.isNotEmpty(relationMapping.getParent())) {
FieldItem parentFieldItem = mapping.getSchemaItem()
.getSelectFields()
.get(relationMapping.getParent());
Object parentVal;
try {
parentVal = esTemplate.getValFromRS(mapping,
rs,
parentFieldItem.getFieldName(),
parentFieldItem.getFieldName());
} catch (SQLException e) {
throw new RuntimeException(e);
}
if (parentVal != null) {
relations.put("parent", parentVal.toString());
esFieldData.put("$parent_routing", parentVal.toString());
}
}
esFieldData.put(Util.cleanColumn(relationField), relations);
});
}
if (idVal != null) {
String parentVal = (String) esFieldData.remove("$parent_routing");
if (mapping.isUpsert()) {
ESUpdateRequest esUpdateRequest = this.esConnection.new ESUpdateRequest(
mapping.get_index(),
mapping.get_type(),
idVal.toString()).setDoc(esFieldData).setDocAsUpsert(true);
if (StringUtils.isNotEmpty(parentVal)) {
esUpdateRequest.setRouting(parentVal);
}
esBulkRequest.add(esUpdateRequest);
} else {
ESIndexRequest esIndexRequest = this.esConnection.new ESIndexRequest(mapping
.get_index(), mapping.get_type(), idVal.toString()).setSource(esFieldData);
if (StringUtils.isNotEmpty(parentVal)) {
esIndexRequest.setRouting(parentVal);
}
esBulkRequest.add(esIndexRequest);
}
} else {
idVal = esFieldData.get(mapping.getPk());
ESSearchRequest esSearchRequest = this.esConnection.new ESSearchRequest(mapping.get_index(),
mapping.get_type()).setQuery(QueryBuilders.termQuery(mapping.getPk(), idVal))
.size(10000);
SearchResponse response = esSearchRequest.getResponse();
for (SearchHit hit : response.getHits()) {
ESUpdateRequest esUpdateRequest = this.esConnection.new ESUpdateRequest(mapping
.get_index(), mapping.get_type(), hit.getId()).setDoc(esFieldData);
esBulkRequest.add(esUpdateRequest);
}
}
if (esBulkRequest.numberOfActions() % mapping.getCommitBatch() == 0
&& esBulkRequest.numberOfActions() > 0) {
long esBatchBegin = System.currentTimeMillis();
BulkResponse rp = esBulkRequest.bulk();
if (rp.hasFailures()) {
this.processFailBulkResponse(rp);
}
if (logger.isTraceEnabled()) {
logger.trace("全量数据批量导入批次耗时: {}, es执行时间: {}, 批次大小: {}, index; {}",
(System.currentTimeMillis() - batchBegin),
(System.currentTimeMillis() - esBatchBegin),
esBulkRequest.numberOfActions(),
mapping.get_index());
}
batchBegin = System.currentTimeMillis();
esBulkRequest.resetBulk();
}
count++;
impCount.incrementAndGet();
}
if (esBulkRequest.numberOfActions() > 0) {
long esBatchBegin = System.currentTimeMillis();
BulkResponse rp = esBulkRequest.bulk();
if (rp.hasFailures()) {
this.processFailBulkResponse(rp);
}
if (logger.isTraceEnabled()) {
logger.trace("全量数据批量导入最后批次耗时: {}, es执行时间: {}, 批次大小: {}, index; {}",
(System.currentTimeMillis() - batchBegin),
(System.currentTimeMillis() - esBatchBegin),
esBulkRequest.numberOfActions(),
mapping.get_index());
}
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
errMsg.add(mapping.get_index() + " etl failed! ==>" + e.getMessage());
throw new RuntimeException(e);
}
return count;
});
return true;
} catch (Exception e) {
logger.error(e.getMessage(), e);
return false;
}
}
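The core of the loop above is a simple commit-batch pattern: build one index/update request per row, flush the bulk request every commitBatch actions, and flush whatever is left after the last row. Stripped of the ES specifics, the skeleton looks like this (a self-contained sketch with made-up names, not the canal API):
import java.util.ArrayList;
import java.util.List;

public class CommitBatchSketch {

    public static void main(String[] args) {
        int commitBatch = 3000;                 // corresponds to mapping.getCommitBatch()
        List<String> bulk = new ArrayList<>();  // stands in for ESBulkRequest
        long imported = 0;

        for (int row = 0; row < 10_000; row++) {             // stands in for rs.next()
            bulk.add("request-for-row-" + row);              // one index/update request per row
            if (bulk.size() % commitBatch == 0 && !bulk.isEmpty()) {
                imported += flush(bulk);                     // esBulkRequest.bulk()
            }
        }
        if (!bulk.isEmpty()) {
            imported += flush(bulk);                         // tail batch, like the final if block above
        }
        System.out.println("imported " + imported + " rows");
    }

    private static int flush(List<String> bulk) {
        int size = bulk.size();
        // the real code checks BulkResponse.hasFailures() here
        bulk.clear();                                        // esBulkRequest.resetBulk()
        return size;
    }
}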
Streaming the query with a cursor
This is the method that runs the query and writes the rows into ES. Inside it, let's look at the implementation of sqlRS(DataSource ds, String sql, List<Object> values, Function<ResultSet, Object> fun):
public static Object sqlRS(DataSource ds, String sql, List<Object> values, Function<ResultSet, Object> fun) {
try (Connection conn = ds.getConnection()) {
try (PreparedStatement pstmt = conn
.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
pstmt.setFetchSize(Integer.MIN_VALUE);
if (values != null) {
for (int i = 0; i < values.size(); i++) {
pstmt.setObject(i + 1, values.get(i));
}
}
try (ResultSet rs = pstmt.executeQuery()) {
return fun.apply(rs);
}
}
} catch (Exception e) {
logger.error("sqlRs has error, sql: {} ", sql);
throw new RuntimeException(e);
}
}
The statement is created with ResultSet.TYPE_FORWARD_ONLY and a fetch size of Integer.MIN_VALUE because, with these settings, the MySQL driver switches to a streaming result set: rows are received from the server incrementally and processed one at a time instead of being buffered all at once, so even a very large table will not cause a JVM OOM:
setResultSetType(ResultSet.TYPE_FORWARD_ONLY);
setFetchSize(Integer.MIN_VALUE);
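As a standalone illustration (the URL, credentials and table name are placeholders), the same streaming pattern outside of canal looks like this with the MySQL Connector/J driver:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StreamingQueryDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:3306/mytest?user=root&password=root"; // placeholder
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement pstmt = conn.prepareStatement(
                 "SELECT id, name FROM mytest_user",
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            pstmt.setFetchSize(Integer.MIN_VALUE);          // streaming mode for the MySQL driver
            try (ResultSet rs = pstmt.executeQuery()) {
                while (rs.next()) {
                    // process one row at a time; memory stays flat even for huge tables
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}
Without these settings the driver reads the entire result set into memory before executeQuery returns, which is exactly what would blow up the heap on a full-table scan.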
Summary
Reading the ETL code of the canal esAdapter, we can see the following characteristics:
1. The controller entry point for ETL is guarded by a lock (a JVM ReentrantLock or a ZooKeeper-based distributed lock, depending on the environment), so the same task cannot be imported concurrently by two requests.
2. Depending on the data volume, the query is automatically split across multiple threads, which improves import throughput.
3. The query itself is executed as a streaming (cursor) query, which avoids OOM.