Self-service BI has become mainstream in the enterprise. BI tools can connect to and visualize data located in disparate systems; their primary purpose is to realize value from enterprise data marts. The traditional cycle of discovering, planning, budgeting, allocating, and implementing data marts takes time and, consequently, delays value realization. With fairly priced, scalable hardware and enterprise-grade, proven open-source technologies now offering virtually unlimited data storage and processing options, it is only logical to continue down the path of self-service data ingestion.

Data integration, in its simplest form, involves retrieving data from a source, applying one or more transformation steps, and saving the result in a target system. Enterprises leverage a variety of secondary data stores to replicate transactional data for analytics, historical reporting, and other purposes. Keeping primary and secondary data sources in sync has traditionally meant building custom solutions, which requires a substantial investment of time and money. To address this problem, the Kafka ecosystem provides a framework called Kafka Connect. The framework addresses only the data extraction / load aspect of the use case; if transformation or mediation is needed, Apache Spark or other middleware technologies have to be used, with the transformed data fed back into Kafka before it is saved in the target system. For additional details on the system design, please refer to the Confluent documentation.
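
To make that transformation step concrete, here is a minimal PySpark Structured Streaming sketch, not a prescribed implementation: it assumes Kafka is reachable at kafka:9092, uses placeholder topic names (orders_raw, orders_enriched), and requires the spark-sql-kafka integration package on the Spark classpath.

# Minimal sketch of the Spark mediation step described above (assumptions: broker at
# kafka:9092, placeholder topic names, spark-sql-kafka-0-10 package available).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, upper

spark = (SparkSession.builder
         .appName("kafka-connect-mediation")
         .getOrCreate())

# Read the records that a Kafka Connect source connector has landed in a topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")
       .option("subscribe", "orders_raw")
       .load())

# Apply a simple single-step transformation; a real pipeline would parse the value
# (for example JSON or Avro) and apply business rules here.
transformed = raw.select(
    col("key").cast("string"),
    upper(col("value").cast("string")).alias("value"))

# Write the transformed records back to Kafka so that sink connectors can pick them up.
query = (transformed.writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "kafka:9092")
         .option("topic", "orders_enriched")
         .option("checkpointLocation", "/tmp/checkpoints/orders_enriched")
         .start())

query.awaitTermination()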

To put it simply, the framework (see image) connects to the data source, retrieves data, and stores it in a Kafka topic. The stored data can then be consumed by one or more sinks, ingesting it in parallel into one or more secondary stores. If the data needs to be transformed or translated, Spark Streaming or a similar technology can be leveraged for stream processing.

[Image: Kafka Connect data flow – source connector → Kafka topic → sink connector(s)]
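
As a concrete illustration of that source-to-topic flow, the sketch below registers a source connector through the Kafka Connect REST API. The FileStreamSource connector ships with Kafka and simply tails a file into a topic; the connector name, file path, and topic used here are hypothetical, and the REST endpoint assumes the docker-compose setup shown later in this post.

# Sketch of registering a connector via the Kafka Connect REST API (assumptions:
# Connect exposed on localhost:8083, hypothetical connector name, file path, and topic).
import requests

connector_config = {
    "name": "demo-file-source",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "tasks.max": "1",
        "file": "/tmp/demo-input.txt",
        "topic": "demo-topic",
    },
}

# Register the connector with the Connect worker.
resp = requests.post("http://localhost:8083/connectors", json=connector_config)
resp.raise_for_status()
print(resp.json())

# List the connectors the cluster knows about to confirm the registration.
print(requests.get("http://localhost:8083/connectors").json())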

MSRCOSMOS has built an in-house product along similar lines with added capabilities, and it is being successfully leveraged by multiple customers. The next release of the product will leverage Kafka Connect for data ingestion while keeping the core strengths of the product (i.e. analytics + machine learning) intact.

We leverage Docker for testing new products, technologies, and frameworks, for obvious reasons. Here is a sample docker-compose file for kick-starting Kafka Connect:

version: '2'
services:
  kafkaconnectui:
    image: landoop/kafka-connect-ui
    environment:
      CONNECT_URL: "http://connect:8083/"
    links:
      - connect
    ports:
      - "8000:8000"

  zk:
    image: 31z4/zookeeper:3.4.8

  kafka:
    image: ches/kafka
    links:
      - zk
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      ZOOKEEPER_CONNECTION_STRING: zk:2181
      ZOOKEEPER_CHROOT: /broker-0

  connect:
    image: 1ambda/kafka-connect
    links:
      - kafka
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_GROUP_ID: cluster1
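
With Docker and docker-compose installed, running docker-compose up -d brings the stack up; the Kafka Connect UI is then reachable at http://localhost:8000 and the Connect REST API at http://localhost:8083, where connectors can be registered as sketched earlier.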

Happy Business Transformation!
