{"id":709,"date":"2020-04-05T16:31:18","date_gmt":"2020-04-05T15:31:18","guid":{"rendered":"http:\/\/www.igfasouza.com\/blog\/?p=709"},"modified":"2021-05-20T14:22:49","modified_gmt":"2021-05-20T13:22:49","slug":"kafka-connector-architecture","status":"publish","type":"post","link":"http:\/\/www.igfasouza.com\/blog\/kafka-connector-architecture\/","title":{"rendered":"kafka Connector Architecture"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/kafka-connect-image.png\" alt=\"\" width=\"600\" height=\"350\" class=\"alignnone size-full wp-image-710\" style=\"border:none\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/kafka-connect-image.png 600w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/kafka-connect-image-300x175.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/p>\n<p><b>What\u2019s the story Rory?<\/b><\/p>\n<p>This blog post is part of my series of posts regarding &#8220;<a href=\"http:\/\/www.igfasouza.com\/blog\/kafka-connect-overview\/\" rel=\"noopener\" target=\"_blank\">Kafka Connect Overview<\/a>&#8220;.<br \/>\nIf you&#8217;re not familiar with Kafka, I suggest you have a look at my previous post &#8220;<a href=\"http:\/\/www.igfasouza.com\/blog\/what-is-kafka\/\" rel=\"noopener\" target=\"_blank\">What is Kafka?<\/a>&#8221; before.<br \/>\nThis post is a collection of links, videos, tutorials, blogs and books that I found mixed with my opinion. <\/p>\n<p><b>Table of contents<\/b><\/p>\n<p>1. Kafka Connect<br \/>\n2. Source &#038; Sink Connectors<br \/>\n3. Standalone &#038; Distributed<br \/>\n4. Converters &#038; Transforms<br \/>\n5. Life cycle<br \/>\n6. Code<br \/>\n7. Books<br \/>\n8. Link<\/p>\n<h2>1. Kafka Connect<\/h2>\n<p>Kafka Connects goal of copying data between systems has been tackled by a variety of frameworks, many of them still actively developed and maintained. 
This section explains the motivation behind Kafka Connect, where it fits in the design space, and its unique features and design decisions.<\/p>\n<p>Kafka Connect has three major models in its design:<\/p>\n<ul>\n<li>\nConnector model\n<\/li>\n<li>\nWorker model\n<\/li>\n<li>\nData model\n<\/li>\n<\/ul>\n<p>The connector model addresses three key user requirements. First, Kafka Connect performs broad copying by default by having users define jobs at the level of Connectors, which then break the job into smaller Tasks. This two-level scheme strongly encourages connector configurations that copy broad swaths of data, since such jobs naturally provide enough input to be broken into smaller tasks. It also provides one point of parallelism by requiring Connectors to immediately consider how their job can be broken down into subtasks, and to select an appropriate granularity to do so. Finally, by specializing source and sink interfaces, Kafka Connect provides an accessible connector API that makes it very easy to implement connectors for a variety of systems.<\/p>\n<p>The worker model allows Kafka Connect to scale with the application. It can run scaled down to a single worker process that also acts as its own coordinator, or in clustered mode where connectors and tasks are dynamically scheduled on workers. However, it assumes very little about the process management of the workers, so it can easily run on a variety of cluster managers or using traditional service supervision. This architecture allows scaling up and down, and Kafka Connect\u2019s implementation adds utilities to support both modes well. The REST interface for managing and monitoring jobs makes it easy to run Kafka Connect as an organization-wide service that runs jobs for many users. 
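<\/p>
<p>As a rough illustration of the two-level Connector\/Task scheme described above, here is a toy Python sketch (the real API is Java\u2019s Connector.taskConfigs(maxTasks), which returns one configuration map per Task; the table names and config key below are made up):<\/p>

```python
# Toy model of the Connector/Task split: a "connector" is given one broad
# job (copy a list of tables) and divides it into at most max_tasks
# smaller task configurations that workers can schedule independently.

def task_configs(tables, max_tasks):
    """Split a broad copying job into per-task configs, round-robin."""
    num_tasks = min(max_tasks, len(tables))
    groups = [[] for _ in range(num_tasks)]
    for i, table in enumerate(tables):
        groups[i % num_tasks].append(table)
    # Each dict plays the role of one Task's configuration map.
    return [{"tables": ",".join(g)} for g in groups]

configs = task_configs(["orders", "users", "payments"], max_tasks=2)
# two task configs: orders+payments for one task, users for the other
```
<p>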
Command line utilities specialized for ad hoc jobs make it easy to get up and running in a development environment, for testing, or in production environments where an agent-based approach is required.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/workers.jpg\" alt=\"\" width=\"935\" height=\"390\" class=\"alignnone size-full wp-image-711\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/workers.jpg 935w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/workers-300x125.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/workers-768x320.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/workers-624x260.jpg 624w\" sizes=\"auto, (max-width: 935px) 100vw, 935px\" \/><\/p>\n<p>The data model addresses the remaining requirements. Many of the benefits come from coupling tightly with Kafka. Kafka serves as a natural buffer for both streaming and batch systems, removing much of the burden of managing data and ensuring delivery from connector developers. Additionally, by always requiring Kafka as one of the endpoints, the larger data pipeline can leverage the many tools that integrate well with Kafka. This allows Kafka Connect to focus only on copying data because a variety of stream processing tools are available to further process the data, which keeps Kafka Connect simple, both conceptually and in its implementation. This differs greatly from other systems where ETL must occur before hitting a sink. In contrast, Kafka Connect can bookend an ETL process, leaving any transformation to tools specifically designed for that purpose. 
Finally, Kafka includes partitions in its core abstraction, providing another point of parallelism.<\/p>\n<p><iframe loading=\"lazy\" width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/YOGN7qr2nSE\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p><b>In simple words<\/b><\/p>\n<p>Kafka Connect is a distributed, scalable, fault-tolerant service designed to reliably stream data between Kafka and other data systems. Data is read from a source and written to a sink.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/connect-1024x425.jpg\" alt=\"\" width=\"625\" height=\"259\" class=\"alignnone size-large wp-image-712\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/connect-1024x425.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/connect-300x124.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/connect-768x318.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/connect-624x259.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/connect.jpg 1611w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p>Connect tracks the offset that was last consumed for a source, to restart the task at the correct starting point after a failure. 
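<\/p>
<p>A toy Python sketch of that idea (standalone-style, with offsets committed to a local file; the file name and record shape are illustrative, not Kafka Connect\u2019s actual format):<\/p>

```python
import json
import os

# Toy model of source-offset tracking: poll() returns only the records
# that appeared after the last committed offset, then commits a new
# offset, so a restarted task resumes where the previous run stopped.

OFFSET_FILE = "offsets.json"  # illustrative name

def load_offset(source_id):
    """Return the last committed offset for a source, or 0 if none."""
    if not os.path.exists(OFFSET_FILE):
        return 0
    with open(OFFSET_FILE) as f:
        return json.load(f).get(source_id, 0)

def poll(source_records, source_id):
    """Read records the source added since the last commit, then commit."""
    offset = load_offset(source_id)
    new_records = source_records[offset:]
    with open(OFFSET_FILE, "w") as f:
        json.dump({source_id: len(source_records)}, f)
    return new_records
```
<p>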
These offsets are different from Kafka offsets; they are based on the source system (database, file, etc.).<\/p>\n<p>In standalone mode, the source offset is tracked in a local file, while in distributed mode the source offset is tracked in a Kafka topic.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/result.gif\" alt=\"\" width=\"1694\" height=\"788\" class=\"alignnone size-full wp-image-735\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/result01.gif\" alt=\"\" width=\"1240\" height=\"730\" class=\"alignnone size-full wp-image-736\" \/><\/p>\n<h2>2. Source &#038; Sink connectors<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/source_sink-1024x429.jpg\" alt=\"\" width=\"625\" height=\"262\" class=\"alignnone size-large wp-image-715\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/source_sink-1024x429.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/source_sink-300x126.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/source_sink-768x321.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/source_sink-624x261.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/source_sink.jpg 1708w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p>Producers and Consumers provide complete flexibility to send any data to Kafka or process it in any way. This flexibility means you do everything yourself.<\/p>\n<p>Kafka Connect\u2019s simple framework allows:<\/p>\n<ul>\n<li>\nDevelopers to create connectors that copy data to or from other systems.\n<\/li>\n<li>\nOperators to use said connectors just by writing configuration files and submitting them to Connect. 
(No code required.)\n<\/li>\n<li>\nCommunity and 3rd-party engineers to build reliable plugins for common data sources and sinks.\n<\/li>\n<li>\nDeployments to deliver fault tolerance and automated load balancing out of the box.\n<\/li>\n<\/ul>\n<p>And the framework does the hard work:<\/p>\n<ul>\n<li>\nSerialization and deserialization.\n<\/li>\n<li>\nSchema registry integration.\n<\/li>\n<li>\nFault tolerance and failover.\n<\/li>\n<li>\nPartitioning and scale-out.\n<\/li>\n<li>\nLetting developers focus on domain-specific details.\n<\/li>\n<\/ul>\n<h2>3. Standalone &#038; Distributed<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/standalone-1024x407.jpg\" alt=\"\" width=\"625\" height=\"248\" class=\"alignnone size-large wp-image-716\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/standalone-1024x407.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/standalone-300x119.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/standalone-768x305.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/standalone-624x248.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/standalone.jpg 1150w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p>In standalone mode we have a source or a sink and a Kafka broker. When we deploy Kafka Connect in standalone mode, we need to pass a configuration file containing all the connection properties that we need to run. 
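<\/p>
<p>For example, a standalone deployment takes a worker properties file plus one properties file per connector, passed on the command line as bin\/connect-standalone.sh worker.properties file-source.properties (the file names, paths and topic below are illustrative; the connector shown is Kafka\u2019s built-in FileStreamSource):<\/p>

```properties
# worker.properties (worker-level settings)
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# standalone mode tracks source offsets in a local file
offset.storage.file.filename=/tmp/connect.offsets

# file-source.properties (connector-level settings)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/test.txt
topic=connect-test
```
<p>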
So in standalone mode the main way of providing configuration to our connector is properties files, not the REST API.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/class.png\" alt=\"\" width=\"720\" height=\"540\" class=\"alignnone size-full wp-image-717\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/class.png 720w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/class-300x225.png 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/class-624x468.png 624w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed-1024x456.jpg\" alt=\"\" width=\"625\" height=\"278\" class=\"alignnone size-large wp-image-718\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed-1024x456.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed-300x134.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed-768x342.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed-624x278.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed.jpg 1587w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p>In distributed mode we usually have more than one worker. Since these workers can be on different machines or in different containers, they cannot share the same storage space, so a properties file is out of the question. 
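<\/p>
<p>In distributed mode the equivalent configuration is submitted as JSON to any worker\u2019s REST API (port 8083 by default), for example with curl -X POST -H \"Content-Type: application\/json\" --data @file-source.json http:\/\/localhost:8083\/connectors (the connector name and settings below are illustrative):<\/p>

```json
{
  "name": "local-file-source",
  "config": {
    "connector.class": "FileStreamSource",
    "tasks.max": "1",
    "file": "/tmp/test.txt",
    "topic": "connect-test"
  }
}
```
<p>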
Instead, Kafka Connect in distributed mode leverages Kafka topics to sync configuration, offsets, and status between the workers.<\/p>\n<p><iframe loading=\"lazy\" width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/52HXoxthRs0\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed01-1024x461.jpg\" alt=\"\" width=\"625\" height=\"281\" class=\"alignnone size-large wp-image-719\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed01-1024x461.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed01-300x135.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed01-768x346.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed01-624x281.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/distributed01.jpg 1642w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<h2>4. Converters &#038; Transforms<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/converter-1024x353.jpg\" alt=\"\" width=\"625\" height=\"215\" class=\"alignnone size-large wp-image-720\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/converter-1024x353.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/converter-300x103.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/converter-768x265.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/converter-624x215.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/converter.jpg 1714w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p>Converters are a pluggable API to convert data between native formats and Kafka. 
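<\/p>
<p>Conceptually, a converter is just a pair of functions between connector records and the raw bytes stored in Kafka. Here is a minimal Python sketch of a JSON-style converter (a toy stand-in for the real Java Converter interface, whose methods are fromConnectData and toConnectData):<\/p>

```python
import json

# Toy converter: translates between a connector's in-memory records and
# the raw bytes that are actually written to or read from Kafka.

def from_connect_data(record):
    """Source side: serialize a record to bytes before producing to Kafka."""
    return json.dumps(record).encode("utf-8")

def to_connect_data(raw_bytes):
    """Sink side: deserialize bytes consumed from Kafka back to a record."""
    return json.loads(raw_bytes.decode("utf-8"))

# Round trip: what a source task hands to Kafka comes back unchanged
# on the sink side.
original = {"id": 1, "name": "igor"}
restored = to_connect_data(from_connect_data(original))
```
<p>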
Just like the name says, converters are used to convert data from one format to another.<br \/>\nIn source connectors, converters are invoked after the data has been fetched from the source and before it is published to Kafka.<br \/>\nIn sink connectors, converters are invoked after the data has been consumed from Kafka and before it is stored in the sink.<\/p>\n<p>Apache Kafka ships with a JSON converter (JsonConverter).<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/transforms-1024x353.jpg\" alt=\"\" width=\"625\" height=\"215\" class=\"alignnone size-large wp-image-721\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/transforms-1024x353.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/transforms-300x103.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/transforms-768x265.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/transforms-624x215.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/transforms.jpg 1586w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p>A transform is a simple operation that can be applied at the message level.<\/p>\n<p>There\u2019s a nice blog post about single message transforms <a href=\"https:\/\/www.confluent.io\/blog\/kafka-connect-single-message-transformation-tutorial-with-examples\/\" rel=\"noopener\" target=\"_blank\">here<\/a>.<\/p>\n<p>Single Message Transforms (SMTs) can modify events before they are stored in Kafka: mask sensitive information, add identifiers, tag events, remove unnecessary columns, and more. They can also modify events going out of Kafka: route high-priority events to a faster datastore, cast data types to match the destination, and more.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/builtin-1024x494.jpg\" alt=\"\" width=\"625\" height=\"302\" class=\"alignnone size-large 
wp-image-722\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/builtin-1024x494.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/builtin-300x145.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/builtin-768x371.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/builtin-624x301.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/builtin.jpg 1786w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<h2>5. Life cycle<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/result03.gif\" alt=\"\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-737\" \/><\/p>\n<p>I build this animation using the slides from (Randall Hauch, Confluent) Kafka Summit SF 2018 <a href=\"https:\/\/www.slideshare.net\/ConfluentInc\/so-you-want-to-write-a-connector?qid=d4aeb66d-8c3b-41be-b119-35c98a816fa7&#038;v=&#038;b=&#038;from_search=13\" rel=\"noopener\" target=\"_blank\">here<\/a>.<\/p>\n<p><b>Sequence diagram<\/b><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle-1024x658.jpg\" alt=\"\" width=\"625\" height=\"402\" class=\"alignnone size-large wp-image-724\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle-1024x658.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle-300x193.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle-768x494.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle-624x401.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle.jpg 1434w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" 
src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle01-1024x475.jpg\" alt=\"\" width=\"625\" height=\"290\" class=\"alignnone size-large wp-image-725\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle01-1024x475.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle01-300x139.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle01-768x357.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle01-624x290.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle01.jpg 1809w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle02-1024x441.jpg\" alt=\"\" width=\"625\" height=\"269\" class=\"alignnone size-large wp-image-726\" srcset=\"http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle02-1024x441.jpg 1024w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle02-300x129.jpg 300w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle02-768x331.jpg 768w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle02-624x269.jpg 624w, http:\/\/www.igfasouza.com\/blog\/wp-content\/uploads\/2020\/04\/lifecycle02.jpg 1794w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/p>\n<h2>6. Code<\/h2>\n<p>Confluent makes available a code example that you can found <a href=\"https:\/\/github.com\/apache\/kafka\/tree\/trunk\/connect\/file\/src\/main\/java\/org\/apache\/kafka\/connect\/file\" rel=\"noopener\" target=\"_blank\">here<\/a>.<\/p>\n<p><a href=\"http:\/\/www.confluent.io\/hub\/\" rel=\"noopener\" target=\"_blank\">Confluent hub<\/a><\/p>\n<h2>7. Books<\/h2>\n<p>Modern Big Data Processing with Hadoop<\/p>\n<h2>8. 
Links<\/h2>\n<p><a href=\"https:\/\/docs.confluent.io\/current\/connect\/managing\/confluent-hub\/component-archive.html\" rel=\"noopener\" target=\"_blank\">https:\/\/docs.confluent.io\/current\/connect\/managing\/confluent-hub\/component-archive.html<\/a><\/p>\n<p><a href=\"https:\/\/docs.confluent.io\/current\/connect\/design.html\" rel=\"noopener\" target=\"_blank\">https:\/\/docs.confluent.io\/current\/connect\/design.html<\/a><\/p>\n<p><a href=\"https:\/\/www.confluent.io\/blog\/kafka-connect-deep-dive-error-handling-dead-letter-queues\/\" rel=\"noopener\" target=\"_blank\">https:\/\/www.confluent.io\/blog\/kafka-connect-deep-dive-error-handling-dead-letter-queues\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>What\u2019s the story Rory? This blog post is part of my series of posts regarding &#8220;Kafka Connect Overview&#8220;. If you&#8217;re not familiar with Kafka, I suggest you have a look at my previous post &#8220;What is Kafka?&#8221; before. This post&hellip; <a href=\"http:\/\/www.igfasouza.com\/blog\/kafka-connector-architecture\/\" class=\"more-link\">Continue Reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":710,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[25],"tags":[7,11],"class_list":["post-709","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-kafka","tag-kafka","tag-kafka-connect"],"_links":{"self":[{"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/posts\/709","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/comments?post=709"}],"version-history":[{"count":6,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/posts\/709\/revisions"}],"predecessor-version":[{"id":1250,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/posts\/709\/revisions\/1250"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/media\/710"}],"wp:attachment":[{"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/media?parent=709"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/categories?post=709"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.igfasouza.com\/blog\/wp-json\/wp\/v2\/tags?post=709"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}