This is a Korean translation of the official Apache Kafka reference documentation.
It is based on version 2.7.X.
목차:
- GETTING STARTED
- 1.1 Introduction
- 1.2 Use Cases
- 1.3 Quick Start
- STEP 1: GET KAFKA
- STEP 2: START THE KAFKA ENVIRONMENT
- STEP 3: CREATE A TOPIC TO STORE YOUR EVENTS
- STEP 4: WRITE SOME EVENTS INTO THE TOPIC
- STEP 5: READ THE EVENTS
- STEP 6: IMPORT/EXPORT YOUR DATA AS STREAMS OF EVENTS WITH KAFKA CONNECT
- STEP 7: PROCESS YOUR EVENTS WITH KAFKA STREAMS
- STEP 8: TERMINATE THE KAFKA ENVIRONMENT
- CONGRATULATIONS!
- 1.4 Ecosystem
- APIS
- CONFIGURATION
- DESIGN
- IMPLEMENTATION
- OPERATIONS
- 6.1 Basic Kafka Operations
- Adding and removing topics
- Modifying topics
- Graceful shutdown
- Balancing leadership
- Balancing Replicas Across Racks
- Mirroring data between clusters & Geo-replication
- Checking consumer position
- Managing Consumer Groups
- Expanding your cluster
- Decommissioning brokers
- Increasing replication factor
- Limiting Bandwidth Usage during Data Migration
- Setting quotas
- 6.2 Datacenters
- 6.3 Geo-Replication (Cross-Cluster Data Mirroring)
- 6.4 Kafka Configuration
- 6.5 Java Version
- 6.6 Hardware and OS
- 6.7 Monitoring
- 6.8 ZooKeeper
- SECURITY
- 7.1 Security Overview
- 7.2 Encryption and Authentication using SSL
- 7.3 Authentication using SASL
- JAAS configuration
- SASL configuration
- Authentication using SASL/Kerberos
- Authentication using SASL/PLAIN
- Authentication using SASL/SCRAM
- Authentication using SASL/OAUTHBEARER
- Enabling multiple SASL mechanisms in a broker
- Modifying SASL mechanism in a Running Cluster
- Authentication using Delegation Tokens
- 7.4 Authorization and ACLs
- 7.5 Incorporating Security Features in a Running Cluster
- 7.6 ZooKeeper Authentication
- 7.7 ZooKeeper Encryption
- KAFKA CONNECT
- KAFKA STREAMS
Next: Getting Started
A basic introduction to Kafka, its use cases, and a quick-start guide.