org.apache.kafka.common.security.auth.PrincipalBuilder
As of Kafka 1.0.0, use KafkaPrincipalBuilder instead. This will be removed in a future major release.
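A minimal sketch of the replacement interface; the class name and the fixed principal name are hypothetical:

    import org.apache.kafka.common.security.auth.AuthenticationContext;
    import org.apache.kafka.common.security.auth.KafkaPrincipal;
    import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;

    // Hypothetical example: map every authenticated connection to a fixed principal.
    public class FixedPrincipalBuilder implements KafkaPrincipalBuilder {
        @Override
        public KafkaPrincipal build(AuthenticationContext context) {
            return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "example-user");
        }
    }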
org.apache.kafka.common.serialization.ExtendedDeserializer
This class has been deprecated and will be removed in a future release. Please use Deserializer instead.
org.apache.kafka.common.serialization.ExtendedSerializer
This class has been deprecated and will be removed in a future release. Please use Serializer instead.
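Both Extended* interfaces are superseded because Deserializer and Serializer gained header-aware default methods. A minimal sketch of a plain Deserializer; the class name is hypothetical:

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.common.serialization.Deserializer;

    // A plain Deserializer now covers what ExtendedDeserializer existed for;
    // configure(), close(), and the header-aware overload are default methods.
    public class Utf8Deserializer implements Deserializer<String> {
        @Override
        public String deserialize(String topic, byte[] data) {
            return data == null ? null : new String(data, StandardCharsets.UTF_8);
        }
    }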
org.apache.kafka.streams.processor.PartitionGrouper
since 2.4 release; will be removed in 3.0.0 via KAFKA-7785
org.apache.kafka.common.config.SaslConfigs.DEFAULT_SASL_KERBEROS_PRINCIPAL_TO_LOCAL_RULES
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SaslConfigs.SASL_ENABLED_MECHANISMS
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SaslConfigs.SASL_ENABLED_MECHANISMS_DOC
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SaslConfigs.SASL_KERBEROS_PRINCIPAL_TO_LOCAL_RULES
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SaslConfigs.SASL_KERBEROS_PRINCIPAL_TO_LOCAL_RULES_DOC
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SslConfigs.DEFAULT_PRINCIPAL_BUILDER_CLASS
As of 1.0.0. This field will be removed in a future major release. In recent versions, the config is optional and there is no default.
org.apache.kafka.common.config.SslConfigs.PRINCIPAL_BUILDER_CLASS_CONFIG
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SslConfigs.PRINCIPAL_BUILDER_CLASS_DOC
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SslConfigs.SSL_CLIENT_AUTH_CONFIG
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.common.config.SslConfigs.SSL_CLIENT_AUTH_DOC
As of 1.0.0. This field will be removed in a future major release.
org.apache.kafka.streams.kstream.Windows.segments
org.apache.kafka.streams.StreamsConfig.PARTITION_GROUPER_CLASS_CONFIG
org.apache.kafka.streams.StreamsConfig.RETRIES_CONFIG
since 2.7
org.apache.kafka.streams.StreamsConfig.TOPOLOGY_OPTIMIZATION
org.apache.kafka.clients.consumer.ConsumerConfig.addDeserializerToConfig(Map<String, Object>, Deserializer<?>, Deserializer<?>)
Since 2.7.0. This will be removed in a future major release.
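This helper merged deserializer instances into the config map; passing them directly to the KafkaConsumer constructor achieves the same (the producer-side ProducerConfig.addSerializerToConfig listed below is analogous). A sketch with hypothetical connection settings:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.LongDeserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // hypothetical address
    props.put("group.id", "example-group");           // hypothetical group id
    // Deserializers are passed to the constructor instead of merged into config.
    KafkaConsumer<String, Long> consumer =
        new KafkaConsumer<>(props, new StringDeserializer(), new LongDeserializer());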
org.apache.kafka.clients.consumer.ConsumerRecord.checksum()
As of Kafka 0.11.0. Because of the potential for message format conversion on the broker, the checksum returned by the broker may not match what was computed by the producer. It is therefore unsafe to depend on this checksum for end-to-end delivery guarantees. Additionally, message format v2 does not include a record-level checksum (for performance, the record checksum was replaced with a batch checksum). To maintain compatibility, a partial checksum computed from the record timestamp, serialized key size, and serialized value size is returned instead, but this should not be depended on for end-to-end reliability.
org.apache.kafka.clients.consumer.KafkaConsumer.close(long, TimeUnit)
org.apache.kafka.clients.consumer.KafkaConsumer.committed(TopicPartition)
since 2.4. Use KafkaConsumer.committed(Set) instead.
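Migration sketch, assuming an existing consumer instance and a hypothetical topic name:

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    TopicPartition tp = new TopicPartition("example-topic", 0);
    // old: OffsetAndMetadata offset = consumer.committed(tp);
    Map<TopicPartition, OffsetAndMetadata> offsets =
        consumer.committed(Collections.singleton(tp));
    OffsetAndMetadata offset = offsets.get(tp);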
org.apache.kafka.clients.consumer.KafkaConsumer.poll​(long)
Since 2.0. Use KafkaConsumer.poll(Duration), which does not block beyond the timeout awaiting partition assignment. See KIP-266 for more information.
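Migration sketch, assuming an existing consumer instance:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    // old: ConsumerRecords<String, Long> records = consumer.poll(100L);
    ConsumerRecords<String, Long> records = consumer.poll(Duration.ofMillis(100));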
org.apache.kafka.clients.consumer.MockConsumer.close(long, TimeUnit)
org.apache.kafka.clients.consumer.MockConsumer.committed(TopicPartition)
org.apache.kafka.clients.consumer.MockConsumer.poll(long)
org.apache.kafka.clients.consumer.MockConsumer.setException(KafkaException)
org.apache.kafka.clients.consumer.NoOffsetForPartitionException.partition()
org.apache.kafka.clients.producer.Producer.close(long, TimeUnit)
org.apache.kafka.clients.producer.ProducerConfig.addSerializerToConfig(Map<String, Object>, Serializer<?>, Serializer<?>)
Since 2.7.0. This will be removed in a future major release.
org.apache.kafka.clients.producer.RecordMetadata.checksum()
As of Kafka 0.11.0. Because of the potential for message format conversion on the broker, the computed checksum may not match what was stored on the broker, or what will be returned to the consumer. It is therefore unsafe to depend on this checksum for end-to-end delivery guarantees. Additionally, message format v2 does not include a record-level checksum (for performance, the record checksum was replaced with a batch checksum). To maintain compatibility, a partial checksum computed from the record timestamp, serialized key size, and serialized value size is returned instead, but this should not be depended on for end-to-end reliability.
org.apache.kafka.common.MessageFormatter.init​(Properties)
Use MessageFormatter.configure(Map) instead; this method exists only for backward compatibility with the older Formatter interface.
org.apache.kafka.common.Metric.value()
As of 1.0.0, use Metric.metricValue() instead. This will be removed in a future major release.
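Migration sketch, assuming an existing client instance (here a consumer):

    import java.util.Map;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;

    for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
        // old: double v = entry.getValue().value();
        Object value = entry.getValue().metricValue();
        System.out.println(entry.getKey().name() + " = " + value);
    }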
org.apache.kafka.common.metrics.KafkaMetric.value()
org.apache.kafka.common.security.auth.KafkaPrincipal.fromString(String)
As of 1.0.0. This method will be removed in a future major release.
org.apache.kafka.connect.sink.SinkTask.onPartitionsAssigned​(Collection<TopicPartition>)
Use SinkTask.open(Collection) for partition initialization.
org.apache.kafka.connect.sink.SinkTask.onPartitionsRevoked​(Collection<TopicPartition>)
Use SinkTask.close(Collection) instead for partition cleanup.
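A sketch of the replacement lifecycle hooks for the two SinkTask entries above, inside a hypothetical task class:

    import java.util.Collection;
    import java.util.Map;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    public class ExampleSinkTask extends SinkTask {
        @Override public String version() { return "1.0"; }
        @Override public void start(Map<String, String> props) { }
        @Override public void put(Collection<SinkRecord> records) { }
        @Override public void stop() { }

        @Override
        public void open(Collection<TopicPartition> partitions) {
            // allocate per-partition resources (replaces onPartitionsAssigned)
        }

        @Override
        public void close(Collection<TopicPartition> partitions) {
            // flush and release per-partition resources (replaces onPartitionsRevoked)
        }
    }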
org.apache.kafka.connect.source.SourceTask.commitRecord(SourceRecord)
org.apache.kafka.streams.KafkaClientSupplier.getAdminClient(Map<String, Object>)
org.apache.kafka.streams.KafkaStreams.close(long, TimeUnit)
Use KafkaStreams.close(Duration) instead; note that KafkaStreams.close(Duration) has different semantics and does not block on zero, e.g., `Duration.ofMillis(0)`.
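Migration sketch, assuming an existing KafkaStreams instance:

    import java.time.Duration;

    // old: streams.close(10, TimeUnit.SECONDS);
    boolean closedInTime = streams.close(Duration.ofSeconds(10));
    // Duration.ZERO returns immediately rather than blocking indefinitely.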
org.apache.kafka.streams.KafkaStreams.metadataForKey(String, K, Serializer<K>)
org.apache.kafka.streams.KafkaStreams.setUncaughtExceptionHandler(Thread.UncaughtExceptionHandler)
org.apache.kafka.streams.KafkaStreams.store(String, QueryableStoreType<T>)
since 2.5 release; use KafkaStreams.store(StoreQueryParameters) instead
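Migration sketch for the store entry above; the store name "counts-store" is hypothetical:

    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    // old: streams.store("counts-store", QueryableStoreTypes.keyValueStore());
    ReadOnlyKeyValueStore<String, Long> store = streams.store(
        StoreQueryParameters.fromNameAndType("counts-store",
            QueryableStoreTypes.keyValueStore()));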
org.apache.kafka.streams.KeyQueryMetadata.getActiveHost()
org.apache.kafka.streams.KeyQueryMetadata.getPartition()
org.apache.kafka.streams.KeyQueryMetadata.getStandbyHosts()
org.apache.kafka.streams.kstream.Joined.name()
this method will be removed in a future release
org.apache.kafka.streams.kstream.Joined.named(String)
org.apache.kafka.streams.kstream.JoinWindows.after(long)
Use JoinWindows.after(Duration) instead
org.apache.kafka.streams.kstream.JoinWindows.before(long)
org.apache.kafka.streams.kstream.JoinWindows.maintainMs()
since 2.1. This function should not be used anymore, since JoinWindows.until(long) is deprecated in favor of JoinWindows.grace(Duration).
org.apache.kafka.streams.kstream.JoinWindows.of(long)
org.apache.kafka.streams.kstream.JoinWindows.until(long)
since 2.1. Use JoinWindows.grace(Duration) instead.
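Migration sketch with hypothetical window and grace sizes:

    import java.time.Duration;
    import org.apache.kafka.streams.kstream.JoinWindows;

    // old: JoinWindows.of(5000L).until(30000L);
    JoinWindows windows = JoinWindows.of(Duration.ofSeconds(5))
        .grace(Duration.ofSeconds(30));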
org.apache.kafka.streams.kstream.KStream.branch​(Predicate<? super K, ? super V>...)
since 2.8. Use KStream.split() instead.
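Migration sketch, assuming a KStream<String, Long> named stream; branch names are hypothetical:

    import java.util.Map;
    import org.apache.kafka.streams.kstream.Branched;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Named;

    // old: KStream<String, Long>[] branches = stream.branch(isPositive, isNegative);
    Map<String, KStream<String, Long>> branches = stream
        .split(Named.as("num-"))
        .branch((key, value) -> value > 0, Branched.as("positive"))
        .branch((key, value) -> value < 0, Branched.as("negative"))
        .defaultBranch(Branched.as("zero"));
    // Map keys are the prefix plus the branch name:
    KStream<String, Long> positives = branches.get("num-positive");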
org.apache.kafka.streams.kstream.KStream.groupBy(KeyValueMapper<? super K, ? super V, KR>, Serialized<KR, V>)
org.apache.kafka.streams.kstream.KStream.groupByKey(Serialized<K, V>)
since 2.1. Use KStream.groupByKey(Grouped) instead
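Migration sketch, assuming a KStream<String, Long> named stream:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KGroupedStream;

    // old: stream.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()));
    KGroupedStream<String, Long> grouped =
        stream.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()));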
org.apache.kafka.streams.kstream.KStream.join(KStream<K, VO>, ValueJoiner<? super V, ? super VO, ? extends VR>, JoinWindows, Joined<K, V, VO>)
org.apache.kafka.streams.kstream.KStream.leftJoin(KStream<K, VO>, ValueJoiner<? super V, ? super VO, ? extends VR>, JoinWindows, Joined<K, V, VO>)
org.apache.kafka.streams.kstream.KStream.outerJoin(KStream<K, VO>, ValueJoiner<? super V, ? super VO, ? extends VR>, JoinWindows, Joined<K, V, VO>)
org.apache.kafka.streams.kstream.KStream.through(String)
since 2.6; use KStream.repartition() instead
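Migration sketch, assuming a KStream<String, Long> named stream; repartition() manages the intermediate topic for you instead of requiring a pre-created one:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Repartitioned;

    // old: stream.through("user-repartition-topic");
    KStream<String, Long> repartitioned =
        stream.repartition(Repartitioned.with(Serdes.String(), Serdes.Long()));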
org.apache.kafka.streams.kstream.KTable.groupBy(KeyValueMapper<? super K, ? super V, KeyValue<KR, VR>>, Serialized<KR, VR>)
org.apache.kafka.streams.kstream.SessionWindows.maintainMs()
since 2.1. Use Materialized.retention instead.
org.apache.kafka.streams.kstream.SessionWindows.until​(long)
since 2.1. Use Materialized.retention or directly configure the retention in a store supplier and use Materialized.as(SessionBytesStoreSupplier).
org.apache.kafka.streams.kstream.SessionWindows.with(long)
org.apache.kafka.streams.kstream.TimeWindows.advanceBy(long)
org.apache.kafka.streams.kstream.TimeWindows.maintainMs()
since 2.1. Use Materialized.retention instead.
org.apache.kafka.streams.kstream.TimeWindows.of(long)
org.apache.kafka.streams.kstream.TimeWindows.until(long)
since 2.1. Use Materialized.retention or directly configure the retention in a store supplier and use Materialized.as(WindowBytesStoreSupplier).
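Migration sketch for the retention-related entries above, assuming a KStream<String, Long> named stream; window and retention sizes are hypothetical:

    import java.time.Duration;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.state.WindowStore;

    // old: TimeWindows.of(60_000L).until(86_400_000L);
    stream.groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
        .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("counts-store")
            .withRetention(Duration.ofDays(1)));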
org.apache.kafka.streams.kstream.UnlimitedWindows.maintainMs()
since 2.1. Use Materialized.retention instead.
org.apache.kafka.streams.kstream.UnlimitedWindows.startOn(long)
org.apache.kafka.streams.kstream.UnlimitedWindows.until(long)
since 2.1.
org.apache.kafka.streams.kstream.WindowedSerdes.timeWindowedSerdeFrom(Class<T>)
org.apache.kafka.streams.kstream.Windows.maintainMs()
since 2.1. Use Materialized.retention instead.
org.apache.kafka.streams.kstream.Windows.segments​(int)
since 2.1. Override segmentInterval() instead.
org.apache.kafka.streams.kstream.Windows.until​(long)
since 2.1. Use Materialized.withRetention(Duration) or directly configure the retention in a store supplier and use Materialized.as(WindowBytesStoreSupplier).
org.apache.kafka.streams.processor.MockProcessorContext.forward(K, V, int)
org.apache.kafka.streams.processor.MockProcessorContext.schedule(long, PunctuationType, Punctuator)
org.apache.kafka.streams.processor.ProcessorContext.forward(K, V, int)
org.apache.kafka.streams.processor.ProcessorContext.schedule(long, PunctuationType, Punctuator)
org.apache.kafka.streams.processor.StateStore.init(ProcessorContext, StateStore)
Since 2.7.0. Callers should invoke StateStore.init(StateStoreContext, StateStore) instead. Implementers may choose to implement this method for backward compatibility or to throw an informative exception instead.
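A sketch of the two overloads inside a custom StateStore implementation; the restore callback body is a placeholder:

    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.processor.StateStore;
    import org.apache.kafka.streams.processor.StateStoreContext;

    @Override
    public void init(StateStoreContext context, StateStore root) {
        // new-style initialization: register the store and restore its state
        context.register(root, (key, value) -> { /* restore one record */ });
    }

    @Deprecated
    @Override
    public void init(ProcessorContext context, StateStore root) {
        // keep for backward compatibility, or throw an informative exception:
        throw new UnsupportedOperationException(
            "Use init(StateStoreContext, StateStore) instead");
    }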
org.apache.kafka.streams.state.ReadOnlyWindowStore.fetch(K, long, long)
org.apache.kafka.streams.state.ReadOnlyWindowStore.fetchAll(long, long)
org.apache.kafka.streams.state.Stores.persistentSessionStore(String, long)
org.apache.kafka.streams.state.Stores.persistentWindowStore(String, long, int, long, boolean)
org.apache.kafka.streams.state.WindowBytesStoreSupplier.segments()
org.apache.kafka.streams.state.WindowStore.put(K, V)
since no timestamp is provided for the key-value pair, the window to which the key belongs cannot be determined consistently. Use WindowStore.put(Object, Object, long) instead.
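Migration sketch, assuming a windowStore variable and a processor's context in scope:

    // old: windowStore.put(key, value);
    // pass the record timestamp as the window start explicitly
    windowStore.put(key, value, context.timestamp());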
org.apache.kafka.streams.StreamsBuilder.addGlobalStore(StoreBuilder<?>, String, String, Consumed<K, V>, String, ProcessorSupplier<K, V>)
org.apache.kafka.streams.StreamsConfig.getConsumerConfigs(String, String)
org.apache.kafka.streams.StreamsMetrics.addLatencyAndThroughputSensor(String, String, String, Sensor.RecordingLevel, String...)
since 2.5. Use addLatencyRateTotalSensor() instead.
org.apache.kafka.streams.StreamsMetrics.addThroughputSensor​(String, String, String, Sensor.RecordingLevel, String...)
since 2.5. Use addRateTotalSensor() instead.
org.apache.kafka.streams.StreamsMetrics.recordLatency​(Sensor, long, long)
since 2.5. Use Sensor#record() instead.
org.apache.kafka.streams.StreamsMetrics.recordThroughput​(Sensor, long)
since 2.5. Use Sensor#record() instead.
org.apache.kafka.streams.Topology.addGlobalStore(StoreBuilder<?>, String, Deserializer<K>, Deserializer<V>, String, String, ProcessorSupplier<K, V>)
org.apache.kafka.streams.Topology.addProcessor(String, ProcessorSupplier, String...)
org.apache.kafka.streams.TopologyDescription.Source.topics()
org.apache.kafka.streams.TopologyTestDriver.advanceWallClockTime(long)
org.apache.kafka.streams.TopologyTestDriver.pipeInput(ConsumerRecord<byte[], byte[]>)
Since 2.4. Use the methods of TestInputTopic instead (see the sketch below).
org.apache.kafka.streams.TopologyTestDriver.readOutput​(String)
Since 2.4. Use the methods of TestOutputTopic instead.
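Migration sketch for the two TopologyTestDriver entries above, assuming an existing driver and hypothetical topic names "in" and "out":

    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.TestInputTopic;
    import org.apache.kafka.streams.TestOutputTopic;

    // old: driver.pipeInput(factory.create("in", "key", "value"));
    TestInputTopic<String, String> input = driver.createInputTopic(
        "in", new StringSerializer(), new StringSerializer());
    input.pipeInput("key", "value");

    // old: ProducerRecord<byte[], byte[]> record = driver.readOutput("out");
    TestOutputTopic<String, String> output = driver.createOutputTopic(
        "out", new StringDeserializer(), new StringDeserializer());
    System.out.println(output.readKeyValue());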
org.apache.kafka.common.metrics.JmxReporter(String)
Since 2.6.0. Use JmxReporter(); initialize the reporter with JmxReporter.contextChange(MetricsContext) and populate the prefix by adding a _namespace/prefix key-value pair to the MetricsContext.
org.apache.kafka.streams.KafkaStreams(Topology, StreamsConfig)
org.apache.kafka.streams.kstream.TimeWindowedDeserializer(Deserializer<T>)
org.apache.kafka.streams.kstream.WindowedSerdes.TimeWindowedSerde(Serde<T>)
org.apache.kafka.streams.TopologyTestDriver(Topology, Properties, long)
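For the JmxReporter(String) constructor above, a migration sketch following its note; the namespace is hypothetical:

    import org.apache.kafka.common.metrics.JmxReporter;
    import org.apache.kafka.common.metrics.KafkaMetricsContext;

    // old: new JmxReporter("kafka.example");
    JmxReporter reporter = new JmxReporter();
    // the context's namespace supplies the _namespace key that replaces the prefix
    reporter.contextChange(new KafkaMetricsContext("kafka.example"));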