Flowing-Retail Example with Kafka & Camunda: Error

Hi guys

Not sure if this question belongs here or in https://github.com/flowing/flowing-retail/issues.

I tried to play with the example described here, following the "Manual start (Kafka, mvn exec:java)" instructions, on a Windows 10 machine.
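
In case the exact steps matter: I started each service with the Maven exec plugin from its module directory, roughly like this (the module directory is only a placeholder here, not the real path):

cd <module-directory>
mvn exec:java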

Kafka is running, the topic flowing-retail was created, and it is listed by

kafka-topics.bat -list -zookeeper localhost:2181
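
If it is relevant (the error below complains about partitions), I assume the partition count of that topic could be checked with the describe option of the same tool:

kafka-topics.bat --describe --zookeeper localhost:2181 --topic flowing-retail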

Every module can be started, but only flowing-retail-kafka-checkout starts without any error.

All the other Spring Boot projects start and are reachable (for example, http://localhost:8091 lets me log in to Camunda, and the process order.bpmn is deployed).

BUT they all show the Kafka-related error below.

When I place an order at http://localhost:8090/, this seems to work ("Thank you for your order…"), but no event is listed in the Monitor (http://localhost:8095/), and consequently no process instance is started in the Order microservice.

Any hints?

2018-08-03 18:59:49.983  INFO 9660 --- [ask-scheduler-4] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 1.0.1
2018-08-03 18:59:49.984  INFO 9660 --- [ask-scheduler-4] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : c0518aa65f25317e
2018-08-03 18:59:50.094  INFO 9660 --- [ask-scheduler-4] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
	auto.commit.interval.ms = 100
	auto.offset.reset = earliest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.id = 
	connections.max.idle.ms = 540000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = order
	heartbeat.interval.ms = 3000
	interceptor.classes = null
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 305000
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

2018-08-03 18:59:50.098  INFO 9660 --- [ask-scheduler-4] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 1.0.1
2018-08-03 18:59:50.098  INFO 9660 --- [ask-scheduler-4] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : c0518aa65f25317e
2018-08-03 18:59:50.104  WARN 9660 --- [ask-scheduler-4] o.s.c.s.b.k.p.KafkaTopicProvisioner      : The number of expected partitions was: 1, but 0 has been found instead.There will be 1 idle consumers
2018-08-03 18:59:50.105 ERROR 9660 --- [ask-scheduler-4] o.s.cloud.stream.binding.BindingService  : Failed to create consumer binding; retrying in 30 seconds

org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer: 
	at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:326) ~[spring-cloud-stream-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:77) ~[spring-cloud-stream-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:129) ~[spring-cloud-stream-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleConsumerBinding$0(BindingService.java:154) ~[spring-cloud-stream-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) ~[spring-context-5.0.6.RELEASE.jar:5.0.6.RELEASE]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_141]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_141]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_141]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[na:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_141]
Caused by: java.lang.IllegalArgumentException: A list of partitions must be provided
	at org.springframework.util.Assert.isTrue(Assert.java:116) ~[spring-core-5.0.6.RELEASE.jar:5.0.6.RELEASE]
	at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createConsumerEndpoint(KafkaMessageChannelBinder.java:354) ~[spring-cloud-stream-binder-kafka-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createConsumerEndpoint(KafkaMessageChannelBinder.java:126) ~[spring-cloud-stream-binder-kafka-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:279) ~[spring-cloud-stream-2.0.0.RELEASE.jar:2.0.0.RELEASE]
	... 11 common frames omitted
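
Given the "A list of partitions must be provided" message and the warning that 0 partitions were found, I am wondering whether the topic simply has no partitions from the binder's point of view. Would it help to (re)create the topic explicitly with one partition, for example:

kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic flowing-retail

That is just a guess on my side; I have not changed anything in the projects' configuration.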