Apache Flink - dynamically creating a streaming data source from values in a data stream



I am trying to build a sample application with Apache Flink that:

  1. Reads a stream of stock symbols (e.g. 'CSCO', 'FB') from a Kafka queue.
  2. For each symbol, looks up the current price in real time and streams the value downstream for further processing.

*Update to the original post*

I moved the map function into a separate class and no longer get the runtime error message "The implementation of the MapFunction is not serializable anymore. The object probably contains or references non-serializable fields".

The issue I am facing now is that the Kafka topic "stockprices" that I am trying to write the prices to is not receiving them. I am trying to troubleshoot and will post any updates.

import java.util.Properties;

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;

public class RetrieveStockPrices {
    @SuppressWarnings("serial") 
    public static void main(String[] args) throws Exception { 
        final StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        streamExecEnv.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime); 
        Properties properties = new Properties(); 
        properties.setProperty("bootstrap.servers", "localhost:9092"); 
        properties.setProperty("zookeeper.connect", "localhost:2181"); 
        properties.setProperty("group.id", "stocks"); 
        DataStream<String> streamOfStockSymbols = streamExecEnv.addSource(new FlinkKafkaConsumer08<String>("stocksymbol", new SimpleStringSchema(), properties)); 
        DataStream<String> stockPrice = 
            streamOfStockSymbols 
            //get unique keys 
            .keyBy(new KeySelector<String, String>() {
                @Override
                public String getKey(String trend) throws Exception {
                    return trend;
                }
            })
            //collect events over a window 
            .window(TumblingEventTimeWindows.of(Time.seconds(60))) 
            //return the last event from the window...all elements are the same "Symbol" 
            .apply(new WindowFunction<String, String, String, TimeWindow>() {
                @Override 
                public void apply(String key, TimeWindow window, Iterable<String> input, Collector<String> out) throws Exception { 
                    out.collect(input.iterator().next().toString()); 
                }
            })
            .map(new StockSymbolToPriceMapFunction());
        streamExecEnv.execute("Retrieve Stock Prices"); 
    }
}
public class StockSymbolToPriceMapFunction extends RichMapFunction<String, String> {
    @Override
    public String map(String stockSymbol) throws Exception {
        final StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        streamExecEnv.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
        System.out.println("StockSymbolToPriceMapFunction: stockSymbol: " + stockSymbol);
        DataStream<String> stockPrices = streamExecEnv.addSource(new LookupStockPrice(stockSymbol));
        stockPrices.keyBy(new CustomKeySelector()).addSink(new FlinkKafkaProducer08<String>("localhost:9092", "stockprices", new SimpleStringSchema()));
        return "100000";
    }
    private static class CustomKeySelector implements KeySelector<String, String> {
        @Override
        public String getKey(String arg0) throws Exception {
            return arg0.trim();
        }
    }
}

public class LookupStockPrice extends RichSourceFunction<String> {
    public String stockSymbol = null;
    public boolean isRunning = true;

    public LookupStockPrice(String inSymbol) {
        stockSymbol = inSymbol;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        isRunning = true;
    }

    @Override
    public void cancel() {
        isRunning = false;
    }

    @Override
    public void run(SourceFunction.SourceContext<String> ctx) throws Exception {
        String stockPrice = "0";
        while (isRunning) {
            //TODO: query Google Finance API
            stockPrice = Integer.toString(new Random().nextInt(100) + 1);
            ctx.collect(stockPrice);
            Thread.sleep(10000);
        }
    }
}

The StreamExecutionEnvironment is not intended to be used inside the operators of a streaming application. "Not intended" means that this is neither tested nor encouraged. It might work and do something, but it will most likely not behave well.

The StockSymbolToPriceMapFunction in your program specifies a completely new and independent streaming application for each incoming record. However, since you do not call streamExecEnv.execute(), those programs are never started, and the map method returns without doing anything.

If you did call streamExecEnv.execute(), the function would start a new local Flink cluster inside the worker's JVM and launch the application on that local cluster. Each local Flink instance takes a lot of heap space, and after a few clusters have been started, the worker will probably die with an OutOfMemoryError, which is not what you want to happen.
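Instead of spawning a nested streaming application per record, the lookup can simply be a plain method call made from inside the user function itself, with the result emitted downstream. A minimal, Flink-free sketch (the `StockPriceLookup` class and its comma-separated output format are illustrative assumptions, and the random price stands in for a real quote API just as in `LookupStockPrice` above):

```java
import java.util.Random;

// Hypothetical sketch: the price lookup as an ordinary synchronous call.
// In Flink, this logic would live inside a RichFlatMapFunction<String, String>:
//
//     public void flatMap(String symbol, Collector<String> out) {
//         out.collect(lookupPrice(symbol));
//     }
//
// so no new StreamExecutionEnvironment is ever created per record.
public class StockPriceLookup {
    private final Random random = new Random();

    // Stub lookup; a real implementation would query a quote API here.
    public String lookupPrice(String stockSymbol) {
        int price = random.nextInt(100) + 1; // random price in 1..100
        return stockSymbol + "," + price;
    }

    public static void main(String[] args) {
        StockPriceLookup lookup = new StockPriceLookup();
        System.out.println(lookup.lookupPrice("csco"));
    }
}
```

Because the lookup runs in the same operator that received the symbol, its output flows through the normal dataflow to the Kafka sink, and no extra cluster or heap is consumed.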
