Getting error in Azure Stream Analytics with DocumentDB as sink
I am using Azure Stream Analytics to stream events from Event Hubs to DocumentDB. I configured the input, query, and output per the documentation, tested them with sample data, and the results came back as expected.
However, when I start the streaming job and send the same payload as the earlier sample data, I get the following error message:
There was a problem formatting the document [id] column as per DocumentDB constraints for DocumentDB db:[my-database-name], and collection:[my-collection-name].
My sample data is a JSON array:
[
  {"Sequence": 1, "Tenant": "T1", "Status": "Started"},
  {"Sequence": 2, "Tenant": "T1", "Status": "Ended"}
]
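For reference, the payload above can be built and sanity-checked with a short script before sending it to the Event Hub (a minimal sketch in plain Python using only the standard library; the events and field names are taken from the sample above, and no Event Hub SDK call is shown):

```python
import json

# The same two sample events that are sent to the Event Hub.
events = [
    {"Sequence": 1, "Tenant": "T1", "Status": "Started"},
    {"Sequence": 2, "Tenant": "T1", "Status": "Ended"},
]

# Serialize the way the input is configured: JSON, UTF-8 encoded.
payload = json.dumps(events).encode("utf-8")

# Sanity check: the payload round-trips and keeps the original casing.
decoded = json.loads(payload.decode("utf-8"))
assert decoded[0]["Tenant"] == "T1"  # field names are case-sensitive
```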
I configured the input as follows:
- Input alias: eventhubs-events
- Source Type: Data stream
- Source: Event Hub
- Subscription: same subscription as where I create the Analytics job
- Service bus namespace: an existing Event Hub namespace
- Event hub name: events (existing event hub in the namespace)
- Event hub policy name: a policy with read access
- Event hub consumer group: blank
- Event serialization format: JSON
- Encoding: UTF-8
The output is as follows:
- Output alias: documentdb-events
- Sink: DocumentDB
- Subscription: same subscription as where I create the Analytics job
- Account id: an existing DocumentDB account
- Database: records (an existing database in the account)
- Collection name pattern: collection (an existing collection in the database)
- Document id: id
My query is simply:
SELECT
    event.Sequence AS id,
    event.Tenant,
    event.Status
INTO [documentdb-events]
FROM [eventhubs-events] AS event
It turns out that all field names in the output are automatically lower-cased.
In my DocumentDB collection, I had configured the collection in partitioned mode with "/Tenant" as the partition key.
Since that casing no longer matched the lower-cased output, the document failed the DocumentDB constraint.
Changing the partition key to "/tenant" fixed the problem.
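To illustrate why the write failed, here is a minimal sketch (plain Python, not the actual DocumentDB or Stream Analytics code; `resolve_partition_key` is a hypothetical helper, and the document shape is what the job produced after lower-casing):

```python
# Document as written by the job under compatibility level 1.0:
# all field names have been lower-cased by the engine.
doc = {"id": "1", "tenant": "T1", "status": "Started"}

def resolve_partition_key(document, partition_key_path):
    """Resolve a partition key path case-sensitively, as DocumentDB does."""
    prop = partition_key_path.lstrip("/")
    return document.get(prop)  # None when the casing does not match

# Original collection definition with partition key "/Tenant": no match,
# so the document violates the partitioning constraint.
assert resolve_partition_key(doc, "/Tenant") is None

# After changing the partition key to "/tenant", the value resolves.
assert resolve_partition_key(doc, "/tenant") == "T1"
```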
Hopefully sharing these findings saves someone else some trouble.
Second option
Instead of changing the partition key to lower case, we can now change the compatibility level in Stream Analytics:
1.0 version: Field names were changed to lower case when processed by the Azure Stream Analytics engine.
1.1 version: case-sensitivity is persisted for field names when they are processed by the Azure Stream Analytics engine.
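The difference between the two levels can be sketched as follows (plain Python mimicking only the field-name handling; `normalize_field_names` is a hypothetical helper for illustration, not an ASA API):

```python
def normalize_field_names(document, compatibility_level):
    """Mimic how the ASA engine treats output field names per level."""
    if compatibility_level == "1.0":
        # 1.0: field names are forced to lower case on output.
        return {name.lower(): value for name, value in document.items()}
    # 1.1 and later: field-name casing is preserved.
    return dict(document)

event = {"Sequence": 1, "Tenant": "T1", "Status": "Started"}

# Under 1.0 the document only has "tenant"; under 1.1 it keeps "Tenant".
assert "tenant" in normalize_field_names(event, "1.0")
assert "Tenant" in normalize_field_names(event, "1.1")
```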