Bulk upsert with SQLAlchemy
I'm using SQLAlchemy 1.1.0b to bulk-insert a large amount of data into PostgreSQL, and I'm running into duplicate key errors.
```python
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base
import pg

engine = create_engine("postgresql+pygresql://" + uname + ":" + passw + "@" + url)

# reflectively load the database.
metadata = MetaData()
metadata.reflect(bind=engine)

Session = sessionmaker(autocommit=True, autoflush=True)
Session.configure(bind=engine)
session = Session()

base = automap_base(metadata=metadata)
base.prepare(engine, reflect=True)

table_name = "arbitrary_table_name"  # this will always be arbitrary
mapped_table = getattr(base.classes, table_name)

# col and col2 exist in the table.
chunks = [[{"col": "val"}, {"col2": "val2"}], [{"col": "val"}, {"col2": "val3"}]]

for chunk in chunks:
    session.bulk_insert_mappings(mapped_table, chunk)
    session.commit()
```
When I run it, I get this:
```
sqlalchemy.exc.IntegrityError: (pg.IntegrityError) ERROR: duplicate key value violates unique constraint <constraint>
```
I can't seem to get the inserts to work correctly. I'm working with time-series data, so I'm grabbing data in bulk with some overlap in the time ranges, and I want to do a bulk upsert to ensure data consistency.
What is the best way to do a bulk upsert with a large dataset? I know PostgreSQL now supports upserts, but I'm not sure how to do this in SQLAlchemy.
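For what it's worth, SQLAlchemy 1.1 exposes PostgreSQL's `INSERT ... ON CONFLICT` through `sqlalchemy.dialects.postgresql.insert`, which supports `on_conflict_do_update`. A minimal sketch, using a hypothetical `readings` table (not the reflected table from the question) and compiling the statement to SQL rather than executing it, since no live database is assumed here:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
# Hypothetical table standing in for the reflected one.
readings = Table(
    "readings", metadata,
    Column("id", Integer, primary_key=True),
    Column("value", String),
)

rows = [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]

stmt = insert(readings).values(rows)
# On a primary-key collision, overwrite the existing row's value with the
# incoming one (EXCLUDED refers to the row that failed to insert).
stmt = stmt.on_conflict_do_update(
    index_elements=[readings.c.id],
    set_={"value": stmt.excluded.value},
)

# Compile without a connection, just to show the statement shape.
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

With a live engine this would be executed as `conn.execute(stmt)`; note this goes through Core rather than `bulk_insert_mappings`, which has no upsert mode.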
From https://stackoverflow.com/a/26018934/465974
After I found this command, I was able to perform upserts, but it is worth mentioning that this operation is slow for a bulk "upsert". The alternative is to get a list of the primary keys you would like to upsert, and query the database for any matching ids:
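The alternative described above can be sketched as follows: partition the incoming rows by whether their primary key already exists, then update one group and insert the other. The helper and names here are illustrative, and the session calls shown in comments are the assumed wiring for a real run:

```python
def split_rows(incoming, existing_ids, pk="id"):
    """Partition rows by whether their primary key already exists."""
    to_update = [r for r in incoming if r[pk] in existing_ids]
    to_insert = [r for r in incoming if r[pk] not in existing_ids]
    return to_update, to_insert

# With a live session, one would fetch the existing keys first and then use
# the two bulk methods, roughly (assumed, not verified against a database):
#   ids = [r["id"] for r in rows]
#   existing = {row.id for row in
#               session.query(mapped_table.id).filter(mapped_table.id.in_(ids))}
#   to_update, to_insert = split_rows(rows, existing)
#   session.bulk_update_mappings(mapped_table, to_update)
#   session.bulk_insert_mappings(mapped_table, to_insert)

rows = [{"id": 1, "value": "new"}, {"id": 3, "value": "fresh"}]
to_update, to_insert = split_rows(rows, existing_ids={1, 2})
print(to_update)  # rows whose key already exists
print(to_insert)  # rows that are genuinely new
```

This trades the single-statement upsert for one SELECT plus two bulk operations, which can be faster than row-by-row `ON CONFLICT` handling but is not atomic: a concurrent writer can still insert a conflicting key between the SELECT and the INSERT.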