Slow insert times despite SQLite's claims that it is capable

It is well known that SQLite needs to be fine-tuned to achieve insert speeds on the order of 50k inserts/s. There are many questions here regarding slow insert speeds and a wealth of advice and benchmarks. There are also claims that SQLite can handle large amounts of data, with reports of 50+ GB not causing any problems with the right settings.

I have followed the advice here and elsewhere to achieve these speeds and I'm happy with 35k-45k inserts/s (roughly the kind of tuning sketched at the end of this post). The problem I have is that all of the benchmarks only demonstrate fast insert speeds with around 10m rows.

Known workarounds which I'd like to avoid if possible:

- Drop the index, add the records, and re-index (sketched below): this is fine as a workaround, but doesn't work when the DB still needs to be usable during updates. It won't work to make the database completely inaccessible for x minutes a day.
- Break the table into smaller subtables / files: this will work in the short term and I have already experimented with it. The problem is that I need to be able to retrieve data from the entire history when querying, which means that eventually I'll hit the limit of 62 attached databases. Attaching, collecting results in a temp table, and detaching hundreds of times per request (sketched below) seems to be a lot of work and overhead, but I'll try it if there are no other alternatives.
- Set SQLITE_FCNTL_CHUNK_SIZE: I don't know C (?!), so I'd prefer not to learn it just to get this done. I can't see any way to set this parameter using Perl, though.

Following Tim's suggestion that an index was causing increasingly slow insert times ...
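For reference, the tuning I'm referring to boils down to something like the following Perl/DBI sketch. The file name, table, columns, and row counts are made up for illustration (my real schema isn't shown here); the essential parts are the relaxed PRAGMAs, a single transaction around the batch, and a reused prepared statement.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# File, table, and column names below are hypothetical, for illustration only.
my $dbh = DBI->connect("dbi:SQLite:dbname=history.db", "", "",
    { RaiseError => 1, AutoCommit => 1 });

# Trade some durability for insert throughput.
$dbh->do("PRAGMA journal_mode = WAL");
$dbh->do("PRAGMA synchronous = NORMAL");
$dbh->do("PRAGMA cache_size = -64000");   # negative value = size in KiB (~64 MB)

$dbh->do("CREATE TABLE IF NOT EXISTS history (ts INTEGER, sensor INTEGER, value REAL)");
$dbh->do("CREATE INDEX IF NOT EXISTS idx_history_ts ON history (ts)");

# One transaction and one reused prepared statement for the whole batch.
my $sth = $dbh->prepare("INSERT INTO history (ts, sensor, value) VALUES (?, ?, ?)");

$dbh->begin_work;
for my $i (1 .. 100_000) {
    $sth->execute(time() + $i, $i % 100, rand());
}
$dbh->commit;

$dbh->disconnect;
```

Without the explicit transaction, every INSERT becomes its own fsync'd transaction, which is the usual cause of the dramatic slowdowns people benchmark against.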
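A sketch of the drop/re-index workaround, using the same made-up schema, to make clear why the DB is effectively unusable for indexed queries while it runs:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=history.db", "", "",
    { RaiseError => 1, AutoCommit => 1 });

# Drop the (hypothetical) index before the bulk load ...
$dbh->do("DROP INDEX IF EXISTS idx_history_ts");

# ... load the new records in one transaction ...
my $sth = $dbh->prepare("INSERT INTO history (ts, sensor, value) VALUES (?, ?, ?)");
$dbh->begin_work;
$sth->execute(time() + $_, $_ % 100, rand()) for 1 .. 100_000;
$dbh->commit;

# ... then rebuild the index once at the end. Rebuilding once is much cheaper
# than maintaining the index row by row, but indexed queries are degraded or
# unavailable until this finishes -- the drawback described above.
$dbh->do("CREATE INDEX idx_history_ts ON history (ts)");

$dbh->disconnect;
```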
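And a sketch of the attach / collect into a temp table / detach pattern I'd have to repeat for every request, again with made-up file, table, and column names. Each archive file is attached one at a time, so the attached-database limit is never reached, but the per-request overhead is what worries me.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Current database plus a list of hypothetical per-period archive files.
my $dbh = DBI->connect("dbi:SQLite:dbname=history_current.db", "", "",
    { RaiseError => 1, AutoCommit => 1 });

my @archives = ("history_2012.db", "history_2013.db");   # made-up file names

# Collect matching rows from every archive into one temp table, attaching
# and detaching as we go.
$dbh->do("CREATE TEMP TABLE results (ts INTEGER, sensor INTEGER, value REAL)");

my $n = 0;
for my $file (@archives) {
    my $alias = "arch" . $n++;
    $dbh->do("ATTACH DATABASE ? AS $alias", undef, $file);
    $dbh->do("INSERT INTO results
              SELECT ts, sensor, value FROM $alias.history WHERE sensor = ?",
             undef, 42);                        # 42 = hypothetical query filter
    $dbh->do("DETACH DATABASE $alias");
}

# Add the live table's rows, then read the combined answer from the temp table.
$dbh->do("INSERT INTO results
          SELECT ts, sensor, value FROM history WHERE sensor = ?", undef, 42);

my $rows = $dbh->selectall_arrayref("SELECT ts, value FROM results ORDER BY ts");
print scalar(@$rows), " rows found\n";

$dbh->disconnect;
```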