Comments (3)
Log of Datastore.Native:
This was produced by reducing the child count to 3 and performing an element 'switch' at index 1, i.e. the first and second elements should change places:
The first 3 "paragraphs" are the creation of the 3 child elements.
The next section is meant to perform the switch of the elements in the join table, and that is where one of the batch UPDATE statements fails. I have added comments in this section stating what I think each SQL statement is doing.
23:51:19,003 (main) DEBUG [DataNucleus.Datastore.Native] - INSERT INTO compositeobject (compositeobject_id,bus_reg,`name`,version,classid) VALUES (<561>,<null>,<'A0'>,<1>,<827687841>)
23:51:19,006 (main) DEBUG [DataNucleus.Datastore.Native] - UPDATE compositeobject SET version=<267> WHERE compositeobject_id=<1> AND version=<266>
23:51:19,007 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT COUNT(*) FROM composite_childobjects THIS LEFT OUTER JOIN compositeobject ELEM ON THIS.compositeobject_id_eid=ELEM.compositeobject_id WHERE THIS.compositeobject_id_oid=<1> AND THIS.integer_idx>=0 AND (ELEM.classid=<827687841> OR classid IS NULL)
23:51:19,009 (main) DEBUG [DataNucleus.Datastore.Native] - INSERT INTO composite_childobjects (compositeobject_id_oid,compositeobject_id_eid,integer_idx) VALUES (<1>,<561>,<0>)
23:51:19,011 (main) DEBUG [DataNucleus.Datastore.Native] - INSERT INTO compositeobject (compositeobject_id,bus_reg,`name`,version,classid) VALUES (<562>,<null>,<'A1'>,<1>,<827687841>)
23:51:19,012 (main) DEBUG [DataNucleus.Datastore.Native] - UPDATE compositeobject SET version=<268> WHERE compositeobject_id=<1> AND version=<267>
23:51:19,013 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT COUNT(*) FROM composite_childobjects THIS LEFT OUTER JOIN compositeobject ELEM ON THIS.compositeobject_id_eid=ELEM.compositeobject_id WHERE THIS.compositeobject_id_oid=<1> AND THIS.integer_idx>=0 AND (ELEM.classid=<827687841> OR classid IS NULL)
23:51:19,014 (main) DEBUG [DataNucleus.Datastore.Native] - INSERT INTO composite_childobjects (compositeobject_id_oid,compositeobject_id_eid,integer_idx) VALUES (<1>,<562>,<1>)
23:51:19,016 (main) DEBUG [DataNucleus.Datastore.Native] - INSERT INTO compositeobject (compositeobject_id,bus_reg,`name`,version,classid) VALUES (<563>,<null>,<'A2'>,<1>,<827687841>)
23:51:19,018 (main) DEBUG [DataNucleus.Datastore.Native] - UPDATE compositeobject SET version=<269> WHERE compositeobject_id=<1> AND version=<268>
23:51:19,019 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT COUNT(*) FROM composite_childobjects THIS LEFT OUTER JOIN compositeobject ELEM ON THIS.compositeobject_id_eid=ELEM.compositeobject_id WHERE THIS.compositeobject_id_oid=<1> AND THIS.integer_idx>=0 AND (ELEM.classid=<827687841> OR classid IS NULL)
23:51:19,020 (main) DEBUG [DataNucleus.Datastore.Native] - INSERT INTO composite_childobjects (compositeobject_id_oid,compositeobject_id_eid,integer_idx) VALUES (<1>,<563>,<2>)
23:51:19,180 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT a0.bus_reg,a0.`name`,a0.compositeobject_id,a0.version,a0.classid FROM compositeobject a0 WHERE a0.classid = 827687841 AND a0.`name` = <'TopLevel'>
Update the version number of the top-level object
23:51:19,185 (main) DEBUG [DataNucleus.Datastore.Native] - UPDATE compositeobject SET version=<270> WHERE compositeobject_id=<1> AND version=<269>
23:51:19,187 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT a1.bus_reg,a1.`name`,a1.compositeobject_id,a1.version,a1.classid FROM composite_childobjects a0 LEFT OUTER JOIN compositeobject a1 ON a0.compositeobject_id_eid = a1.compositeobject_id WHERE ((a1.classid = 827687841 OR a1.classid IS NULL)) AND a0.compositeobject_id_oid = <1> AND a0.integer_idx = 1
23:51:19,189 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT COUNT(*) FROM composite_childobjects THIS LEFT OUTER JOIN compositeobject ELEM ON THIS.compositeobject_id_eid=ELEM.compositeobject_id WHERE THIS.compositeobject_id_oid=<1> AND THIS.integer_idx>=0 AND (ELEM.classid=<827687841> OR classid IS NULL)
Delete the record from the join table at index = 1
23:51:19,190 (main) DEBUG [DataNucleus.Datastore.Native] - DELETE FROM composite_childobjects WHERE compositeobject_id_oid=<1> AND integer_idx=<1>
Decrement the idx of the record following the one that was just deleted. Its current idx = 2, so it satisfies the > 1 condition in the WHERE clause; its idx will be set to 2 - 1 = 1.
This leaves us with two elements at idx 0 and 1.
23:51:19,191 (main) DEBUG [DataNucleus.Datastore.Native] - BATCH [UPDATE composite_childobjects SET integer_idx = integer_idx + <-1> WHERE compositeobject_id_oid=<1> AND integer_idx><1>]
23:51:19,193 (main) DEBUG [DataNucleus.Datastore.Native] - SELECT COUNT(*) FROM composite_childobjects THIS LEFT OUTER JOIN compositeobject ELEM ON THIS.compositeobject_id_eid=ELEM.compositeobject_id WHERE THIS.compositeobject_id_oid=<1> AND THIS.integer_idx>=0 AND (ELEM.classid=<827687841> OR classid IS NULL)
Next come two identical updates which cause all elements (i.e. those with idx > -1) to be incremented... twice. Why the duplication?
After the first update we have two elements at idx 1 and 2, which seems correct given that we intend to re-add the previously deleted record at idx = 0.
Question: why is the second, identical update needed? In preparation for re-adding the deleted record at idx = 0, existing records at idx 1 and 2 are exactly what we want; the second incrementing update leaves the idx values at 2 and 3.
23:51:19,194 (main) DEBUG [DataNucleus.Datastore.Native] - BATCH [UPDATE composite_childobjects SET integer_idx = integer_idx + <1> WHERE compositeobject_id_oid=<1> AND integer_idx><-1>]
23:51:19,208 (main) DEBUG [DataNucleus.Datastore.Native] - BATCH [UPDATE composite_childobjects SET integer_idx = integer_idx + <1> WHERE compositeobject_id_oid=<1> AND integer_idx><-1>]
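To make the index bookkeeping concrete, here is a standalone simulation of the logged join-table updates. This is a sketch only: the map-based `shift` helper is my own illustration, not DataNucleus code; the element ids 561-563 are taken from the log above. It replays the delete, the single decrement, and the duplicated increment:

```java
import java.util.TreeMap;

// Standalone simulation of the join-table index updates seen in the log
public class ShiftSimulation
{
    // Apply "SET integer_idx = integer_idx + amount WHERE integer_idx > start"
    static void shift(TreeMap<Integer, Long> rows, int start, int amount)
    {
        TreeMap<Integer, Long> shifted = new TreeMap<>();
        rows.forEach((idx, id) -> shifted.put(idx > start ? idx + amount : idx, id));
        rows.clear();
        rows.putAll(shifted);
    }

    public static void main(String[] args)
    {
        // integer_idx -> compositeobject_id_eid for owner id 1, as created in the log
        TreeMap<Integer, Long> rows = new TreeMap<>();
        rows.put(0, 561L); // 'A0'
        rows.put(1, 562L); // 'A1'
        rows.put(2, 563L); // 'A2'

        rows.remove(1);           // DELETE ... WHERE integer_idx = 1
        shift(rows, 1, -1);       // decrement idx > 1
        System.out.println(rows); // {0=561, 1=563}

        shift(rows, -1, +1);      // first increment of idx > -1
        System.out.println(rows); // {1=561, 2=563} -- ready for a re-insert at idx 0

        shift(rows, -1, +1);      // duplicated increment
        System.out.println(rows); // {2=561, 3=563} -- one shift too many
    }
}
```

After the duplicated increment the surviving rows sit at idx 2 and 3 rather than the expected 1 and 2, which matches the suspicion below about what the failing INSERT may be colliding with.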
I'm not sure whether Datastore.Native logs statements before execution or only after successful execution. The exception stack trace indicates that the failure occurs on an INSERT, and no such INSERT appears in the log, so perhaps Native logging happens only after successful execution. It would be interesting to see the record it was attempting to insert: ideally it would have idx = 0, but given that the insert fails with a "can't add duplicate" error it may be inserting a record with idx = 2 or 3, as those are the indices of the existing records in the join table.
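One way to get more context around the Native lines, assuming a Log4j 1.x setup as described in the DataNucleus logging documentation, is to raise the parent Datastore categories to DEBUG as well, so statement execution and persistence activity are logged alongside the raw SQL (the category names are from the DataNucleus docs; the appender name A1 is just an example):

```properties
log4j.category.DataNucleus.Datastore=DEBUG, A1
log4j.category.DataNucleus.Datastore.Native=DEBUG, A1
log4j.category.DataNucleus.Datastore.Persist=DEBUG, A1
```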
from datanucleus-rdbms.
I've found a possible cause of the issue (note: perhaps not the root cause, but still likely incorrect code).
In internalShiftBulk there appears to be a copy/paste glitch: the code that builds the bulk shift statement (shiftBulkStmt) was evidently based on the existing "one at a time" shift statement (shiftStmt).
In internalShiftBulk (full source below), a local variable shiftBulkStmt holds the return value of getShiftBulkStmt(); however, when executeStatementUpdate is called, the statement passed in is not the local variable shiftBulkStmt but the field shiftStmt (the "one at a time" statement).
    // Execute the statement
    return sqlControl.executeStatementUpdate(ec, conn, >>>> shiftStmt <<<<, ps, executeNow);
protected int[] internalShiftBulk(ObjectProvider op, ManagedConnection conn, boolean batched, int start, int amount, boolean executeNow)
throws MappedDatastoreException
{
    ExecutionContext ec = op.getExecutionContext();
    SQLController sqlControl = storeMgr.getSQLController();
    String shiftBulkStmt = getShiftBulkStmt();
    try
    {
        PreparedStatement ps = sqlControl.getStatementForUpdate(conn, shiftBulkStmt, batched);
        try
        {
            int jdbcPosition = 1;
            jdbcPosition = BackingStoreHelper.populateOrderInStatement(ec, ps, amount, jdbcPosition, orderMapping);
            jdbcPosition = BackingStoreHelper.populateOwnerInStatement(op, ec, ps, jdbcPosition, this);
            jdbcPosition = BackingStoreHelper.populateOrderInStatement(ec, ps, start, jdbcPosition, orderMapping);
            if (relationDiscriminatorMapping != null)
            {
                jdbcPosition = BackingStoreHelper.populateRelationDiscriminatorInStatement(ec, ps, jdbcPosition, this);
            }
            // Execute the statement
            return sqlControl.executeStatementUpdate(ec, conn, shiftStmt, ps, executeNow); // ?????? Wrong stmt passed in ??????
        }
        finally
        {
            sqlControl.closeStatement(conn, ps);
        }
    }
    catch (SQLException sqle)
    {
        throw new MappedDatastoreException(shiftStmt, sqle);
    }
}
Result of testing with the change to shiftBulkStmt
Changing shiftStmt to shiftBulkStmt did not fix the issue; it still raises an exception when attempting the INSERT.
JoinListStore: reverting to the non-bulk shift in internalAdd resolves the issue for now; not optimized, but at least it doesn't crash.
It appears the original 'single shift' code was still present but commented out. I uncommented it and commented out the call to internalShiftBulk, and the test case now succeeds:
// internalShiftBulk(op, mconn, true, start-1, shift, true);

// Revert to "one at a time" shifting to avoid the bulk shift bug
boolean batched = currentListSize - start > 0;
for (int i = currentListSize - 1; i >= start; i--)
{
    // Shift the index for this row by "shift"
    internalShift(op, mconn, batched, i, shift, (i == start));
}