Comments (7)
Perhaps we could ORDER BY the table we sample from within the method, if a seed is provided?
Something roughly like this (in estimate_u.py):

```python
if seed is not None:
    # Wrap in parens so it can be interpolated as a subquery after FROM
    table_to_sample_from = "(SELECT * FROM __splink__df_concat_with_tf ORDER BY unique_id)"
else:
    table_to_sample_from = "__splink__df_concat_with_tf"
...
sql = f"""
SELECT *
FROM {table_to_sample_from}
{sampling_sql_string}
"""
```
Realise there is a computational cost to this, but it's potentially not too bad, and as it is an (optional) part of training it might be a reasonable trade-off to offer users.
I'm happy to give it a whirl and then revisit if it turns out not to be just a quick few lines.
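A quick illustration of why a seed alone isn't sufficient: a seeded Bernoulli sample keeps the same *positions* on every run, so if the underlying rows arrive in a different order, a different set of rows ends up in the sample. A minimal sketch using Python's stdlib `random` (not Splink's actual sampling code; `unique_id` mirrors the column referenced above):

```python
import random

rows = [{"unique_id": i} for i in range(1000)]

def bernoulli_sample(rows, p, seed):
    # Seeded Bernoulli sample: each row is kept independently with
    # probability p, using a fresh RNG so the draw sequence is fixed.
    rng = random.Random(seed)
    return [r["unique_id"] for r in rows if rng.random() < p]

# Same seed, same row order -> identical sample every time.
a = bernoulli_sample(rows, 0.1, seed=2)
b = bernoulli_sample(rows, 0.1, seed=2)
assert a == b

# Same seed, shuffled row order -> the same *positions* are kept,
# but those positions now hold different rows, so the sample differs.
shuffled = rows[:]
random.Random(0).shuffle(shuffled)
c = bernoulli_sample(shuffled, 0.1, seed=2)
assert set(c) != set(a)

# Sorting first (the ORDER BY unique_id idea) restores determinism.
d = bernoulli_sample(sorted(shuffled, key=lambda r: r["unique_id"]), 0.1, seed=2)
assert d == a
```

This is exactly the failure mode if the table being sampled is materialised in a different order on each run.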
Thanks for the report. What do you get if you run this?
```python
from splink.datasets import splink_datasets
from splink.duckdb.linker import DuckDBLinker
import altair as alt
from splink.duckdb.comparison_library import exact_match
import pandas as pd

pd.options.display.max_rows = 1000
df = splink_datasets.historical_50k
from splink.duckdb.blocking_rule_library import block_on

# Simple settings dictionary will be used for exploratory analysis
settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        block_on(["first_name", "surname"]),
        block_on(["surname", "dob"]),
        block_on(["first_name", "dob"]),
        block_on(["postcode_fake", "first_name"]),
    ],
    "comparisons": [
        exact_match("first_name"),
        exact_match("surname"),
    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
    "max_iterations": 10,
    "em_convergence": 0.01,
}

for i in range(10):
    linker = DuckDBLinker(df, settings)
    linker.estimate_u_using_random_sampling(1e5, seed=2)
    print(linker.save_settings_to_json()["comparisons"][0]["comparison_levels"][1]["u_probability"])
```
I get `0.011881562201718104` every time.
Thanks for such a quick response @RobinL! That's very interesting - you're right, I get that exact value every time. However, if I use a more complex comparisons list, I no longer get consistent values with the following:
```python
from splink.datasets import splink_datasets
from splink.duckdb.linker import DuckDBLinker
import altair as alt
from splink.duckdb.comparison_library import exact_match
import splink.duckdb.comparison_template_library as ctl
import splink.duckdb.comparison_library as cl
import pandas as pd

pd.options.display.max_rows = 1000
df = splink_datasets.historical_50k
from splink.duckdb.blocking_rule_library import block_on

# Simple settings dictionary will be used for exploratory analysis
settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        block_on(["first_name", "surname"]),
        block_on(["surname", "dob"]),
        block_on(["first_name", "dob"]),
        block_on(["postcode_fake", "first_name"]),
    ],
    "comparisons": [
        ctl.name_comparison("first_name", term_frequency_adjustments=True),
        ctl.name_comparison("surname", term_frequency_adjustments=True),
        ctl.date_comparison("dob", cast_strings_to_date=True, invalid_dates_as_null=True),
        ctl.postcode_comparison("postcode_fake"),
        cl.exact_match("birth_place", term_frequency_adjustments=True),
        cl.exact_match("occupation", term_frequency_adjustments=True),
    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
    "max_iterations": 10,
    "em_convergence": 0.01,
}

for i in range(10):
    linker = DuckDBLinker(df, settings)
    linker.estimate_u_using_random_sampling(1e5, seed=2)
    print(linker.save_settings_to_json()["comparisons"][0]["comparison_levels"][1]["u_probability"])
```
Haven't looked in great detail yet, but it looks like the table `__splink__df_concat_with_tf` is not consistent between iterations. When we Bernoulli sample from it, because the rows are in a different order, we get a different `__splink__df_concat_with_tf_sample` table, which ultimately means a different u-probability.
This might be driven by the tf tables themselves, which are pretty variable in order - I guess this then affects how the join ends up working.
Right - nice spot - so it's a consequence of tables (results) in SQL being inherently unordered (unless an ORDER BY is specified) as opposed to anything to do with the random number generator itself. That would make sense.
Whilst this would theoretically be fixable by ensuring we put an unambiguous ORDER BY in all our results, it would add too much complexity to the codebase (and be a big piece of work), so we might just need to drop support for `seed` altogether in backends that produce inconsistent results. @RossKen what do you think?
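The "unordered unless ORDER BY" point applies to any SQL engine. A minimal stdlib sketch using `sqlite3` (standing in for DuckDB here; the `fetch` helper is purely illustrative, not Splink code) showing that an unambiguous ORDER BY makes a result deterministic regardless of how the underlying table was materialised:

```python
import sqlite3

def fetch(rows, order_by=None):
    # Build an in-memory table from `rows` and select it back,
    # optionally with a deterministic ORDER BY.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (unique_id INTEGER, name TEXT)")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    sql = "SELECT * FROM t"
    if order_by:
        sql += f" ORDER BY {order_by}"
    return con.execute(sql).fetchall()

rows = [(1, "a"), (2, "b"), (3, "c")]
reversed_rows = rows[::-1]

# Without ORDER BY, SQL gives no ordering guarantee: the engine may
# return rows in any order. With an unambiguous ORDER BY, the result
# is identical however the table happened to be built.
assert fetch(rows, "unique_id") == fetch(reversed_rows, "unique_id")
```

With no ORDER BY, the engine is free to return rows in any order it likes, which is what makes seeded sampling over an unordered result non-deterministic.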
Yes - it hadn't occurred to me that the fix might be that simple - but if it is, that sounds like a sensible solution to me.