
Comments (13)

cseeg commented on June 5, 2024

Yes, what Sterling said is correct. I was looking through this Kaggle post to understand more about shap-hypetune, and that's where I came to the conclusion to use hyperopt combined with BoostRFA. I will fix those issues and look into Optuna.

cseeg commented on June 5, 2024

Yeah, this image was the best way to help visualize it:

[image]

sgbaird commented on June 5, 2024

Question brought up during the meeting: whether to include compositional information.

sgbaird commented on June 5, 2024

@cseeg Lattice parameters and unit cell volume can be accessed through the pymatgen.core.structure.Structure objects that Matbench gives you. See:

latt_a.append(s._lattice.a)
latt_b.append(s._lattice.b)
latt_c.append(s._lattice.c)
angles.append(list(s._lattice.angles))
volume.append(s.volume)
space_group.append(s.get_space_group_info()[1])
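
For context, here is a minimal sketch of how those lines could be wrapped into a feature-extraction loop (illustrative only, not code from the repo; it assumes structures is an iterable of pymatgen Structure objects, e.g. the training inputs returned by task.get_train_and_val_data(fold)):

import pandas as pd

# Build a small table of scalar descriptors from each pymatgen Structure
records = []
for s in structures:  # assumption: structures come from the Matbench task
    records.append({
        "a": s.lattice.a,
        "b": s.lattice.b,
        "c": s.lattice.c,
        "alpha": s.lattice.angles[0],
        "beta": s.lattice.angles[1],
        "gamma": s.lattice.angles[2],
        "volume": s.volume,
        "space_group": s.get_space_group_info()[1],  # international space-group number
    })

X = pd.DataFrame(records)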

sgbaird commented on June 5, 2024

Matbench instructions

Example using structure-based model
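
For reference, the overall Matbench submission loop looks roughly like this (a minimal sketch based on the Matbench docs; the task name is only an example, and featurize/train_model are hypothetical placeholders for the feature extraction and regressor):

from matbench.bench import MatbenchBenchmark

mb = MatbenchBenchmark(autoload=False, subset=["matbench_mp_gap"])  # example task; substitute the one being benchmarked

for task in mb.tasks:
    task.load()
    for fold in task.folds:
        train_inputs, train_outputs = task.get_train_and_val_data(fold)
        model = train_model(featurize(train_inputs), train_outputs)  # hypothetical helpers
        test_inputs = task.get_test_data(fold, include_target=False)
        predictions = model.predict(featurize(test_inputs))
        task.record(fold, predictions)

mb.to_file("results.json.gz")  # results file to include with the Matbench PR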

sgbaird commented on June 5, 2024

How this fits into the bigger picture

Faris is working on a convolutional neural network that uses the full 64x64 representation, so that will be one of the main comparisons, along with the dummy baseline and other Matbench models. If Faris' model performs worse than yours, that would suggest the representation simply has too many features relative to the number of datapoints (64*64 = 4096 features vs. fewer than 10 for yours) to be useful for regression. If Faris' model performs better, then it might be worth adding composition information to your model as a follow-up (i.e. letting it know what the chemical formula is), or we might just stop there.

There are a couple of ways these results can affect the design decisions of xtal2png. For example, we can see how changes in the design affect regression accuracy and whether that correlates well with performance on the generative benchmark tasks, which are much less established. It also has implications for when we start doing conditional generation, such as whether we could rely on a prediction using the xtal2png representation or whether we need a separate model (e.g. ALIGNN, MEGNet) to predict properties separately from the generation. My guess is probably the latter, but it's worth the simple check.

Mostly, I'm thinking of it as additional baselines and another perspective on the representation's behavior in a more established setting (regression/classification performance).

sgbaird commented on June 5, 2024

Initial notebook using default XGBoost parameters is at #78; Matbench submission to follow soon.

sgbaird commented on June 5, 2024

Matbench PR submitted in materialsproject/matbench#152

sgbaird commented on June 5, 2024

Hyperopt submission is ready to go from @cseeg. Planning to submit a Matbench PR soon.

sgbaird commented on June 5, 2024

@cseeg the hyperopt submission notebook is close, but it needs to be reworked and rerun. The hyperparameter optimization should occur once for each Matbench fold in the loop.

i.e. remove the hardcoded hyperparameters params=...:

    # Define dictionary of hyperparameters. This came from the HYPERPARAM TUNING WITH HYPEROPT + RECURSIVE FEATURE ADDITION (RFA)
    params = {'colsample_bytree': 0.7271776258515598, 'learning_rate': 0.032792408056138485, 'max_depth': 19}

    # Set up and train XGBoost model
    train = xgb.DMatrix(X, label=y)
    num_round = 100
    my_model = xgb.train(params, train, num_round)  # hyperopt should occur here

and before my_model = xgb.train(params, train, num_round), do your hyperparameter optimization (below) within the Matbench fold loop:

# Imports needed for this snippet
import numpy as np
from hyperopt import hp, Trials
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from shaphypetune import BoostRFA

# Define regressor and split the dataset into training and validation sets
X_regr_train, X_regr_valid, y_regr_train, y_regr_valid = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=42
)
regr_xgb = XGBRegressor(n_estimators=150, random_state=0, verbosity=0, n_jobs=-1)

# Dictionary of hyperopt search spaces to explore
param_dist_hyperopt = {
    'max_depth': 15 + hp.randint('max_depth', 5),
    'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.2)),
    'colsample_bytree': hp.uniform('colsample_bytree', 0.6, 1.0),
}

# Define and fit the model (recursive feature addition + Bayesian search via hyperopt)
model = BoostRFA(
    regr_xgb, param_grid=param_dist_hyperopt, min_features_to_select=1, step=1,
    n_iter=50, sampling_seed=0
)
model.fit(
    X_regr_train, y_regr_train, trials=Trials(),
    eval_set=[(X_regr_valid, y_regr_valid)], early_stopping_rounds=6, verbose=0
)

model.best_params_

Then my_model = xgb.train(params, train, num_round) should use the optimized hyperparameters with all the training + validation data (still not the test data), e.g.:

my_model = xgb.train(model.best_params_, train, num_round)

To recap: for each Matbench fold, split the train_and_val data into train and val, find the optimal hyperparameters, and then fit a new model on all of train_and_val with those hyperparameters. Use this newly trained model to predict on the test data and record the predictions with task.record. Lmk if you have questions on this.
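
Putting this together, the per-fold flow might look roughly like the following (a minimal sketch reusing the imports and definitions from the snippet above; featurize is a hypothetical helper that builds the lattice/volume/space-group feature table from the structures):

for fold in task.folds:
    train_inputs, train_outputs = task.get_train_and_val_data(fold)
    X, y = featurize(train_inputs), train_outputs.values  # hypothetical featurizer

    # 1. Split train_and_val into train/val and run the hyperparameter search
    X_regr_train, X_regr_valid, y_regr_train, y_regr_valid = train_test_split(
        X, y, test_size=0.3, shuffle=True, random_state=42
    )
    model = BoostRFA(
        regr_xgb, param_grid=param_dist_hyperopt, min_features_to_select=1, step=1,
        n_iter=50, sampling_seed=0
    )
    model.fit(
        X_regr_train, y_regr_train, trials=Trials(),
        eval_set=[(X_regr_valid, y_regr_valid)], early_stopping_rounds=6, verbose=0
    )

    # 2. Refit on all of train_and_val with the optimized hyperparameters
    train = xgb.DMatrix(X, label=y)
    num_round = 100
    my_model = xgb.train(model.best_params_, train, num_round)

    # 3. Predict on the held-out test data and record the predictions
    test_inputs = task.get_test_data(fold, include_target=False)
    predictions = my_model.predict(xgb.DMatrix(featurize(test_inputs)))
    task.record(fold, predictions)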

kjappelbaum commented on June 5, 2024

I didn't check the full notebook, but you might want to check out Optuna as an alternative to hyperopt. It tends to be more efficient than hyperopt and also has a pruning callback for XGBoost (there is a note on this in The Kaggle Book).
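
For reference, a minimal sketch of an Optuna study with the XGBoost pruning callback (illustrative only; assumes X_train, X_valid, y_train, y_valid are already defined, and uses example parameter ranges):

import numpy as np
import optuna
import xgboost as xgb
from optuna.integration import XGBoostPruningCallback

def objective(trial):
    params = {
        "objective": "reg:squarederror",
        "max_depth": trial.suggest_int("max_depth", 3, 20),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.2, log=True),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.6, 1.0),
    }
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)
    # Prune unpromising trials using the validation RMSE reported after each boosting round
    pruning_cb = XGBoostPruningCallback(trial, "valid-rmse")
    booster = xgb.train(
        params, dtrain, num_boost_round=100,
        evals=[(dvalid, "valid")], callbacks=[pruning_cb], verbose_eval=False,
    )
    preds = booster.predict(dvalid)
    return float(np.sqrt(np.mean((preds - y_valid) ** 2)))

study = optuna.create_study(direction="minimize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=50)
print(study.best_params)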

sgbaird commented on June 5, 2024

@kjappelbaum oof, I forgot that hyperopt is a package. I've been (in poor taste) using it as an abbreviation for hyperparameter optimization. Glad you mentioned this. I believe @cseeg was using BoostRFA from shap-hypetune, which was developed for gradient-boosting models like XGBoost and has a sort of sklearn-like interface. I think @cseeg was running out of memory when using the other ones like BoostBoruta, and so went with BoostRFA. Good to know that Optuna has some support/integration for XGBoost (definitely a number of good examples from https://www.google.com/search?q=optuna+xgboost).

I've enjoyed using RayTune quite a bit, especially given its integration with Ax. It looks like it has Optuna support as well (other link). I should probably give Optuna a try at some point.

sgbaird commented on June 5, 2024

Ah, gotcha, I didn't realize shap-hypetune depends on hyperopt (from shap-hypetune):

apply grid-search, random-search, or bayesian-search (from hyperopt);
