
fairness-indicators's Introduction

Fairness Indicators


Fairness Indicators is designed to support teams in evaluating, improving, and comparing models for fairness concerns, in partnership with the broader TensorFlow toolkit.

The tool is actively used internally on many of our products. We would love to partner with you to understand where Fairness Indicators is most useful and where added functionality would be valuable. Please reach out at [email protected]. You can provide feedback and feature requests here.


What is Fairness Indicators?

Fairness Indicators enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers.

Many existing tools for evaluating fairness concerns don’t work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. Fairness Indicators allows you to evaluate fairness metrics across use cases of any size.

In particular, Fairness Indicators includes the ability to:

  • Evaluate the distribution of datasets
  • Evaluate model performance, sliced across defined groups of users
    • Feel confident about your results with confidence intervals and evals at multiple thresholds
  • Dive deep into individual slices to explore root causes and opportunities for improvement (see the configuration sketch after this list)
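
As a sketch of what this looks like in practice, the following TFMA configuration (the feature name 'group' and the thresholds are placeholder choices, not part of any official recipe) asks for Fairness Indicators at two decision thresholds, sliced by a group feature plus an overall baseline slice; it mirrors the eval_config pattern used later on this page:

from google.protobuf import text_format
import tensorflow_model_analysis as tfma

# Sketch of an eval config: Fairness Indicators at two decision thresholds,
# sliced by a placeholder 'group' feature, plus the overall (unsliced) slice.
eval_config = text_format.Parse("""
  model_specs {
    prediction_key: 'prediction'
    label_key: 'label'
  }
  metrics_specs {
    metrics { class_name: "AUC" }
    metrics {
      class_name: "FairnessIndicators"
      config: '{"thresholds": [0.5, 0.9]}'
    }
  }
  slicing_specs { feature_keys: 'group' }
  slicing_specs {}
""", tfma.EvalConfig())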

This case study, complete with videos and programming exercises, demonstrates how Fairness Indicators can be used on one of your own products to evaluate fairness concerns over time.

pip install fairness-indicators

The pip package includes the major dependencies of Fairness Indicators, such as TensorFlow Data Validation (TFDV) and TensorFlow Model Analysis (TFMA).

Nightly Packages

Fairness Indicators also hosts nightly packages at https://pypi-nightly.tensorflow.org on Google Cloud. To install the latest nightly package, please use the following command:

pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple fairness-indicators

This will install the nightly packages for the major dependencies of Fairness Indicators, such as TensorFlow Data Validation (TFDV) and TensorFlow Model Analysis (TFMA).

How can I use Fairness Indicators?

TensorFlow Models

  • Access Fairness Indicators as part of the Evaluator component in TensorFlow Extended [docs]
  • Access Fairness Indicators in TensorBoard when evaluating other real-time metrics [docs] (a brief sketch follows)
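
For the TensorBoard route, the tensorboard-plugin-fairness-indicators package provides a summary writer. The sketch below follows the plugin's documented usage, with './eval_result_output_dir' standing in for wherever your TFMA evaluation was written (treat the exact paths as assumptions):

import tensorflow as tf
from tensorboard_plugin_fairness_indicators import summary_v2

# Point a Fairness Indicators summary at an existing TFMA evaluation output
# directory, then inspect it with: tensorboard --logdir=./fairness_logs
writer = tf.summary.create_file_writer('./fairness_logs')
with writer.as_default():
    summary_v2.FairnessIndicators('./eval_result_output_dir', step=1)
writer.close()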

Not using existing TensorFlow tools? No worries!

  • Download the Fairness Indicators pip package, and use TensorFlow Model Analysis as a standalone tool [docs]
  • Model Agnostic TFMA enables you to compute Fairness Indicators based on the output of any model [docs] (see the sketch below)
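
As a minimal sketch of the model-agnostic path (column names and data here are placeholders), you can hand TFMA a pandas DataFrame of any model's outputs and reuse an eval config like the one sketched above:

import pandas as pd
import tensorflow_model_analysis as tfma

# One row per example: any model's score, the true label, and a slice feature.
df = pd.DataFrame({
    'prediction': [0.9, 0.2, 0.7],
    'label':      [1, 0, 1],
    'group':      ['a', 'b', 'a'],
})

# 'eval_config' is the config sketched earlier; results land in ./eval_out.
eval_result = tfma.analyze_raw_data(
    data=df, eval_config=eval_config, output_path='./eval_out')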

The examples directory contains several examples.

More questions?

For more information on how to think about fairness evaluation in the context of your use case, see this link.

If you have found a bug in Fairness Indicators, please file a GitHub issue with as much supporting information as you can provide.

Compatible versions

The following table shows the package versions that are compatible with each other. This is determined by our testing framework, but other untested combinations may also work.

fairness-indicators   tensorflow           tensorflow-data-validation   tensorflow-model-analysis
GitHub master         nightly (1.x/2.x)    1.15.1                       0.46.0
v0.46.0               2.15                 1.15.1                       0.46.0
v0.44.0               2.12                 1.13.0                       0.44.0
v0.43.0               2.11                 1.12.0                       0.43.0
v0.42.0               1.15.5 / 2.10        1.11.0                       0.42.0
v0.41.0               1.15.5 / 2.9         1.10.0                       0.41.0
v0.40.0               1.15.5 / 2.9         1.9.0                        0.40.0
v0.39.0               1.15.5 / 2.8         1.8.0                        0.39.0
v0.38.0               1.15.5 / 2.8         1.7.0                        0.38.0
v0.37.0               1.15.5 / 2.7         1.6.0                        0.37.0
v0.36.0               1.15.2 / 2.7         1.5.0                        0.36.0
v0.35.0               1.15.2 / 2.6         1.4.0                        0.35.0
v0.34.0               1.15.2 / 2.6         1.3.0                        0.34.0
v0.33.0               1.15.2 / 2.5         1.2.0                        0.33.0
v0.30.0               1.15.2 / 2.4         0.30.0                       0.30.0
v0.29.0               1.15.2 / 2.4         0.29.0                       0.29.0
v0.28.0               1.15.2 / 2.4         0.28.0                       0.28.0
v0.27.0               1.15.2 / 2.4         0.27.0                       0.27.0
v0.26.0               1.15.2 / 2.3         0.26.0                       0.26.0
v0.25.0               1.15.2 / 2.3         0.25.0                       0.25.0
v0.24.0               1.15.2 / 2.3         0.24.0                       0.24.0
v0.23.0               1.15.2 / 2.3         0.23.0                       0.23.0

fairness-indicators's People

Contributors

anirudh161, brills, catherinaxu, chongkong, christinagreer, dhruvesh09, embr, fhuanming, genehwung, jay90099, jindalshivam09, kevinrobinson, kumarpiyush, lamberta, markdaoust, mattdangerw, mdreves, paulgc, postmasters, rchen152, rtg0795, shuklak13, tgreensp, venkat2469, vkarampudi, yashk2810, yilei, zhouhao138


fairness-indicators's Issues

Widget Not Working in Jupyter Notebook Environment

Hi Dev team,

I am studying the fairness indicators and trying to use them locally instead of in Colab. When I try to render the widget from the following example, the results do not show up:

event_handlers = {'slice-selected':
    wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}
widget_view.render_fairness_indicator(eval_result=eval_result,
                                      slicing_column=slice_selection,
                                      event_handlers=event_handlers)
Later I checked the source code and found that when the environment is not Colab, it falls back to an empty class called FairnessIndicatorViewer. I am wondering whether this is intended or will be fixed later. Many thanks!

Cat template

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
  • Fairness Indicators version:
  • Python version:
  • Pip version:

Describe the problem

Provide the exact sequence of commands / steps that you executed before running into the problem

Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Update example and tutorial links in blog posts

hello! πŸ‘‹

Looking at some blog posts that rank highly when searching for "fairness indicators" (the Google AI blog and the TF blog), I found that many of the links are broken.

In the blog posts:

[Two screenshots from the blog posts, taken 2020-12-08, showing the broken links]

I submitted a fix for this in the README. This is the same problem as #196, but since the blog posts from December 2019 rank so highly in search engines, it might be good to backport the link fixes and republish those posts too. Thanks! 👍

AttributeError in Facessd Fairness Indicators Example Colab.ipynb

System information

  • Running "Facessd Fairness Indicators Example Colab.ipynb" on Colab
  • TensorFlow version: 2.2.0-rc3
  • Python version: 3.6.9

Describe the current behavior

Getting the following error when running cell 3, line 2:

AttributeError: module 'tfx_bsl.coders.example_coder' has no attribute 'ExamplesToRecordBatchDecoder' [while running 'DecodeData/BatchSerializedExamplesToArrowTables/BatchDecodeExamples']

Standalone code to reproduce the issue
The error is easily reproduced by running "Facessd Fairness Indicators Example Colab.ipynb" on Colab.

Other info / logs

IndexError                                Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in get(self, instruction_id, bundle_descriptor_id)
    311       # pop() is threadsafe
--> 312       processor = self.cached_bundle_processors[bundle_descriptor_id].pop()
    313     except IndexError:

IndexError: pop from empty list

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._invoke_lifecycle_method()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnInvoker.invoke_setup()

/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/utils/batch_util.py in setup(self)
    106   def setup(self):
--> 107     self._decoder = example_coder.ExamplesToRecordBatchDecoder()
    108 

AttributeError: module 'tfx_bsl.coders.example_coder' has no attribute 'ExamplesToRecordBatchDecoder'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-3-31ccd38caa04> in <module>()
      1 data_location = tf.keras.utils.get_file('lfw_dataset.tf', 'https://storage.googleapis.com/facessd_dataset/lfw_dataset.tfrecord')
      2 
----> 3 stats = tfdv.generate_statistics_from_tfrecord(data_location=data_location)
      4 tfdv.visualize_statistics(stats)

/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/utils/stats_gen_lib.py in generate_statistics_from_tfrecord(data_location, output_path, stats_options, pipeline_options, compression_type)
    113             shard_name_template='',
    114             coder=beam.coders.ProtoCoder(
--> 115                 statistics_pb2.DatasetFeatureStatisticsList)))
    116   return load_statistics(output_path)
    117 

/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)
    501   def __exit__(self, exc_type, exc_val, exc_tb):
    502     if not exc_type:
--> 503       self.run().wait_until_finish()
    504 
    505   def visit(self, visitor):

/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
    481       return Pipeline.from_runner_api(
    482           self.to_runner_api(use_fake_coders=True), self.runner,
--> 483           self._options).run(False)
    484 
    485     if self._options.view_as(TypeOptions).runtime_type_check:

/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
    494       finally:
    495         shutil.rmtree(tmpdir)
--> 496     return self.runner.run_pipeline(self, self._options)
    497 
    498   def __enter__(self):

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options)
    128       runner = BundleBasedDirectRunner()
    129 
--> 130     return runner.run_pipeline(pipeline, options)
    131 
    132 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options)
    553 
    554     self._latest_run_result = self.run_via_runner_api(
--> 555         pipeline.to_runner_api(default_environment=self._default_environment))
    556     return self._latest_run_result
    557 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto)
    563     # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to
    564     #   the teststream (if any), and all the stages).
--> 565     return self.run_stages(stage_context, stages)
    566 
    567   @contextlib.contextmanager

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages)
    704               stage,
    705               pcoll_buffers,
--> 706               stage_context.safe_coders)
    707           metrics_by_stage[stage.name] = stage_results.process_bundle.metrics
    708           monitoring_infos_by_stage[stage.name] = (

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders)
   1071         cache_token_generator=cache_token_generator)
   1072 
-> 1073     result, splits = bundle_manager.process_bundle(data_input, data_output)
   1074 
   1075     def input_for(transform_id, input_id):

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
   2332 
   2333     with UnboundedThreadPoolExecutor() as executor:
-> 2334       for result, split_result in executor.map(execute, part_inputs):
   2335 
   2336         split_result_list += split_result

/usr/lib/python3.6/concurrent/futures/_base.py in result_iterator()
    584                     # Careful not to keep a reference to the popped future
    585                     if timeout is None:
--> 586                         yield fs.pop().result()
    587                     else:
    588                         yield fs.pop().result(end_time - time.monotonic())

/usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
    430                 raise CancelledError()
    431             elif self._state == FINISHED:
--> 432                 return self.__get_result()
    433             else:
    434                 raise TimeoutError()

/usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
    382     def __get_result(self):
    383         if self._exception:
--> 384             raise self._exception
    385         else:
    386             return self._result

/usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self)
     42       # If the future wasn't cancelled, then attempt to execute it.
     43       try:
---> 44         self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))
     45       except BaseException as exc:
     46         # Even though Python 2 futures library has #set_exection(),

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map)
   2329           self._registered,
   2330           cache_token_generator=self._cache_token_generator)
-> 2331       return bundle_manager.process_bundle(part_map, expected_outputs)
   2332 
   2333     with UnboundedThreadPoolExecutor() as executor:

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
   2243             process_bundle_descriptor_id=self._bundle_descriptor.id,
   2244             cache_tokens=[next(self._cache_token_generator)]))
-> 2245     result_future = self._worker_handler.control_conn.push(process_bundle_req)
   2246 
   2247     split_results = []  # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request)
   1557       self._uid_counter += 1
   1558       request.instruction_id = 'control_%s' % self._uid_counter
-> 1559     response = self.worker.do_instruction(request)
   1560     return ControlFuture(request.instruction_id, response)
   1561 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request)
    413       # E.g. if register is set, this will call self.register(request.register))
    414       return getattr(self, request_type)(
--> 415           getattr(request, request_type), request.instruction_id)
    416     else:
    417       raise NotImplementedError

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id)
    442     # type: (...) -> beam_fn_api_pb2.InstructionResponse
    443     bundle_processor = self.bundle_processor_cache.get(
--> 444         instruction_id, request.process_bundle_descriptor_id)
    445     try:
    446       with bundle_processor.state_handler.process_instruction_id(

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in get(self, instruction_id, bundle_descriptor_id)
    316           self.state_handler_factory.create_state_handler(
    317               self.fns[bundle_descriptor_id].state_api_service_descriptor),
--> 318           self.data_channel_factory)
    319     self.active_bundle_processors[
    320         instruction_id] = bundle_descriptor_id, processor

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in __init__(self, process_bundle_descriptor, state_handler, data_channel_factory)
    741     self.ops = self.create_execution_tree(self.process_bundle_descriptor)
    742     for op in self.ops.values():
--> 743       op.setup()
    744     self.splitting_lock = threading.Lock()
    745 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.setup()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.setup()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.setup()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._invoke_lifecycle_method()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()

/usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback)
    417         if traceback == Ellipsis:
    418             _, _, traceback = sys.exc_info()
--> 419         raise exc.with_traceback(traceback)
    420 
    421 else:

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._invoke_lifecycle_method()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnInvoker.invoke_setup()

/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/utils/batch_util.py in setup(self)
    105 
    106   def setup(self):
--> 107     self._decoder = example_coder.ExamplesToRecordBatchDecoder()
    108 
    109   def process(self, batch: List[bytes]) -> Iterable[pa.Table]:

AttributeError: module 'tfx_bsl.coders.example_coder' has no attribute 'ExamplesToRecordBatchDecoder' [while running 'DecodeData/BatchSerializedExamplesToArrowTables/BatchDecodeExamples']

Need help with evaluating model!

Hi, I am new to this. I can successfully train and evaluate my model, but now I am wondering how to recompute the same metrics and the performance gap using Fairness Indicators.

My model is something like this:

import tensorflow as tf
from tensorflow import keras

def model_func():
    model = keras.models.Sequential([
        keras.layers.Dense(units=14, input_dim=14, activation='relu'),
        keras.layers.Dense(units=28, activation='relu'),
        keras.layers.Dense(units=1, activation='sigmoid'),
    ])

    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])

    return model

Then I train the model and test it on the test dataset.

# Get the trained model
model = model_func()

# Train the model
train = model.fit(X_train, y_train, epochs=50, batch_size=10, verbose=1)

Now how do I recompute the same metrics and performance gap using fairness indicators?
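
One way to approach this, sketched under the question's setup (X_test and y_test come from the post above; group_test is a hypothetical sensitive-attribute column added for slicing), is to collect the test-set predictions into a DataFrame and run TFMA's model-agnostic analyze_raw_data over it:

import pandas as pd
import tensorflow_model_analysis as tfma
from google.protobuf import text_format

# One row per test example: model score, true label, and a slice feature.
# 'group_test' is a hypothetical sensitive-attribute column for slicing.
df = pd.DataFrame({
    'prediction': model.predict(X_test).ravel(),
    'label': y_test,
    'group': group_test,
})

eval_config = text_format.Parse("""
  model_specs {
    prediction_key: 'prediction'
    label_key: 'label'
  }
  metrics_specs {
    metrics {
      class_name: "FairnessIndicators"
      config: '{"thresholds": [0.5]}'
    }
  }
  slicing_specs { feature_keys: 'group' }
  slicing_specs {}
""", tfma.EvalConfig())

eval_result = tfma.analyze_raw_data(
    data=df, eval_config=eval_config, output_path='./eval_out')

# Browse per-slice metrics and gaps in the Fairness Indicators widget.
from tensorflow_model_analysis.addons.fairness.view import widget_view
widget_view.render_fairness_indicator(eval_result)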

Performance of CelebA constrained model

  • Have I written custom code (as opposed to using stock example code provided): No

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab

  • Fairness Indicators version: 0.30.1

  • TensorFlow version: 2.5.0

  • Python version: 3.7.10

  • TFMA version: 0.30.0

Hey there,

I have noticed that the constrained model in the CelebA example notebook has a very poor positive rate: 0.1 at the 0.5 threshold.

I expected the model to perform at least equally well on this metric.

Image in Fairness_Indicators_Lineage_Case_Study.ipynb is not publicly accessible

URL with the issue:

https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb

Description of issue (what needs changing):

The link to the image on line 899 of the notebook is not publicly accessible; you need to sign in with a Google account to view it.

        "![Type I and Type II errors](http://services.google.com/fh/gumdrop/preview/blogs/type_i_type_ii.png)\n",

Correct links

Link to a publicly accessible type_i_type_ii.png.

Widget doesn't show up in Jupyter Notebook

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): RHEL 7
  • Fairness Indicators version:
fairness-indicators                    0.24.0
tensorboard-plugin-fairness-indicators 0.24.0
  • Python version: 3.7
  • Pip version: 20.2
  • NPM version: [email protected]
  • Jupyter Notebook version:
jupyter                                1.0.0
jupyter-client                         5.2.3
jupyter-console                        6.1.0
jupyter-contrib-core                   0.3.3
jupyter-contrib-nbextensions           0.5.1
jupyter-core                           4.4.0
jupyter-highlight-selected-word        0.2.0
jupyter-latex-envs                     1.4.6
jupyter-nbextensions-configurator      0.4.1
jupyterlab                             2.2.4
jupyterlab-launcher                    0.11.2
jupyterlab-pygments                    0.1.1
jupyterlab-server                      1.2.0

Describe the problem

tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)

does not display any widget in Jupyter Notebook. It shows these errors in the browser console:

manager-base.js:273 Could not instantiate widget
        (anonymous) @ manager-base.js:273
        (anonymous) @ manager-base.js:44
        (anonymous) @ manager-base.js:25
        a @ manager-base.js:17
        Promise.then (async)
        u @ manager-base.js:18
        (anonymous) @ manager-base.js:19
        A @ manager-base.js:15
        t._make_model @ manager-base.js:257
        (anonymous) @ manager-base.js:246
        (anonymous) @ manager-base.js:44
        (anonymous) @ manager-base.js:25
        (anonymous) @ manager-base.js:19
        A @ manager-base.js:15
        t.new_model @ manager-base.js:232
        t.handle_comm_open @ manager-base.js:144
        L @ underscore.js:762
        (anonymous) @ underscore.js:775
        (anonymous) @ underscore.js:122
        (anonymous) @ comm.js:89
        Promise.then (async)
        CommManager.comm_open @ comm.js:85
        i @ jquery.min.js:2
        Kernel._handle_iopub_message @ kernel.js:1223
        Kernel._finish_ws_message @ kernel.js:1015
        (anonymous) @ kernel.js:1006
        Promise.then (async)
        Kernel._handle_ws_message @ kernel.js:1006
        i @ jquery.min.js:2

utils.js:119 Error: Could not create a model.
        at utils.js:119
        (anonymous) @ utils.js:119
        Promise.catch (async)
        t.handle_comm_open @ manager-base.js:149
        L @ underscore.js:762
        (anonymous) @ underscore.js:775
        (anonymous) @ underscore.js:122
        (anonymous) @ comm.js:89
        Promise.then (async)
        CommManager.comm_open @ comm.js:85
        i @ jquery.min.js:2
        Kernel._handle_iopub_message @ kernel.js:1223
        Kernel._finish_ws_message @ kernel.js:1015
        (anonymous) @ kernel.js:1006
        Promise.then (async)
        Kernel._handle_ws_message @ kernel.js:1006
        i @ jquery.min.js:2

2kernel.js:1007 Couldn't process kernel message TypeError: Cannot read property 'FairnessIndicatorModel' of undefined
        at manager.js:153
        (anonymous) @ kernel.js:1007
        Promise.catch (async)
        Kernel._handle_ws_message @ kernel.js:1007
        i @ jquery.min.js:2

manager.js:153 Uncaught (in promise) TypeError: Cannot read property 'FairnessIndicatorModel' of undefined
    at manager.js:153
    (anonymous)	@	manager.js:153
    Promise.then (async)		
    t.register_model	@	manager-base.js:208
    (anonymous)	@	manager-base.js:248
    (anonymous)	@	manager-base.js:44
    (anonymous)	@	manager-base.js:25
    (anonymous)	@	manager-base.js:19
    A	@	manager-base.js:15
    t.new_model	@	manager-base.js:232
    t.handle_comm_open	@	manager-base.js:144
    L	@	underscore.js:762
    (anonymous)	@	underscore.js:775
    (anonymous)	@	underscore.js:122
    (anonymous)	@	comm.js:89
    Promise.then (async)		
    CommManager.comm_open	@	comm.js:85
    i	@	jquery.min.js:2
    Kernel._handle_iopub_message	@	kernel.js:1223
    Kernel._finish_ws_message	@	kernel.js:1015
    (anonymous)	@	kernel.js:1006
    Promise.then (async)		
    Kernel._handle_ws_message	@	kernel.js:1006
    i	@	jquery.min.js:2

Provide the exact sequence of commands / steps that you executed before running into the problem
I was just following an example from the TF docs which uses the bar_pass_prediction.csv dataset. The code below shows how the config is created from the sample data.

# Specify Fairness Indicators in eval_config.
eval_config = text_format.Parse("""
  model_specs {
    prediction_key: 'dnn_bar_pass_prediction',
    label_key: 'pass_bar'
  }
  metrics_specs {
    metrics {class_name: "AUC"}
    metrics {
      class_name: "FairnessIndicators"
      config: '{"thresholds": [0.50, 0.90]}'
    }
  }
  slicing_specs {
    feature_keys: 'race1'
  }
  slicing_specs {}
  """, tfma.EvalConfig())

# Run TensorFlow Model Analysis.
eval_result = tfma.analyze_raw_data(
  data=_LSAT_DF,
  eval_config=eval_config,
  output_path=_DATA_ROOT)

Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
None
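
A common cause of this symptom in classic Jupyter is that the TFMA notebook extension is not installed and enabled. The TFMA documentation suggests commands along these lines (offered as a likely fix, not a confirmed one):

jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter nbextension install --py --symlink tensorflow_model_analysis --sys-prefix
jupyter nbextension enable --py tensorflow_model_analysis --sys-prefix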

WIT not rendering fairness indicators in Vertex Workbench (works in Colab)

I am trying to run Fairness_Indicators_Example_Colab.ipynb in Vertex Workbench (a user-managed notebook), but when I run this cell, it says "Loading widget..." forever and nothing is displayed. I restarted the kernel after the pip installation. The TFDV output is shown, but not WIT with the fairness indicators. Any suggestions are appreciated.

widget_view.render_fairness_indicator(eval_result=eval_result,
                                      slicing_column=slice_selection,
                                      event_handlers=event_handlers
                                      )
Loading widget...

This problem does NOT happen in Colab.

No fairness indicators widget shown in Jupyterlab

Hi, I am learning to use Fairness Indicators and the What-If Tool in my JupyterLab environment on a SageMaker notebook instance. The witwidget of the What-If Tool is shown successfully, but the Fairness Indicators widget cannot be visualized; only the text "Loading widget..." is printed. So I am wondering whether Fairness Indicators visualization is available in my environment. I would appreciate any help.

My python and pip package versions
Python 3.7.12
pip 22.0.4

(Jupyter):
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.3
jupyter-core 4.9.2
jupyterlab 1.2.21
jupyterlab-git 0.11.0
jupyterlab-server 1.2.0
jupyterlab-widgets 1.1.0

(TensorFlow):
TF: 2.6.2
TFDV: 1.3.0
TFMA: 0.34.1
Fairness Indicators version: 0.34.0

(jupyter labextension):
@jupyter-widgets/jupyterlab-manager v1.1.0 enabled OK
@jupyterlab/celltags v0.2.0 enabled OK
@jupyterlab/git v0.11.0 enabled OK
@jupyterlab/toc v2.0.0 enabled OK
nbdime-jupyterlab v1.0.1 enabled OK
sagemaker_examples v0.1.0 enabled OK
sagemaker_session_manager v0.1.0 enabled OK
wit-widget v1.8.1 enabled OK

And here is the sample code I am running.
https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb#scrollTo=MfBg1C5NB3X0
