t-kalinowski / deep-learning-with-r-2nd-edition-code
Code from the book "Deep Learning with R, 2nd Edition"
Home Page: https://blogs.rstudio.com/ai/posts/2022-05-31-deep-learning-with-r-2e/
Hi!
I am trying to reproduce in R some examples from Chapter 7 (Win 10 x64, 22H2).
Calling plot(model) after Listing 7.11 fails with the error: "See ?keras::plot.keras.engine.training.Model for instructions on how to install graphviz and pydot Error in plot(model) : ImportError: You must install pydot (pip install pydot) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) for plot_model to work."
But I have installed both pydot and graphviz - I have done it many times in different ways. Currently it looks like this:
reticulate::py_list_packages(envname = "r-reticulate", type = "conda" )
...................
graphviz 7.1.0 graphviz=7.1.0 conda-forge
...................
pydot 1.4.2 pydot=1.4.2 conda-forge
> reticulate::py_config()
python: C:/Users/UserName/miniconda3/envs/r-reticulate/python.exe
libpython: C:/Users/UserName/miniconda3/envs/r-reticulate/python38.dll
pythonhome: C:/Users/UserName/miniconda3/envs/r-reticulate
version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 15:53:35) [MSC v.1929 64 bit (AMD64)]
Architecture: 64bit
numpy: C:/Users/UserName/miniconda3/envs/r-reticulate/Lib/site-packages/numpy
numpy_version: 1.24.2
After many hours of searching the internet I still can't manage to fix this error, so I would appreciate your help in solving it.
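A note on this class of failure: plot(model) ultimately shells out to the Graphviz `dot` binary, so installing the conda packages alone is not enough if `dot` is not on the PATH seen by reticulate's Python process. A minimal sketch (the `dot` name is the standard Graphviz executable) for checking visibility:

```python
import shutil

def graphviz_available():
    """Check whether the Graphviz 'dot' binary is visible on PATH.

    pydot only wraps Graphviz; if 'dot' is not on the PATH of the
    Python process that reticulate launches, plot(model) fails even
    though the conda packages are installed.
    """
    return shutil.which("dot") is not None

print(graphviz_available())
```

From R, something like reticulate::py_run_string('import shutil; print(shutil.which("dot"))') should show whether the embedded Python can see the binary.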
Hi
I bought the Deep Learning with Python edition, though I have no knowledge of Python. Then I realized there was a Deep Learning with R edition, so I bought it as well, since I have some proficiency with R (in RStudio).
Now I find myself blocked by Python setup problems.
Could you help me?
After following the instructions in install-r-tensorflow.R, I can get nowhere.
at
python <- reticulate::install_python("3.9:latest")
I get a failure to install Python 3.9:
python <- reticulate::install_python("3.9:latest")
- '/usr/local/Cellar/pyenv/2.3.9/libexec/pyenv' install --skip-existing 3.9.16
/usr/local/Cellar/pyenv/2.3.9/libexec/pyenv-latest: line 39: printf: write error: Broken pipe
python-build: use [email protected] from homebrew
python-build: use readline from homebrew
Downloading Python-3.9.16.tar.xz...
-> https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tar.xz
Installing Python-3.9.16...
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
BUILD FAILED (OS X 12.6.3 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/m4/kxt6ryhx1qbgz4srx2mmybsw0000gn/T/python-build.20230216005716.54172
Results logged to /var/folders/m4/kxt6ryhx1qbgz4srx2mmybsw0000gn/T/python-build.20230216005716.54172.log
Last 10 log lines:
checking for python3.9... python3.9
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... "darwin"
checking for gcc... clang
checking whether the C compiler works... no
configure: error: in `/var/folders/m4/kxt6ryhx1qbgz4srx2mmybsw0000gn/T/python-build.20230216005716.54172/Python-3.9.16': configure: error: C compiler cannot create executables
See `config.log' for more details
make: *** No targets specified and no makefile found. Stop.
Error: installation of Python 3.9.16 failed
If I impose a 3.9 version already installed by Homebrew a few months ago via
python = "/usr/local/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/bin/python3.9"
I get an error when installing tensorflow:
- '/Users/hseverac/.virtualenvs/r-reticulate/bin/python' -m pip install --upgrade --no-user --ignore-installed 'tensorflow==2.11.*' 'tensorflow-hub' 'tensorflow-datasets' 'scipy' 'requests' 'Pillow' 'h5py' 'pandas' 'pydot' 'keras-tuner' 'ipython' 'kaggle'
ERROR: Could not find a version that satisfies the requirement tensorflow==2.11.* (from versions: 2.12.0rc0)
ERROR: No matching distribution found for tensorflow==2.11.*
Error: Error installing package(s): "'tensorflow==2.11.*'", "'tensorflow-hub'", "'tensorflow-datasets'", "'scipy'", "'requests'", "'Pillow'", "'h5py'", "'pandas'", "'pydot'", "'keras-tuner'", "'ipython'", "'kaggle'"
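For context on this error: pip's '==2.11.*' specifier is a prefix match over the versions that actually publish wheels for the running interpreter and platform, so "(from versions: 2.12.0rc0)" means no 2.11.x wheel existed for that Python/macOS combination. A simplified sketch of the matching rule (an illustration, not pip's real resolver):

```python
def wildcard_matches(spec, version):
    # "==2.11.*" style specifier: the version must start with the
    # prefix before the "*" (a simplification of pip's real rules)
    assert spec.endswith(".*")
    prefix = spec[:-1]            # "2.11.*" -> "2.11."
    return version.startswith(prefix)

available = ["2.12.0rc0"]          # what pip reported for this platform
compatible = [v for v in available if wildcard_matches("2.11.*", v)]
print(compatible)                  # -> []: "No matching distribution found"
```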
If I follow your other advice given elsewhere:
install.packages("remotes")
remotes::install_github(sprintf("rstudio/%s", c("reticulate", "tensorflow", "keras")))
reticulate::miniconda_uninstall() # start with a blank slate
reticulate::install_miniconda()
keras::install_keras()
It seems to be OK, but then with
library(tensorflow)
Error in value[3L] :
Package ‘tensorflow’ version 2.11.0 cannot be unloaded:
Error in unloadNamespace(package) : namespace ‘tensorflow’ is imported by ‘keras’ so cannot be unloaded
and with
mnist <- dataset_mnist()
Error: Valid installation of TensorFlow not found.
Python environments searched for 'tensorflow' package:
/Library/Frameworks/Python.framework/Versions/3.11/bin/python3.11
Python exception encountered:
Traceback (most recent call last):
File "/Library/Frameworks/R.framework/Versions/4.2/Resources/library/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/R.framework/Versions/4.2/Resources/library/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
^^^^^^
File "/Library/Frameworks/R.framework/Versions/4.2/Resources/library/reticulate/python/rpytools/loader.py", line 117, in _hook
return find_and_load(name, import)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'tensorflow'
I must say I am lost with all these error messages and a little disappointed by all these setup problems.
FYI, I have the following 4 versions of Python installed:
/Library/Frameworks/Python.framework/Versions/3.11/bin/python3.11
/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10
/Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8
/usr/local/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/bin/python3.9
Thanks for your help.
Hello Tomasz, I am a student from China, and I am curious to practice deep learning in R by following your code. But unfortunately,
when I tried to install "tensorflow", "keras-tuner", etc. by running the following code:
"if (tensorflow:::is_mac_arm64()) {
reticulate::install_miniconda()
keras::install_keras(extra_packages = c("ipython"))
reticulate::py_install(c("keras-tuner", "kaggle"),
envname = "r-reticulate",
pip = TRUE)
} else {
python <- reticulate::install_python("3.9:latest")
reticulate::virtualenv_create("r-reticulate", python = python)
keras::install_keras(extra_packages = c("keras-tuner", "ipython", "kaggle"))
reticulate::py_install(
"numpy",
envname = "r-reticulate",
pip = TRUE,
pip_options = c("--force-reinstall", "--no-binary numpy")
)
}
Then I get the following error message:
"Error: Error installing package(s): "'tensorflow==2.9.*'", "'tensorflow-hub'", "'scipy'", "'requests'", "'Pillow'", "'h5py'", "'pandas'", "'pydot'", "'keras-tuner'", "'ipython'", "'kaggle'"
In addition: Warning message:
In shell(fi, intern = intern) :
'C:\Users\Allen520\AppData\Local\Temp\RtmpS6xkGK\file68449cc308a.bat' failed to run, with error code 2"
I don't know what to do next and have no idea how to solve this problem. Would you mind helping me find a solution?
As stated in the book's Chapter 10 "...The exact formulation of the problem will be as follows: given data covering the previous five days and sampled once per hour, can we predict the temperature in 24 hours?.."
With this in mind, do we really need to subtract 1 in:
delay <- sampling_rate * (sequence_length + 24 - 1)
? (see row #108 Ch 10).
I know, this code matches the book.
But for this delay the 1st sample:
> full_df$`Date Time`[1]
[1] "2009-01-01 00:10:00 -01"
has such target:
> head(tail(full_df$`Date Time`, -delay),1)
[1] "2009-01-06 23:10:00 -01"
That is not exactly a 24-hour prediction horizon.
Without subtracting 1, things look better:
delay <- sampling_rate * (sequence_length + 24)
head(tail(full_df$`Date Time`, -delay),1)
[1] "2009-01-07 00:10:00 -01"
So I can't figure out the reason for subtracting 1.
Any thoughts?
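One way to reconcile this: the -1 is consistent if the 24-hour horizon is measured from the last timestep inside the input window rather than from the window's first timestep, which is what the head(tail(...)) check above measures. An index sketch, with sampling_rate = 6 and sequence_length = 120 assumed from the chapter:

```python
sampling_rate = 6        # one sample per hour from 10-minute readings
sequence_length = 120    # five days of hourly samples

delay = sampling_rate * (sequence_length + 24 - 1)

# an input window starting at raw index i covers the indices
# i, i + sampling_rate, ..., i + sampling_rate * (sequence_length - 1)
i = 0
last_input_index = i + sampling_rate * (sequence_length - 1)
target_index = i + delay

hours_ahead = (target_index - last_input_index) // sampling_rate
print(hours_ahead)  # -> 24: exactly 24 hours after the *last* input step
```

Measured from the first timestep of the window (as in the Date Time comparison above), the target does look one hour short; measured from the window's final timestep, the horizon is exactly 24 hours, which is why the book keeps the -1.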
I'm having issues downloading the dog-vs-cat zip file for the example in Chapter 8 using the API. I accepted the T&Cs and verified the .kaggle folder is in the right place on my Mac, but I can't seem to get the actual download to work. I keep getting the following error:
system('kaggle competitions download -c dogs-vs-cats')
#> sh: kaggle: command not found
#> Warning: error in running command
I was able to download it once after a few tries, but never again. The code was the same, so I'm not sure why it only worked once. Due to a weird set of circumstances I need to go back and redownload it, but I can't seem to get it to work. Here's the output of reticulate::py_list_packages(), if it is helpful in diagnosing the issue:
> reticulate::py_list_packages()
              package     version                  requirement
1              bleach       6.1.0                bleach==6.1.0
2             certifi    2024.2.2             certifi==2024.2.2
3  charset-normalizer       3.3.2    charset-normalizer==3.3.2
4                idna         3.6                    idna==3.6
5              kaggle       1.6.8                kaggle==1.6.8
6     python-dateutil 2.9.0.post0 python-dateutil==2.9.0.post0
7      python-slugify       8.0.4        python-slugify==8.0.4
8            requests      2.31.0             requests==2.31.0
9                 six      1.16.0                  six==1.16.0
10     text-unidecode         1.3          text-unidecode==1.3
11               tqdm      4.66.2                 tqdm==4.66.2
12            urllib3       2.2.1               urllib3==2.2.1
13       webencodings       0.5.1         webencodings==0.5.1
I know I can download the files directly, but I was hoping to resolve the issue for downloading more Kaggle Datasets in the future. Let me know what other information I can provide.
Thank you!
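One plausible cause of the "command not found" above (an assumption, not a confirmed diagnosis): pip installs the kaggle console script next to the interpreter of the environment that owns the package, while R's system() searches only the shell's PATH. A sketch for locating the executable; the virtualenv layout it assumes is the typical one:

```python
import os
import shutil
import sys

def locate_kaggle_cli():
    """Find the kaggle executable.

    pip puts console scripts in the bin directory next to the Python
    interpreter of the environment that owns the package; R's system()
    searches only the shell's PATH, hence 'kaggle: command not found'
    even though the package is installed.
    """
    candidate = os.path.join(os.path.dirname(sys.executable), "kaggle")
    if os.path.exists(candidate):
        return candidate
    return shutil.which("kaggle")  # fall back to the regular PATH
```

From R, calling the binary by its full path, e.g. system2(file.path(dirname(reticulate::py_exe()), "kaggle"), ...), may sidestep the PATH issue.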
I'm working through the fast feature extraction example in Chapter 8, and I seem to be stuck at the function for extracting features from the datasets. The function uses imagenet_preprocess_input(), but it doesn't seem to be included in the keras3 package. It wasn't autocompleting, nor could I find it using ls("package:keras3"). I was able to find it in the documentation for keras 2.1.3.
Here's the error:
Error in imagenet_preprocess_input(images) :
could not find function "imagenet_preprocess_input"
Is imagenet_preprocess_input() not included in the keras3 package? If so, is there an alternative method I can use to get the same result? Let me know if there is any additional information I can provide.
Thank you!
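For what it's worth, keras's imagenet_preprocess_input() in its default "caffe" mode just flips RGB to BGR and subtracts the ImageNet channel means, so the arithmetic can be reproduced by hand while a keras3 equivalent is tracked down (the means below are the standard Keras constants; treat any keras3 replacement function name as something to verify in that package's docs):

```python
import numpy as np

# ImageNet channel means used by Keras' "caffe" preprocessing mode (BGR order)
IMAGENET_MEAN_BGR = np.array([103.939, 116.779, 123.68])

def caffe_preprocess(x):
    """Emulate imagenet_preprocess_input(): RGB -> BGR, subtract channel means.

    x: float array of shape (..., 3) with RGB values in [0, 255].
    """
    x = np.asarray(x, dtype=np.float64)
    x = x[..., ::-1]               # flip channel order RGB -> BGR
    return x - IMAGENET_MEAN_BGR   # zero-center each channel
```

The same two steps can be written directly in R against the array returned by the data pipeline; the key point is that the "missing" function is only this fixed transformation, not something model-specific.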
I have been getting a tensor type error message when building the model (model <- keras_model_sequential()) with the keras3 package, but not when it's run with keras. I'm not sure what causes the issue, but I have been able to replicate the fix multiple times when it comes up. Anecdotally, it appears to happen the first time I run the code after opening RStudio. Here's the error message from the Reuters example:
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
TypeError: Inputs to a layer should be tensors. Got '<keras.src.engine.sequential.Sequential object at 0x30bd7e320>' (of type <class 'keras.src.engine.sequential.Sequential'>) as input for layer 'dense_2'.
── Python Exception Message ──────────────────
Traceback (most recent call last):
File "/Users/<user_account>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/Users/<user_account>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/engine/input_spec.py", line 213, in assert_input_compatibility
raise TypeError(
TypeError: Inputs to a layer should be tensors. Got '<keras.src.engine.sequential.Sequential object at 0x30bd7e320>' (of type <class 'keras.src.engine.sequential.Sequential'>) as input for layer 'dense_2'.
── R Traceback ───────────────────────────────
▆
1. ├─... %>% layer_dense(46, activation = "softmax")
2. ├─keras3::layer_dense(., 46, activation = "softmax")
3. │ └─keras3:::create_layer(keras$layers$Dense, object, args)
4. ├─keras3::layer_dense(., 64, activation = "relu")
5. │ └─keras3:::create_layer(keras$layers$Dense, object, args)
6. └─keras3::layer_dense(., 64, activation = "relu")
7. └─keras3:::create_layer(keras$layers$Dense, object, args)
8. └─keras3:::compose_layer(object, layer)
9. └─reticulate (local) layer(object, ...)
10. └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
(I have redacted my user account as <user_account> in the error example.) I don't believe this happens after each fresh boot of RStudio, but it seems more likely to happen then. I copied the code from the repository to make sure it was correct and ran it with keras3.
I am able to fix the issue by changing the package to keras, and I get the notice that the following S3 methods will be used by the keras package instead of keras3:
Registered S3 methods overwritten by 'keras':
method from
as.data.frame.keras_training_history keras3
plot.keras_training_history keras3
print.keras_training_history keras3
r_to_py.R6ClassGenerator keras3
I then run the code and everything works fine. I can then run rm(list = ls()) and rerun everything with keras3 and it still works fine, but I noticed that the environment for keras_model_sequential() is still listed as keras.
Let me know if there is any additional information I can provide that will be helpful.
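A plausible reading of this behavior (an assumption, not confirmed): loading both packages leaves two parallel copies of the Python keras class hierarchy in play, and objects created by one copy fail the other copy's isinstance checks, which is the shape of the "Inputs to a layer should be tensors" TypeError above. A toy Python illustration with hypothetical class names:

```python
# Toy model of two parallel copies of the same class hierarchy, as when
# both the keras 2 and keras 3 Python packages are importable: an object
# from one copy fails the other copy's type checks.
class TensorV2: ...
class TensorV3: ...

def v3_layer(x):
    # stands in for a layer's input validation in one copy of the package
    if not isinstance(x, TensorV3):
        raise TypeError("Inputs to a layer should be tensors. Got %r" % x)
    return x

v3_layer(TensorV3())          # fine: same hierarchy
try:
    v3_layer(TensorV2())      # object built by the "other" package copy
except TypeError as e:
    print("rejected:", e)
```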
Hello,
Could you please check if you get same output for the following chunk of code (section 2.5.1)?
model <- naive_model_sequential(list(
layer_naive_dense(input_size = 28 * 28, output_size = 512,
activation = tf$nn$relu),
layer_naive_dense(input_size = 512, output_size = 10,
activation = tf$nn$softmax)
))
Error in random_array(w_shape, min = 0, max = 0.1) :
could not find function "random_array"
Some additional information:
R version 4.2.1 (2022-06-23 ucrt)
Loaded Tensorflow version 2.9.1
Apologies if this was answered somewhere else and/or I failed to include additional information.
Regards, Cicero
Hi!
It looks like compile() ignores the optimizer argument when compiling/training a custom model.
When I try this code:
model %>% compile(optimizer = optimizer_rmsprop())
(row 766 in the book's code)
it fails with the error: "Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
RuntimeError: in user code:
....
RuntimeError: object 'optimizer' not found".
Instead of the passed argument, it takes an optimizer variable from the parent environment (the global environment).
In other words, you need to define optimizer <- optimizer_rmsprop() in advance; then the model trains as it should.
Is this OK?
Any thoughts?
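The behavior described is consistent with late binding of a free variable: the compiled train_step resolves the name optimizer in the enclosing/global environment at call time, not from the compile() argument. A Python analogue (hypothetical names, only to show the scoping effect):

```python
def make_train_step():
    def train_step():
        # 'optimizer' is a free variable here: Python (like R) looks it
        # up in the enclosing/global scope when train_step runs, not
        # when it is defined
        return optimizer
    return train_step

step = make_train_step()
try:
    step()                     # no global 'optimizer' exists yet
    resolved_early = True
except NameError:
    resolved_early = False

optimizer = "rmsprop"          # defining it afterwards "fixes" the call
print(resolved_early, step())  # -> False rmsprop
```

This mirrors the observed workaround: assigning optimizer <- optimizer_rmsprop() in the global environment satisfies the late lookup, regardless of what was passed to compile().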
I'm working on the example problem in Chapter 8 for classifying cat and dog images, but I can't seem to get image_dataset_from_directory() to work:
train_dataset <-
image_dataset_from_directory(new_base_dir / "train",
image_size = c(180, 180),
batch_size = 32)
#> Error in image_dataset_from_directory(new_base_dir/"train", image_size = c(180, : could not find function "image_dataset_from_directory"
Created on 2024-04-03 with reprex v2.1.0
The reprex error is a bit different than what shows up in the console:
Found 2000 files belonging to 2 classes.
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
ValueError: 'size' must be a 1-D int32 Tensor
Here's the last Python error:
── Python Exception Message ──────────
Traceback (most recent call last):
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/utils/image_dataset_utils.py", line 313, in image_dataset_from_directory
dataset = paths_and_labels_to_dataset(
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/utils/image_dataset_utils.py", line 365, in paths_and_labels_to_dataset
img_ds = path_ds.map(
File "/Users/<user_name>.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 2299, in map
return map_op._map_v2(
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 40, in _map_v2
return _ParallelMapDataset(
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 148, in __init__
self._map_func = structured_function.StructuredFunctionWrapper(
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 265, in __init__
self._function = fn_factory()
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1251, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/Users<user_name>t/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1221, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 696, in _initialize
self._concrete_variable_creation_fn = tracing_compilation.trace_function(
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 178, in trace_function
concrete_function = _maybe_define_function(
File "/Users<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 283, in _maybe_define_function
concrete_function = _create_concrete_function(
File "/Users/<user_name>t/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 310, in _create_concrete_function
traced_func_graph = func_graph_module.func_graph_from_py_func(
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1059, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 599, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 231, in wrapped_fn
ret = wrapper_helper(*args)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 161, in wrapper_helper
ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
File "/Users/<user_name>.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 690, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
return f(*args, **kwargs)
File "/Users<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/utils/image_dataset_utils.py", line 366, in <lambda>
lambda x: load_image(x, *args), num_parallel_calls=tf.data.AUTOTUNE
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/utils/image_dataset_utils.py", line 402, in load_image
img = tf.image.resize(img, image_size, method=interpolation)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/ops/image_ops_impl.py", line 1475, in _resize_images_common
raise ValueError('\'size\' must be a 1-D int32 Tensor')
ValueError: 'size' must be a 1-D int32 Tensor
── R Traceback ───────────────────────
▆
1. └─keras3::image_dataset_from_directory(...)
2. ├─base::do.call(keras$utils$image_dataset_from_directory, args)
3. └─reticulate (local) `<python.builtin.function>`(...)
4. └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
Let me know if there is any additional information I can provide to help troubleshoot the issue.
Thank you!
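One hedged guess at the mechanism above: tf.image.resize insists on an integer size, and R's c(180, 180) is a double vector, so if the floats survive the trip into Python the resize can reject them (a package-version mismatch is another candidate). A sketch of the explicit coercion, with a hypothetical helper name:

```python
def as_int_size(size):
    # tf.image.resize requires 'size' to be a 1-D int32 tensor;
    # coerce possibly-float (height, width) pairs explicitly
    h, w = size
    if h != int(h) or w != int(w):
        raise ValueError("image_size must be whole numbers")
    return (int(h), int(w))

print(as_int_size((180.0, 180.0)))  # -> (180, 180)
```

On the R side the equivalent experiment would be passing image_size = c(180L, 180L).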
I am working on the dog-cat model in Chapter 8, and I get the following error when trying to use layer_random_flip() with a keras_model_sequential():
data_augmentation <- keras_model_sequential() %>%
layer_random_flip("horizontal") %>%
layer_random_rotation(0.1) %>%
layer_random_zoom(0.2)
#> Error in keras_model_sequential() %>% layer_random_flip("horizontal") %>% : could not find function "%>%"
Created on 2024-04-04 with reprex v2.1.0
Here is the error:
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
ValueError: Exception encountered when calling RandomFlip.call().
Attempt to convert a value (<Sequential name=sequential_4, built=False>) with an unsupported type (<class 'keras.src.models.sequential.Sequential'>) to a Tensor.
Arguments received by RandomFlip.call():
• inputs=<Sequential name=sequential_4, built=False>
• training=True
The traceback:
── Python Exception Message ─────────
Traceback (most recent call last):
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/layers/preprocessing/tf_data_layer.py", line 46, in __call__
return super().__call__(inputs, **kwargs)
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 123, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/Users/<user_name>/.virtualenvs/r-tensorflow/lib/python3.10/site-packages/tensorflow/python/framework/constant_op.py", line 108, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Exception encountered when calling RandomFlip.call().
Attempt to convert a value (<Sequential name=sequential_1, built=False>) with an unsupported type (<class 'keras.src.models.sequential.Sequential'>) to a Tensor.
Arguments received by RandomFlip.call():
• inputs=<Sequential name=sequential_1, built=False>
• training=True
── R Traceback ──────────────────────
▆
1. ├─... %>% layer_random_zoom(0.2)
2. ├─keras3::layer_random_zoom(., 0.2)
3. │ └─keras3:::create_layer(keras$layers$RandomZoom, object, args)
4. ├─keras3::layer_random_rotation(., 0.1)
5. │ └─keras3:::create_layer(keras$layers$RandomRotation, object, args)
6. └─keras3::layer_random_flip(., "horizontal")
7. └─keras3:::create_layer(keras$layers$RandomFlip, object, args)
8. └─keras3:::compose_layer(object, layer)
9. └─reticulate (local) layer(object, ...)
10. └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
I noticed the Python exception shows the package being keras, and I confirmed that the version is 3.10.0. Let me know what information I can provide to help troubleshoot the issue.
Thank you!
I run the following code:
base_image_path <- "coast.jpg"
original_img <- preprocess_image(base_image_path)
original_HxW <- dim(original_img)[2:3]
calc_octave_HxW <- function(octave) (as.integer(round(original_HxW / (octave_scale ^ octave))))
octaves <- seq(num_octaves - 1, 0) %>%
{ zip_lists(num = ., HxW = lapply(., calc_octave_HxW)) }
str(octaves)
shrunk_original_img <- original_img %>% tf$image$resize(octaves[[1]]$HxW)
img <- original_img
for (octave in octaves) {
cat(sprintf("Processing octave %i with shape (%s)\n", octave$num, paste(octave$HxW, collapse = ", ")))
img <- img %>%
tf$image$resize(octave$HxW) %>%
gradient_ascent_loop(iterations = iterations, learning_rate = step, max_loss = max_loss)
upscaled_shrunk_original_img <- shrunk_original_img %>% tf$image$resize(octave$HxW)
same_size_original <- original_img %>% tf$image$resize(octave$HxW)
lost_detail <- same_size_original - upscaled_shrunk_original_img
img %<>% "+"(lost_detail)
shrunk_original_img <- original_img %>% tf$image$resize(octave$HxW)
}
img <- deprocess_image(img)
img %>% display_image_tensor()
img %>% tf$io$encode_png() %>% tf$io$write_file("dream.png", .)
But I get the following output (the str(octaves) listing) followed by an error:
List of 3
$ :List of 2
..$ num: int 2
..$ HxW: int [1:2] 459 612
$ :List of 2
..$ num: int 1
..$ HxW: int [1:2] 643 857
$ :List of 2
..$ num: int 0
..$ HxW: int [1:2] 900 1200
Processing octave 2 with shape (459, 612)
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type float64 of argument 'x'.
Run `reticulate::py_last_error()` for details.
Not sure why there is a type mismatch.
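TensorFlow, unlike base R arithmetic, never promotes dtypes implicitly: R doubles arrive as float64 tensors while tf$image$resize and the model typically produce float32, so expressions like img + lost_detail can mix the two. A numpy sketch of the failure shape and the usual fix, an explicit cast (in R, e.g. tf$cast(img, "float32")):

```python
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.float64)     # R doubles -> float64
detail = np.ones((4, 4, 3), dtype=np.float32)   # tf.image.resize output -> float32

# TensorFlow raises "Input 'y' of 'Mul' Op has type float32 that does
# not match type float64 of argument 'x'" on mixed operands; align the
# dtypes explicitly before combining
detail = detail.astype(img.dtype)
result = img + detail
print(result.dtype)  # -> float64
```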
I've been trying to install Keras and Tensorflow from R following the instructions in the book, but I'm encountering an issue that I can't solve even following the discussion here.
First I installed the libraries on Windows using RStudio and everything worked perfectly. However, I only got it to work on CPU, not GPU. To facilitate the installation of dependencies to use the GPU I followed the instructions in the book and used Windows Subsystem for Linux to install Linux on my computer.
Once I did this, using Visual Studio Code with WSL to run the same code, the installation does not work. The end result is as follows:
Error: Valid installation of TensorFlow not found.
Python environments searched for 'tensorflow' package:
/home/dberm/.pyenv/versions/3.9.16/bin/python3.9
/usr/bin/python3.10
My full code so far:
install.packages("keras")
Installing package into ‘/home/dberm/R/x86_64-pc-linux-gnu-library/4.1’
(as ‘lib’ is unspecified)
trying URL 'https://cloud.r-project.org/src/contrib/keras_2.11.1.tar.gz'
Content type 'application/x-gzip' length 3527604 bytes (3.4 MB)
downloaded 3.4 MB
- installing source package ‘keras’ ...
** package ‘keras’ successfully unpacked and MD5 sums checked
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
- DONE (keras)
The downloaded source packages are in
‘/tmp/RtmpDj8V9u/downloaded_packages’
library(reticulate)
virtualenv_create("r-reticulate", python = install_python())
virtualenv: r-reticulate
library(keras)
install_keras(envname = "r-reticulate")
Using virtual environment 'r-reticulate' ...
'/home/dberm/.virtualenvs/r-reticulate/bin/python' -m pip install --upgrade --no-user --ignore-installed 'tensorflow==2.11.*' 'tensorflow-hub' 'tensorflow-datasets' 'scipy' 'requests' 'Pillow' 'h5py' 'pandas' 'pydot'
Collecting tensorflow==2.11.*
Using cached tensorflow-2.11.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (588.3 MB)
Collecting tensorflow-hub
Using cached tensorflow_hub-0.13.0-py2.py3-none-any.whl (100 kB)
Collecting tensorflow-datasets
Using cached tensorflow_datasets-4.8.3-py3-none-any.whl (5.4 MB)
Collecting scipy
Using cached scipy-1.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB)
Collecting requests
Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting Pillow
Using cached Pillow-9.4.0-cp39-cp39-manylinux_2_28_x86_64.whl (3.4 MB)
Collecting h5py
Using cached h5py-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)
Collecting pandas
Using cached pandas-1.5.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
Collecting pydot
Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Collecting astunparse>=1.6.0
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting opt-einsum>=2.3.2
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting typing-extensions>=3.6.6
Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting google-pasta>=0.1.1
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting grpcio<2.0,>=1.24.3
Using cached grpcio-1.51.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)
Collecting tensorflow-estimator<2.12,>=2.11.0
Using cached tensorflow_estimator-2.11.0-py2.py3-none-any.whl (439 kB)
Collecting termcolor>=1.1.0
Using cached termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting libclang>=13.0.0
Using cached libclang-15.0.6.1-py2.py3-none-manylinux2010_x86_64.whl (21.5 MB)
Collecting packaging
Using cached packaging-23.0-py3-none-any.whl (42 kB)
Collecting tensorboard<2.12,>=2.11
Using cached tensorboard-2.11.2-py3-none-any.whl (6.0 MB)
Collecting wrapt>=1.11.0
Using cached wrapt-1.15.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78 kB)
Collecting flatbuffers>=2.0
Using cached flatbuffers-23.3.3-py2.py3-none-any.whl (26 kB)
Collecting tensorflow-io-gcs-filesystem>=0.23.1
Using cached tensorflow_io_gcs_filesystem-0.31.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (2.4 MB)
Collecting numpy>=1.20
Using cached numpy-1.24.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Collecting protobuf<3.20,>=3.9.2
Using cached protobuf-3.19.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
Collecting absl-py>=1.0.0
Using cached absl_py-1.4.0-py3-none-any.whl (126 kB)
Collecting keras<2.12,>=2.11.0
Using cached keras-2.11.0-py2.py3-none-any.whl (1.7 MB)
Collecting setuptools
Using cached setuptools-67.6.0-py3-none-any.whl (1.1 MB)
Collecting gast<=0.4.0,>=0.2.1
Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting six>=1.12.0
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting promise
Using cached promise-2.3.tar.gz (19 kB)
Preparing metadata (setup.py) ... done
Collecting tqdm
Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting dm-tree
Using cached dm_tree-0.1.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (153 kB)
Collecting tensorflow-metadata
Using cached tensorflow_metadata-1.12.0-py3-none-any.whl (52 kB)
Collecting toml
Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting etils[enp,epath]>=0.9.0
Using cached etils-1.1.1-py3-none-any.whl (115 kB)
Collecting psutil
Using cached psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (280 kB)
Collecting click
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting pytz>=2020.1
Using cached pytz-2022.7.1-py2.py3-none-any.whl (499 kB)
Collecting python-dateutil>=2.8.1
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting pyparsing>=2.1.4
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting wheel<1.0,>=0.23.0
Using cached wheel-0.40.0-py3-none-any.whl (64 kB)
Collecting zipp
Using cached zipp-3.15.0-py3-none-any.whl (6.8 kB)
Collecting importlib_resources
Using cached importlib_resources-5.12.0-py3-none-any.whl (36 kB)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
Using cached tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl (4.9 MB)
Collecting markdown>=2.6.8
Using cached Markdown-3.4.2-py3-none-any.whl (93 kB)
Collecting tensorboard-plugin-wit>=1.6.0
Using cached tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB)
Collecting google-auth<3,>=1.6.3
Using cached google_auth-2.16.2-py2.py3-none-any.whl (177 kB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Collecting werkzeug>=1.0.1
Using cached Werkzeug-2.2.3-py3-none-any.whl (233 kB)
Collecting googleapis-common-protos<2,>=1.52.0
Using cached googleapis_common_protos-1.59.0-py2.py3-none-any.whl (223 kB)
Collecting rsa<5,>=3.1.4
Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting cachetools<6.0,>=2.0.0
Using cached cachetools-5.3.0-py3-none-any.whl (9.3 kB)
Collecting requests-oauthlib>=0.7.0
Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting importlib-metadata>=4.4
Using cached importlib_metadata-6.1.0-py3-none-any.whl (21 kB)
Collecting MarkupSafe>=2.1.1
Using cached MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting pyasn1<0.5.0,>=0.4.6
Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting oauthlib>=3.0.0
Using cached oauthlib-3.2.2-py3-none-any.whl (151 kB)
Building wheels for collected packages: promise
Building wheel for promise (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [34 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-c9bisgzm/promise_af6168e0cb2e42adaeca9a8025bee6aa/setup.py", line 28, in <module>
setup(
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/__init__.py", line 108, in setup
return distutils.core.setup(**attrs)
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 172, in setup
ok = dist.parse_command_line()
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 475, in parse_command_line
args = self._parse_command_opts(parser, args)
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/dist.py", line 1119, in _parse_command_opts
nargs = _Distribution._parse_command_opts(self, parser, args)
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 534, in _parse_command_opts
cmd_class = self.get_command_class(command)
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/dist.py", line 966, in get_command_class
self.cmdclass[command] = cmdclass = ep.load()
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/setuptools/_vendor/importlib_metadata/__init__.py", line 208, in load
module = import_module(match.group('module'))
File "/home/dberm/.pyenv/versions/3.9.16/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 28, in <module>
from .macosx_libfile import calculate_macosx_platform_tag
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/wheel/macosx_libfile.py", line 43, in <module>
import ctypes
File "/home/dberm/.pyenv/versions/3.9.16/lib/python3.9/ctypes/__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for promise
Running setup.py clean for promise
Failed to build promise
Installing collected packages: tensorboard-plugin-wit, pytz, pyasn1, libclang, flatbuffers, dm-tree, zipp, wrapt, wheel, urllib3, typing-extensions, tqdm, toml, termcolor, tensorflow-io-gcs-filesystem, tensorflow-estimator, tensorboard-data-server, six, setuptools, rsa, pyparsing, pyasn1-modules, psutil, protobuf, Pillow, packaging, oauthlib, numpy, MarkupSafe, keras, idna, grpcio, gast, etils, click, charset-normalizer, certifi, cachetools, absl-py, werkzeug, tensorflow-hub, scipy, requests, python-dateutil, pydot, promise, opt-einsum, importlib_resources, importlib-metadata, h5py, googleapis-common-protos, google-pasta, google-auth, astunparse, tensorflow-metadata, requests-oauthlib, pandas, markdown, tensorflow-datasets, google-auth-oauthlib, tensorboard, tensorflow
Running setup.py install for promise ... done
DEPRECATION: promise was installed using the legacy 'setup.py install' method, because a wheel could not be built for it. pip 23.1 will enforce this behaviour change. A possible replacement is to fix the wheel build issue reported above. Discussion can be found at pypa/pip#8368
Successfully installed MarkupSafe-2.1.2 Pillow-9.4.0 absl-py-1.4.0 astunparse-1.6.3 cachetools-5.3.0 certifi-2022.12.7 charset-normalizer-3.1.0 click-8.1.3 dm-tree-0.1.8 etils-1.1.1 flatbuffers-23.3.3 gast-0.4.0 google-auth-2.16.2 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 googleapis-common-protos-1.59.0 grpcio-1.51.3 h5py-3.8.0 idna-3.4 importlib-metadata-6.1.0 importlib_resources-5.12.0 keras-2.11.0 libclang-15.0.6.1 markdown-3.4.2 numpy-1.24.2 oauthlib-3.2.2 opt-einsum-3.3.0 packaging-23.0 pandas-1.5.3 promise-2.3 protobuf-3.19.6 psutil-5.9.4 pyasn1-0.4.8 pyasn1-modules-0.2.8 pydot-1.4.2 pyparsing-3.0.9 python-dateutil-2.8.2 pytz-2022.7.1 requests-2.28.2 requests-oauthlib-1.3.1 rsa-4.9 scipy-1.10.1 setuptools-67.6.0 six-1.16.0 tensorboard-2.11.2 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-2.11.1 tensorflow-datasets-4.8.3 tensorflow-estimator-2.11.0 tensorflow-hub-0.13.0 tensorflow-io-gcs-filesystem-0.31.0 tensorflow-metadata-1.12.0 termcolor-2.2.0 toml-0.10.2 tqdm-4.65.0 typing-extensions-4.5.0 urllib3-1.26.15 werkzeug-2.2.3 wheel-0.40.0 wrapt-1.15.0 zipp-3.15.0
Installation complete.
tensorflow::tf_config()
Valid installation of TensorFlow not found.
Python environments searched for 'tensorflow' package:
/home/dberm/.pyenv/versions/3.9.16/bin/python3.9
/usr/bin/python3.10
Python exception encountered:
Traceback (most recent call last):
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/tensorflow/__init__.py", line 37, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/dberm/.virtualenvs/r-reticulate/lib/python3.9/site-packages/tensorflow/python/__init__.py", line 24, in <module>
import ctypes
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/dberm/.pyenv/versions/3.9.16/lib/python3.9/ctypes/__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/dberm/R/x86_64-pc-linux-gnu-library/4.1/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
ModuleNotFoundError: No module named '_ctypes'
You can install TensorFlow using the install_tensorflow() function.
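The failure above is an import error for Python's compiled `_ctypes` extension (typically a sign that the pyenv-built interpreter was compiled without libffi headers available). A quick hedged check from R, assuming reticulate initializes against the same interpreter, is:

```r
# Confirm which Python reticulate actually initialized, then try to import
# the compiled _ctypes extension directly; py_run_string() will raise the
# same ModuleNotFoundError if this interpreter was built without libffi.
reticulate::py_config()
reticulate::py_run_string("import _ctypes")
```

If the second call errors, the fix is on the Python side (rebuild the pyenv interpreter with libffi development headers installed), not in the R packages.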
If I try to use tensorflow::install_tensorflow() before tensorflow::tf_config(), the result is the same. I also tried the instructions from the RStudio website, with the same results:
install.packages("tensorflow")
library(reticulate)
path_to_python <- install_python()
virtualenv_create("r-reticulate", python = path_to_python)
library(tensorflow)
install_tensorflow(envname = "r-reticulate")
install.packages("keras")
library(keras)
install_keras(envname = "r-reticulate")
tensorflow::tf_config()
As far as I can tell from the output, tensorflow::tf_config() is looking for TensorFlow here:
Python environments searched for 'tensorflow' package:
/home/dberm/.pyenv/versions/3.9.16/bin/python3.9
/usr/bin/python3.10
while TensorFlow should be in envname = "r-reticulate", but I'm not used to working with Python environments, so I don't know how to change this behaviour.
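One way to steer reticulate onto a specific environment (a sketch; the environment name is taken from the virtualenv_create() call above) is to pin it before any Python is initialized:

```r
# Run in a fresh R session, before loading tensorflow/keras: once reticulate
# has initialized a Python, it cannot switch, so the pin must come first.
reticulate::use_virtualenv("r-reticulate", required = TRUE)
library(tensorflow)
tf_config()
```

With required = TRUE, reticulate errors immediately if that virtualenv is missing instead of silently falling back to another Python.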
I also read here this comment:
Additionally, you must be running an Arm native build of R, not the x86 build running under Rosetta.
I don't know whether that applies to my case or is only relevant for Macs.
I don't know what else to try as I have been several days stuck here and I lack the required Python knowledge to make any progress. Any help would be appreciated.
I apologize if this isn’t the right place to post these questions, but I have a few questions about the Boston housing example in Chapter 3. I was getting strange MAE results, so I copied the code directly from the book to make sure it was the same. I am still getting over 2X the MAE reported in the book, sometimes as high as 17. I realize there is going to be some randomness, but this is higher than I expected. Are there any issues with the following code?
library(keras)
dataset <- dataset_boston_housing()
c(c(train_data, train_targets), c(test_data, test_targets)) %<-% dataset
mean <- apply(train_data, 2, mean)
std <- apply(train_data, 2, sd)
train_data <- scale(train_data, center = mean, scale = std)
test_data <- scale(test_data, center = mean, scale = std)
build_model <- function(){
model <- keras_model_sequential() %>%
layer_dense(units = 64, activation = "relu",
input_shape = dim(train_data)[[2]]) %>%
layer_dense(units = 64, activation = "relu") %>%
layer_dense(units = 1)
model %>% compile(optimizer = "rmsprop",
loss = "mse",
metrics = c("mae"))
}
k <- 4
indices <- sample(1:nrow(train_data))
folds <- cut(indices, breaks = k, labels = FALSE)
num_epochs <- 100
all_scores <- c()
for (i in 1:k){
cat("processing fold #", i, "\n")
val_indices <- which(folds == i, arr.ind = TRUE)
val_data <- train_data[val_indices,]
val_targets <- train_targets[val_indices]
partial_train_data <- train_data[-val_indices,]
partial_train_targets <- train_targets[-val_indices]
model <- build_model()
model %>%
fit(partial_train_data, partial_train_targets,
epochs = num_epochs, batch_size = 1, verbose = 0)
results <- model %>% evaluate(val_data, val_targets, verbose = 0)
all_scores <- c(all_scores, results["mae"])
}
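One source of the run-to-run variation on a dataset this small is unseeded randomness in weight initialization and shuffling. A hedged sketch to make repeated runs comparable (set_random_seed() is provided by the tensorflow R package and seeds R, NumPy, and TensorFlow together):

```r
# Seed all random number generators before building any models; run this
# once before the k-fold loop above so each full run is reproducible.
library(tensorflow)
set_random_seed(42)
```

Even with seeding, per-fold MAE varies a lot here because each validation fold has only about 100 samples, which is why the book averages the fold scores.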
I have some additional questions regarding the keras package, and I was wondering if I would be allowed to use this code as an example in other forums (Stack Overflow, RStudio, the Keras repository, etc.). I just wanted to make sure it was OK, since the code comes directly from the book.
I am currently working on the feature extraction with data augmentation example in Chapter 8 and I have run into a bit of an issue while adjusting the code to work with the keras3 package.
The original code with keras:
outputs <- inputs %>%
data_augmentation() %>%
imagenet_preprocess_input() %>%
conv_base() %>%
If I try to change imagenet_preprocess_input() to application_preprocess_inputs(), I get the following error when I build the model:
Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
TypeError: Cannot serialize object Ellipsis of type <class 'ellipsis'>. To be serializable, a class must implement the `get_config()` method.
It looks like those using the Python book have the same issue, and they were able to fix it by replacing:
x = keras.applications.vgg16.preprocess_input(x)
with:
x = keras.layers.Lambda(lambda x: keras.applications.vgg16.preprocess_input(x))(x)
And then using safe_mode=False for the ModelCheckpoint.
Any thoughts on how I can do this in R? I haven't been able to make any headway in getting it to work. I also saw another option to define a get_config() method for custom objects, but I'm not sure what I would be defining. Let me know if I can provide any additional information.
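For reference, a possible R translation of the Python Lambda workaround (an untested sketch; it assumes keras3 exposes layer_lambda() and application_preprocess_inputs(), and that data_augmentation and conv_base are defined as in the chapter):

```r
library(keras3)

# Wrap the preprocessing call in a Lambda layer so the functional model
# serializes cleanly, mirroring the Python fix quoted above.
outputs <- inputs |>
  data_augmentation() |>
  layer_lambda(f = function(x) application_preprocess_inputs(conv_base, x)) |>
  conv_base()
```

As in the Python thread, reloading a model containing a Lambda layer may then require safe_mode = FALSE in load_model().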
Thank you!
I have noticed a strange issue with the mini Xception model from Chapter 9. When running it on the Linux machine, it runs smoothly with keras but not with keras3. I haven't run into this issue with other examples since the latest 2.15 release of keras and just wanted to see what I may be doing to cause the issue.
The Linux machine has both keras 2.15 and keras3 0.2.0 installed. When running the code with keras3, only the keras3 package is attached (the terminal shows the r-keras environment). Here are the metric plots using both packages (I restarted RStudio between runs):
The accuracy using keras3 is basically 50% across all epochs, which was strange. I ran the same code on a Mac that only had keras3 installed and got similar results to the model using the keras package. You can find the sessionInfo() and py_list_packages() output below for each package on the Linux machine:
R version 4.2.3 (2023-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so
locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8
 [4] LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base
other attached packages:
[1] keras_2.15.0
loaded via a namespace (and not attached):
 [1] zip_2.3.1              Rcpp_1.0.12            pillar_1.9.0           compiler_4.2.3
 [5] base64enc_0.1-3        tools_4.2.3            zeallot_0.1.0          nlme_3.1-162
 [9] jsonlite_1.8.8         lifecycle_1.0.4        tibble_3.2.1           gtable_0.3.4
[13] lattice_0.20-45        mgcv_1.8-42            pkgconfig_2.0.3        png_0.1-8
[17] rlang_1.1.3            Matrix_1.5-3           cli_3.6.2              rstudioapi_0.16.0
[21] withr_3.0.0            dplyr_1.1.4            generics_0.1.3         vctrs_0.6.5
[25] rprojroot_2.0.4        grid_4.2.3             tidyselect_1.2.1       reticulate_1.36.0
[29] glue_1.7.0             here_1.0.1             R6_2.5.1               fansi_1.0.6
[33] farver_2.1.1           ggplot2_3.5.0          magrittr_2.0.3         whisker_0.4.1
[37] splines_4.2.3          listarrays_0.4.0       scales_1.3.0           tfruns_1.5.2
[41] colorspace_2.1-0       labeling_0.4.3         tensorflow_2.16.0.9000 utf8_1.2.4
[45] munsell_0.5.1
> reticulate::py_list_packages()
   package                      version      requirement
1  absl-py                      2.1.0        absl-py==2.1.0
2  array_record                 0.5.1        array_record==0.5.1
3  astunparse                   1.6.3        astunparse==1.6.3
4  cachetools                   5.3.3        cachetools==5.3.3
5  certifi                      2024.2.2     certifi==2024.2.2
6  charset-normalizer           3.3.2        charset-normalizer==3.3.2
7  click                        8.1.7        click==8.1.7
8  dm-tree                      0.1.8        dm-tree==0.1.8
9  etils                        1.7.0        etils==1.7.0
10 flatbuffers                  24.3.25      flatbuffers==24.3.25
11 fsspec                       2024.3.1     fsspec==2024.3.1
12 gast                         0.5.4        gast==0.5.4
13 google-auth                  2.29.0       google-auth==2.29.0
14 google-auth-oauthlib         1.2.0        google-auth-oauthlib==1.2.0
15 google-pasta                 0.2.0        google-pasta==0.2.0
16 grpcio                       1.62.2       grpcio==1.62.2
17 h5py                         3.11.0       h5py==3.11.0
18 idna                         3.7          idna==3.7
19 importlib_resources          6.4.0        importlib_resources==6.4.0
20 keras                        2.15.0       keras==2.15.0
21 libclang                     18.1.1       libclang==18.1.1
22 Markdown                     3.6          Markdown==3.6
23 MarkupSafe                   2.1.5        MarkupSafe==2.1.5
24 ml-dtypes                    0.3.2        ml-dtypes==0.3.2
25 numpy                        1.26.4       numpy==1.26.4
26 nvidia-cublas-cu12           12.2.5.6     nvidia-cublas-cu12==12.2.5.6
27 nvidia-cuda-cupti-cu12       12.2.142     nvidia-cuda-cupti-cu12==12.2.142
28 nvidia-cuda-nvcc-cu12        12.2.140     nvidia-cuda-nvcc-cu12==12.2.140
29 nvidia-cuda-nvrtc-cu12       12.2.140     nvidia-cuda-nvrtc-cu12==12.2.140
30 nvidia-cuda-runtime-cu12     12.2.140     nvidia-cuda-runtime-cu12==12.2.140
31 nvidia-cudnn-cu12            8.9.4.25     nvidia-cudnn-cu12==8.9.4.25
32 nvidia-cufft-cu12            11.0.8.103   nvidia-cufft-cu12==11.0.8.103
33 nvidia-curand-cu12           10.3.3.141   nvidia-curand-cu12==10.3.3.141
34 nvidia-cusolver-cu12         11.5.2.141   nvidia-cusolver-cu12==11.5.2.141
35 nvidia-cusparse-cu12         12.1.2.141   nvidia-cusparse-cu12==12.1.2.141
36 nvidia-nccl-cu12             2.16.5       nvidia-nccl-cu12==2.16.5
37 nvidia-nvjitlink-cu12        12.2.140     nvidia-nvjitlink-cu12==12.2.140
38 oauthlib                     3.2.2        oauthlib==3.2.2
39 opt-einsum                   3.3.0        opt-einsum==3.3.0
40 packaging                    24.0         packaging==24.0
41 pandas                       2.2.2        pandas==2.2.2
42 pillow                       10.3.0       pillow==10.3.0
43 promise                      2.3          promise==2.3
44 protobuf                     3.20.3       protobuf==3.20.3
45 psutil                       5.9.8        psutil==5.9.8
46 pyasn1                       0.6.0        pyasn1==0.6.0
47 pyasn1_modules               0.4.0        pyasn1_modules==0.4.0
48 pydot                        2.0.0        pydot==2.0.0
49 pyparsing                    3.1.2        pyparsing==3.1.2
50 python-dateutil              2.9.0.post0  python-dateutil==2.9.0.post0
51 pytz                         2024.1       pytz==2024.1
52 requests                     2.31.0       requests==2.31.0
53 requests-oauthlib            2.0.0        requests-oauthlib==2.0.0
54 rsa                          4.9          rsa==4.9
55 scipy                        1.13.0       scipy==1.13.0
56 six                          1.16.0       six==1.16.0
57 tensorboard                  2.15.2       tensorboard==2.15.2
58 tensorboard-data-server      0.7.2        tensorboard-data-server==0.7.2
59 tensorflow                   2.15.1       tensorflow==2.15.1
60 tensorflow-datasets          4.9.4        tensorflow-datasets==4.9.4
61 tensorflow-estimator         2.15.0       tensorflow-estimator==2.15.0
62 tensorflow-hub               0.16.1       tensorflow-hub==0.16.1
63 tensorflow-io-gcs-filesystem 0.36.0       tensorflow-io-gcs-filesystem==0.36.0
64 tensorflow-metadata          1.15.0       tensorflow-metadata==1.15.0
65 termcolor                    2.4.0        termcolor==2.4.0
66 tf_keras                     2.15.1       tf_keras==2.15.1
67 toml                         0.10.2       toml==0.10.2
68 tqdm                         4.66.2       tqdm==4.66.2
69 typing_extensions            4.11.0       typing_extensions==4.11.0
70 tzdata                       2024.1       tzdata==2024.1
71 urllib3                      2.2.1        urllib3==2.2.1
72 Werkzeug                     3.0.2        Werkzeug==3.0.2
73 wrapt                        1.14.1       wrapt==1.14.1
74 zipp                         3.18.1       zipp==3.18.1
keras3
> sessionInfo()
R version 4.2.3 (2023-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8
[4] LC_COLLATE=en_US.UTF-8 LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] keras3_0.2.0
loaded via a namespace (and not attached):
[1] zip_2.3.1 Rcpp_1.0.12 pillar_1.9.0 compiler_4.2.3
[5] base64enc_0.1-3 tools_4.2.3 zeallot_0.1.0 nlme_3.1-162
[9] jsonlite_1.8.8 lifecycle_1.0.4 tibble_3.2.1 gtable_0.3.4
[13] lattice_0.20-45 mgcv_1.8-42 pkgconfig_2.0.3 png_0.1-8
[17] rlang_1.1.3 Matrix_1.5-3 cli_3.6.2 rstudioapi_0.16.0
[21] fastmap_1.1.1 withr_3.0.0 dplyr_1.1.4 generics_0.1.3
[25] vctrs_0.6.5 rprojroot_2.0.4 grid_4.2.3 tidyselect_1.2.1
[29] reticulate_1.36.0 glue_1.7.0 here_1.0.1 R6_2.5.1
[33] fansi_1.0.6 farver_2.1.1 ggplot2_3.5.0 magrittr_2.0.3
[37] whisker_0.4.1 splines_4.2.3 listarrays_0.4.0 scales_1.3.0
[41] tfruns_1.5.2 colorspace_2.1-0 labeling_0.4.3 tensorflow_2.16.0.9000
[45] utf8_1.2.4 munsell_0.5.1
> reticulate::py_list_packages()
package version requirement
1 absl-py 2.1.0 absl-py==2.1.0
2 array_record 0.5.1 array_record==0.5.1
3 asttokens 2.4.1 asttokens==2.4.1
4 astunparse 1.6.3 astunparse==1.6.3
5 bleach 6.1.0 bleach==6.1.0
6 certifi 2024.2.2 certifi==2024.2.2
7 charset-normalizer 3.3.2 charset-normalizer==3.3.2
8 click 8.1.7 click==8.1.7
9 decorator 5.1.1 decorator==5.1.1
10 dm-tree 0.1.8 dm-tree==0.1.8
11 etils 1.7.0 etils==1.7.0
12 exceptiongroup 1.2.1 exceptiongroup==1.2.1
13 executing 2.0.1 executing==2.0.1
14 flatbuffers 24.3.25 flatbuffers==24.3.25
15 fsspec 2024.3.1 fsspec==2024.3.1
16 gast 0.5.4 gast==0.5.4
17 google-pasta 0.2.0 google-pasta==0.2.0
18 grpcio 1.62.2 grpcio==1.62.2
19 h5py 3.11.0 h5py==3.11.0
20 idna 3.7 idna==3.7
21 importlib_resources 6.4.0 importlib_resources==6.4.0
22 ipython 8.23.0 ipython==8.23.0
23 jax 0.4.26 jax==0.4.26
24 jaxlib 0.4.26+cuda12.cudnn89 jaxlib==0.4.26+cuda12.cudnn89
25 jedi 0.19.1 jedi==0.19.1
26 kaggle 1.6.12 kaggle==1.6.12
27 keras 3.3.2 keras==3.3.2
28 libclang 18.1.1 libclang==18.1.1
29 Markdown 3.6 Markdown==3.6
30 markdown-it-py 3.0.0 markdown-it-py==3.0.0
31 MarkupSafe 2.1.5 MarkupSafe==2.1.5
32 matplotlib-inline 0.1.7 matplotlib-inline==0.1.7
33 mdurl 0.1.2 mdurl==0.1.2
34 ml-dtypes 0.3.2 ml-dtypes==0.3.2
35 namex 0.0.8 namex==0.0.8
36 numpy 1.26.4 numpy==1.26.4
37 nvidia-cublas-cu12 12.3.4.1 nvidia-cublas-cu12==12.3.4.1
38 nvidia-cuda-cupti-cu12 12.3.101 nvidia-cuda-cupti-cu12==12.3.101
39 nvidia-cuda-nvcc-cu12 12.3.107 nvidia-cuda-nvcc-cu12==12.3.107
40 nvidia-cuda-nvrtc-cu12 12.3.107 nvidia-cuda-nvrtc-cu12==12.3.107
41 nvidia-cuda-runtime-cu12 12.3.101 nvidia-cuda-runtime-cu12==12.3.101
42 nvidia-cudnn-cu12 8.9.7.29 nvidia-cudnn-cu12==8.9.7.29
43 nvidia-cufft-cu12 11.0.12.1 nvidia-cufft-cu12==11.0.12.1
44 nvidia-curand-cu12 10.3.4.107 nvidia-curand-cu12==10.3.4.107
45 nvidia-cusolver-cu12 11.5.4.101 nvidia-cusolver-cu12==11.5.4.101
46 nvidia-cusparse-cu12 12.2.0.103 nvidia-cusparse-cu12==12.2.0.103
47 nvidia-nccl-cu12 2.19.3 nvidia-nccl-cu12==2.19.3
48 nvidia-nvjitlink-cu12 12.3.101 nvidia-nvjitlink-cu12==12.3.101
49 opt-einsum 3.3.0 opt-einsum==3.3.0
50 optree 0.11.0 optree==0.11.0
51 packaging 24.0 packaging==24.0
52 pandas 2.2.2 pandas==2.2.2
53 parso 0.8.4 parso==0.8.4
54 pexpect 4.9.0 pexpect==4.9.0
55 pillow 10.3.0 pillow==10.3.0
56 promise 2.3 promise==2.3
57 prompt-toolkit 3.0.43 prompt-toolkit==3.0.43
58 protobuf 3.20.3 protobuf==3.20.3
59 psutil 5.9.8 psutil==5.9.8
60 ptyprocess 0.7.0 ptyprocess==0.7.0
61 pure-eval 0.2.2 pure-eval==0.2.2
62 pydot 2.0.0 pydot==2.0.0
63 Pygments 2.17.2 Pygments==2.17.2
64 pyparsing 3.1.2 pyparsing==3.1.2
65 python-dateutil 2.9.0.post0 python-dateutil==2.9.0.post0
66 python-slugify 8.0.4 python-slugify==8.0.4
67 pytz 2024.1 pytz==2024.1
68 requests 2.31.0 requests==2.31.0
69 rich 13.7.1 rich==13.7.1
70 scipy 1.13.0 scipy==1.13.0
71 six 1.16.0 six==1.16.0
72 stack-data 0.6.3 stack-data==0.6.3
73 tensorboard 2.16.2 tensorboard==2.16.2
74 tensorboard-data-server 0.7.2 tensorboard-data-server==0.7.2
75 tensorflow 2.16.1 tensorflow==2.16.1
76 tensorflow-datasets 4.9.4 tensorflow-datasets==4.9.4
77 tensorflow-io-gcs-filesystem 0.36.0 tensorflow-io-gcs-filesystem==0.36.0
78 tensorflow-metadata 1.15.0 tensorflow-metadata==1.15.0
79 termcolor 2.4.0 termcolor==2.4.0
80 text-unidecode 1.3 text-unidecode==1.3
81 toml 0.10.2 toml==0.10.2
82 tqdm 4.66.2 tqdm==4.66.2
83 traitlets 5.14.3 traitlets==5.14.3
84 typing_extensions 4.11.0 typing_extensions==4.11.0
85 tzdata 2024.1 tzdata==2024.1
86 urllib3 2.2.1 urllib3==2.2.1
87 wcwidth 0.2.13 wcwidth==0.2.13
88 webencodings 0.5.1 webencodings==0.5.1
89 Werkzeug 3.0.2 Werkzeug==3.0.2
90 wrapt 1.16.0 wrapt==1.16.0
91 zipp 3.18.1 zipp==3.18.1
Let me know if there is any additional information I can provide to help troubleshoot the issue.
Thank you!