aleju / imgaug
Image augmentation for machine learning experiments.
Home Page: http://imgaug.readthedocs.io
License: MIT License
I'm getting this error:
ImportError: cannot import name augmenters
whenever I try to run:
from imgaug import augmenters as iaa
- I'm running Windows 7
- Anaconda with Python 2
- Installed imgaug using this line:
pip install git+https://github.com/aleju/imgaug
- Running the script using Visual Studio 2015
Hi, I've got the following code:
def augment(im, y):
    im_arr = np.array(im)
    # See documentation for details regarding transformations: https://github.com/aleju/imgaug
    fliplr_rate = 0.5
    angle = 10
    additive, contrast_norm = (45, 0.1)
    gaussian_noise, dropout = (0.05, 0.01)
    shear, shift = (2, 20)
    aug_img_only = iaa.Sequential([
        iaa.Sometimes(0.5, iaa.OneOf([
            iaa.Add((-additive, additive)),
            iaa.ContrastNormalization((1 - contrast_norm, 1 + contrast_norm))
        ])),
        iaa.Sometimes(0.5, iaa.OneOf([
            iaa.AdditiveGaussianNoise(scale=gaussian_noise * 255, per_channel=True),
            iaa.Dropout(dropout)
        ]))
    ])
    aug_img_mask = iaa.Sequential([
        iaa.Fliplr(fliplr_rate),
        iaa.Affine(rotate=(-angle, angle)),
        iaa.Sometimes(0.5, iaa.Affine(
            shear=(-shear, shear),
            translate_px={'x': (-shift, shift), 'y': (-shift, shift)})
        )
    ])
    aug_img_only.reseed()
    aug_img_only_det, aug_img_mask_det = aug_img_only.to_deterministic(), aug_img_mask.to_deterministic()
    im_arr = aug_img_only_det.augment_images([im_arr])[0]
    im_arr = aug_img_mask_det.augment_images([im_arr])[0]
    y = aug_img_mask_det.augment_images([y])[0]
    im = Image.fromarray(im_arr)
    return im, y
I've got a ML system which has input images and known masks of areas of interest, which I later want to predict. I want to augment the images and the masks in the same way for some transformations, and apply other transformations (such as dropout, etc.) only to the original image.
Here, in the code, im
is the original image in PIL object format, im_arr
is the original image transformed to numpy array, and y
is the mask numpy array.
Now, every time I run this code, say 5 times with the same picture and mask, I get the same 5 augmentations. Meaning, the first output comes out identical on every run, and so do the second and so on.
Just to clarify, here is the code I use to run it:
for i in range(5):
    im = Image.open('image.jpg')
    y = np.load('mask.npy')
    im, y = augment(im, y)
Why would this behavior happen? I reinstantiate the augmenters every time the function is called (as can be seen in the code), and only after the reinstantiation do I call to_deterministic().
What am I missing?
Thanks in advance!
Hi Alex! This library is amazing!! However when running the code below I get this error: RuntimeError: Could not execute image viewer.
import augmenters as iaa
import numpy as np
images = np.random.randint(0, 255, (16, 128, 128, 3), dtype=np.uint8)
seq = iaa.Sequential([iaa.Fliplr(0.5), iaa.GaussianBlur((0, 3.0))])
seq.show_grid(images[0], cols=8, rows=8)
seq.show_grid([images[0], images[1]], cols=8, rows=8)
Not a bug, a question.
Is it possible to use the crop function to randomly crop an image to a fixed size? I.e.: crop 256x256 images to 224x224 at a random position (within the image area)?
Thank you.
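imgaug's Crop augmenter crops by amounts rather than to a target size, but a random fixed-size crop is easy to sketch directly in NumPy as a fallback (newer imgaug releases may also ship a dedicated augmenter for this; treat the helper name below as my own):

```python
import numpy as np

def random_crop(image, out_h, out_w, rng=None):
    """Crop an (H, W, C) image to (out_h, out_w, C) at a uniformly random position."""
    rng = rng if rng is not None else np.random
    h, w = image.shape[:2]
    assert h >= out_h and w >= out_w, "crop size exceeds image size"
    top = rng.randint(0, h - out_h + 1)   # inclusive upper bound on the offset
    left = rng.randint(0, w - out_w + 1)
    return image[top:top + out_h, left:left + out_w]

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
patch = random_crop(img, 224, 224)
```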
I ran "sudo -H python -m pip install imgaug-0.2.4.tar.gz" to use the pip in my Cellar to install imgaug, and got the following error.
I also tried "pip install imgaug-0.2.4.tar.gz", but that seems to use the system Python 2.7.10 on OS X, whereas my OpenCV is installed under Python 2.7.11 in the Cellar folder.
How can I solve this problem?
Processing ./imgaug-0.2.4.tar.gz
Error [Errno 2] No such file or directory while executing command python setup.py egg_info
Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
    wb.build(autobuilding=True)
  File "/usr/local/lib/python2.7/site-packages/pip/wheel.py", line 749, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 634, in _prepare_file
    abstract_dist.prep_for_dist()
  File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 129, in prep_for_dist
    self.req_to_install.run_egg_info()
  File "/usr/local/lib/python2.7/site-packages/pip/req/req_install.py", line 439, in run_egg_info
    command_desc='python setup.py egg_info')
  File "/usr/local/lib/python2.7/site-packages/pip/utils/__init__.py", line 667, in call_subprocess
    cwd=cwd, env=env)
  File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
I know there has to be a limit to asking questions.
But I am working on a new feature learning methodology.
My algorithm has learned this following set of filters on MNIST. I was hoping you could give me a second opinion over how good these filters are.
Also, which of the two seems better?
Hi,
Is it difficult to generalize the image augmentations beyond the restrictions of np.uint8 and 0-255?
I saw in the code that some augmentations have a hard-coded np.clip and astype(np.uint8).
I started modifying test.py locally to run the tests on float32 data, but the tests are not exhaustive...
I'm interested in this feature and can help with implementation.
Segmentation labels (annotation) are usually provided as a matrix of labels (integers).
For this task the same augmentation as for the images should be performed, except that the interpolation should be nearest-neighbour rather than bilinear or similar.
Some form of retrieving the sampled parameters (for example: the rotation angle was 32 degrees) would be useful so we could re-create the same augmentation with a different interpolation.
Deterministic augmentation doesn't help in this case.
In the meantime, as a workaround we can use:
from imgaug import augmenters as iaa

rotate_max = 30
seq = iaa.Sequential([
    iaa.Fliplr(0.5),  # horizontally flip 50% of the images
], random_state=0)
rotate = iaa.Affine(rotate=(0, rotate_max), mode='edge')
rotate_nn_interpolation = iaa.Affine(rotate=(0, rotate_max), order=[0], mode='edge')
seq_nn_interpolation = seq.deepcopy()
seq.append(rotate)
seq_nn_interpolation.append(rotate_nn_interpolation)

aug_seed = 0
for batch in ...:
    batch_images = images[batch, :, :, :]
    batch_labels = labels[batch, :, :]
    seq.reseed(aug_seed)
    seq_nn_interpolation.reseed(aug_seed)
    batch_images = seq.augment_images(batch_images)
    batch_labels = seq_nn_interpolation.augment_images(batch_labels)
    aug_seed += 1
Hello,
I have a set of image sequences. I want to increase my dataset by doing some data augmentation as follows: for each image, add a horizontal line, and vertical lines at the leftmost character, the rightmost character, and in the middle.
Here is the original image:
and I want to get something like this:
Adding a horizontal line:
Adding a vertical line in the middle:
Adding a vertical line on the left:
Adding a vertical line on the right:
Thank you
Hello!
This is not an issue, but instead a review of the installation process.
The installation script worked successfully; tested with both Python 2.7 and Python 3.5.
This was good work!!! 👍
Hello,
I ran the code from generate_example_images.py (in a for loop) on a dataset of 200,000 images. After that I noticed that not all the images were augmented. For instance, additive Gaussian noise worked perfectly for some images, but others remained the same. The same applies to the other augmentation methods.
What is wrong?
Thank you
As said in the README, I copied the required files into my directory and ran
from imgaug import augmenters as iaa
but I get an error
"ImportError: cannot import name 'augmenters'"
Hi!
Thanks a lot for the library, it really works well. The only thing I have issues with is the keypoint augmentation: it works sometimes, but in other cases it returns negative coordinate values, which are then of course not plotted inside the image when I use 'draw_on_image' and show it with matplotlib afterwards.
I'm currently using this augmentation sequence:
seq = iaa.Sequential([
    iaa.Fliplr(0.5), # horizontally flip 50% of the images
    iaa.Flipud(0.5), # vertically flip 50% of all images
    #iaa.GaussianBlur(sigma=(0, 3.0)), # blur images with a sigma of 0 to 3.0
    iaa.Affine(
        scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}, # scale images to 80-120% of their size, individually per axis
        translate_px={"x": (-16, 16), "y": (-16, 16)}, # translate by -16 to +16 pixels (per axis)
        rotate=(-45, 45), # rotate by -45 to +45 degrees
        shear=(-16, 16), # shear by -16 to +16 degrees
        order=ia.ALL, # use any of scikit-image's interpolation methods
        #cval=(0, 1.0), # if mode is constant, use a cval between 0 and 1.0
        #mode=ia.ALL # use any of scikit-image's warping modes (see 2nd image from the top for examples)
    )
],
random_order=True # do all of the above in random order
)
seq_det = seq.to_deterministic() # call this for each batch again, NOT only once at the start
For example, when I use this input image:
I get results like this:
Does anyone have an idea what could be the problem?
Thanks a lot!
UPDATE: It seems like the keypoint augmentation only works if a single command is used in iaa.Affine() at once. If I use several, only the first is used to augment the keypoints. What works is to use several iaa.Affine() calls in series, each containing a different image manipulation, which is fine for now.
As far as I can see your rotation is intended for square images. I am interested in using rotation like in the following code:
from scipy.ndimage.interpolation import rotate
# some code
new_image = rotate(image, angle, reshape=True)
Thus, it will transform an image of shape (h, w) into an image of shape (w, h) without any scaling along either axis. Can I do this with your library in some way, or should I make a pull request?
Currently the generate_example_images.py script only generates an examples_grid.jpg image. May I ask how to save every sample in this grid as a separate image?
Thanks!
I have added (some) documentation for the Augmenter and Noop classes in the augmenters.py file.
Review the changes in My fork and make suggestions.
This is just a glimpse. Much work will be needed on documentation (as more often than not I am not quite sure what a parameter signifies, etc).
Suppose I have video frames and I would like to apply the same distortions/transformations/augmentations to all the frames while passing them in a single batch (and then another batch with other parameters for the same transformation set). How can this be done?
I am sorry if my query seems too naive.
I'd like to augment an input image by recoloring the S and V channels of its HSV representation.
But to my knowledge the Add class does not have a parameter for specifying channels, so it is not currently possible.
https://github.com/aleju/imgaug/blob/master/imgaug/augmenters.py#L1551
Is this actually possible, and am I missing something?
How does the affine transformation work?
What I mean to ask is: out of the selected batch, after transforming the image batch with the transformer,
are there non-transformed images in the resulting batch, or has every resulting sample certainly undergone some transformation?
I know the flip transformers only transform a specified ratio of images. What about the other transformers?
I notice that in the examples of README.md, all the examples related to augmenting landmark points contain the following line of code, whereas the examples that only augment images do not.
seq_det = seq.to_deterministic() # call this for each batch again, NOT only once at the start
Is it necessary only for landmarks augmentation?
Can you tell me the meaning of this parameter? It's almost everywhere.
Also, keypoints_on_images seems to be a list object. Are the individual keypoints list objects themselves, or numpy arrays?
@aleju Thank you for providing such an amazing package, which makes training deep networks for visual recognition convenient. In the example code,
# Sometimes(0.5, ...) applies the given augmenter in 50% of all cases,
# e.g. Sometimes(0.5, GaussianBlur(0.3)) would blur roughly every second image.
st = lambda aug: iaa.Sometimes(0.5, aug)
I am still confused by the functionality of Sometimes(0.5); would you please give more comments on this function? Thank you.
Hello,
I am running the example code on my set of images and it's working great. But I want to save the resulting images one by one rather than in a grid.
How can I convert these lines of code to save the images individually instead of as a grid?
grid = seq.draw_grid(image, cols=8, rows=8)
misc.imsave("examples_grid.jpg", grid)
Thanks
In the example found here, the first example references a load_batches function that is not defined earlier. Searching in the augmenters.py file (imported as iaa) also did not yield any results.
What is that method?
Could you add the script used to generate the examples_grid.jpg image?
Thanks!
Can you tell me which operation is slow, so I can remove it? Thanks!
Here is my code:
st = lambda aug: iaa.Sometimes(0.5, aug)  # apply each augmenter in ~50% of cases
seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Flipud(0.5),
    st(iaa.GaussianBlur((0, 1.0))),
    st(iaa.Sharpen(alpha=(0, 1.0), lightness=(0.75, 1.5))),
    st(iaa.Emboss(alpha=(0, 1.0), strength=(0, 2.0))),
    st(iaa.Sometimes(0.5,
        iaa.EdgeDetect(alpha=(0, 0.4)),
        iaa.DirectedEdgeDetect(alpha=(0, 0.4), direction=(0.0, 1.0)),
    )),
    st(iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.2), per_channel=0.5)),
    st(iaa.Dropout((0.0, 0.1), per_channel=0.5)),
    st(iaa.Add((-10, 10), per_channel=0.5)),
    st(iaa.Multiply((0.5, 1.5), per_channel=0.5)),
    st(iaa.ContrastNormalization((0.5, 1.5), per_channel=0.5)),
    st(iaa.ElasticTransformation(alpha=(0.5, 2.5), sigma=0.25))
],
random_order=True
)
I am using python3 without conda/env.
After running:
sudo pip3 install git+https://github.com/aleju/imgaug
I receive:
Collecting git+https://github.com/aleju/imgaug
Cloning https://github.com/aleju/imgaug to /tmp/pip-mn453yqe-build
*** Error in `/usr/bin/python3': free(): invalid pointer: 0x0000000001af2990 ***
======= Backtrace: =========
...
======= Memory map: ========
...
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/imgaug.egg-info
writing top-level names to pip-egg-info/imgaug.egg-info/top_level.txt
writing requirements to pip-egg-info/imgaug.egg-info/requires.txt
writing pip-egg-info/imgaug.egg-info/PKG-INFO
writing dependency_links to pip-egg-info/imgaug.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/imgaug.egg-info/SOURCES.txt'
reading manifest file 'pip-egg-info/imgaug.egg-info/SOURCES.txt'
writing manifest file 'pip-egg-info/imgaug.egg-info/SOURCES.txt'
----------------------------------------
Command "python setup.py egg_info" failed with error code -6 in /tmp/pip-mn453yqe-build/
I am using a manually built OpenCV 3.1.0-dev (from a pyimagesearch tutorial).
When I load an image and do some augmentations, the result is abnormal, with lots of stripes.
seq = iaa.Sequential([
    #iaa.Flipud(0.5),
    #iaa.Dropout(0.02, name="Dropout"),
    #iaa.GaussianBlur(5)
    #iaa.AdditiveGaussianNoise(0, 10, True)
    iaa.Sharpen(alpha=(0, 1.0), lightness=(0.75, 1.5))
    #iaa.Crop(px=(0, 16)), # crop images from each side by 0 to 16px (randomly chosen)
    #iaa.Fliplr(0.5), # horizontally flip 50% of the images
    #iaa.GaussianBlur(sigma=(0, 5.0)) # blur images with a sigma of 0 to 3.0
])
img = scipy.misc.imread(image_dir)
images_aug = seq.augment_images(img)
misc.imshow(images_aug)
Four augmenters that use the Convolve class (Sharpen, Emboss, EdgeDetect and DirectedEdgeDetect) are giving me pickle errors.
AttributeError: Can't pickle local object 'Emboss.<locals>.create_matrices'
Any easy way to fix this or make them pickle safe?
Regards @aleju
When I clone the repo and run generate_example_images.py, I get a runtime error:
$ cd ~/repos/imgaug
$ python generate_example_images.py
[draw_per_augmenter_images] Loading image...
[draw_per_augmenter_images] Initializing...
[draw_per_augmenter_images] Augmenting...
Traceback (most recent call last):
  File "generate_example_images.py", line 290, in <module>
    main()
  File "generate_example_images.py", line 18, in main
    draw_per_augmenter_images()
  File "generate_example_images.py", line 252, in draw_per_augmenter_images
    misc.imsave("examples.jpg", output_image.draw())
  File "generate_example_images.py", line 271, in draw
    rows_drawn = [self.draw_row(title, images, subtitles) for title, images, subtitles in self.rows]
  File "generate_example_images.py", line 277, in draw_row
    title_cell = ia.draw_text(title_cell, x=2, y=2, text=title, color=[0, 0, 0], size=12)
  File "/Users/erickim/repos/imgaug/imgaug/imgaug.py", line 129, in draw_text
    font = ImageFont.truetype("DejaVuSans.ttf", size)
  File "/usr/local/lib/python2.7/site-packages/PIL/ImageFont.py", line 238, in truetype
    return FreeTypeFont(font, size, index, encoding)
  File "/usr/local/lib/python2.7/site-packages/PIL/ImageFont.py", line 127, in __init__
    self.font = core.getfont(font, size, index, encoding)
IOError: cannot open resource
A quick fix is to modify imgaug/imgaug.py:128
and give the absolute path of the DejaVuSans.ttf
file that is included in the repo:
diff --git a/imgaug/imgaug.py b/imgaug/imgaug.py
index 7e94c82..b2b2485 100644
--- a/imgaug/imgaug.py
+++ b/imgaug/imgaug.py
@@ -9,6 +9,7 @@ import math
 from scipy import misc
 import multiprocessing
 import threading
+import os
 import sys
 import six
 import six.moves as sm
@@ -125,7 +126,8 @@ def draw_text(img, y, x, text, color=[0, 255, 0], size=25):
     shape = img.shape
     img = Image.fromarray(img)
-    font = ImageFont.truetype("DejaVuSans.ttf", size)
+    font = ImageFont.truetype(os.path.join(os.path.abspath(os.path.split(__file__)[0]), "DejaVuSans.ttf"), size)
+
     context = ImageDraw.Draw(img)
     context.text((x, y), text, fill=tuple(color), font=font)
     img_np = np.asarray(img)
Thoughts on this change?
[Enhancement] It would be nice if cval were added to the __init__ function as a parameter with a description. I understand you can set it manually at the moment, but it takes a while even to see that the code supports setting the background color.
I came across this error just now. When loading the augmenters module in Jupyter notebooks, the following line works
import augmenters as iaa
However, the same line fails if used in a script (in the same directory).
Any idea what might be happening? I think it's the (rather notorious) relative import system of Python that's to blame.
PS: I forgot to tell you. I've been testing all of it in Python 3.5, and the import troubles (earlier and now this) are the only problems I have encountered.
After loading an image and calling draw_grid(image, cols=8, rows=8), I get the error "boundary mode not supported". How do I deal with this?
When running the setup, an error occurs. OpenCV is installed via Anaconda.
Is it possible to install imgaug on Anaconda?
...
Processing ./dist/imgaug-0.2.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-K5MRPU-build/setup.py", line 6, in <module>
    raise Exception("Could not find package 'cv2' (OpenCV). It cannot be automatically installed, so you will have to manually install it.")
Exception: Could not find package 'cv2' (OpenCV). It cannot be automatically installed, so you will have to manually install it.
Could you publish this package to PyPI? It would make installing (e.g. pip install imgaug) and using it in other packages easier. Thanks!
When I was running "from imgaug import augmenters as iaa"
It has an error:
Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
But the libmkl_avx2.so and libmkl_def.so are in the folder ~/anaconda2/lib/.
First of all, thank you very much for this.
When I try to run your example code with these two images, I always get a black image!
This is the whole code from the front page; I just replaced the random numpy array statement with these two images:
import caffe
import imgaug as ia
from imgaug import augmenters as iaa
import numpy as np
import matplotlib.pyplot as plt

im = caffe.io.load_image('buckskin_s_000331.png')
im2 = caffe.io.load_image('buckskin_s_000005.png')
images = np.zeros([2, 32, 32, 3])
images[0] = im
images[1] = im2

# Sometimes(0.5, ...) applies the given augmenter in 50% of all cases,
# e.g. Sometimes(0.5, GaussianBlur(0.3)) would blur roughly every second image.
st = lambda aug: iaa.Sometimes(0.3, aug)

# Define our sequence of augmentation steps that will be applied to every image
# All augmenters with per_channel=0.5 will sample one value _per image_
# in 50% of all cases. In all other cases they will sample new values
# _per channel_.
seq = iaa.Sequential([
    iaa.Fliplr(0.5), # horizontally flip 50% of all images
    iaa.Flipud(0.5), # vertically flip 50% of all images
    st(iaa.Superpixels(p_replace=(0, 1.0), n_segments=(20, 200))), # convert images into their superpixel representation
    st(iaa.Crop(percent=(0, 0.1))), # crop images by 0-10% of their height/width
    st(iaa.GaussianBlur((0, 3.0))), # blur images with a sigma between 0 and 3.0
    st(iaa.Sharpen(alpha=(0, 1.0), strength=(0.75, 1.5))), # sharpen images
    st(iaa.Emboss(alpha=(0, 1.0), strength=(0, 2.0))), # emboss images
    # search either for all edges or for directed edges
    st(iaa.Sometimes(0.5,
        iaa.EdgeDetect(alpha=(0, 0.7)),
        iaa.DirectedEdgeDetect(alpha=(0, 0.7), direction=(0.0, 1.0)),
    )),
    st(iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.2), per_channel=0.5)), # add gaussian noise to images
    st(iaa.Dropout((0.0, 0.1), per_channel=0.5)), # randomly remove up to 10% of the pixels
    st(iaa.Invert(0.25, per_channel=True)), # invert color channels
    st(iaa.Add((-10, 10), per_channel=0.5)), # change brightness of images (by -10 to 10 of original value)
    st(iaa.Multiply((0.5, 1.5), per_channel=0.5)), # change brightness of images (50-150% of original value)
    st(iaa.ContrastNormalization((0.5, 2.0), per_channel=0.5)), # improve or worsen the contrast
    st(iaa.Affine(
        scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}, # scale images to 80-120% of their size, individually per axis
        translate_px={"x": (-16, 16), "y": (-16, 16)}, # translate by -16 to +16 pixels (per axis)
        rotate=(-45, 45), # rotate by -45 to +45 degrees
        shear=(-16, 16), # shear by -16 to +16 degrees
        order=ia.ALL, # use any of scikit-image's interpolation methods
        cval=(0, 255), # if mode is constant, use a cval between 0 and 255
        mode=ia.ALL # use any of scikit-image's warping modes (see 2nd image from the top for examples)
    )),
    st(iaa.ElasticTransformation(alpha=(0.5, 3.5), sigma=0.25)) # apply elastic transformations with random strengths
],
random_order=True # do all of the above in random order
)

images_aug = seq.augment_images(images)
plt.imshow(images_aug[0])
plt.show()
what is wrong here?
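One thing worth checking (an assumption about the cause, not a confirmed diagnosis): caffe.io.load_image returns float arrays in [0, 1], while imgaug's augmenters largely assume uint8 in [0, 255], and a [0, 1] float image displayed as if it were 0-255 renders nearly black. Converting before augmenting avoids that:

```python
import numpy as np

def to_uint8(im_float):
    """Convert a float image in [0, 1] (e.g. from caffe.io.load_image)
    to the uint8 [0, 255] range that imgaug's augmenters expect."""
    return np.clip(im_float * 255.0, 0, 255).astype(np.uint8)

im = np.random.rand(32, 32, 3)  # stand-in for caffe.io.load_image output
images = np.stack([to_uint8(im), to_uint8(im)])
```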
Hi!
I think the case of augmentation by perspective transform is very useful for real-life applications. Would you be interested in such function?
See http://www.euclideanspace.com/maths/discrete/groups/categorise/finite/cube/rotFace.png. Let the plane with a green 1 be an image. I'd suggest parametrizing the augmentation by the angles around z and y, i.e. img_aug = aug_by_perspective(img, angle_vert, angle_horiz).
Implementing this should be relatively simple using OpenCV.
In the example of "heavy augmentations" I get the error
order=ia.ALL, # use any of scikit-image's interpolation methods
AttributeError: 'module' object has no attribute 'ALL'
I fixed this by replacing ia.ALL
with "ALL"
Does the variable ALL accept other values?
PS: Great library. Very good work
It's more of a question than an issue.
Have you considered implementing ZCA whitening?
I have a simple implementation, but it doesn't seem to work on color images.
For example, on the CIFAR10 dataset, the result of ZCA whitening is distorted images where only a few color pixels remain and the rest of the image gets all distorted.
Also, ZCA whitening acts as a low-pass filter according to some sources. What other techniques act as a low-pass filter? Gaussian blurring?
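For reference, a minimal NumPy sketch of ZCA whitening on flattened images (my own sketch, not part of imgaug); on color images the choice of epsilon and any per-channel scaling matter a lot, which may explain the distorted CIFAR10 results:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (n_samples, n_features).

    Returns the whitened data and the whitening matrix W, chosen so that
    cov(X @ W) is approximately the identity. A larger eps suppresses the
    small-eigenvalue (high-frequency) directions less aggressively.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    S, U = np.linalg.eigh(cov)                     # cov = U diag(S) U^T
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # ZCA: rotate back after scaling
    return Xc @ W, W

# e.g. 500 fake 8x8 grayscale images, flattened to 64-dim vectors
X = np.random.rand(500, 64)
Xw, W = zca_whiten(X)
```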
Hi, thanks for writing this repo! It's a great help in what I do :)
One question:
When I scale the image by 1.5 (affine transform), I lose information at the sides (it's more of a zoom than a resize function). Is there any way to simply resize the image?
Similar examples would be:
The solution I am looking for is something like adding a flag "canvas_fits_image" which will perform the affine transformation and resize the canvas accordingly.
If it doesn't exist, I might add it myself sometime. Thanks!
This looks amazing!! How to install and make use of this library?
Hi, Thanks for the library!
When we do the rotation transformation, can we support 'edge' padding for the black areas? skimage supports different padding modes: http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.rotate
Thank you!
I am not sure how to ask this question properly, so bear with me.
I am going to try to use an illustrative example:
Is there an image translation that does the equivalent of this? So the "door knob" portions of the image would appear further away and the portions of the image closest to the "hinge" would appear closer.
Similar idea to these?
I am not looking for 3D images; I just want some portions of my image to appear closer and some portions further away.
Hi, when I try to import the augmenters file from imgaug root folder using iPython I get this error
ImportError: cannot import name augmenters
Also, when I do the import import augmenters as iaa from inside the imgaug/imgaug folder, I get this error:
      1 from __future__ import print_function, division, absolute_import
----> 2 from . import imgaug as ia
      3 from .parameters import StochasticParameter, Deterministic, Binomial, Choice, DiscreteUniform, Normal, Uniform
      4 from abc import ABCMeta, abstractmethod
      5 import random

ValueError: Attempted relative import in non-package
However, when importing from a python script it works fine
Hi,
Imgaug is a very good tool for generating images. I am trying to generate different data from my 3D MRI image data. As the code only supports 2D or RGB images, imgaug fails on my data. This might seem like a personal problem, but I'd like your suggestion on how the code could be changed to support 3D images. My images are 256×256×60 in size: a DICOM volume with 60 slices. Generating augmented 2D images will not touch the Z axis, which is not acceptable in my case. Please suggest something with which I can achieve this; I will mainly use affine, piecewise affine and elastic transformations to generate new data.
Thanks
Hi,
I am using a sequential augmenter with only a rotate affine transform in the range (-30, 30).
Then I wanted to augment keypoints. I did this with the point (100, 100), but the augmented point is not in the correct position. I ran the keypoint augmentation for other sequential non-affine augmentations and they seemed to work fine.
I used the same visualization technique as in the README.md, with the "draw_on_image" method.
Can you please help me with this?