
ml4a-ofx's Introduction


ml4a
Machine Learning for Artists

ml4a is a Python library for making art with machine learning.

Example

ml4a bundles the source code of various open source repositories as git submodules and contains wrappers to streamline and simplify them. For example, to generate sample images with StyleGAN2:

from ml4a import image
from ml4a.models import stylegan

network_pkl = stylegan.get_pretrained_model('ffhq')
stylegan.load_model(network_pkl)

samples = stylegan.random_sample(3, labels=None, truncation=1.0)
image.display(samples)

Every model in ml4a.models, including the stylegan module above, imports all of the original repository's code into its namespace, allowing low-level access.
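The wrapper pattern described above can be illustrated with a hypothetical toy (this is not ml4a's actual code): a wrapper module merges an original module's namespace into its own, so both the streamlined API and the low-level functions remain reachable.

```python
import sys
import types

# Hypothetical stand-in for a bundled submodule's original code.
original = types.ModuleType("original_repo")
original.low_level_fn = lambda: "low-level result"
sys.modules["original_repo"] = original

# The wrapper re-exports everything from the original into its own
# namespace and adds a simplified API on top.
wrapper = types.ModuleType("wrapper")
wrapper.__dict__.update(original.__dict__)
wrapper.simple_api = lambda: wrapper.low_level_fn().upper()

print(wrapper.low_level_fn())  # low-level access still works
print(wrapper.simple_api())    # streamlined access also works
```

In ml4a itself this means you can call a model's high-level helpers or drop down to the bundled repository's own functions through the same module object.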

Support ml4a

Become a sponsor

You can support ml4a by donating through GitHub sponsors.

How to contribute

Start by joining the Slack or following us on Twitter. Contribute to the codebase, or help write tutorials.

License

ml4a itself is MIT-licensed, but you are also bound by the licenses of any models you use.

ml4a-ofx's People

Contributors

andreasref, dotkokott, fchtngr, genekogan, lassse


ml4a-ofx's Issues

Combining Image and Audio

Hey there,

I'm a music student majoring in music informatics, currently writing my bachelor's thesis on prosody
(i.e. intonation and rhythm) in speech.
I am planning to use this (awesome!) repo for visualizing a bunch of sound files.
Note that I'm not a pro but very much in the process of learning C++ and Python, so please excuse any naiveties.

While running AudioTSNEViewer I realized that F0 tracking is not used in tSNE-audio.py, even though F0 is the most relevant prosodic feature in speech.
librosa.pyin does F0 tracking, so I came up with a simple workaround:

I could first visualize each file's F0 and then build ImageTSNEViewer or ImageTSNELive based on the resulting plots.
Unfortunately (and obviously) the resulting app won't play back any sounds.

So I guess I could change AudioTSNEViewer's void ofApp::draw() to draw my F0 visualizations instead of points,
then build the app using the JSON file created by tSNE-images.py?
I'd love the additional features in ImageTSNELive, but I see how that's a completely different challenge and my priority here is definitely audio playback.
What's the right way of combining these features?

Once I have finished my thesis, I would love to get into contributing a buildable version that implements these features.

OF applications that use ofxFaceTracker2 crash because of the path to the predictor

To use the ofxFaceTracker 2 examples at the moment I followed these steps:

  1. Downloaded shape_predictor_68_face_landmarks.dat.bz2
  2. Unzipped it to YOUR_OF_FOLDER/addons/ofxFaceTracker2/model

After that ofxFaceTracker2 examples worked.

At the moment, all the examples have this line:

../../../../data/shape_predictor_68_face_landmarks.dat

which should be changed to:

../../../data/shape_predictor_68_face_landmarks.dat

Also copy the .dat file to the app's data folder (e.g. FaceClassifier/bin/data).

Another option is to symlink the .dat file.
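The copy (or symlink) step could be scripted like this. All paths below are illustrative and the .dat file here is an empty placeholder created in a temp directory, so point the paths at your real openFrameworks layout:

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()  # stand-in for YOUR_OF_FOLDER

# Illustrative layout only; adjust to your setup.
src_dir = os.path.join(root, "addons/ofxFaceTracker2/model")
dst_dir = os.path.join(root, "apps/FaceClassifier/bin/data")
os.makedirs(src_dir)
os.makedirs(dst_dir)

src = os.path.join(src_dir, "shape_predictor_68_face_landmarks.dat")
open(src, "wb").close()  # placeholder for the real unzipped predictor

# Copy the predictor into the app's data folder; os.symlink(src, dst)
# would link it instead of copying.
dst = os.path.join(dst_dir, os.path.basename(src))
shutil.copy(src, dst)
print(os.path.exists(dst))
```

Either copying or symlinking avoids having to edit the relative path baked into each example.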

Distribute executables for OSX AudioTSNEViewer

Could you please distribute executables for OSX AudioTSNEViewer?
I have been trying to build the applications for 20 minutes and am roadblocked. I don't need to modify the code, I am happy to use a pre-built executable.

tSNE-images.py float32 not serializable by json.dumps

json.dumps won't serialize float32 values.
It looks like this has been a known issue for years.

changing line 76 to this

    point = [np.float64((tsne[i, k] - np.min(tsne[:, k])) / (np.max(tsne[:, k]) - np.min(tsne[:, k]))) for k in
             range(tsne_dimensions)]

seems to fix the problem, at least for the ImageTSNEViewer program, because I can't get ImageTSNELive to compile.

ubuntu 17.10, python3

How to build apps?

Hi,

I would greatly appreciate a few words on how to build your apps, say AudioTSNEViewer, on Linux.

Cheers,
Lucas

Image t-SNE viewer

I'm trying to follow the tutorial here:
https://ml4a.github.io/guides/ImageTSNEViewer/
But when I run the following on Windows 10 64-bit:
.\tSNE-images.py --images_path C:\Users\Rob\Downloads\imageTsne\test --output_path C:\Users\Rob\Downloads\imageTsne\out --perplexity 5
where the test directory contains 16 images with arbitrary dimensions between 1920x1200 and 7680x4320, I get the following error:

Using TensorFlow backend.
WARNING:tensorflow:From C:\Program Files\Python\Python37\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-05-04 01:06:11.358141: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractBlue13.jpg 0/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractBlue16.jpg 1/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractBlue20.jpg 2/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractBlueRed4.jpg 3/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractBrown1.jpg 4/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractRed4.jpg 5/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\AbstractYellow14.jpg 6/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Arena6.jpg 7/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Electricity7.jpg 8/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Flame2.jpg 9/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\GlowingOrb1.jpg 10/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Hurricane2.jpg 11/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Icicles1.jpg 12/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Soft1.jpg 13/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Swirl1.jpg 14/16
getting activations for C:\Users\Rob\Downloads\imageTsne\test\Waves3.jpg 15/16
Running PCA on 16 images...
Traceback (most recent call last):
  File "C:\Users\Rob\Downloads\imageTsne\ml4a-ofx-master\scripts\tSNE-images.py", line 94, in <module>
    run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate)
  File "C:\Users\Rob\Downloads\imageTsne\ml4a-ofx-master\scripts\tSNE-images.py", line 74, in run_tsne
    images, pca_features = analyze_images(images_path)
  File "C:\Users\Rob\Downloads\imageTsne\ml4a-ofx-master\scripts\tSNE-images.py", line 69, in analyze_images
    pca.fit(features)
  File "C:\Program Files\Python\Python37\lib\site-packages\sklearn\decomposition\pca.py", line 340, in fit
    self._fit(X)
  File "C:\Program Files\Python\Python37\lib\site-packages\sklearn\decomposition\pca.py", line 406, in _fit
    return self._fit_full(X, n_components)
  File "C:\Program Files\Python\Python37\lib\site-packages\sklearn\decomposition\pca.py", line 425, in _fit_full
    % (n_components, min(n_samples, n_features)))
ValueError: n_components=300 must be between 0 and min(n_samples, n_features)=16 with svd_solver='full'

I'm not sure what I need to do to solve this issue. Here's the pip freeze if interested:
https://pastebin.com/NikSZs2u
Python 3.7.0

Any help would be greatly appreciated, I have no idea how to solve this issue.
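The error above means PCA's n_components (300 in the script) can't exceed the number of images. One common workaround, sketched here with random stand-in activations rather than real ones, is to clamp n_components to the dataset size:

```python
import numpy as np
from sklearn.decomposition import PCA

features = np.random.rand(16, 4096)  # e.g. 16 images x 4096-dim activations

# n_components must be <= min(n_samples, n_features), so clamp it.
n_components = min(300, *features.shape)
pca = PCA(n_components=n_components)
pca.fit(features)
reduced = pca.transform(features)
print(reduced.shape)  # (16, 16)
```

With only 16 images the PCA step keeps at most 16 components; adding more images restores the intended 300-dimensional reduction.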

Document a few more libs needed in your image tsne notebook workflow

I also had to install h5py, tqdm, and tensorflow to get the notebook versions of your image_search and image_tsne to run. (Also, you might want to document which Python version it runs on. I was on Python 3 until I saw RasterFairy still needed Python 2.)

Love it, though! (Minor bug file next.)
-Lynn

Can I run ml4a on a Raspberry Pi?

I'm stuck because ofxCcv doesn't have a version for the Raspberry Pi, so I can't run ml4a :(
Do you have any ideas? Please help me.

Can't compile ImageTSNELive

After configuring with projectGenerator and then running 'make',

compilation fails with:

/openFrameworks/addons/ofxCcv/libs/ccv/lib/linux64/libccv.a(xerbla.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status

on ubuntu 17.10

LNK2019 Error - AudioClassifier app

When adding

to the AudioClassifier app and trying to compile it using openframeworks 0.10.1 in VisualStudio 2017, I keep getting the following error:

2>FFTFeatures.obj : error LNK2019: unresolved external symbol "public: __thiscall GRT::FFT::FFT(unsigned int,unsigned int,unsigned int,unsigned int,bool,bool)" (??0FFT@GRT@@QAE@IIII_N0@Z) referenced in function "public: void __thiscall GRT::FFT::`default constructor closure'(void)" (??_FFFT@GRT@@QAEXXZ)

I was able to build the ofxGrt examples and the ofxMaxim examples independently, so I am not sure whether this has to do with both GRT and Maximilian having their own definition of FFT, or with them conflicting in some other way when used together.

I was able to solve most of the linking errors I got at first, but I haven't been able to solve this one yet. Any advice or tip that could help me getting the AudioClassifier app working would be greatly appreciated.

Thank you in advance.

Best,

Paulo

DoodleClassifier: Path to data/sqlite3 image-net-file seems to be off

Some train of thought and exploration about this issue, with the relevant question at the end.

I get this error when trying to "Add samples": Thread 1: EXC_BAD_ACCESS (code=1, address=0x18)

Up until then, I can adjust the threshold, min/max and
It seems to happen here in the "hack" line, as far as I can tell from what Xcode highlights:

vector<float> ofxCcv::encode(const ofPixels& pix, int layer) const {
    convnet->count = layer; // hack to extract a particular layer with encode

    ccv_dense_matrix_t image;

My app is in a different folder than the openFrameworks installation, but from what I understood, this shouldn't be a problem.

Also, the Xcode console says:

[ error ] Can't find network file ../../../data/../../../../data/image-net-2012.sqlite3
[notice ] Adding samples...

which I already wondered about since it is a relative path to a file.

Using the ml4a-ofx/apps/DoodleClassifier folder with adjustments to the .xml file produces the same error.

Running sh setup.sh did create the image-net-2012.sqlite3 file - not within data but directly within ml4a-ofx. Adjusting the path to that file in https://github.com/ml4a/ml4a-ofx/blob/master/apps/DoodleClassifier/src/ofApp.cpp#L22 (which would then say ccv.setup(ofToDataPath("../../../../image-net-2012.sqlite3"));) apparently fixes this issue.

Is this path wrong in general, or am I holding it wrong and my setup needs to be fixed?

missing libdarknetOSX.dylib

Launching DarknetOSC.app from the finder gets me a crash. I thought it was sandbox related, so trying from the command line reveals a missing dylib:

$ ./DarknetOSC.app/Contents/MacOS/DarknetOSC 
dyld: Library not loaded: @rpath/libdarknetOSX.dylib
  Referenced from: /Users/oriol/Desktop/osc-modules/./DarknetOSC.app/Contents/MacOS/DarknetOSC
  Reason: image not found

Convnet Classifier (Debug?) Error

Trying to use the Convnet Classifier app, and getting an error that I don't understand. Xcode lists it as an "Apple Mach-O Linker (ld) Error".

Anyone seen this before?

At risk of overkill, I've pasted the error message here:

Undefined symbols for architecture x86_64:
  "GRT::Util::scale(double const&, double const&, double const&, double const&, double const&, bool)", referenced from:
      ofxGrtMatrixPlot::update(GRT::MatrixFloat const&, float, float) in ofxGrtMatrixPlot.o
  "GRT::ErrorLog::observerManager", referenced from:
      GRT::ErrorLog::triggerCallback(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const in ofxGrtTimeseriesPlot.o
  "GRT::ErrorLog::errorLoggingEnabled", referenced from:
      GRT::ErrorLog::ErrorLog(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in ofxGrtTimeseriesPlot.o
  "GRT::WarningLog::warningLoggingEnabled", referenced from:
      GRT::WarningLog::WarningLog(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in ofxGrtBarPlot.o
  "GRT::ClassificationData::addSample(unsigned int, GRT::VectorFloat const&)", referenced from:
      ofApp::update() in ofApp.o
  "GRT::VectorFloat::VectorFloat(GRT::VectorFloat const&)", referenced from:
      ofApp::update() in ofApp.o
  "GRT::WarningLog::observerManager", referenced from:
      GRT::WarningLog::triggerCallback(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const in ofxGrtBarPlot.o
  "GRT::KNN::~KNN()", referenced from:
      ofApp::setup() in ofApp.o
  "GRT::MLBase::predict(GRT::VectorFloat)", referenced from:
      ofApp::update() in ofApp.o
  "GRT::VectorFloat::~VectorFloat()", referenced from:
      ofApp::update() in ofApp.o
  "GRT::ClassificationData::clear()", referenced from:
      ofApp::clear() in ofApp.o
  "GRT::ClassificationData::load(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
      ofApp::load() in ofApp.o
  "GRT::ClassificationData::save(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from:
      ofApp::save() in ofApp.o
  "GRT::MLBase::train(GRT::ClassificationData)", referenced from:
      ofApp::trainClassifier() in ofApp.o
  "GRT::Log::baseLoggingEnabled", referenced from:
      GRT::Log const& GRT::Log::operator<<<char [74]>(char const (&) [74]) const in ofxGrtBarPlot.o
      GRT::Log::operator<<(std::__1::basic_ostream<char, std::__1::char_traits<char> >& (*)(std::__1::basic_ostream<char, std::__1::char_traits<char> >&)) const in ofxGrtBarPlot.o
      GRT::Log const& GRT::Log::operator<<<char [73]>(char const (&) [73]) const in ofxGrtBarPlot.o
      GRT::Log const& GRT::Log::operator<<<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const in ofxGrtTimeseriesPlot.o
      GRT::Log const& GRT::Log::operator<<<char [92]>(char const (&) [92]) const in ofxGrtTimeseriesPlot.o
      GRT::Log::operator<<(std::__1::basic_ostream<char, std::__1::char_traits<char> >& (*)(std::__1::basic_ostream<char, std::__1::char_traits<char> >&)) const in ofxGrtTimeseriesPlot.o
      GRT::Log const& GRT::Log::operator<<<char [81]>(char const (&) [81]) const in ofxGrtTimeseriesPlot.o
      ...
  "GRT::GestureRecognitionPipeline::getClassLikelihoods() const", referenced from:
      ofApp::update() in ofApp.o
  "GRT::GestureRecognitionPipeline::~GestureRecognitionPipeline()", referenced from:
      ofApp::ofApp() in main.o
      ofApp::~ofApp() in ofApp.o
  "GRT::ClassificationData::setNumDimensions(unsigned int)", referenced from:
      ofApp::setup() in ofApp.o
  "GRT::GestureRecognitionPipeline::setClassifier(GRT::Classifier const&)", referenced from:
      ofApp::setup() in ofApp.o
  "GRT::GestureRecognitionPipeline::GestureRecognitionPipeline()", referenced from:
      ofApp::ofApp() in main.o
  "GRT::MLBase::getTrained() const", referenced from:
      ofApp::draw() in ofApp.o
  "GRT::GestureRecognitionPipeline::getPredictedClassLabel() const", referenced from:
      ofApp::update() in ofApp.o
      ofApp::sendOSC() in ofApp.o
  "GRT::KNN::KNN(unsigned int, bool, bool, double, bool, unsigned int, unsigned int)", referenced from:
      ofApp::setup() in ofApp.o
  "GRT::GestureRecognitionPipeline::getNumClasses() const", referenced from:
      ofApp::trainClassifier() in ofApp.o
  "GRT::ClassificationData::ClassificationData(GRT::ClassificationData const&)", referenced from:
      ofApp::trainClassifier() in ofApp.o
  "GRT::VectorFloat::VectorFloat(unsigned long)", referenced from:
      ofApp::update() in ofApp.o
  "GRT::ClassificationData::ClassificationData(unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
      ofApp::ofApp() in main.o
  "GRT::ClassificationData::~ClassificationData()", referenced from:
      ofApp::ofApp() in main.o
      ofApp::trainClassifier() in ofApp.o
      ofApp::~ofApp() in ofApp.o

ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Error when compiling AudioClassifier app using VisualStudio

Hello. So, I downloaded the latest version of openFrameworks as well as the ml4a-ofx repository, and used the Project Generator to create a Visual Studio solution for the AudioClassifier app. Next I got the addons that the app uses, and placed them in the addons folder. At first I was getting some errors with the GRT addon, but I realized that the ofxGrt repo didn't have the GRT folder that appears here, so I placed the folder inside the addon's folder and it solved the issue.

However, with the ofxMaxim addon I get another issue: I noticed the github repo links to another repo which contains the same addon, but updated more recently. I have tried copying both folders into the addons folder, but I get different results with each. I attach screenshots. I have tried merging the contents of both folders into one, but I still get errors.

This is from the falcon4ever repo:
[screenshot: falcon4ever_ofxmaxim]

And this is from the Maximilian repo:
[screenshot: micknoise_maximilian]

I haven't tried the other apps yet, but any help with this would be very appreciated.

tSNE-images.py throwing JSON serialization error

I'm a complete Python/oF noob, and I'm trying to get the tSNE script example to work. Whenever I try and run the script, the process halts at the point where it tries to write the data. Here's the traceback:

Traceback (most recent call last):
  File "tSNE-images.py", line 79, in <module>
    run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate)
  File "tSNE-images.py", line 69, in run_tsne
    json.dump(data, outfile)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 189, in dump
    for chunk in iterable:
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 431, in _iterencode
    for chunk in _iterencode_list(o, _current_indent_level):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 332, in _iterencode_list
    for chunk in chunks:
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict
    for chunk in chunks:
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 332, in _iterencode_list
    for chunk in chunks:
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 442, in _iterencode
    o = _default(o)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 0.3274011 is not JSON serializable

I'm on macOS Sierra, if that helps. Is this a code issue, or am I just doing something daft?
Thanks in advance for any help.
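This is the same float32 serialization problem reported elsewhere in this tracker. One minimal fix, sketched here with a made-up data layout, is to pass default=float to the JSON encoder so NumPy scalars are cast to plain Python floats before serialization:

```python
import json
import numpy as np

# Stand-in for the data the script builds: points are NumPy float32 scalars.
data = [{"path": "img.jpg", "point": [np.float32(0.3274011), np.float32(0.5)]}]

# json can't serialize np.float32 directly; `default` is called for any
# object the encoder doesn't know, and float() converts the NumPy scalar.
serialized = json.dumps(data, default=float)
print(serialized)
```

Alternatively, cast each coordinate with float() (or np.float64, as suggested in the other issue) when building the point list.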

Error with Rasterfairy code in image_tsne.ipynb notebook

Maybe this is because I'm using a more recent RasterFairy install, or because I did something wrong in my directory path and sizing setups. But this bit errors for me at the end of the notebook.

for img, grid_pos in tqdm(zip(images, grid_assignment)):
    print grid_pos
    idx_x, idx_y = grid_pos    # <-- fails here
    x, y = tile_width * idx_x, tile_height * idx_y
    tile = Image.open(img)

The error is "too many values to unpack". The print statement I added shows it's an array of 2D arrays:

[[ 20. 13.] [ 15. 10.] [ 14. 14.] ..., [ 9. 7.] [ 6. 0.] [ 14. 5.]]
Update: sorry, I should have checked into it more. Apparently RasterFairy is returning a 2-tuple as grid_assignment, with the first item being the list of coordinates you want:
(array([[ 20., 13.],
[ 15., 10.],
[ 14., 14.],
...,
[ 9., 7.],
[ 6., 0.],
[ 14., 5.]]), (40, 20))

so grid_assignment[0] works.
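The return shape can be illustrated with a mocked-up value (the numbers below just mirror the printout above; this is not a real RasterFairy call):

```python
import numpy as np

# Mock of what the newer RasterFairy call returns:
# (grid coordinates, (grid_width, grid_height)).
grid_assignment = (
    np.array([[20.0, 13.0], [15.0, 10.0], [14.0, 14.0]]),
    (40, 20),
)

# Unpack both parts instead of iterating over the raw tuple.
grid_points, (grid_w, grid_h) = grid_assignment
print(grid_points.shape, grid_w, grid_h)  # (3, 2) 40 20
```

Looping over grid_points (rather than the tuple itself) then gives one (x, y) pair per image, which is what the notebook's unpacking expects.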

ConvNet Classifier

I use a Windows 10 laptop but I still want to use the ConvnetClassifier. Is there any way to do that? I know the most recently released executable is Mac-only (there should be more releases, in my opinion; if it seems appropriate, I would suggest publishing compiled builds again soon), but compiling it from source on my computer is a pain. Does anybody have any ideas or advice?

Question re: OF version

Hi there! I have more of a question first - is there a specific version of OF I should be using to build the OSC modules with? I'm having some issues building the examples with both OF 0.9.7 and 0.9.8.

Wanted to make sure my problems weren't related to the OF version before I made an issue(s) for the specific build problems I'm having.

Thank you!

Doodle Classifier - Getting this error: "Thread 1: EXC_BAD_ACCESS (code=1, address=0x18)"

When I click on 'Add Samples' after drawing 10 circles, an error is thrown in line 158 of the ofxCcv.cpp file that is located within the addons folder of DoodleClassifier.

Line 158: convnet->count = layer; // hack to extract a particular layer with encode
Error: "Thread 1: EXC_BAD_ACCESS (code=1, address=0x18)"

Any tips as to what this might be would be much appreciated. Thanks.

Doodle Classifier: Getting error "No member named 'setTo' in 'ofXml'" in ofApp.cpp

I am getting these various errors in the ofApp.cpp file when compiling:

line 43: No member named 'setTo' in 'ofXml'
line 44 -> 46: No matching member function for call to 'getValue'
line 47: No member named 'exists' in 'ofXml'
line 48: No member named 'setTo' in 'ofXml'
line 54: No member named 'setToSibling' in 'ofXml'

Any tips would be much appreciated. Thanks.

Failed to load camera using Convnet Classifier

Hi, I tried to use the Convnet Classifier for my project. I followed the guidelines and successfully built the app using Xcode, but the camera doesn't seem to work properly (the view is all black).
[screenshot: Screen Shot 2019-04-21 at 10 51 41 PM]
I am not sure what's wrong.
I duplicated the app and renamed it.
[screenshot: Screen Shot 2019-04-21 at 10 52 16 PM]
I added the image-net-2012 file inside bin > data.
[screenshot: Screen Shot 2019-04-21 at 10 52 32 PM]
I also followed the README and added the two addons inside openFrameworks > addons.
[screenshot: Screen Shot 2019-04-21 at 11 02 08 PM]

Everything seems correct, but it just didn't work out. It would be really helpful if you could point out any mistakes. Thank you so much!

Problem with "undefined symbols for architecture x86_64"

Hi there,
I am trying to run the convnetRegression app, and everything was set up according to the readme file. However, it comes up with an "Apple Mach-O Linker Error", which is the same sort of problem I had when I tried the ofxMSATensorflow made by Memo. Is that a problem related to my build settings or something else? I am new to the coding world; is there anyone who has the same problem?
I am running with OF v0.10.1 and Xcode v10.0, and the errors are shown below.
Thanks in advance!
Sincerely,

Undefined symbols for architecture x86_64:
"typeinfo for GRT::GestureRecognitionPipeline", referenced from:
typeinfo for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train_(GRT::ClassificationData&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train(GRT::RegressionData)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train_(GRT::RegressionData&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train(GRT::RegressionData, GRT::RegressionData)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train_(GRT::RegressionData&, GRT::RegressionData&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train(GRT::MatrixFloat)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train_(GRT::ClassificationDataStream&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train_(GRT::MatrixFloat&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::predict_(GRT::MatrixFloat&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::map(GRT::VectorFloat)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train(GRT::TimeSeriesClassificationData)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::map_(GRT::VectorFloat&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::reset()", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::print() const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train(GRT::ClassificationData)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::save(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::save(std::__1::basic_fstream<char, std::__1::char_traits<char> >&) const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::saveModelToFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::loadModelFromFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::predict(GRT::VectorFloat)", referenced from:
ofApp::update() in ofApp.o
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::loadModelFromFile(std::__1::basic_fstream<char, std::__1::char_traits<char> >&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::TimeSeriesClassificationData const&, unsigned int, bool)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::load(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::RegressionData::~RegressionData()", referenced from:
ofApp::ofApp() in main.o
GestureRecognitionPipelineThreaded::threadedFunction() in ofApp.o
ofApp::~ofApp() in ofApp.o
"GRT::MLBase::setRandomiseTrainingOrder(bool)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::TimeSeriesClassificationData&, unsigned int, bool)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::ClassificationData&, unsigned int, bool)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::test(GRT::ClassificationData const&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::test(GRT::TimeSeriesClassificationData const&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::test(GRT::ClassificationDataStream const&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::RegressionData::RegressionData(unsigned int, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
ofApp::ofApp() in main.o
"GRT::MultidimensionalRegression::~MultidimensionalRegression()", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::RegressionData&, unsigned int)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::train(GRT::ClassificationDataStream)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::clearModel()", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::VectorFloat::VectorFloat(unsigned long)", referenced from:
ofApp::update() in ofApp.o
"GRT::GestureRecognitionPipeline::predict(GRT::VectorFloat&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::RegressionData::addSample(GRT::VectorFloat const&, GRT::VectorFloat const&)", referenced from:
ofApp::update() in ofApp.o
"GRT::VectorFloat::VectorFloat(GRT::VectorFloat const&)", referenced from:
ofApp::update() in ofApp.o
"GRT::MLBase::saveModelToFile(std::__1::basic_fstream<char, std::__1::char_traits<char> >&) const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLP::MLP()", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::TimeSeriesClassificationData&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::setMinChange(double)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::getIsInitialized() const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::load(std::__1::basic_fstream<char, std::__1::char_traits<char> >&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MultidimensionalRegression::MultidimensionalRegression(GRT::Regressifier const&, bool)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::MLP::setNumRandomTrainingIterations(unsigned int)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::MLBase::predict(GRT::MatrixFloat)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::setUseValidationSet(bool)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::operator<<(GRT::Regressifier const&)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::MLBase::setLearningRate(double)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::test(GRT::RegressionData const&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::GestureRecognitionPipeline()", referenced from:
GestureRecognitionPipelineThreaded::GestureRecognitionPipelineThreaded() in main.o
"GRT::MLBase::setValidationSetSize(unsigned int)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::MLP::init(unsigned int, unsigned int, unsigned int, GRT::Neuron::Type, GRT::Neuron::Type, GRT::Neuron::Type)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::getNumTrainingSamples() const", referenced from:
ofApp::addSlider() in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::UnlabelledData&)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::setMaxNumEpochs(unsigned int)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::RegressionData::setInputAndTargetDimensions(unsigned int, unsigned int)", referenced from:
ofApp::addSlider() in ofApp.o
"GRT::RegressionData::clear()", referenced from:
ofApp::clear() in ofApp.o
"GRT::RegressionData::save(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&) const", referenced from:
ofApp::save() in ofApp.o
"GRT::RegressionData::load(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)", referenced from:
ofApp::load() in ofApp.o
"GRT::GestureRecognitionPipeline::clear()", referenced from:
ofApp::clear() in ofApp.o
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::train(GRT::ClassificationData const&, unsigned int, bool)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::getModelAsString() const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::MLBase::enableScaling(bool)", referenced from:
ofApp::setupRegressor() in ofApp.o
"GRT::GestureRecognitionPipeline::~GestureRecognitionPipeline()", referenced from:
GestureRecognitionPipelineThreaded::~GestureRecognitionPipelineThreaded() in main.o
"GRT::GestureRecognitionPipeline::train(GRT::RegressionData const&, unsigned int)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::GestureRecognitionPipeline::getRegressionData() const", referenced from:
ofApp::update() in ofApp.o
"GRT::MLBase::train(GRT::UnlabelledData)", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::VectorFloat::VectorFloat()", referenced from:
ofApp::ofApp() in main.o
"GRT::VectorFloat::~VectorFloat()", referenced from:
ofApp::ofApp() in main.o
ofApp::update() in ofApp.o
ofApp::~ofApp() in ofApp.o
"GRT::MLBase::getModel(std::__1::basic_ostream<char, std::__1::char_traits >&) const", referenced from:
vtable for GestureRecognitionPipelineThreaded in ofApp.o
"GRT::RegressionData::RegressionData(GRT::RegressionData const&)", referenced from:
GestureRecognitionPipelineThreaded::threadedFunction() in ofApp.o
"GRT::MLP::~MLP()", referenced from:
ofApp::setupRegressor() in ofApp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

[question] Cannot load .dat files

Hi @genekogan, just a question... after building the ReverseImageSearchFast app successfully, I'm having trouble loading the .dat files (any of the ones provided in the guide). I was able to identify where this happens, but couldn't find a solution yet. I was wondering if this is familiar to you?
I tried this on two Macs and got the same error: a 15" with an Nvidia GPU and CUDA installed, and a 13" Intel.


(I've added AA1, AA2, AA3 log lines to track where it dies; the source is at the bottom of this question.)

[notice ] Loading from /Users/gbort/Desktop/COCO-ml4a/mscoco_145k_rp32.dat
[notice ] AA1
ReverseImageSearchFast(67100,0x7fff7458d000) malloc: *** mach_vm_map(size=14109161666195456) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug

...then if I change vector<string> filenames; to vector<char> filenames;, it fails on the following statement instead:

HOST_OS=Darwin
[ error ] Can't find network file ../../../../models/image-net-2012.sqlite3
[ error ] ofTessellator: performTessellation(): mesh polygon tessellation failed, winding mode 0
[ error ] ofTessellator: performTessellation(): mesh polygon tessellation failed, winding mode 0
[ error ] ofTessellator: performTessellation(): mesh polygon tessellation failed, winding mode 0
[ error ] ofTessellator: performTessellation(): mesh polygon tessellation failed, winding mode 0
[ error ] ofTessellator: performTessellation(): mesh polygon tessellation failed, winding mode 0
[ error ] ofTessellator: performTessellation(): mesh polygon tessellation failed, winding mode 0
[notice ] Loading from /Users/gbort/Desktop/COCO-ml4a/mscoco_145k_rp32.dat
[notice ] AA1
[notice ] AA2
libc++abi.dylib: terminating with uncaught exception of type dlib::serialization_error: Error deserializing object of type double
   while deserializing a dlib::matrix
/bin/sh: line 1: 70940 Abort trap: 6           ./ReverseImageSearchFast
make: *** [run] Error 134

Here's the method that fails... with the extra logs I added...

//--------------------------------------------------------------
void ofApp::load(string path) {
    ofLog()<<"Loading from "<<path;
    const char *filepath = path.c_str();
    ifstream fin(filepath, ios::binary);
    vector<vector<double> > projectedEncodings;
    vector<char> filenames;
    vector<double> column_means;
    dlib::matrix<double, 0, 0> E, V;
    dlib::deserialize(projectedEncodings, fin);
    ofLog()<<"AA1";
    dlib::deserialize(filenames, fin);
    ofLog()<<"AA2";
    dlib::deserialize(E, fin);
    ofLog()<<"AA3";
    dlib::deserialize(V, fin);
    ofLog()<<"AA4";
    dlib::deserialize(column_means, fin);
    ofLog()<<"AA5";
    pca.setE(E);
    pca.setV(V);
    pca.setColumnMeans(column_means);
    images.clear();
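
A likely cause here is a mismatch between the types (or order) used with dlib::serialize when the .dat file was written and the types passed to dlib::deserialize when reading it back: once the reader misinterprets a length prefix, the next allocation request becomes astronomically large, which matches the mach_vm_map(size=14109161666195456) failure above. The following Python sketch uses a toy length-prefixed format (hypothetical, for illustration only; dlib's real wire format differs) to show the failure mode:

```python
import struct
from io import BytesIO

# Toy serializer: a 4-byte little-endian count followed by 8-byte doubles.
# (Hypothetical format for illustration; dlib's actual format differs.)
buf = BytesIO()
values = [0.1, 0.2, 0.3]
buf.write(struct.pack("<I", len(values)))
for v in values:
    buf.write(struct.pack("<d", v))

# Correct reader: types match the writer, so the data round-trips fine.
buf.seek(0)
(count,) = struct.unpack("<I", buf.read(4))
decoded = [struct.unpack("<d", buf.read(8))[0] for _ in range(count)]

# Mismatched reader: assumes an 8-byte count, so the count bytes plus
# half of the first double are fused into one absurdly large integer;
# exactly the kind of size the allocator then refuses to provide.
buf.seek(0)
(bad_count,) = struct.unpack("<Q", buf.read(8))
```

In practice this means the variable types in ofApp::load() (e.g. vector<string> vs vector<char> for filenames) have to match exactly what the export script serialized, in the same order.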

Errors using scripts/tSNE-audio.py

I was trying to recreate the projects I made last year with the help of this script, and found several issues which I managed to solve:

On line 42 and line 63 I added the sample rate argument, since otherwise librosa.load() defaults to 22050 Hz:

y, sr = librosa.load(source_audio, sr=None)

I also had to change line 69 to:

tsne = TSNE(n_components=tsne_dimensions, learning_rate=200, perplexity=tsne_perplexity, verbose=2, angle=0.1).fit_transform(np.array([f["features"] for f in feature_vectors]))

Before that, I was getting an error hinting at the fact that fit_transform isn't given the NumPy array it expects:

    File ~/.local/lib/python3.10/site-packages/sklearn/manifold/_t_sne.py:821
    if self.perplexity >= X.shape[0]:
    ^
    AttributeError: 'list' object has no attribute 'shape'

I'm on Python 3.10.12, with
librosa==0.10.1
scikit-learn==1.3.2
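
The need for the np.array(...) wrapper can be checked with a small NumPy-only sketch (the feature_vectors layout below is a stand-in for the script's actual data):

```python
import numpy as np

# Stand-in feature vectors, mimicking the script's list-of-dicts layout.
feature_vectors = [{"features": [float(i + j) for j in range(4)]} for i in range(6)]

# A plain Python list has no .shape attribute, which is what trips up
# newer scikit-learn versions inside TSNE.fit_transform.
features_list = [f["features"] for f in feature_vectors]
assert not hasattr(features_list, "shape")

# Wrapping it in np.array gives TSNE the (n_samples, n_features) array it expects.
X = np.array(features_list)
print(X.shape)  # (6, 4)
```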

ConvnetViewer: FC_1 tab crashes after 1 second on Xcode

After the ConvnetViewer app builds and runs through Xcode, the entire program runs normally, except for the FC_1 tab. When that tab is open, after 1 second the whole application freezes and the error below appears in Xcode.

The error appears in Xcode on line 78 of ofxCcv.h, from the ofxCcv addon:
Thread 12: EXC_BAD_ACCESS (code=1, address=0x3720)

Since the error occurs in the ofxCcv addon, this should probably be posted on the ofxCcv GitHub, but that repo has not been committed to in a few years. Also, the ConvnetViewer app was created after the last commit to ofxCcv.

Unable to build facetracker2osc on Xcode

How can I fix the error I get when I try to build the facetracker2osc app? The error is:
No matching member function for call to 'startThread' in openFrameworks/addons/ofxControl/src/ofxControlBpm.cpp


I'm a web dev, not very experienced with C++ or openFrameworks, so I apologize if I'm missing something simple. I'm on macOS Mojave and Xcode 10.1, if that matters. By the way, I'm really enjoying your ml4a course and trying to get a project started myself. :)

FWIW, I was able to set up and run the faceClassifier app.
