
miro's People

Contributors

jonysy


Forkers

gitter-badger

miro's Issues

Error handling

At a glance, it seems that most Rust libraries deal with external errors by wrapping them in a single enum. Wrapping every possible error that could arise is a tedious task.

The std::io module contains an Error struct and an ErrorKind enum. Error::new accepts any error that can be converted into a Box<std::error::Error + Send + Sync>.

I prefer boxing errors over wrapping them in an enum: a boxed std::error::Error can be propagated from a function up to the code that calls it while keeping the original error (and its cause) intact, so it can be traced back to its origin.

Note: Box allocates memory on the heap.
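
To illustrate, here is a minimal sketch of the boxed-error approach (written with today's Box<dyn Error> syntax; the file name and function are made up for illustration):

    use std::error::Error;
    use std::fs;

    // Any error that implements `std::error::Error` converts into a
    // `Box<dyn Error>` via `?`, so no hand-written wrapper enum is needed.
    fn read_count(path: &str) -> Result<u32, Box<dyn Error>> {
        let text = fs::read_to_string(path)?;      // io::Error is boxed automatically
        let count = text.trim().parse::<u32>()?;   // so is ParseIntError
        Ok(count)
    }

    fn main() {
        // The caller receives the original error (and its cause chain),
        // which is what makes tracing it back to its origin possible.
        if let Err(e) = read_count("count.txt") {
            eprintln!("failed to read count: {}", e);
        }
    }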

Useful crates:

  • error-chain

    error_chain! {
        // The type defined for this error. These are the conventional
        // and recommended names, but they can be arbitrarily chosen.
        // It is also possible to leave this block out entirely, or
        // leave it empty, and these names will be used automatically.
        types {
            Error, ErrorKind, Result;
        }
    
        // Without the `Result` wrapper:
        //
        // types {
        //     Error, ErrorKind;
        // }
    
        // Automatic conversions between this error chain and other
        // error chains. In this case, it will e.g. generate an
        // `ErrorKind` variant called `Another` which in turn contains
        // the `other_error::ErrorKind`, with conversions from
        // `other_error::Error`.
        //
        // Optionally, some attributes can be added to a variant.
        //
        // This section can be empty.
        links {
            Another(other_error::Error) #[cfg(unix)];
        }
    
        // Automatic conversions between this error chain and other
        // error types not defined by the `error_chain!`. These will be
        // wrapped in a new error with, in this case, the
        // `ErrorKind::Fmt` or `ErrorKind::Io` variant. The description and cause will
        // forward to the description and cause of the original error.
        //
        // Optionally, some attributes can be added to a variant.
        //
        // This section can be empty.
        foreign_links {
            Fmt(::std::fmt::Error);
            Io(::std::io::Error) #[cfg(unix)];
        }
    
        // Define additional `ErrorKind` variants. The syntax here is
        // the same as `quick_error!`, but the `from()` and `cause()`
        // syntax is not supported.
        errors {
            InvalidToolchainName(t: String) {
                description("invalid toolchain name")
                display("invalid toolchain name: '{}'", t)
            }
        }
    }
  • error-type

    error_type! {
        #[derive(Debug)]
        pub enum LibError {
            Io(std::io::Error) {
                cause;
            },
            Message(Cow<'static, str>) {
                desc (e) &**e;
                from (s: &'static str) s.into();
                from (s: String) s.into();
            },
            Other(Box<Error>) {
                desc (e) e.description();
                cause (e) Some(&**e);
            }
        }
    }
  • quick-error

    quick_error! {
        #[derive(Debug)]
        pub enum SomeError {
            /// IO Error
            Io(err: std::io::Error) {}
            /// Utf8 Error
            Utf8(err: std::str::Utf8Error) {}
        }
    }

Use cargo workspaces

RFC:

A common method to organize a multi-crate project is to have one repository which contains all of the crates. Each crate has a corresponding subdirectory along with a Cargo.toml describing how to build it. There are a number of downsides to this approach, however:

Each sub-crate will have its own Cargo.lock, so it's difficult to ensure that the entire project is using the same version of all dependencies. This is desired as the main crate (often a binary) is often the one that has the Cargo.lock "which counts", but it needs to be kept in sync with all dependencies.

When building or testing sub-crates, all dependencies will be recompiled as the target directory will be changing as you move around the source tree. This can be overridden with build.target-dir or CARGO_TARGET_DIR, but this isn't always convenient to set.

Solving these two problems should help ease the development of large Rust projects by ensuring that all dependencies remain in sync and builds by default use already-built artifacts if available.
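
For reference, a root Cargo.toml along these lines would give the whole repository a single shared Cargo.lock and target directory (the member names below are hypothetical):

    # Root Cargo.toml (hypothetical member names)
    [workspace]
    members = [
        "miro-core",
        "miro-motion",
        "miro-detection",
    ]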

Add Never type

Never represents the type of a value that can never exist.
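
In stable Rust this can be expressed as an empty enum; a minimal sketch (the name and any trait impls the library would actually want are assumptions):

    /// An empty enum has no values, so a `Never` can never be constructed.
    #[derive(Debug, Clone, Copy)]
    pub enum Never {}

    // Example: a fallible signature whose error case is statically impossible.
    fn always_ok() -> Result<u32, Never> {
        Ok(42)
    }

    fn main() {
        match always_ok() {
            Ok(n) => println!("{}", n),
            // Matching on `Never` needs no arms; this branch is unreachable.
            Err(never) => match never {},
        }
    }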

Add image pyramid structure

The current pyramidal implementation of the Lucas-Kanade tracker doesn't accept image pyramids. Implementing Flow<Pyramid> for PyramLucasKanade would allow for precomputed image pyramids.

[pub] [type | struct] ImagePyramid<I = GrayImage> [= Vec<I>; | { images: Vec<I>, orientation: .. }]

impl Flow<ImagePyramid> for PyramLucasKanade { .. }
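
A rough sketch of the struct option above, assuming GrayImage from the image crate; the repository's actual Flow trait and PyramLucasKanade type are not shown in this issue, so only the container is sketched:

    use image::GrayImage; // `GrayImage` as referenced above, from the `image` crate

    /// One of the layouts sketched above: a struct wrapping the pyramid levels.
    /// (The lighter alternative is `pub type ImagePyramid<I = GrayImage> = Vec<I>;`.)
    pub struct ImagePyramid<I = GrayImage> {
        /// Levels, assumed to be ordered from full resolution downwards.
        pub images: Vec<I>,
    }

    impl<I> ImagePyramid<I> {
        /// Number of levels in the pyramid.
        pub fn levels(&self) -> usize {
            self.images.len()
        }
    }

    // The goal would then be an impl along the lines of
    // `impl Flow<ImagePyramid> for PyramLucasKanade { .. }`.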

"currentization" of mīrō

An experimental library design based on what is described in this blog post: High level libraries.

Excerpt:

High level libraries is the idea that Rust libraries for game engines can be written in such a way that they are very easy to use and can be composed together without adding complexity. I think the expression “high level” is awfully inaccurate, but I have not yet come up with a better word for it. Unfortunately I don’t have the libraries yet to show what I mean, but I hope to explain something about it in this post.
...

What are high level libraries?

This is something I am excited about!

When I say “high level” I mean different from “normal” or “low level” because of the way the library is used, not because it is further away from or closer to the hardware. It is because such libraries are usually designed for higher concepts that involve bigger pieces of game programming, and they can be combined to build the features you want. So “high level” means something like “high level game library for Piston” and does not refer to programming in general.

A high level library requires just a few lines of code to set up, and adds functionality to the application without adding complexity.

Comment from discussion:

This is a writeup of the state of high level library experimentation. I plan to gradually push this idea further and make it the default way of introducing Piston. Libraries that do not depend on the Piston core won't be affected, and generic libraries do not have to be used with current objects. This way we can keep the existing philosophy of modular libraries that fit well together, but also improve the "user friendly" part.

Other Resources

Motion

  • Pyramidal Implementation of the Lucas-Kanade Tracker
    • GrayImage
      • Benches
      • Tests
    • GrayPyramid
      • Benches
      • Tests
  • Gunnar-Farnebäck
  • Horn-Schunck optical flow method

Move crates into modules

The current setup is becoming unmanageable. Having multiple crates is pointless, as I doubt the crates will ever be used independently of their sibling crates. Using modules instead of crates would also make integration tests a lot easier to write.

Also, the current setup isn't really approachable for potential contributors.

Detection

  • Sliding Window
    • Benches
    • Tests
  • Hard-Negative Mining
  • Region Proposal Networks
  • Oriented Object Proposals methods
  • Binarized Normed Gradients

Flesh out interface [TODO]

Just jotting down a few ideas before I snooze; will edit later...

  • Basic interface shared among all ext modules
  • Each trait method should return its associated error if an error arises (see the sketch below)
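
A rough illustration of the second point (all names here are hypothetical, not the repository's actual API):

    use std::fmt;

    // A shared basic interface where each implementation reports its own
    // associated error type from every fallible method.
    pub trait Detector {
        /// The error type this implementation reports.
        type Err: fmt::Debug;

        /// Returns detected points, or the implementation's associated error.
        fn detect(&self, image: &[u8]) -> Result<Vec<(u32, u32)>, Self::Err>;
    }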

Consider other floating-point types

Adding mini-floats (f24, f16, f8, et cetera) as native types has been discussed before:

When it comes to bigger data-structures, which require a higher dynamic range than integers can provide (raw image photography, videos, voxel data, etc.) f32 has some disadvantages. The obvious one is size: Using f32 as data type, a raw image of a 20 Mpx camera would produce 80 MB of data. The other reason is speed when it comes to real-time applications (like the computation of optical flow in computer vision) [1]
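
The size figure in the quote works out as follows (a back-of-the-envelope sketch assuming one sample per pixel):

    fn main() {
        let pixels: u64 = 20_000_000;            // 20 Mpx sensor
        let f32_mb = pixels * 4 / 1_000_000;     // 4 bytes per f32 -> 80 MB
        let f16_mb = pixels * 2 / 1_000_000;     // a half-precision type would halve that
        println!("f32: {} MB, f16: {} MB", f32_mb, f16_mb);
    }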

Useful links

Dylib not reloading

  1. Dylib not reloading when dependent on a crate that uses gcc as a build-dependency for compiling non-Rust files to .a files.

     Only works when the crate dependent on gcc sets its crate-type to ["dylib"] (see the Cargo.toml sketch after this list).

  2. static variables seem to only work when the dylib is part of a separate project, i.e.

        /bin/main.rs
        lib.rs

     won't work, whereas

        project-a
        project-dylib

     will work.

  3. high crate-type has to be set to dylib because of current
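
For context, the crate-type mentioned in points 1 and 3 is set in the crate's Cargo.toml; a minimal sketch:

    # Cargo.toml of the crate being hot-reloaded
    [lib]
    crate-type = ["dylib"]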
