
Allan's Projects

aimet

AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

annotated_deep_learning_paper_implementations

🧑‍🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), GANs (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

ce089

Repository for the ce089 course - Computational Statistics II

grounded-segment-anything

Grounded-SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything

jsonflow

JsonFlow is a minimal, pure-Python library for building frameworks that handle JSON/dict data as APIs for any application.

magali

MAGALI: An AI-powered tool to analyze nutrients and carbs from food photos, aiding metabolic health optimization.

ml-slrc

Research project for automating systematic literature reviews. Best Paper Award at WEBIST 2022.

neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
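As a rough illustration of the low-bit idea behind this kind of library, the sketch below applies generic symmetric per-tensor INT8 quantization to a weight matrix in PyTorch; it is not Neural Compressor's API, and the shapes are arbitrary example values.

    import torch

    def quantize_int8(weight):
        # Map the largest weight magnitude to 127 and round to 8-bit integers.
        scale = weight.abs().max() / 127.0
        q = torch.clamp((weight / scale).round(), -128, 127).to(torch.int8)
        return q, scale

    def dequantize(q, scale):
        return q.float() * scale

    # Round-trip a random weight matrix and look at the worst-case error.
    w = torch.randn(256, 256)
    q, s = quantize_int8(w)
    print((dequantize(q, s) - w).abs().max())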

rnn

A summary of RNN behavior, from the basics to simple applications of how they work.
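For the kind of basic behavior such a summary covers, here is a minimal PyTorch sketch: a recurrent layer consumes a sequence step by step and returns per-step outputs plus a final hidden state. The sizes are arbitrary illustration values, not anything from the repository.

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
    sequence = torch.randn(1, 10, 4)             # (batch, time steps, features)
    outputs, last_hidden = rnn(sequence)
    print(outputs.shape, last_hidden.shape)      # (1, 10, 8) and (1, 1, 8)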

s2query

S2query is a library to search papers from Semantic Scholar.
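To illustrate the kind of search such a library wraps, the sketch below calls the public Semantic Scholar Graph API directly with the requests package; it does not show S2query's own interface, and the chosen query and fields are just an example.

    import requests

    def search_papers(query, limit=5):
        # Query the public Semantic Scholar Graph API for matching papers.
        resp = requests.get(
            "https://api.semanticscholar.org/graph/v1/paper/search",
            params={"query": query, "limit": limit, "fields": "title,year,abstract"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("data", [])

    for paper in search_papers("systematic literature review automation"):
        print(paper.get("year"), paper.get("title"))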

semi-automation-of-systematic-review-of-clinical-trials-in-medical-psychology-with-bert-models

We employed pre-trained BERT models (DistilBERT, BioBERT, and SciBERT) for text classification of the titles and abstracts of clinical trials in medical psychology. The average AUC score is 0.92. A stacked model was then built by combining the probability predicted by DistilBERT with keywords from the search domains. The AUC improved to 0.96, with F1, precision, and recall increasing to 0.95, 0.94, and 0.96, respectively. A training sample size of 100 yields the most cost-effective performance.
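A hedged sketch of the stacking step described above: the base model's predicted probability is combined with simple keyword-count features and fed to a small meta-classifier. The keyword list, feature layout, and choice of logistic regression are assumptions for illustration, not the project's actual pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed search-domain keywords, purely for illustration.
    KEYWORDS = ("randomized", "placebo", "psychotherapy")

    def keyword_features(texts):
        # Count occurrences of each keyword in every title/abstract.
        return np.array([[t.lower().count(k) for k in KEYWORDS] for t in texts])

    def fit_stacked_model(base_probs, texts, labels):
        # base_probs: positive-class probability from the fine-tuned base model.
        features = np.column_stack([base_probs, keyword_features(texts)])
        return LogisticRegression(max_iter=1000).fit(features, labels)

    # Tiny synthetic example.
    probs = np.array([0.9, 0.2, 0.8, 0.1])
    texts = ["Randomized placebo trial", "A survey", "Psychotherapy RCT", "Editorial"]
    labels = np.array([1, 0, 1, 0])
    model = fit_stacked_model(probs, texts, labels)
    print(model.predict(np.column_stack([probs, keyword_features(texts)])))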

tokyo

BSPWM - Aesthetic Dotfiles 🍚

vit-pytorch

Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
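The sketch below is a conceptual, plain-PyTorch version of that idea (patch embedding, class token, a single transformer encoder, linear head); it is not the repository's implementation or API, and all sizes are toy values.

    import torch
    import torch.nn as nn

    class TinyViT(nn.Module):
        def __init__(self, image_size=32, patch_size=8, dim=64, depth=2, heads=4, num_classes=10):
            super().__init__()
            num_patches = (image_size // patch_size) ** 2
            # Non-overlapping patch embedding via a strided convolution.
            self.to_patches = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_emb = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
            layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, x):
            patches = self.to_patches(x).flatten(2).transpose(1, 2)   # (batch, patches, dim)
            cls = self.cls_token.expand(x.shape[0], -1, -1)
            tokens = torch.cat([cls, patches], dim=1) + self.pos_emb
            return self.head(self.encoder(tokens)[:, 0])              # classify via the class token

    print(TinyViT()(torch.randn(2, 3, 32, 32)).shape)                 # torch.Size([2, 10])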

wanda

A simple and effective LLM pruning approach.
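Wanda scores each weight by its magnitude times the L2 norm of the corresponding input activation and drops the lowest-scoring weights per output row. The sketch below illustrates that metric on a single linear layer; the 50% sparsity target and the calibration shapes are chosen only for the example and are not code from the repository.

    import torch

    def wanda_prune(weight, calib_inputs, sparsity=0.5):
        # weight: (out_features, in_features); calib_inputs: (num_tokens, in_features)
        act_norm = calib_inputs.norm(p=2, dim=0)            # L2 norm of each input feature
        score = weight.abs() * act_norm                     # Wanda importance score
        k = int(weight.shape[1] * sparsity)                 # weights to drop per output row
        drop_idx = torch.topk(score, k, dim=1, largest=False).indices
        mask = torch.ones_like(weight)
        mask.scatter_(1, drop_idx, 0.0)                     # zero out the lowest-scoring weights
        return weight * mask

    # Example: prune a random linear layer to 50% unstructured sparsity.
    w = torch.randn(8, 16)
    x = torch.randn(128, 16)
    print((wanda_prune(w, x) == 0).float().mean())          # prints ~0.5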
