MUHAMMAD BILAL's Projects
This GitHub repository contains a Python script for evaluating a binary classification task related to Beethoven data. The script reads a CSV file ('Beethoven.csv') and utilizes regular expressions to classify the data. It then calculates Precision and Recall metrics to assess the classifier's performance.
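The Precision and Recall computation the script performs can be sketched as below. This is a minimal illustration with made-up labels, not the repository's actual Beethoven data or regex classifier.

```python
# Hedged sketch: Precision and Recall for a binary classifier.
# y_true / y_pred below are illustrative, not the repository's data.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual positives, how many were found
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r = precision_recall(y_true, y_pred)  # → 0.75, 0.75
```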
Get started with Flower: Welcome to the Flower federated learning tutorial! In this notebook, we’ll build a federated learning system using Flower and PyTorch. In part 1, we use PyTorch for the model training pipeline and data loading. In part 2, we continue by federating the PyTorch-based pipeline using Flower.
A CNN is a type of deep learning model for processing data that has a grid pattern, such as images. It is inspired by the organization of the animal visual cortex [13, 14] and designed to automatically and adaptively learn spatial hierarchies of features, from low- to high-level patterns.
The most basic type of neural network is the ANN (Artificial Neural Network). The ANN does not have any special structure; it simply comprises multiple neural layers used for prediction. Let’s build a model that predicts whether a person has heart disease using an ANN.
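A minimal sketch of such an ANN, using scikit-learn's `MLPClassifier` on synthetic data as a stand-in for the repository's heart-disease dataset and model (both are assumptions here):

```python
# Hedged sketch: a small feed-forward ANN for binary classification.
# Synthetic data stands in for the actual heart-disease dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 13 features, roughly mirroring common heart-disease datasets (an assumption)
X, y = make_classification(n_samples=300, n_features=13, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=42)
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)  # held-out accuracy
```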
ColumnTransformer is a scikit-learn class used to create and apply separate transformers for numerical and categorical data. To create transformers, we specify the transformer object and pass the list of transformations as tuples, along with the columns to which each transformation should apply.
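A small sketch of that pattern, with an invented toy DataFrame (the column names are illustrative, not the repository's data):

```python
# Hedged sketch: one ColumnTransformer handling numeric and categorical columns.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47],
    "salary": [30000, 45000, 60000],
    "city": ["Lahore", "Karachi", "Lahore"],  # made-up categorical column
})

ct = ColumnTransformer([
    # (name, transformer object, columns to apply it to)
    ("num", StandardScaler(), ["age", "salary"]),
    ("cat", OneHotEncoder(), ["city"]),
])
out = ct.fit_transform(df)  # 2 scaled columns + 2 one-hot columns
```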
This tutorial showcases an in-depth exploration and comparison of two prominent image classification techniques: Convolutional Neural Networks (CNN) and Histogram of Oriented Gradients (HOG)
Here you will find all the data visualization techniques with examples in Python.
Deep learning in Python tutorials.
There are two main methods to encode numerical features: 1) Discretization 2) Binarization. Discretization is the process of transforming continuous variables into discrete variables by creating a set of contiguous intervals that span the range of the variable's values.
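Both methods can be sketched with scikit-learn; the age values below are invented for illustration:

```python
# Hedged sketch: discretization (binning) vs. binarization of a numeric feature.
import numpy as np
from sklearn.preprocessing import Binarizer, KBinsDiscretizer

ages = np.array([[6], [12], [20], [28], [35], [50], [65], [80]])

# Discretization: split the range [6, 80] into 4 equal-width intervals
disc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="uniform")
bins = disc.fit_transform(ages)          # each age mapped to a bin index 0..3

# Binarization: values above the threshold become 1, the rest 0
binar = Binarizer(threshold=18).fit_transform(ages)
```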
Feature scaling is a technique to standardize the independent features present in the data to a fixed range. It is performed during data pre-processing to handle highly varying magnitudes, values, or units. Here we see the effect of scaling on different algorithms and how to perform and visualize these effects.
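As a minimal sketch of standardization (one common form of feature scaling), using a tiny invented matrix with wildly different column magnitudes:

```python
# Hedged sketch: StandardScaler brings each feature to mean 0, std 1.
import numpy as np
from sklearn.preprocessing import StandardScaler

# Column 1 is ~1000x the magnitude of column 0 (made-up values)
X = np.array([[1.0, 1000.0],
              [2.0, 2000.0],
              [3.0, 3000.0]])

scaled = StandardScaler().fit_transform(X)
# After scaling, both columns have zero mean and unit variance,
# so no feature dominates purely because of its units.
```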
In a function transform, we apply mathematical functions to columns so that their distributions become more normal.
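A minimal sketch with scikit-learn's `FunctionTransformer`, using a log transform (the data and choice of `np.log1p` are illustrative assumptions):

```python
# Hedged sketch: applying a log transform to compress a skewed column.
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# Heavily right-skewed made-up values
X = np.array([[1.0], [10.0], [100.0], [1000.0]])

log_tf = FunctionTransformer(np.log1p)  # log(1 + x), safe at zero
Xt = log_tf.fit_transform(X)            # values now spread far more evenly
```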
Generative adversarial networks (GANs) are neural networks that generate material, such as images, music, speech, or text, that is similar to what humans produce.
In this tutorial, you’ll learn: what a generative model is and how it differs from a discriminative model; how GANs are structured and trained; how to build your own GAN using PyTorch; and how to train your GAN for practical applications using a GPU and PyTorch.
Beginner’s Guide to the GPT-3 Model: demonstrating some interesting example applications in Python, with just a few lines of code.
TensorFlow provides the GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape".
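The core idea can be sketched in a few lines (this assumes TensorFlow 2.x is installed):

```python
# Hedged sketch: tf.GradientTape records ops on x and replays them
# to compute dy/dx. For y = x^2 at x = 3, the gradient is 2x = 6.
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2            # this op is recorded onto the tape

dy_dx = tape.gradient(y, x)  # → 6.0
```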
In this tutorial, I will show you how to handle mixed data.
This tutorial introduces autoencoders with three examples: the basics, image denoising, and anomaly detection. An autoencoder is a special type of neural network that is trained to copy its input to its output.
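The "copy input to output through a bottleneck" idea can be sketched as a tiny dense autoencoder in Keras. The dimensions and random data below are illustrative assumptions, not the tutorial's actual examples:

```python
# Hedged sketch: a minimal dense autoencoder that compresses
# 8-dimensional inputs to 2 dimensions and reconstructs them.
import numpy as np
from tensorflow import keras

inp = keras.Input(shape=(8,))
encoded = keras.layers.Dense(2, activation="relu")(inp)      # bottleneck
decoded = keras.layers.Dense(8, activation="sigmoid")(encoded)
autoencoder = keras.Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.RandomState(0).rand(64, 8)   # made-up data in [0, 1]
autoencoder.fit(X, X, epochs=5, verbose=0) # target is the input itself
recon = autoencoder.predict(X, verbose=0)  # reconstructions, same shape as X
```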
In this intriguing project, we're thrilled to unveil an innovative solution that marries the cutting-edge realm of deep learning with the user-friendly Gradio web interface. Our mission? To revolutionize the landscape of loan approval predictions.
My first project with openAIMP
In this tutorial, you will see how to use a time-series model known as Long Short-Term Memory (LSTM). LSTM models are powerful, especially at retaining long-term memory by design, as you will see later.
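The gating mechanism behind that long-term memory can be sketched as a single LSTM cell step in plain NumPy. The weights here are random and purely illustrative; a real model (e.g. in Keras or PyTorch) would learn them:

```python
# Hedged sketch: one LSTM time step, showing how the cell state c
# carries long-term memory via the forget and input gates.
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    z = W @ x + U @ h_prev + b          # stacked pre-activations, shape (4*n,)
    n = h_prev.size
    f = 1 / (1 + np.exp(-z[0:n]))       # forget gate: what to keep of old memory
    i = 1 / (1 + np.exp(-z[n:2*n]))     # input gate: what new info to store
    o = 1 / (1 + np.exp(-z[2*n:3*n]))   # output gate: what to expose as h
    g = np.tanh(z[3*n:4*n])             # candidate cell state
    c = f * c_prev + i * g              # updated long-term memory
    h = o * np.tanh(c)                  # new hidden state
    return h, c

rng = np.random.RandomState(0)
n_in, n_hid = 3, 4
W = rng.randn(4 * n_hid, n_in) * 0.1    # random demo weights
U = rng.randn(4 * n_hid, n_hid) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                      # run a short made-up sequence
    h, c = lstm_step(rng.randn(n_in), h, c, W, U, b)
```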
All machine learning models to start with.
Normalization is a technique often applied as part of data preparation for machine learning. The goal of normalization is to change the values of numeric columns in the dataset to a common scale, without distorting differences in the ranges of values or losing information. We use Min-Max scaling to normalize the dataset and then apply logistic regression to check the difference between the raw and scaled data.
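A minimal sketch of that comparison, with synthetic data standing in for the repository's dataset (an assumption):

```python
# Hedged sketch: Min-Max scaling to [0, 1], then logistic regression
# on raw vs. scaled features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X = X * [1, 10, 100, 1000, 10000]        # give columns very different ranges
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

scaler = MinMaxScaler().fit(X_train)     # fit on training data only
scaled_acc = LogisticRegression(max_iter=1000).fit(
    scaler.transform(X_train), y_train
).score(scaler.transform(X_test), y_test)
```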
In this tutorial, you’ll learn: what artificial intelligence is; how both machine learning and deep learning play a role in AI; how a neural network functions internally; and how to build a neural network from scratch using Python.
One hot encoding is one method of converting data to prepare it for an algorithm and get a better prediction. With one-hot, we convert each categorical value into a new categorical column and assign a binary value of 1 or 0 to those columns. Each integer value is represented as a binary vector.
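A minimal sketch using `pandas.get_dummies` (the color values are an invented example):

```python
# Hedged sketch: one-hot encoding turns each category into its own
# binary column, so each row becomes a binary vector with a single 1.
import pandas as pd

colors = pd.DataFrame({"color": ["red", "green", "blue", "red"]})
onehot = pd.get_dummies(colors, columns=["color"])
# Columns (alphabetical): color_blue, color_green, color_red
```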
In ordinal encoding, each unique category value is assigned an integer value. For example, “red” is 1, “green” is 2, and “blue” is 3. This is called an ordinal encoding or an integer encoding and is easily reversible. Often, integer values starting at zero are used. For some variables, an ordinal encoding may be enough: the integer values have a natural ordered relationship, which machine learning algorithms may be able to understand and harness. Label encoding refers to converting labels into numeric form so as to make them machine-readable. Machine learning algorithms can then decide in a better way how those labels should be operated on. It is an important pre-processing step for structured datasets in supervised learning.
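Both encodings can be sketched with scikit-learn; the size and color values below are invented examples:

```python
# Hedged sketch: OrdinalEncoder for ordered features, LabelEncoder for targets.
from sklearn.preprocessing import LabelEncoder, OrdinalEncoder

# Ordinal encoding with an explicit, meaningful order: small < medium < large
sizes = [["small"], ["medium"], ["large"], ["medium"]]
ord_enc = OrdinalEncoder(categories=[["small", "medium", "large"]])
encoded = ord_enc.fit_transform(sizes)   # small→0, medium→1, large→2

# Label encoding of target labels: classes are sorted alphabetically,
# so blue→0, green→1, red→2 (no meaningful order implied)
labels = ["red", "green", "blue", "green"]
le = LabelEncoder()
y = le.fit_transform(labels)
restored = le.inverse_transform(y)       # easily reversible, as noted above
```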