
Secure & Private AI

Notebooks for Secure and Private AI Challenge course.

#60DaysofUdacity Challenge

Day 1

- Completed the 3rd lesson

Day 2

- Completed the 4th lesson

Day 3

- Completed the 5th lesson

Day 4

- Finished the 6th lesson but didn't do the final project (will concentrate on it tomorrow)

Day 5

- I finished going through the final project in lesson 5

Some interesting links:
- GitHub link to the implementation of the project
- An excellent explanation of the first part of the course

Day 6

- Completed the first 5 parts of Lesson 7 about Federated Learning.

Day 7

- Completed the 7th lesson
- Working on how to use a chatbot for a medical case
- Got tagged for the 1st game of the #wmn_who_code channel; it was fun responding
- Got tagged for the 1st game of #random on what affected me the most during the challenge.
It was the amazing effort and sharing spirit of the community.

Day 8

Lesson 8: Securing Federated Learning
    - Completed parts 8.1 and 8.2

Day 9

- Completed lesson 8
- Reading about machine learning algorithms for processing conversations: 

Conversational AI

Day 10

- Finished the first 5 parts of lesson 9

Day 11

- Finished the 9th lesson: Encrypted Deep Learning
- Working on the self assessment part of the chatbot project
- Found some books to improve my python skills

Day 12

- Read the slide description for the Shirts project provided by the project group
- Searched for videos about GAN for style transfer
- Working on revamping my Resume 
- Searched for tips to revamp my cover letter

GAN for style transfer
Revamp your cover letter

Day 13

- Worked the whole day on crafting a good cover letter (it's harder than it seems)
- Reading about the interview process for a Machine Learning Engineer.

Craft your Cover Letter

Day 14

- Checked the [Webinar] AMA with Robert Wagner | Secure & Private AI Challenge Scholarships, awesome input.
- Went through the video from #sg_project-t-shirt Secure & Private AI Challenge - 60 Days of Udacity - Project T-Shirt, thank you team.

Webinar
Secure & Private AI Challenge - 60 Days of Udacity - Project T-Shirt

Day 15

- Watched videos about logistic regression, the cost function, and gradient descent (revising the basics of deep learning).
- Trying to get OpenAI GPT-2 to train
- Still trying to wrap my head around GANs (I understand the principle a little, but am still trying to understand the code)
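
The logistic regression basics above can be sketched in plain numpy. This is a minimal toy example (the data, learning rate, and function names are mine, for illustration only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_gradients(w, b, X, y):
    """Binary cross-entropy cost and its gradients for logistic regression.
    X has shape (n_features, m); y has shape (1, m)."""
    m = X.shape[1]
    a = sigmoid(w.T @ X + b)                      # predictions, shape (1, m)
    cost = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))
    dw = (X @ (a - y).T) / m                      # gradient w.r.t. weights
    db = np.sum(a - y) / m                        # gradient w.r.t. bias
    return cost, dw, db

# toy data: 2 features, 4 examples, separable on the first feature
X = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.5, 0.2, 0.1]])
y = np.array([[0, 0, 1, 1]])
w = np.zeros((2, 1)); b = 0.0
for _ in range(500):                              # plain gradient descent
    cost, dw, db = cost_and_gradients(w, b, X, y)
    w -= 0.5 * dw; b -= 0.5 * db
```

After a few hundred steps the cost drops well below the initial ln(2) ≈ 0.693 and the toy examples are classified correctly.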

Day 16

- Working on EDA for NLP (1)
- Still trying to understand the GAN through this video (2) 

1 2

Day 17

- Learned about LSA in NLP
- Finished EDA for my textual dataset 

Day 18

- Learned about vectorization: what it is, why we use it, and how to transform code from for loops to vectorized operations for gradient descent and cost function calculations.
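
The for-loops-to-vectorization idea can be sketched like this (toy data; the helper names are mine, not the course's). Both versions compute the same logistic regression gradients:

```python
import numpy as np

def grad_loop(w, b, X, y):
    # explicit for-loop version: accumulate gradients example by example
    n, m = X.shape
    dw, db = np.zeros(n), 0.0
    for i in range(m):
        z = b
        for j in range(n):
            z += w[j] * X[j, i]
        a = 1.0 / (1.0 + np.exp(-z))        # sigmoid
        for j in range(n):
            dw[j] += (a - y[i]) * X[j, i]
        db += a - y[i]
    return dw / m, db / m

def grad_vec(w, b, X, y):
    # vectorized version: one matrix product replaces all the loops
    m = X.shape[1]
    a = 1.0 / (1.0 + np.exp(-(w @ X + b)))
    return (X @ (a - y)) / m, np.sum(a - y) / m

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 50))
y = rng.integers(0, 2, 50).astype(float)
w = rng.standard_normal(3); b = 0.1
dw1, db1 = grad_loop(w, b, X, y)
dw2, db2 = grad_vec(w, b, X, y)
assert np.allclose(dw1, dw2) and np.isclose(db1, db2)
```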

Day 19

- Continued with vectorization: vectorized the whole gradient descent calculation process
- Learned about broadcasting in Python's numpy library
- Checked out more Medium articles on chatbot creation and searched for ways to do multi-class classification on NLP tasks
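
Broadcasting can be illustrated with a small numpy sketch in the style of the course's food-calories example (the numbers here are illustrative):

```python
import numpy as np

# A (3, 4) matrix divided by a (1, 4) row: numpy "broadcasts" the row
# across all 3 rows without an explicit loop or copy.
A = np.array([[56.0, 0.0, 4.4, 68.0],
              [1.2, 104.0, 52.0, 8.0],
              [1.8, 135.0, 99.0, 0.9]])
col_sums = A.sum(axis=0, keepdims=True)   # shape (1, 4)
percent = 100 * A / col_sums              # (3, 4) / (1, 4) -> (3, 4)
```

Each column of `percent` sums to 100: the (1, 4) row of column sums was stretched to match the (3, 4) matrix.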

Multi-class classification for NLP:
1
2

Day 20

 - Implemented, in Python, both the plain (loop-based) and vectorized versions of the gradient descent and loss calculations, and compared their performance.
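
A minimal version of that timing comparison might look like this (the array size is arbitrary and exact speedups vary by machine):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000)
b = rng.standard_normal(1_000_000)

t0 = time.perf_counter()
s_loop = 0.0
for i in range(len(a)):              # plain Python loop over elements
    s_loop += a[i] * b[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
s_vec = float(np.dot(a, b))          # one vectorized call
t_vec = time.perf_counter() - t0

# same result, but the vectorized call is typically orders of magnitude faster
assert abs(s_loop - s_vec) < 1e-4
```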

Day 21

 - Finished the notebook of Python basics with numpy from Andrew Ng's DL course.

Day 22

 - Started the 'Logistic Regression with a Neural Network Mindset' notebook from Andrew Ng's DL course.
 - I worked on my style transfer code, trying to find the best combination of style and image to create something beautiful for the shirt style project.

Day 24

 - Added a second lab (after doing it) in my Hands-on ML ppt talk.
 - I had a recap meeting about the event.

Day 25

  - I had my interview with the Digital School Product for the three-month AI Engineer position. Thank you #events_opportunities for sharing that opportunity.
  - I was asked to add an ML Qwiklabs lab to hold a study jam.
  - I requested access to the ML Intro with TensorFlow quest.
  - I did the lab: Creating an Object Detection Application Using TensorFlow 

Creating an Object Detection Application Using TensorFlow

Day 26

  - I added the lab to the ppt
  - I wrote my first draft of the PTSD Omdena challenge article (I will share it with you when it's out)

Day 27

  - It's finally the events day.  I held the talk and directed the study jam. Please check it out here if interested: shorturl.at/bmrvP

Day 28

  - Completed the 'Machine Learning with TensorFlow' lab from qwiklabs (https://google.qwiklabs.com/focuses/3391?parent=catalog)

Day 29

  - Working on more examples for shirt style transfer project #sg_project-t-shirt

Day 30

  - Worked the whole day on style transfer for the #sg_project-t-shirt; if interested, please check my 'art' in the Facebook group album for the shirt style project
  - Currently working on the Creating Models with Amazon SageMaker lab from Qwiklabs

Creating Models with Amazon SageMaker

Day 31

 - Participated in the online Fairygodboss recruiting event; it was a refreshing, first-time experience.

Day 32

 - Tried the multi-class classification tutorial with LSTM on the customer complaint dataset, with the goal of implementing it on my own dataset.

The tutorial

Day 33

- I trained the multi-class classification LSTM model I tried yesterday on the Omdena challenge PTSD dataset; the results were quite weird, so I am thinking about redefining the structure of the dataset. Any suggestions are welcome 🙏 
- I tried training the same data using a pre-trained BERT model and it gave pretty good results (thinking about understanding BERT more deeply)
- I finished week 1 and started week 2 of Andrew Ng's TensorFlow course on Coursera; it's pretty smooth and quite beginner-friendly: 

The course

Day 34

- Worked on understanding how the BERT model works: https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270
- Applied to some job opportunities shared in the #jobs channel, which I would like to thank.

Day 35

- Studied the three notebooks of the 4th lesson from fastai course v3.

Day 36

- Exploring how AI can help in the legal field (any suggestions on how to make the process faster)

Day 37

- Back to BERT, but this time reading an article to understand the implementation: 
- Began the Data Structures course from Udacity; I am thinking about brushing up on basic algorithm concepts in English (I studied them in French). 

https://towardsdatascience.com/bert-classifier-just-another-pytorch-model-881b3cf05784
https://github.com/sugi-chan/custom_bert_pipeline/blob/master/IMDB%20Dataset.csv
https://classroom.udacity.com/courses/ud513/lessons/7174469398/concepts/78885717720923

Day 38

- Completed week 2 of the deeplearning.ai DL course on Coursera.
- Checked this article on how we can fine-tune an ML model for conversational AI chatbots
- I am thinking about deploying ML models as mobile apps, so I thought about checking Flutter, but for iOS it's really a hassle, especially with the Apple developer ID. 

Flutter
AI-scholar-chatbots-that-improve-after-deployment

Day 39

- Checked out the AWS DeepRacer course: finished lesson 1 and started lesson 2.
- Finished week 2 of the 1st TensorFlow specialization course on Coursera by Andrew Ng.
- Still working on setting up Flutter for iOS: https://www.youtube.com/watch?v=3oIFshgMgLA
- Stopped at efficiency in the Data Structures course from Udacity.

These days I have been focusing on advancing in the TF course on Coursera (and making truffles for Eid)

Day 40

- Implementing convolution layers

Day 41

- Implementing pooling layers 
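
A pooling layer can be sketched in plain numpy; this is a minimal max-pooling example (the function name and input are mine, for illustration):

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """2x2 max pooling on a single-channel feature map of shape (H, W)."""
    h_out = (x.shape[0] - size) // stride + 1
    w_out = (x.shape[1] - size) // stride + 1
    out = np.empty((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            # each output cell keeps only the maximum of its window
            window = x[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = window.max()
    return out

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 0],
              [7, 2, 9, 8],
              [0, 1, 3, 4]], dtype=float)
pooled = max_pool2d(x)   # -> [[6, 5], [7, 9]]
```

This halves each spatial dimension while preserving the strongest activation in each window, which is what makes pooling cheap downsampling.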

Day 42

- Implementing a convolutional network from scratch using numpy and scipy, and understanding how it works.
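
A minimal sketch of that exercise: a hand-rolled 'valid' convolution (strictly speaking cross-correlation, as used by deep learning conv layers) checked against scipy. The image and kernel are arbitrary toy values:

```python
import numpy as np
from scipy import signal

def conv2d(image, kernel):
    """'Valid' cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # elementwise multiply the window by the kernel and sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
manual = conv2d(image, kernel)
scipy_out = signal.correlate2d(image, kernel, mode="valid")
assert np.allclose(manual, scipy_out)
```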

Day 43

- Implementing what I learned all along by applying it to MNIST (I had been learning how it works using FashionMNIST)

Day 44

- Finished the first 5 videos of week 3 on Neural Networks from the 1st deeplearning.ai course of the specialization
- I finally got my first Flutter app running on an iOS simulator (thanks to the following tutorial): https://www.youtube.com/watch?v=H_xusHxICbk
- Completed this tutorial as well (it's fun): https://codelabs.developers.google.com/codelabs/first-flutter-app-pt1/index.html?index=..%2F..index#5

Day 45

- Went through the 3rd and 4th parts of lesson 3 of AWS DeepRacer (but still need to activate my AWS account)
- I watched the first parts about Image Generators in week 4 of the TensorFlow course on Coursera.

Day 46

- Completed the lessons and quiz of week 3 of the deeplearning.ai 1st course of the DL specialization.

Day 47

- Finished the first lesson of Data Structures & Algorithms (calculating efficiency)
- Went through videos and notebooks 7 through 12 of the TF course on Coursera (which expose the importance of having a validation set that is separate from the training set).

Day 48

- Finished week 4, and with it the 1st course of the TF specialization on Coursera, while exploring how accuracy and training change with image size. I also did the practice exercises, where I used everything I learned throughout week 4 to create a classifier for a set of happy or sad images.

Day 49

- I ran and completed half of the assignments in the notebook 'Planar data classification with one hidden layer' from the deeplearning.ai course on Coursera.

Day 50

- I finished the other half of the notebook, where I implemented the loss, forward and backward propagation, and gradient descent, then gathered all the functions into one function that constructed the model and made the predictions.
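
The notebook's pieces (forward/backward propagation, gradient descent, one function that builds the model) can be sketched in numpy roughly as follows. The hyperparameters and XOR-style data here are illustrative, not the course's planar dataset:

```python
import numpy as np

def train_one_hidden_layer(X, y, n_h=4, lr=1.2, iters=5000, seed=1):
    """One-hidden-layer NN (tanh hidden, sigmoid output), numpy only."""
    rng = np.random.default_rng(seed)
    n_x, m = X.shape
    W1 = rng.standard_normal((n_h, n_x)) * 0.01; b1 = np.zeros((n_h, 1))
    W2 = rng.standard_normal((1, n_h)) * 0.01;  b2 = np.zeros((1, 1))
    costs = []
    for _ in range(iters):
        # forward propagation
        A1 = np.tanh(W1 @ X + b1)
        A2 = 1.0 / (1.0 + np.exp(-(W2 @ A1 + b2)))
        # cross-entropy loss
        costs.append(-np.mean(y * np.log(A2 + 1e-8)
                              + (1 - y) * np.log(1 - A2 + 1e-8)))
        # backward propagation
        dZ2 = A2 - y
        dW2 = dZ2 @ A1.T / m; db2 = dZ2.sum(axis=1, keepdims=True) / m
        dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
        dW1 = dZ1 @ X.T / m;  db1 = dZ1.sum(axis=1, keepdims=True) / m
        # gradient descent update
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), costs

# XOR-style planar data that no single linear unit can separate
X = np.array([[0., 0., 1., 1.], [0., 1., 0., 1.]])
y = np.array([[0., 1., 1., 0.]])
(W1, b1, W2, b2), costs = train_one_hidden_layer(X, y)
```

The hidden layer is what lets the model fit this non-linearly-separable data; the recorded costs should fall from the initial ln(2) as training proceeds.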

Day 51

- I implemented several machine learning models, like SVM, Random Forest, and Logistic Regression, on the fully annotated dataset for the Omdena PTSD Challenge, to help us compare the performance of novel methods like BERT and ULMFiT with traditional models. 
- I reviewed the article I had to write about the challenge, which is an introduction to how we can help cure PTSD using AI: https://medium.com/omdena/ai-post-traumatic-stress-disorder-f8917cedb434
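
Since the annotated PTSD dataset is not public, here is a scikit-learn sketch of that kind of baseline comparison on a tiny invented stand-in corpus (texts, labels, and model settings are all illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# tiny stand-in corpus (the real annotated dataset is not public)
texts = ["I keep having nightmares about it", "flashbacks every night",
         "I feel great today", "had a lovely walk in the park",
         "loud noises make me panic", "what a wonderful sunny morning",
         "I can't stop reliving the accident", "dinner with friends was fun"]
labels = [1, 1, 0, 0, 1, 0, 1, 0]

# traditional baselines to compare against BERT / ULMFiT results
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),
    "rf": RandomForestClassifier(n_estimators=50, random_state=0),
}
scores = {}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF features + model
    scores[name] = cross_val_score(pipe, texts, labels, cv=2).mean()
```

On a real dataset you would report these cross-validated scores next to the transformer results to see whether the heavier models actually earn their cost.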

Day 52

- I have gone through half of the concepts in week 1 of 2nd course of TF by deeplearning.ai

Day 53

- I watched Andrew Ng's interview with Ian Goodfellow.
- I finished the remaining lessons from week 1, as well as the final exercise, which consists of classifying dogs and cats using the whole dataset from Microsoft: https://www.coursera.org/learn/convolutional-neural-networks-tensorflow/home/welcome

Day 54

- I did the Introduction to Amazon EC2 Auto Scaling lab from qwiklabs
- I did all the labs in Cloud Hero Speedrun: IoT & Big Data:
	- Internet of Things: Qwik Start
	- Dataflow: Qwik Start - Templates
	- Streaming IoT Core Data to Dataprep
	- Building an IoT Analytics Pipeline on Google Cloud Platform

Day 55

- I watched the first 4 videos on deep L-layer networks from week 4 of the 1st deeplearning.ai course

Day 56

- I finished the week 4 lessons of the 1st course of the deeplearning.ai specialization.

Day 57

- I went through the lessons of week 2 and did the final exercise on classification using augmentation techniques, from the Convolutional Neural Networks in TensorFlow course on Coursera.

Day 58

- I studied all the lesson videos on transfer learning: how to take an existing model, freeze many of its layers to prevent them from being retrained, and effectively 'remember' the convolutions it was trained on to fit images; and how to add a DNN underneath this so that we can retrain on our own images using the convolutions from the other model.
- I also learned regularization using dropout to make the network more effective at preventing over-specialization and thus overfitting.
Both from the Convolutional Neural Networks in TensorFlow course on Coursera.

Day 59

- I applied all that I learned in the previous videos to improve the accuracy of classifying Horses vs Humans.

Day 60

- I completed the Convolutional Neural Networks in TensorFlow course on Coursera by finishing all the week 4 lessons about multi-class classification, applying what I learned to a CGI (computer-generated imagery) dataset of rock, paper, scissors with different skin tones and nail polish: http://www.laurencemoroney.com/rock-paper-scissors-dataset/
- I did the final project, which classifies hand signs using the 'Sign Language MNIST' dataset from Kaggle: https://www.kaggle.com/datamunge/sign-language-mnist/kernels

Dependencies

To run these notebooks you'll need Python 3.6, PySyft, NumPy, PyTorch 1.0, and Jupyter Notebook. The easiest way to install all of this is to create a conda environment:

conda create -n pysyft python=3.6
conda activate pysyft
conda install numpy jupyter notebook
conda install pytorch torchvision -c pytorch
pip install syft

Contributors

elateifsara
