
spring_2023_vail-cakanaga's Introduction


SureStart Virtual AI Learning (VAIL) Program – Applied Deep Learning Focus

The primary goal of the SureStart VAIL program (with an Applied Deep Learning focus) is to support a global community of high-achieving, ambitious, and diverse tech-career aspirants in developing applied technical skills related to AI, machine learning, and related technologies such as computer vision, along with the "power" skills critical for career success, through a virtual program led by proximate technical mentors and supported by experts from industry and academia.

As a SureStart VAIL Trainee, you will learn foundational concepts related to AI, get hands-on experience developing machine learning models, and practice using your skills to build AI solutions for real-world problems, using a modular learning approach in a mentoring-centered environment. The five weeks of the program are divided into two Program Phases: the Tech Training Phase and the Tech Innovation Phase, and you will encounter the modules in each phase per the schedule and plan laid out below.

Sun   Mon      Tue      Wed      Thu      Fri      Sat
      Day 01   Day 02   Day 03   Day 04   Review
      Day 05   Day 06   Day 07   Day 08   Review
      Day 09   Day 10   Day 11   Day 12   Review
      Day 13   Day 14   Review   Day 15   Day 16
      Day 17   Day 18   Day 19   Day 20   Finale

Program Phase 1: The Tech Training Phase

The first phase of SureStart VAIL is the Tech Training phase, which will last 3.5 weeks. This phase is a self-paced learning phase, in which students working in a team-based learning environment, supported by SureStart mentors, complete 14 technical modules focused on building foundational AI knowledge and hands-on Machine Learning skills. Besides mentoring from the SureStart mentoring team, trainees will also receive advice, tips, and recommendations from experienced academics and industry experts through our Career Pathways seminar series.

The modules are as follows:

Day 1: We start Phase 1 with an introduction to machine learning algorithms, and how a state-of-the-art library makes it easy and fun to run off-the-shelf machine learning (ML) models for Linear Regression, Decision Trees, and Random Forests.
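As a taste of what this looks like, here is a minimal sketch of fitting those three off-the-shelf models, assuming scikit-learn as the library and using one of its bundled datasets purely for illustration:

```python
# A minimal sketch of running off-the-shelf ML models with scikit-learn
# (the dataset and evaluation here are illustrative, not the module's notebook).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LinearRegression(), DecisionTreeRegressor(), RandomForestRegressor()):
    model.fit(X_train, y_train)                                 # learn from training data
    print(type(model).__name__, model.score(X_test, y_test))    # R^2 on held-out data
```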

Day 2: Having had a general introduction to ML through simpler machine learning models, we will jump into Deep Learning, a more powerful ML modeling approach. We will also get an introduction to TensorFlow, a Python library that helps us build deep neural networks easily and quickly.
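For a sense of how compact this can be, here is a minimal sketch of a small deep network in TensorFlow's Keras API; the dataset and layer sizes are illustrative, not the module's:

```python
# A minimal sketch of a small deep neural network in TensorFlow/Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0    # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```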

Day 3: There exist many Python libraries that make it easy and efficient for us to build complex neural networks. However, these libraries create a bit of a "black box" around how these networks are actually put together. So, today, with the goal of removing that "black box", we will build a simple neural network from scratch, and study the various aspects of these networks step-by-step over the next few days.
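To give a flavor of "from scratch", here is a toy two-layer network written only with NumPy: a forward pass, a loss, and one hand-derived gradient step (all sizes are illustrative assumptions):

```python
# A minimal from-scratch sketch: forward pass + one gradient step, NumPy only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                       # 4 samples, 3 features
y = rng.normal(size=(4, 1))                       # 4 targets

W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)     # input -> hidden weights
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)     # hidden -> output weights

h = np.maximum(0, X @ W1 + b1)                    # hidden layer with ReLU
y_hat = h @ W2 + b2                               # linear output layer
loss = np.mean((y_hat - y) ** 2)                  # mean squared error

# Backpropagate through the output layer and take one gradient-descent step.
grad_out = 2 * (y_hat - y) / len(y)
W2 -= 0.01 * (h.T @ grad_out)
b2 -= 0.01 * grad_out.sum(axis=0)
```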

Day 4: Going one step further with neural network architectures, we explore a widely used deep architecture called Convolutional Neural Networks (or CNNs). CNNs are known for their ability to compound smaller patterns into larger, more humanly recognizable ones, and are often used for Computer Vision related modeling tasks.
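Here is a minimal sketch of a small CNN in Keras, with stacked convolution and pooling layers that build up larger patterns from smaller ones (layer sizes are illustrative):

```python
# A minimal sketch of a small CNN for 28x28 grayscale images.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),             # downsample, keep strongest responses
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # class probabilities
])
model.summary()
```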

Day 5: An important part of the learning process of any machine learning algorithm is its loss function (or cost function). It maps an event or values of one or more variables onto a real number, intuitively representing some "cost" associated with the event. We start by studying loss functions for regression models.
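Two of the most common regression losses can be computed directly, as in this short NumPy sketch (the numbers are made-up examples):

```python
# A minimal sketch of two common regression losses.
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)     # mean squared error  -> 0.375
mae = np.mean(np.abs(y_true - y_pred))    # mean absolute error -> 0.5
print(mse, mae)
```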

Day 6: Today, we will discuss loss in the context of classification algorithms. Classification losses differ from regression losses; regression loss functions aim at predicting quantities, while classification loss functions aim at predicting class labels.
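A minimal sketch of the workhorse classification loss, categorical cross-entropy, computed over a couple of illustrative one-hot labels and predicted probabilities:

```python
# A minimal sketch of categorical cross-entropy for classification.
import numpy as np

y_true = np.array([[1, 0, 0],             # one-hot class labels
                   [0, 1, 0]])
y_pred = np.array([[0.7, 0.2, 0.1],       # predicted class probabilities
                   [0.1, 0.8, 0.1]])

cross_entropy = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
print(cross_entropy)                      # confident wrong predictions are penalized heavily
```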

Day 7: The next topic is closely connected to the previous one. We can have a loss (or cost) function that represents how accurate a given machine learning model is at the prediction task, but we also need another algorithm that is able to change the weights of the model to lower the loss/cost function, such that the associated model gets better at the prediction task. Such algorithms are referred to as "optimizers", which we will study in this module.
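The simplest optimizer is plain gradient descent: nudge each weight against the gradient of the loss. A toy one-parameter sketch (data and learning rate are illustrative):

```python
# A minimal sketch of gradient descent on a 1-parameter model y = w * x.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])              # true relationship: y = 2x
w, lr = 0.0, 0.1                           # initial weight and learning rate

for step in range(50):
    y_hat = w * x
    grad = np.mean(2 * (y_hat - y) * x)    # dLoss/dw for mean squared error
    w -= lr * grad                         # the optimizer's update rule
print(w)                                   # converges toward 2.0
```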

Day 8: Considering just the perceptrons/nodes, all neural networks are still very linear in their function. But to model complex real-world data, we need to add non-linearity. This is accomplished by applying non-linear functions called "activation functions" to the outputs of each neural network layer before they are ingested by the next layer. Today we will explore the various kinds of activation functions.
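Three of the most common activation functions, applied to a few illustrative pre-activation values:

```python
# A minimal sketch of common activation functions.
import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])    # raw layer outputs (pre-activations)

relu = np.maximum(0, z)                      # zero out negatives
sigmoid = 1 / (1 + np.exp(-z))               # squash into (0, 1)
tanh = np.tanh(z)                            # squash into (-1, 1)
print(relu, sigmoid, tanh, sep="\n")
```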

Day 9: Three crucial aspects of any machine learning model are the quality and quantity of data on which it is based, how we train the model, and how long we train it. Problems with any of these three aspects can result in a model that has not learned enough from the data, or that has not learned the true patterns in the data and instead glommed onto the wrong details, thus producing sub-optimal predictive outcomes. We learn how to combat such applied problems on this day.
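A common way to spot these problems is to hold out a validation set and compare training and validation loss during training. A minimal Keras sketch (model and data are illustrative assumptions):

```python
# A minimal sketch of watching for under/overfitting with a validation split.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# If training loss keeps falling while val_loss rises, the model is overfitting;
# if both stay high, it is likely underfitting.
history = model.fit(x_train, y_train, epochs=5, validation_split=0.2)
```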

Day 10: We delve further into how to handle overfitting and underfitting problems by learning about modifications we can make to our neural network to address them.
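Two widely used modifications are dropout layers and early stopping. A minimal sketch, assuming a Keras workflow like the earlier examples:

```python
# A minimal sketch of two common overfitting fixes: dropout and early stopping.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),                 # randomly drop half the units during training
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Stop training when the validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2)
# model.fit(x_train, y_train, validation_split=0.2, epochs=20, callbacks=[early_stop])
```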

Day 11: Given AI and Machine Learning's growing role in human decision-making and its large implications, today we will consider Data Ethics, Data Bias, the negative consequences of failing to make ethical considerations part of the process, and a concrete Ethical AI Framework to incorporate into our technical development, so that we can build AI, Machine Learning, and data-based tech solutions that are inclusive, equitable, and fair.

Day 12: An advanced neural network architecture is the autoencoder. Unlike the deep models we have discussed thus far, autoencoders are "unsupervised"; that is, these algorithms identify hidden patterns in the input data without the need for human "supervision" in the form of prediction labels (also called "true" labels). Today, we will examine the network structure of autoencoders that makes it possible for them to learn in an unsupervised manner.
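The key structural idea is an encoder that compresses the input through a bottleneck and a decoder that reconstructs it, with the input itself serving as the training target. A minimal Keras sketch (layer sizes are illustrative):

```python
# A minimal sketch of an autoencoder: the training target is the input itself.
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)       # bottleneck encoding
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10)   # note: no labels, inputs are the targets
```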

Day 13: Today we will learn about generative adversarial networks (or GANs), an enhancement of the ideas behind autoencoders. In GANs, we have a generator-discriminator setup: a generator that is trained to mimic the input data, and a discriminator that discerns whether the data it encounters is real or generated by the generator. The goal of the generator is to fool the discriminator; the discriminator's is to catch it. Through tens of thousands of repeated iterations or more, the generator-discriminator pair essentially train each other to get better at their tasks.
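A minimal sketch of the two networks and the shape of the adversarial game, assuming flattened 28x28 images; real GAN training needs careful tuning well beyond this outline:

```python
# A minimal sketch of the GAN setup: a generator and a discriminator (untrained toys).
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),  # noise in
    tf.keras.layers.Dense(784, activation="sigmoid"),                   # fake "image" out
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),                     # real (1) vs. fake (0)
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# One round of the adversarial game (assuming `real_images` of shape (batch, 784)):
# 1. Train the discriminator on real images labeled 1 and generated images labeled 0.
# 2. Freeze the discriminator and train the generator so its fakes are scored as real.
noise = tf.random.normal((16, 100))
fake_images = generator(noise)
```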

Day 14: Style transfer is a task that is very connected to generative adversarial networks but differs in one simple way. Today we shall learn about this difference, and other interesting sub-topics.

Program Phase 2: The Tech Innovation Phase

During the Innovation phase of our Virtual AI intensive (the last 1.5 weeks), trainee teams will identify ONE real-world problem that is of import and impact in their local and global contexts; define a Capstone Project to address the selected real-world problem; and apply their newly learned AI, ML, and data skills to solve the problem. Teams will continue to receive guidance and support from mentors in developing and presenting their Innovation projects.

Prior to 10 AM EST on the first day of Phase 2, we will release the Makeathon theme choices. You will select one of them on which to base your Capstone project. All support materials, forms, and the judging rubric and expectations will also be shared at that time.

Though in this phase individuals and teams have no specific Colab notebooks to work on, here are our recommendations for a modular approach to working on your Capstone project:

Day 15: Identify your Makeathon theme and share it back with SureStart using the appropriate form. Begin brainstorming ideas for a socially impactful real-world problem that is meaningful to the individuals on your team. Each team member should review the Makeathon Guide. Continue ideating and brainstorming in your teams. Use the Makeathon Guide to define roles and tasks for each team member. Every team member must have at least 1 primary task that they are leading, and 1-2 tasks that they are supporting. Set up time to meet with your Business Mentors, if you have not already done so.

Day 16: Problem and Solution Outline -> Outline the primary problem you are going to solve. How do you expect to solve it using AI/ML? What do you expect to build? What kind of data are you going to need to build it? Will you build a model, or will you use an off-the-shelf model or library? Begin researching both requirements. Start your Deck Development -> Begin by creating a skeleton of the headers.

The Weekend before: Continue developing your project. Begin Market Research -> Start by asking who your work is going to benefit (these are your end-users and/or customers). How are you going to get their input? Have an outreach plan; create surveys and begin setting up interviews. Start reaching out to folks you want to survey/interview. Do your Competitive Landscape Analysis -> Consider who else is solving this problem. What is left unsolved? Why? With this knowledge, refine your proposed solution. Begin your Tech Development -> Start by identifying and acquiring/creating the datasets you need; outlining the model you will need to build, or exploring off-the-shelf models or libraries if that is the direction you want to go. Begin building the scaffolding code you will need to implement your solution. Watch some past SureStart Makeathon winners.

Day 17: Finish your Competitive Landscape Analysis. Continue your Market Research work. Continue your Tech Development. Start your Deck Development if you have not yet. Begin outlining your Pitch: aka, what you will say during your 5-minute final project presentations.

Day 18: Finalize the Tech Dev process. Share the required one page description of your project using the template provided. Continue all other threads of work. Keep practicing your Pitch.

Day 19: Finish your Market Research. Your Tech and Deck development should be near complete. Keep practicing your Pitch.

Day 20: Finish all threads of work. Keep practicing your Pitch.

Last day: Show up 15 minutes before the final presentation. Dress professionally. Present your team project at the Makeathon with confidence and pride. Celebrate successful program completion!
