
projectsfall2021's Introduction

ProjectsFall2021

This repository contains the list of links to the project repositories.

To add the link of your repository, you need to fork this repository, create a file containing the link to your repository, and create a pull request (PR) from this fork. Here are the detailed steps:

  1. Create a fork: Click Fork on the top right, and choose your account.
  2. Add the link in your forked repository:
  • In the repository you just forked, click Create new file.
  • Name it (The name of your project).md.
  • Add the link to your project repository in the following format: [The name of your project](The url of the repository) by your names. See ExampleProject.md for an example (click the pen icon to view its raw format).
  • Commit the new file.
  • Check that it has the same format as our example.
  3. Create a pull request from your fork:
  • Navigate back to the original repository (this one).
  • To the right of the Branch menu, click New pull request.
  • On the Compare page, click compare across forks.
  • Confirm that the base fork is the original repository.
  • Use the head fork drop-down menu to select your fork.
  • Type the title and description for your pull request.
  • Click Create pull request. Then you should be able to see your pull request in the Pull requests tab. We will accept your pull request once we see it.
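For reference, step 2 and the push behind step 3 can also be done from the command line. This is only a sketch: `YOUR-USERNAME` and the file name are placeholders, and the fork must already exist (step 1).

```shell
# Clone your fork (created with the Fork button in step 1)
git clone https://github.com/YOUR-USERNAME/ProjectsFall2021.git
cd ProjectsFall2021

# Create the file containing your project link
echo "[The name of your project](The url of the repository) by your names" > "YourProject.md"

# Commit and push to your fork
git add "YourProject.md"
git commit -m "Add link to YourProject repository"
git push origin main

# Finally, open the pull request on GitHub as described in step 3
```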

projectsfall2021's People

Contributors

adrianne-fu, amysnl, audreyyap, austin-li-1123, bhoomithakkar, caperstar, cheweilou, eric-w-h, ericland, ericzhouzhuo, hehongli08, indigo410, jcl373, jmarkus725, joeldsouza493, jwang115, jz2249, kinyawang, kothiyalp15, mdeledebur, mjdanbury, muyuanliu1, pppsdavid, ridhitbhura, sichengzhao, sz646, thekurz, yuetao-hou, yyrrhao, zgk2003


projectsfall2021's Issues

Proposal Review - jy478

Summary: This project aims to improve the pricing of US High Yield Corporate 5-year Credit Default Swaps (CDS) by estimating two key parameters in the pricing model with statistical methods. It utilizes three kinds of data as inputs: trading data (historical prices of CDS and bonds, along with bond ratings), fundamental data (financial statements), and macroeconomic indicators. The objective is to estimate the Probability of Default and the Recovery Rate.

Advantages: (i) Great impact for the financial industry: I think this topic is a pain point for derivative traders in the credit market, as the accuracy of pricing results can be very sensitive to the estimated parameters. An improved model can no doubt provide additional value in real-world practice. (ii) Proper datasets: All the factors that may determine the default and recovery rates are considered. Examples of input features and the databases to be used are well explained in the proposal. (iii) Focus on a specific market: The project chooses the US High Yield Corporate 5-year market for research, which simplifies the problem.

The following are not weaknesses of this project, just some potential problems I think you may face in your future work.

Potential problems: (i) Timestamps of different data types may not be aligned. For example, in some cases you may deal with daily trading data, monthly macroeconomic data, and quarterly earnings reports. The announcement dates of financial statements and macroeconomic data should also be considered. (ii) The output features for training your model may be somewhat ambiguous. For estimating the probability of default, are you planning to use the historical default rates of corporate bonds as outputs, or the implied default rates from CDS spreads? For estimating the recovery rates, which output do you want to use? Are you planning to use unsupervised learning for classification? (iii) As mentioned in your proposal, you may estimate the parameters by company. How can you model the correlation of defaults in your pricing?

All in all, this project is very interesting and valuable. I hope you succeed!

Final Report Peer Review

A well-posed albeit standard question, with interesting background information. I love the fact that they constructed some of their own data and brought it together, meaning they will get unique results. Good data analysis, preprocessing, and visuals. The analysis comprehensively considered outliers and explained the value of the different models. There is a good discussion of feature importance.
They considered unique factors and made viable predictions using them. I agree with their future work section. I think they came up with some creative new ideas as they went along in this project.

Peer review for "Stock Prediction Using Foundamentals"

Their project is about quantitative finance. It aims to find potential methods to improve the practice of interpreting the operating performance of a public company. They want to explore how the operations of a public company affect its stock returns so that they can gain fresh insights from the financial market. The dataset they are using is the NYSE S&P 500 company fundamentals Kaggle dataset, which contains the stock prices, fundamental financial metrics, and descriptions of each security from 2010 to 2016.
Here are three things that I like about this proposal:

  1. This proposal clearly states their objective and what dataset they will use.
  2. Although many groups have used the same data, this group identified those projects' drawbacks and plans to use a different method.
  3. The idea of trying to correlate stock returns with macroeconomic data is interesting.

Three things that need improvement:

  1. To deal with noise, they plan to simply remove all non-finance companies; the results will hence lose generality.
  2. I think it would be better if they talked more about what value this project can bring and which organizations would benefit from reading it.
  3. If this proposal were presented to an outsider, he/she may not understand the financial jargon they use, such as "fundamental metrics".

Final Peer Review

This project focuses on finding the relationship between government regulations and a country's disease and death share attributed to drug use. They also try to predict the impact of drug leniency in the United States by considering its effects in other nations.

Things I like about the project:
The group did well in explaining how they preprocessed their data. They adopted methods like taking the log of the data to transform it into a normal-like distribution, and visualizing the data.
They clearly outline potential solutions, future improvements, and data scarcity they encountered while formulating their model.
The visualizations in their report are strong, as is their treatment of the fairness of their model.
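The log transform mentioned above takes only a few lines. This sketch uses a synthetic right-skewed sample (not the group's actual data) just to show the effect:

```python
import numpy as np

# Synthetic right-skewed sample, standing in for the report's skewed data
rng = np.random.default_rng(0)
raw = rng.lognormal(mean=3.0, sigma=1.0, size=1000)

transformed = np.log1p(raw)  # log(1 + x); safe even when values are 0


def skew(x):
    """Sample skewness: third standardized moment."""
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)


# The transformed data is far less skewed, i.e. closer to normal
print(skew(raw), skew(transformed))
```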

Improvements:
The group could have used more techniques from the class, like using some nonlinear models for prediction purposes.
The group could have mentioned how they split the dataset for training, validation, and testing.
Moreover, I would also suggest adding a table to summarize the results obtained from different models, which will make the comparison more accessible and straightforward.
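The split the review asks about might look like this with scikit-learn; the 60/20/20 ratio and the toy arrays are illustrative assumptions, not the group's actual setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for the group's dataset (feature matrix X, target y)
X = np.arange(100).reshape(50, 2)
y = np.arange(50)

# 60/20/20 split: carve off 40% first, then halve it into validation and test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 30 10 10
```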

Final Report Peer Review

This project aims to predict the sale price of art pieces based on a number of features. This helps to protect sellers and buyers against prices far away from the fair value. I think the project idea itself is very interesting and creative.

Things I like:

  • I like how creatively you chose the features. For example, your team added the last sale price of the painting and the price of the last painting by the same artist.

  • The interpretation of the clustering part is very interesting.

  • The preprocessing part is clever, especially on the part of reducing the number of features that represent auction houses.

Things that can be improved:

  • Maybe give more details about the features that you use to make predictions.

  • For tuning hyperparameters for tree-based ensemble models, grid search might be a good idea to test more parameters and figure out the best ones.

  • Since there are so many features being considered, it might be useful to calculate the feature importance as a byproduct of Decision Trees to get a sense of what factors are most important in determining art prices.
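The grid search suggested above could be sketched with scikit-learn's GridSearchCV; the synthetic data and the parameter grid here are illustrative, not the team's actual features or settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the art-price data (features and prices are made up)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=120)

# Exhaustively try every parameter combination, scored by 3-fold cross-validation
param_grid = {"n_estimators": [50, 100], "max_depth": [2, 4, None]}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=3)
search.fit(X, y)

print(search.best_params_)
```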

Final Report Peer Review

Comments on presentation:
I would like to list a few areas of improvement:

  1. There was no clear definition of the problem and motivation at the start of the presentation.
  2. It is not clear how each data series is incorporated. It would be better to discuss what the data looks like and what feature engineering is used.
  3. There is no discussion of methodology. It would be better to discuss what kind of linear regression and logistic regression are used and how the hyperparameters are tuned.
  4. The presentation could incorporate more plots rather than just the regression coefficient table.
  5. There could be more explanation of the coefficients on the other variables and what they indicate.
  6. The conclusion is not well explained, and future scope is not discussed.

Comments on the report:
Motivation:
The discussion seems comprehensive and large in scale, but the study is only about gender. This is not consistent with what the project aims to examine.

Data:
The choice of independent variables is sound. However, there does not seem to be any distinction between the dependent and independent variables. Also, none of the feature engineering tricks discussed in class are applied here. You have the opportunity to apply, for example, one-hot encoding on patent class and application year, or to use matrix completion to impute missing data.
I also do not really understand why the most recent data, from 2014 to 2020, would be missing.
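The one-hot encoding suggested above is a one-liner in pandas; the column names and values here are hypothetical stand-ins for the patent data:

```python
import pandas as pd

# Hypothetical patent rows; column names and values are illustrative,
# not taken from the project's actual dataset
patents = pd.DataFrame({
    "patent_class": ["chemistry", "software", "chemistry"],
    "application_year": [2011, 2012, 2011],
})

# One indicator column per category value, replacing the original columns
encoded = pd.get_dummies(patents, columns=["patent_class", "application_year"])
print(sorted(encoded.columns))
```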

Methodology:
I understand that you do not wish to estimate anything, but it seems like a wasted opportunity. I would like to see how your model predicts out-of-sample to examine whether it can be generalized at all outside of the designated years.

Results and conclusion:
I think the results are sound and the conclusion is convincing. However, there is not really a great deal of data science technique being incorporated, and the project seems skeletal. Again, splitting your available data into in-sample and out-of-sample sets would be great for testing your out-of-sample performance. There is also a general lack of informative plots about your results, which makes the report very hard to read.

The font and line spacing seem purposely enlarged to fill 8 pages.

Peer Review

This project is about exploring the relationships between the operations of a public company and its stock returns. They are using the NYSE S&P 500 company fundamentals data from Kaggle and trying to fix the mistakes that others made when tackling this problem.

(1) I like the part on “what we are doing differently”, but I think the only difference they propose is how to select the data. Does that mean everything else they are going to do is the same as what others did on the Kaggle website? I think they should elaborate more on what they are going to do besides data selection, for instance which EDA methods or which models they plan to use.
(2) I also like that they go through the dataset they are going to use thoroughly and explain why they are choosing only part of it. They also propose which data they are going to use for model testing.

Peer Review for midterm report

The project aims to predict/forecast the realized volatility of the S&P 500 index over different market regimes. The data used is 1-minute tick data for the S&P 500 from Kaggle. For predicting volatility, this group considers not only market variables but also macroeconomic variables and lagged volatility terms, which helps increase the accuracy of the prediction.

Advantages:
In the exploratory data analysis part, this group covers every feature and makes a large number of plots based on the different features. They also include a description of how they deal with missing data. They even make a comparison between China and the US.
Besides complete data visualization, this group did a great job on feature engineering. For the preliminary model they normalize 6 features and impute missing values for 2 features before training their model.
While fitting the model, a preliminary model is built first and then revised, with reduced multicollinearity as well as more significant features.

Potential improvement:
They could potentially add interaction terms to increase the accuracy of the model.
They could potentially try different methods such as bagging or boosting; some algorithms do not require much manual feature selection, which would make things easier.
They could spend less time on data visualization for features like the US dividend indicator, because these are not very explanatory when visualized.

Peer Review

This project is trying to predict the price of oil, plain and simple. They will be using Bloomberg data for WTI Crude futures, macro indicators, and other indicator variables for natural disasters.

I like that the proposal's objective is clear and concise. The dataset being used is large and easily obtainable as well. Finally, there is undoubtedly a business need to make sense of such a volatile, unpredictable asset with methods more sophisticated than just using time series forecasting.

However, is the business need really what the project claims, which is to help prevent an oil shortage? Accurately predicting the future price of oil could be useful to either hedge against these price changes or to speculate on them, but preventing an oil shortage is a political and foreign policy issue beyond the scope of machine learning. I would restructure the problem statement to drop the grander issue of a shortage and focus purely on price prediction for financial and risk management purposes. If you are looking at oil dynamics on a global level, you may want to look at more than just WTI Crude prices (e.g., Brent Crude as well). Finally, the project doesn't state which methods or algorithms it will use; will it be mostly regression?

Peer Review-JC2498

Exploring news engagement is essential for a news company to identify its business strategy (such as which article outlets to place advertisements on) by predicting the popularity of articles. Therefore, I think this topic is very useful. The dataset they will use is "Internet news data with user engagement". This dataset is available online, so I checked it and found that it includes many different sources within two months and covers different fields of content, so the dataset is representative for analysis. They mentioned they are going to use both classification and regression models to predict the popularity of articles, which I think is a good point: not only giving a classification of "good" or "bad" but also predicting levels of popularity, which is useful for a news company to track. The improvement I suggest is to consider the content field carefully when predicting, since the article's field is a primary feature that influences the result. An NLP model is also a good choice for this problem, since it would focus on the content of the article and lead to more precise predictions.

Final Report Peer Review

This project aims to predict the levels of damage to buildings in Nepal as a result of earthquakes, and uses a dataset called “Richter’s Predictor: Modeling Earthquake Damage.” The goals of this project are to do EDA and examine the correlation between damage grade (on a scale of 1 to 3) and other variables, as well as to fit multiple regression models and fit decision trees.

The group standardized some variables and converted some others to binary. In doing so, they were able to get a clear understanding of which variables played important roles in predictions and which did not. In a correlation heatmap, the group found which variables had the highest correlation with damage grade. They then generated scatter plots and histograms to visually represent the data in a clear and orderly fashion. From these plots, they were able to identify skews in the distributions. Though their regularized regression models were not as accurate as they had hoped, they were able to identify and address reasons that contributed to the shortcomings. The multinomial linear regression model used hyperparameter tuning on 5 validation sets, and after refitting with the optimal hyperparameters it achieved an accuracy of 74.1%. The decision trees and random forests used had an accuracy of 88% on the training data and 72% on the testing data.

What I liked:
All methods are well explained, and the methods used show deep understanding of the class material and the implications of using the specific models.
I appreciated the inclusion of some of the equations, as it reminded me exactly of what the functions represented and what values they were working with.
The depth in the discussions of results was great. I was able to clearly understand how all of the identified features affected the overall outcomes of the models.

What could use improvement:
This is a tough section since I think this project is very well done!
Including some of the plots off to the side instead of referring to the appendix could be helpful (though with the page limit I see why this decision was made).
For the confusion matrices in Figure 5, some of the colors might be hard to differentiate for some, so a different color scale might be useful.

Overall, I think this is a great project and it was very fun to read and see how your models performed. Good job and enjoy your break!

Final Peer Review

This project looks into prior medical data regarding readmissions from the MIMIC-III Clinical Database to predict readmission rates in patients. The goal of this project was to use Linear Regression and Random Forests to generate a predictive model regarding readmissions.

What I Liked:
(i) The Data Preprocessing steps were thoroughly explained, and the graphs depicted in the section inform the reader about the distribution of the data.
(ii) The explanations of using Linear Regression and Random Forests were informative and effectively assessed the accuracy of each model.
(iii) The Fairness section was interesting and assessed the implications of a certain model.

Some Suggestions:
(i) Could any models have been further improved with some of your findings? Could hyperparameters be tuned even further?
(ii) Though the paper is informative, I would have liked to see more data and graphs regarding the model creation and assessing your model’s overall performance after initial tests were run
(iii) Could another model have been introduced instead of Linear Regression or Random Forests?

Peer review for "What is the Federal actually doing?"

Their project is about macroeconomics. It tries to answer the question of what the effects of the Federal Reserve's actions are on asset prices and market indicators. The datasets they are using cover Fed actions including quantitative easing, the federal funds rate, and press releases, which can be sourced from government-run websites. They are also using market data.
Three things that I like about this proposal:

  1. The report clearly shows the objective and why this project is important.
  2. I like the way they structure the proposal: they ask questions and clearly answer each corresponding question.
  3. The table in the proposal makes the report even easier to read.

Three things that need improvement:

  1. The proposal didn't give a thorough description of what the datasets look like.
  2. Since it is a formal project proposal, I think it's better to include a formal title and names.
  3. The project seems too complex and too general. It would be easier to narrow the scope and focus on some specific actions of the Federal Reserve.

Peer proposal review

The topic of this project is the actions to take in order to reduce road accidents. Examples of data sources that this project uses are a dataset from a Kaggle competition and one from asirt.org. The objective is to analyze the accidents, evaluate their causes, and try to prevent them.
One thing I like about the proposal is that the questions listed are very relevant and specific. I think these questions can be good guidance for completing this project. Additionally, the team is able to present three datasets at this stage, which means there are at least a few things they can be confident of being able to work on.
After reading the proposal, however, I have concerns as to why this project is significant enough for any sort of investment, as well as why the team thinks they are likely to succeed in answering their research questions. From the point of view of a CEO, it is hard to decide whether to approve this proposal.

Peer Review

The project is about building a classification model to determine whether to grant mortgage loans to the applicants. The group is using a dataset from 21st Mortgage Corporation in California, which contains a wide range of variables. These variables have been filtered and 7 were selected for further consideration.

Things I like about the proposal

  • The topic is of great usefulness in practice, since the prediction could help the company save a large amount of time and effort in examining each applicant's profile.
  • I like that the group has already taken a closer look at the data and listed the features they have selected.
  • The proposal points out that the model will be valuable for financial institutions in reducing bad debt occurrence rates, which serves the purpose of explaining the application, dataset, and method well.

Improvements

  • I am wondering about the time period used for the training data. In addition, it might be more comprehensive if the effect of the pandemic on credit approval were taken into consideration, such as how your model, if based on pre-pandemic data, will perform now and in the future.
  • The proposal mentioned that they selected 7 features that are popular; I would suggest explaining the rationale for choosing these 7 features and what methods were adopted in the process.
  • I am wondering whether there is a specific type of applicant you are looking at: are the loans all for home mortgages, or for business purposes?

Peer Review Comment

The project aims at establishing a forecasting model for Airbnb prices based on room features. The team proposed to use Airbnb data with over 30,000 rows and 74 features, plus calendar and listing information, and to apply Big Messy Data methods including data cleaning, feature engineering, and linear and nonlinear models. The final goal of the project is to build a solid model that can accurately suggest a price for an Airbnb room.

This is an interesting proposal, one that I had never imagined when looking for Airbnb rooms before. It tries to build a quantitative link between room features and room price, and if robust enough this can serve as a reference for both industry and personal use. The data they propose to use is definitely great practice in handling messy data, as it contains numbers, text, locations, and photos, with plenty of work to do in processing.

Concerns also exist. The first is that it is hard to purely link price and room features, because the supply/demand market involves more factors than the room features per se (say, revenue management techniques, macroeconomic considerations, etc.), so whether the project can produce the desired model is a major concern. The second is how to properly use the features, as it can be hard to quantify text, locations, and photos in practice. The third flaw is in the proposal itself: you propose to help hosts decide their price in the "Problems" part, but then turn to helping people find a good-value Airbnb room in the following "Our Goal" part, which puzzles me. There are also several typos in the proposal, so it is worth reviewing it carefully before starting your work.

Peer review by CT648

This project aims to save hardware memory by using dimension reduction methods. In their detailed description, they have listed everything they plan to do, including URL links to their test datasets and reference papers. The interpretation of the techniques is easy to understand, while the project itself is obviously not an easy one.

I think there is one thing they can improve. Since the idea of the proposal states a big unsolved problem, they could briefly introduce how those dimension reduction methods work and how they relate to their problem.

Beyond the original goal of finding ways to do dimension reduction, they also seek to interpret how their method successfully reduces dimension on real datasets. This new goal might be a burden, since it could be a problem for another project on its own.

Midterm report review

The project uses Airbnb data to predict prices.

What I liked about the report:

  1. The authors provide a clear and concise description of how the missing values were dealt with
  2. They demonstrate efficient use of histograms and correlation analysis in the project and derive meaningful insights from these plots
  3. They start with a simple linear regression modelling technique to generate basic insights about the associations of the covariates with the outcome variable and also identify the shortcomings of this model

What can be improved:

  1. It would have been nice to include a brief introduction about the main aim of the project and to provide an intuition for the outcome and covariates used
  2. A descriptive summary (mean, median, standard deviations) of all variables along with the linear regression summary analysis would provide a holistic view of the entire dataset
  3. It would be meaningful to incorporate the rationale behind using XGBoost to eliminate correlated features

Proposal Peer Review - zy225

Summary of the Project

This project identifies factors that contribute to low life expectancy by constructing multiple linear and non-linear models. The dataset they are using is provided by the WHO and contains factors that might influence life expectancy in 193 countries from 2000 to 2015. The objective of the project is to provide suggestions to a specific country on which areas it should focus on to increase the life expectancy of its population.

Things I Like about the Proposal

First of all, the dataset is of high quality in that it comes from the WHO and is comprehensive in both time and space. Secondly, the proposal explains the practical significance of the project very well. Lastly, it attempts to address the problem by constructing multiple models.

Three Areas for Improvement

  1. There are three questions to answer, which might be too much work.
  2. The project is going to construct linear and non-linear models for regression tasks, but none of the questions the project wants to answer is related to a numerical outcome.
  3. Using regression models cannot provide answers to the third question.

Midterm Report Peer Review

This report is about improving algorithmic fairness in COMPAS. I do think the author should elaborate a little more on what COMPAS is and what the historical bias behind it is.

In the preliminary data analysis part, I really like the histograms. I think they are simple and clear, and they reveal some insights into the preliminary data.

The report used logistic regression to fit the initial data, which I think suits the situation very well. The False Positive and False Negative rates indicate there is bias in the COMPAS algorithm between white and black defendants.

The future plan looks promising. Overall I think this is a clear and concise report.

Final Project Review

This project seeks to predict the price of Bitcoin using different technical indicators, à la factor investing. I like how they implement their results in a trading strategy and show its performance relative to buy-and-hold. Their model outperforming the buy-and-hold strategy has positive implications. Likewise, their feature selection was very robust, and they took good care to discuss the implications of picking those features (e.g., are they correlated?). They also took time to discuss how each of the different models they used works, which is good for readers who are not familiar with them and increases the trustworthiness of their final model decision. Finally, I like how they adapted to their initial results and decided to switch direction, going the classification route, and clearly explained why they made this decision.

It was impossible to see the axes of the feature distributions in the presentation, so it was hard to get a sense of the distributions of some of the features. I would have liked to see discussion of the steps taken for missing value imputation, preprocessing (or at least a mention that none was needed, if that was the case), and the train/validation/test split, especially because they are working with time series data. The lack of discussion on these topics decreases the trustworthiness of the final model. They also do not discuss why they would expect certain classification models to outperform others: what is it about the implementation or the properties of XGBoost, for example, that allows it to produce the highest test accuracy? Finally, I thought the formatting and appearance of the report were very clean and professional, but I did notice some typos and oddly worded sentences.

Overall a very good application of ML algorithms to a trending, interesting topic.

Final Report Peer Review

This project aims to find the most suitable way for NY citizens to treat septicemia. They use two types of data: inpatient data and hospital data.

Things I like:

  1. They combined three features and created a new feature in the data preprocessing part.
  2. For categorical data transformations, they used different encoding methods based on the type of data. They also dealt with miscellaneous data in a clever way.
  3. I like the way they visualize how “fair” their model is by showing the cost distribution for different groups.

Suggestions:

  1. I’m kind of confused about this sentence: “some features were selected by ourselves according to the assumptions on features which could have hi effects on the cost of visiting hospitals for each patient”. What method does your team use exactly to choose these features? Is it by correlation coefficient, by data visualization, or simply by guessing that some features are bound to be significant in predicting y? I think it would be better if you elaborated more.
  2. TYPO: Fairness “Matrics”
  3. I got lost in the part where your team uses a random forest for feature selection. How were the features chosen? And why show Fig. 4? The report does not explain what Fig. 4 is meant to convey.
    I would also like more interpretation of your final models. You compare your models against the assumptions made at the very beginning, which is good, but I am still unsure which features matter most. A plot of the feature importances might make this clear.
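The feature-importance readout suggested above is a one-liner once a forest is fitted. The sketch below uses synthetic data and invented feature names; only the `feature_importances_` pattern itself is the point:

```python
# Minimal sketch of a random-forest feature-importance check.
# The data is synthetic: column 0 drives the target, so it should dominate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Hypothetical feature names for display purposes only.
for name, imp in zip(["length_of_stay", "age_group", "admission_type"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Plotting these values as a bar chart would directly answer the "which features are most important" question.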

Project Proposal: Peer Review

In this Bitcoin price prediction project, the team proposes to leverage multi-factor pricing models to predict price movements of Bitcoin. To do so, the team must deal with high volatility in the data and identify metrics that can aid prediction. The project will use price, trading volume, Google search data, and more, together with feature engineering, regression, and classification, to pursue this goal.

There are many aspects of this proposal that I appreciate. First, an accurate price prediction model for Bitcoin would be immensely valuable to an individual or a firm. Second, it is easy to imagine the model being tweaked and repurposed into a more general cryptocurrency pricing model, which would magnify the project's benefits considerably. Beyond these benefits, it is clear that the team did their research before writing this proposal: they have a comprehensive plan for moving forward and a sense of how effective the resulting model could be.

Concerns:
How will the usefulness of the model be gauged? That is, what level of predictive accuracy is needed to consider the project a success?
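One hedged way to answer this is to measure directional accuracy against a naive baseline (e.g. "tomorrow moves the same way as today") and call the model useful only if it beats that baseline. The sketch below uses synthetic daily returns; the baseline choice is my suggestion, not the team's:

```python
# Sketch: directional accuracy of predictions vs. a naive persistence baseline.
import numpy as np

def directional_accuracy(actual_moves, predicted_moves):
    """Fraction of periods where the predicted sign matches the actual sign."""
    return float(np.mean(np.sign(actual_moves) == np.sign(predicted_moves)))

rng = np.random.default_rng(1)
actual = rng.normal(size=250)     # synthetic daily price changes
naive = np.roll(actual, 1)        # "same direction as the previous day"
print(f"naive baseline accuracy: {directional_accuracy(actual, naive):.2f}")
```

A model whose directional accuracy does not clear this baseline is hard to call a success, regardless of its regression error.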

I have some initial concerns about the described features. First, Google search interest in Bitcoin lags behind price movements, which could lead to misleading predictions, particularly after large price swings. Second, I would be careful about using the gold price as a feature. While both gold and Bitcoin have a finite supply, gold is traditionally much less volatile, and the correlation between gold and Bitcoin has been known to flip between positive and negative, which makes me question its usefulness as a feature.
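The lag concern can be partly defused by only feeding the model *past* search values, i.e. explicitly lagged features. The sketch below is illustrative; the series values and lag lengths are made up:

```python
# Sketch: build lagged search-interest features so that only values from
# previous days are used to predict today's price move.
import pandas as pd

df = pd.DataFrame({
    "price_return": [0.01, -0.02, 0.03, 0.00, -0.01],
    "search_interest": [50, 55, 80, 78, 60],
})

# Use search interest from k days ago as a feature for today's target.
for k in (1, 2):
    df[f"search_lag_{k}"] = df["search_interest"].shift(k)

df = df.dropna()  # the first k rows have no lagged value
print(df[["search_lag_1", "search_lag_2"]].values.tolist())
```

Choosing the lag length empirically (e.g. by cross-correlation on the training period only) would also quantify how far behind the search signal actually runs.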

I am also concerned about the long-term benefits of the project. Cryptocurrency, as a virtually unregulated space, could change drastically in the near future as regulation catches up with the technology. This could change the driving factors of Bitcoin's pricing and render the model created in this project obsolete.
