ablanco1950
Name: Alfonso Blanco.
Type: User
Bio: Telecommunications engineer with long experience as a software developer
ABALONE_DECISIONTREE_C4-5: A procedure that uses the Abalone file (https://archive.ics.uci.edu/ml/datasets/abalone) for training and test. After evaluating the entropy of each field, a tree is built with nodes corresponding to fields 0, 7 and 4, and branch values at each node: 1 for the root node (field 0), 29 for the next node in the hierarchy (field 7), and 33 for the last node (field 4). The values of each field are mapped to indices, each of which may cover several real values; these indices are what is used to compute the entropies and to branch at each node. A hit rate of around 58% is obtained, in the low range of the existing procedures for this multiclass file, which are detailed in the documentation downloadable from https://archive.ics.uci.edu/ml/datasets/abalone. Increasing the depth of the tree brought no significant improvement, nor did applying AdaBoost. Resources: Spyder 4. The file abalone-1.data, downloaded from https://archive.ics.uci.edu/ml/datasets/abalone, must be on the C: drive. Functioning: From Spyder run AbaloneDecisionTree_C4-5-ThreeLevels.py. The screen shows the number of hits and failures, and the file C:\AbaloneCorrected.txt lists the records of the test file (records 3133 to 4177 of abalone-1.data) with an indication of whether the predicted class coincides with the real one, the predicted class value, and the record's order number in abalone-1.data. Also attached are AbaloneDecisionTree_ID3.py and AbaloneDecisionTree_C4-5_parameters.py, which were used to calculate the parameters needed to build the tree. Cite this software as: ** Alfonso Blanco García ** ABALONE_DECISIONTREE_C4-5 References: https://archive.ics.uci.edu/ml/datasets/abalone
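The node selection described above rests on comparing per-field entropies. A minimal sketch of that computation (function names and the toy data are illustrative, not the repo's actual code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, field):
    """Entropy reduction obtained by splitting on the values of one field."""
    total = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for value in set(row[field] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[field] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder
```

Picking the field with the highest information gain at each level yields the field 0 / field 7 / field 4 hierarchy the description mentions.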
ABALONE_NAIVEBAYES_WEIGHTED_ADABOOST: Two procedures that use the Abalone file (https://archive.ics.uci.edu/ml/datasets/abalone) for training and test. Both start by processing the training part, calculating the frequencies of each value of each field and applying a Naive Bayes probability calculation. In a second step, one procedure uses that result to apply per-field weights to the wrongly or correctly classified records. The other procedure uses AdaBoost, through the adaboost routine published at https://github.com/jaimeps/adaboost-implementation (Jaime Pastor). A hit rate of around 58% is obtained, in the low range of the existing procedures for this multiclass file, which are detailed in the documentation downloadable from https://archive.ics.uci.edu/ml/datasets/abalone.
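The first step above, Naive Bayes from per-field value frequencies, can be sketched as follows (a minimal illustration with add-one smoothing; the repo's actual counting and smoothing choices may differ):

```python
from collections import defaultdict

def train_naive_bayes(rows, labels):
    """Count per-class priors and per-(class, field, value) frequencies."""
    class_counts = defaultdict(int)
    value_counts = defaultdict(int)  # (class, field, value) -> count
    for row, label in zip(rows, labels):
        class_counts[label] += 1
        for field, value in enumerate(row):
            value_counts[(label, field, value)] += 1
    return class_counts, value_counts

def predict(row, class_counts, value_counts):
    """Pick the class maximizing prior * product of per-field likelihoods."""
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for label, count in class_counts.items():
        p = count / total
        for field, value in enumerate(row):
            # add-one smoothing so an unseen value does not zero the product
            p *= (value_counts[(label, field, value)] + 1) / (count + 1)
        if p > best_p:
            best, best_p = label, p
    return best
```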
Config files for my GitHub profile.
License plate detection combining cv2's findContours and a Haar cascade, together with the application of an extensive set of filters.
BFS-no-conventional: Search according to the BFS algorithm by an "unconventional" method ("conventional" meaning the version downloadable from http://www.paulgraham.com/acl.html, code file acl2.lisp). In this "unconventional" version, all the paths that lead to the goal are obtained following the BFS strategy, not only the first path, which will be the shortest. Since several paths may be equally short, this also covers the case in which branches are weighted and the shortest path alone is not decisive. A program BFS-no-conventional-only-first-path.cl is also provided that obtains only the first path reaching the target (actually the same program with the value of the option parameter modified); the result is an increase in time but a significant reduction in memory occupied, which is useful for large graphs. The code has some lack of "orthodoxy", such as the use of global variables. Requirements: Allegro CL 10.1 Free Express Edition. Load the programs, select the code and choose Tools > Incremental Evaluation; then select the test cases and choose Tools > Incremental Evaluation again. References: ANSI Common Lisp by Paul Graham, http://www.paulgraham.com/acl.html (code file acl2.lisp). Diverse practice material from the Artificial Intelligence course of the Higher Polytechnic School, Computer Engineering, Autonomous University of Madrid.
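The repository's code is Common Lisp; as a language-neutral illustration, the "all paths, not only the first" idea can be sketched in Python like this (a minimal sketch, not the repo's implementation):

```python
from collections import deque

def bfs_all_paths(graph, start, goal):
    """Breadth-first search that collects every path reaching the goal,
    not only the first (shortest) one found."""
    paths = []
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            paths.append(path)
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in path:  # avoid cycles within a single path
                queue.append(path + [neighbor])
    return paths
```

Because the queue is explored level by level, the shortest paths appear first in the result, and ties (several equally short paths) are all present.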
From the dataset https://universe.roboflow.com/roboflow-100/bone-fracture-7fylg, a model based on yolov10 is obtained with that custom dataset to indicate fractures in X-rays.
Project that detects the brand of a car, among 49 brands, appearing in a photograph, with a success rate of more than 70% (using a test file that was not involved in training or validation, i.e. unseen data), and that can be run on a personal computer.
Project that detects the brand of a car, among the 49 brands of the Stanford Cars file, appearing in a photograph, with a success rate of more than 80% (using a test file that was not involved in training or validation, i.e. unseen data), and that can be run on a personal computer.
Project that, from photos of cars, estimates their detailed colors (not just basic colors) based on the maximum values of the R, G and B histograms of each photo.
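The per-channel histogram-maximum idea can be sketched in a few lines of numpy (an illustrative sketch; the real project presumably builds its histograms from image files, e.g. via cv2):

```python
import numpy as np

def dominant_rgb(image):
    """Return the (R, G, B) value whose histogram bin has the highest
    count, taken independently per channel of an HxWx3 uint8 image."""
    return tuple(
        int(np.argmax(np.bincount(image[..., c].ravel(), minlength=256)))
        for c in range(3)
    )
```

Taking each channel's peak independently is what allows "detailed" colors: the result is a specific (R, G, B) triple rather than a coarse color category.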
Project that detects the model of a car, among the 196 models of the Stanford Cars file, appearing in a photograph, with a success rate of more than 70% (using a test file that was not involved in training or validation, i.e. unseen data), and that can be run on a personal computer.
Project that estimates the distance of a car on a road from the relationship between the real size of the car and its apparent size in the video. It also estimates the lane the car is traveling in at any given time from the angle between the position of the car and the camera, and can even guess lane-change intentions.
This work is an extension of the project https://github.com/ablanco1950/LicensePlate_RoboflowAPI_Filters_PaddleOCR, adding the ability to detect speed.
This work is an extension of the project https://github.com/ablanco1950/LicensePlate_Yolov8_Filters_PaddleOCR, adding the ability to detect speed and to track and count cars.
Creation of a model based on yolov8 that uses the file downloaded from https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format/data as a custom dataset to detect traffic signs. The detected signs can then be recognized using the project https://github.com/ablanco1950/RecognizeTrafficSign.
Project that positions an object in a video following a road lane.
DPLL_propositional_logical_inference: Starting from a CNF (Conjunctive Normal Form), that is, a series of clauses (literals joined by the or operator) joined by the and operator, applies the DPLL algorithm and determines the values of the literals that satisfy the CNF. A clear explanation of the DPLL algorithm can be found at http://www.cs.us.es/~fsancho/?e=120; the tests are based on the examples that appear in a NetLogo link on that page. If you have an expression in FBF form (with the connectives => and <=>) you can convert it to a CNF, which is the input to this project, by downloading the project https://github.com/bertuccio/inferencia-logica-proposicional. That project can be completed with the DPLL algorithm by adding the instructions from the definition of the DPLL function to the end, and by activating the instructions that appear in the function pasa-lista-FBF-to-lista-FNC, which serves as an interface between the two projects. In fact, DPLL_propositional_logical_inference is intended to complete Propositional Logical Inference with the DPLL algorithm and to share functions. Requirements: Allegro CL 10.1 Free Express Edition. References: https://github.com/bertuccio/inferencia-logica-proposicional by Adrián Lorenzo Mateo (Bertuccio), who uses material from the Artificial Intelligence practices at the Higher Polytechnic School of the Autonomous University of Madrid, Informatics Engineering. http://www.cs.us.es/~fsancho/?e=120 by Fernando Sancho Caparrini, Higher Technical School of Computer Engineering of the University of Seville.
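The repository's code is Common Lisp; for readers unfamiliar with the algorithm, here is a compact Python sketch of DPLL over a CNF (clauses as sets of signed integers, negative meaning negated; a minimal illustration, not the repo's code):

```python
def dpll(clauses, assignment=None):
    """DPLL satisfiability: return a satisfying {var: bool} dict or None."""
    if assignment is None:
        assignment = {}
    # simplify: drop satisfied clauses, strip literals falsified so far
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        remaining = {l for l in clause if abs(l) not in assignment}
        if not remaining:
            return None  # empty clause: conflict under this assignment
        simplified.append(remaining)
    if not simplified:
        return assignment  # every clause satisfied
    # unit propagation: a one-literal clause forces its variable
    for clause in simplified:
        if len(clause) == 1:
            lit = next(iter(clause))
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    # branch on the first unassigned variable, trying both values
    lit = next(iter(simplified[0]))
    for value in (True, False):
        result = dpll(simplified, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None
```

For instance, the CNF (x1 or x2) and (not x1) and (not x2 or x3) forces x1 false, then x2 true by unit propagation, then x3 true.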
Detection of fractures in radiographs by obtaining the X and Y coordinates of the center of the fracture, applying ML (SVR) to predict each coordinate separately. It is applied to a selection of data from the Roboflow file https://universe.roboflow.com/landy-aw2jb/fracture-ov5p1/dataset/1
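The one-regressor-per-coordinate structure can be sketched as follows. The repo uses SVR; here ordinary least squares stands in for the regressor so the sketch stays dependency-light, and the feature layout is illustrative:

```python
import numpy as np

def fit_coordinate_models(features, centers):
    """Fit a separate regressor for the X and for the Y coordinate of the
    fracture center (least squares here; the project uses SVR instead)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    wx, *_ = np.linalg.lstsq(X, centers[:, 0], rcond=None)
    wy, *_ = np.linalg.lstsq(X, centers[:, 1], rcond=None)
    return wx, wy

def predict_center(feature_row, wx, wy):
    """Predict (x, y) by evaluating the two independent models."""
    x = np.append(feature_row, 1.0)
    return float(x @ wx), float(x @ wy)
```

Treating X and Y as two independent regression targets is exactly the "separately" part of the description.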
From a selection of data from the Roboflow file https://universe.roboflow.com/landy-aw2jb/fracture-ov5p1/dataset/1, which represents a reduced but homogeneous version of that file, a model is obtained using an adaptation of the project https://www.kaggle.com/code/nyachhyonjinu/yolov3-test instead of any yolo model.
From a selection of data from the Roboflow file https://universe.roboflow.com/landy-aw2jb/fracture-ov5p1/dataset/1, which represents a reduced but homogeneous version of that file, a model is obtained based on yolov10 with that custom dataset to indicate fractures in x-rays.
Simple application of VGG16 for the recognition of images, obtained from LFW, of a limited number (15) of famous people, with good performance (accuracy above 80%).
Using the decision tree technique based on entropy calculation, this application classifies the HASTIE file with a hit rate higher than 99%.
Taking into account that the accuracy of statistical results depends on the accuracy of the input data, not only on the algorithm, a Hastie file has been created in which all the records have the correct class assigned, and tests of hit rate and sensitivity have been carried out.
HASTIE_NAIVEBAYES: from the Hastie_10_2.csv file obtained by the procedure described at https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_hastie_10_2.html, obtains a success rate of 88% in training and 84% in test. The main difference is that, in the statistical process, each field is sampled differently according to its contribution to the hit rate.
kNN-MIN: A Spark-based design of the k-Nearest Neighbors classifier for big data, using minimal resources and minimal code.
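Stripped of the Spark distribution layer, the classifier's core is small; a pure-Python sketch of the classification step (illustrative only, the repo distributes this over Spark):

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k):
    """Classify a query point by majority vote among its k nearest
    training points (Euclidean distance)."""
    dists = sorted(
        (math.dist(p, query), label)
        for p, label in zip(train_points, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In a Spark setting the distance computation is the part worth parallelizing, since it touches every training record per query.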
Lane detection using cv.matchTemplate function, a simpler system than the one usually used to process the image and detect contours. Furthermore, it does not require establishing a region of interest.
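cv.matchTemplate slides a template over the image and scores each window; with the normalized-coefficient method the score is a correlation coefficient. A numpy sketch of that scoring (a didactic re-implementation, assuming grayscale float arrays; the project itself calls cv2):

```python
import numpy as np

def ccoeff_normed(patch, template):
    """Normalized correlation coefficient between one image window and the
    template -- the score TM_CCOEFF_NORMED assigns at one position."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    if denom == 0:
        return 0.0  # flat window or flat template: no correlation defined
    return float((p * t).sum() / denom)

def best_match(image, template):
    """Slide the template over the image and return the top-left (y, x)
    offset of the highest-scoring window."""
    h, w = template.shape
    scores = {
        (y, x): ccoeff_normed(image[y:y + h, x:x + w], template)
        for y in range(image.shape[0] - h + 1)
        for x in range(image.shape[1] - w + 1)
    }
    return max(scores, key=scores.get)
```

Because the score is normalized, no region of interest or contour extraction is needed: the best-scoring window directly locates the lane marking pattern.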
An image recognition process contained in the LFW database http://vis-www.cs.umass.edu/lfw/#download is carried out with extreme simplicity, taking advantage of the ease of sklearn to implement the SVM model. Cascading face recognition is also used to refine the images, obtaining accuracy greater than 70% in the test with images that do not appear in the training.
A recognition process of images contained in the LFW database http://vis-www.cs.umass.edu/lfw/#download is carried out using two models: one based on the minimum distance between training and test image records, and another that is an adaptation of the Keras CNN model https://keras.io/examples/vision/mnist_convnet/. Both models are complementary. A module is also incorporated that takes advantage of the ease with which sklearn implements the SVM model.
Through the use of Contrast Limited Adaptive Histogram Equalization (CLAHE) filters, completed with Otsu filters, a direct reading of car license plates is achieved with success rates above 70% and an acceptable time.
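The Otsu step mentioned above picks a binarization threshold automatically from the histogram. A numpy sketch of the threshold selection itself (cv2's THRESH_OTSU does this internally; this re-implementation is for illustration):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance of the
    grayscale histogram (Otsu's method) for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = (np.arange(256) * hist).sum()
    best_t, best_var = 0, -1.0
    w0 = 0.0    # cumulative pixel count of the background class
    sum0 = 0.0  # cumulative intensity sum of the background class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue  # one class empty: split undefined
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

After CLAHE boosts local contrast, this kind of global threshold cleanly separates the dark plate characters from the light background, which is what makes direct OCR viable.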
License plate recognition based on FindContours.
From some files of images and labels obtained by applying the project presented at https://github.com/ashok426/Vehicle-number-plate-recognition-YOLOv5, the images of license plates are filtered through a threshold that allows better recognition of the license plate numbers by pytesseract. On 05/23/2022 a new version was introduced. On 07/04/2022 an ML version was added.