Yannick Stephan 👋
AI/ML Engineer | Senior Software Engineer
About Me:
💡 I am an accomplished AI/ML Engineer with a robust background in software engineering, and over the past several years I have honed my skills to specialize in machine learning.
📚 In my continuous pursuit of knowledge and skills, I completed a second Master's degree in Data Science and Artificial Intelligence at Mines Paris in collaboration with DataScientest. This academic journey strengthened my capabilities as an AI/ML Engineer and equipped me with comprehensive skills to excel as an ML engineer and LLMOps/LMOps specialist.
🔬 As an AI/ML Engineer, I have a proven track record of developing and deploying machine learning models that drive actionable insights and enable data-driven decision-making. I am experienced in the end-to-end development of ML systems, from data preprocessing and feature engineering to model selection, training, and deployment, and I am well-versed in a range of ML algorithms and frameworks, including TensorFlow, PyTorch, and scikit-learn.
💻 In addition to my expertise in AI and ML, I bring a wealth of knowledge in software engineering and mobile development.
🚀 With strong problem-solving and analytical skills, I thrive in challenging and dynamic environments. I am highly collaborative, possessing excellent communication skills and the ability to work effectively in cross-functional teams.
✉️ Get in touch on LinkedIn & Stack Overflow
🌍 I am fluent in English and French, and I have experience working in international settings in France, Spain, Canada, the United States, and Switzerland. 🇫🇷🇪🇸🇨🇦🇺🇸🇨🇭
🚀 My Projects:
Machine Learning (ML):
- 🧠 Brain MGMT Prediction: Comprehensive analysis and model development for MGMT prediction using an XGBoost ensemble with a U-Net-based feature extractor. | Kaggle |
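As a hedged illustration of the tabular stage of such a pipeline, the sketch below uses synthetic features standing in for the U-Net extractor's output and scikit-learn's gradient boosting standing in for XGBoost; the data, shapes, and hyperparameters are placeholders, not the notebook's actual code:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for U-Net-derived MRI features (one row per scan).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))
# Placeholder MGMT labels driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Rank-based metric, as is typical for binary medical-imaging labels.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```

In the real project, the boosting stage consumes features extracted from MRI volumes rather than random vectors.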
Deep Neural Networks (DNN):
- 🍷 Wine Quality Prediction: Utilized a DNN to predict wine quality based on physicochemical properties. | Kaggle |
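A minimal sketch of this kind of regression setup, with synthetic data in place of the real wine dataset and scikit-learn's `MLPRegressor` as a stand-in for the DNN:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in: 11 synthetic "physicochemical" features -> quality score.
X, y = make_regression(n_samples=300, n_features=11, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small two-hidden-layer network; architecture is illustrative only.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)  # R^2 on held-out data
```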
Computer Vision (CV):
- 🧠 Brain MGMT Prediction: Developed a CNN-based model for MGMT prediction. | Kaggle |
- 🌾 Rice Grain Classification: Employed a CNN to classify different types of rice grains. | Kaggle |
- 🚦 Traffic Sign Recognition (GTSRB): Developed a CNN-based model for real-time traffic sign recognition. | Kaggle |
- ✍️ Handwritten Digit Classification (MNIST): Achieved high accuracy in classifying handwritten digits using a CNN. | Kaggle |
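A minimal CNN of the kind used in these classifiers might look like the following PyTorch sketch; the layer sizes are an illustrative assumption, not the exact architectures from the notebooks:

```python
import torch
import torch.nn as nn

class DigitCNN(nn.Module):
    """Tiny CNN for 28x28 grayscale digits (MNIST-style input)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DigitCNN()
logits = model(torch.randn(4, 1, 28, 28))  # forward pass on a fake batch of 4
```

For GTSRB or rice-grain images, the input channels, image size, and `num_classes` would change, but the conv-pool-classify pattern stays the same.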
Generative Adversarial Networks (GAN):
- 🚲 Bicycle Image Generation: Exploration of generative models (GANs) and their capabilities, focusing on generating bicycle images. | Kaggle |
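For illustration, a single adversarial training step of a toy GAN, with random tensors standing in for real bicycle images and deliberately tiny networks:

```python
import torch
import torch.nn as nn

# Toy generator (noise -> flat "image") and discriminator (image -> logit).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 28 * 28) * 2 - 1   # stand-in for a batch of real images
noise = torch.randn(8, 16)

# Discriminator step: push real toward 1, generated toward 0.
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
g_loss = bce(D(G(noise)), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

A real image GAN would use convolutional networks and many such alternating steps; this only shows the two-player update structure.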
Natural Language Processing (NLP):
- 💬 Dialogue Summarization with PEFT: Explore LLM capabilities in dialogue summarization using PEFT. | Kaggle |
- 💬 Less Toxic Dialogue Summarization with PPO: Dive into less toxic dialogue summarization using PPO with LLMs. | Kaggle |
- 💬 Basic LLM Applications from Hugging Face: Showcase various applications leveraging LLMs from Hugging Face. | Kaggle |
- 💬 Embeddings, Vector Databases, and Advanced Searches: Explore vector embeddings and databases in LLMs for advanced searches. | Kaggle |
- 💬 Pinecone Databases: Explore Pinecone and LLM vector databases. | Kaggle |
- 💬 Weaviate Databases: Explore Weaviate and LLM vector databases. | Kaggle |
- 💬 Multi-stage Reasoning with LangChain: Enhance LLMs with LangChain for multi-stage reasoning. | Kaggle Part 1 | | Part 2 |
- 💬 QA on Own Data with LangChain and RAG: Apply RAG for question answering on your own data using NLP and LLMs with LangChain. | Kaggle |
- 💬 Fine-Tuning LLM with Trainer: Optimize LLMs through fine-tuning using the Trainer approach. | Kaggle |
- 💬 Fine-Tuning LLM with Trainer and DeepSpeed: Enhance LLMs through fine-tuning using Trainer and DeepSpeed. | Kaggle |
- 💬 LLMs with Society, Bias, and Toxicity: Investigate the intersection of LLMs with society, bias, and toxicity. | Kaggle |
- 💬 LLMOps and Pipeline (Dev, Stag, Prod): Implement LLMs using LLMOps and create a comprehensive pipeline for development, staging, and production. | Kaggle |
- 💬 Fine-Tuning LLM for QA with LoRA and Flan-T5 Large: Fine-tune LLMs for QA using LoRA and Flan-T5 Large. | Kaggle | | Hugging Face Model |
- 💬 Fine-Tuning Llama 2 with QLoRA: Fine-tune LLMs for chat using QLoRA and Llama 2. | Kaggle |
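The embedding and vector-search notebooks above all build on the same retrieval idea, which can be sketched with a toy bag-of-words embedder standing in for a real embedding model and vector database (Pinecone, Weaviate):

```python
import numpy as np

def embed(text: str, vocab: list[str]) -> np.ndarray:
    """Toy bag-of-words 'embedding': unit-normalized word counts."""
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "wine quality depends on acidity and sugar",
    "traffic signs are recognized with a CNN",
    "fine-tuning a language model with LoRA",
]
vocab = sorted({w for d in docs for w in d.lower().split()})

# Index step: embed every document once (a vector DB would store these).
doc_vecs = np.stack([embed(d, vocab) for d in docs])

# Query step: embed the query and rank documents by cosine similarity.
query = "how to fine-tune a model with LoRA"
scores = doc_vecs @ embed(query, vocab)
best = docs[int(np.argmax(scores))]
```

In the actual RAG notebooks, the retrieved passage would then be inserted into the LLM prompt as grounding context.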
Software Libraries:
- 🛠️ SKit: A custom software library I developed. | View on GitHub |