About
Dr. Ibrahim is an AI and software development expert with 25 years of experience…
Ibrahim's articles
Contributions
Activity
-
There are also posts that can bring joy to your heart through the widest of doors 🥰, especially when they come from senior people in the field like Ahmed Gameel 😊 Link to the original post…
Liked by Ibrahim Sobh - PhD
-
🎉 Thank you, 38,000+ worldwide friends 👨👩👧👦 🏆 3 million total post views during the last 12 months…
Shared by Ibrahim Sobh - PhD
Experience
Education
-
Cairo University
-
Fast hybrid training framework and Robust multi-input deep network architecture for learning agents acting in 3D environments.
-
-
An Optimized System for Automatic Extractive Generic Document Summarization
-
-
Licenses & certifications
Volunteer experience
Publications
-
Study of LiDAR Segmentation and Model's Uncertainty using Transformer for Different Pre-trainings
In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
For the task of semantic segmentation of 2D or 3D inputs, the Transformer architecture suffers from limited localization ability because it lacks low-level details. The Transformer also has to be pre-trained to function well, and how best to pre-train it remains an open area of research. In this work, the Transformer is integrated into the U-Net architecture as in (Chen et al., 2021). The new architecture is trained to conduct semantic segmentation of 2D spherical images generated by projecting the 3D LiDAR point cloud. This integration captures local dependencies through CNN backbone processing of the input, followed by Transformer processing to capture the long-range dependencies. To define the best pre-training settings, multiple ablations were executed on the network architecture, the self-training loss function, and the self-training procedure. It is shown that the integrated architecture and self-training improve the mIoU by +1.75% over the U-Net architecture alone, even when the latter is self-trained too. Corrupting the input and self-training the network to reconstruct the original input improves the mIoU by up to 2.9% over a reconstruction-plus-contrastive training objective. Self-training the model improves the mIoU by 0.48% over initialising with an ImageNet pre-trained model, even when the pre-trained model is self-trained too. Random initialisation of the Batch Normalisation layers improves the mIoU by 2.66% over using self-trained parameters. Self-supervised training of the segmentation network reduces the model's epistemic uncertainty. The integrated architecture and self-training outperform SalsaNext (Cortinhal et al., 2020), to our knowledge the best projection-based semantic segmentation network, by 5.53% mIoU on the SemanticKITTI (Behley et al., 2019) validation dataset with 2D input dimension 1024×64.
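The spherical (range-image) projection mentioned in the abstract can be sketched as follows. This is an illustrative reimplementation, not the paper's code; the vertical field-of-view values are assumptions typical of the Velodyne sensor used in SemanticKITTI.

```python
import numpy as np

def spherical_projection(points, height=64, width=1024, fov_up=3.0, fov_down=-25.0):
    """Project 3D LiDAR points (N, 3) onto a 2D range image of shape (height, width).

    fov_up / fov_down give the sensor's vertical field of view in degrees
    (assumed values, typical for the sensor behind SemanticKITTI).
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / depth, -1, 1))  # elevation

    # Normalise angles to [0, 1] and scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * width
    v = (1.0 - (pitch - fov_down_rad) / fov) * height

    u = np.clip(np.floor(u), 0, width - 1).astype(int)
    v = np.clip(np.floor(v), 0, height - 1).astype(int)

    image = np.zeros((height, width), dtype=np.float32)
    image[v, u] = depth  # later points overwrite earlier ones in the same pixel
    return image
```

The resulting 2D depth image is what a U-Net/Transformer hybrid of this kind would consume instead of the raw point cloud.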
-
Deep Reinforcement Learning for Autonomous Driving: A Survey (Publisher: IEEE)
IEEE Transactions on Intelligent Transportation Systems
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents and methods to validate, test, and robustify existing solutions in RL are also discussed.
-
GG-Net: Gaze Guided Network for Self-driving Cars
Electronic Imaging: Autonomous Vehicles and Machines
Imitation learning is used massively in autonomous driving to train networks that predict steering commands from frames, using annotated data collected by an expert driver. Assuming that the frames taken from a front-facing camera completely mimic the driver's eyes raises the question of how the eyes and the complex attention mechanisms of the human vision system perceive the scene. This paper proposes incorporating eye gaze information, together with the frames, into an end-to-end deep neural network for the lane-following task. The proposed novel architecture, GG-Net, is composed of a spatial transformer network and a multitask network that predicts the steering angle as well as the gaze maps for the input frames. The experimental results of this architecture show a 36% improvement in steering angle prediction accuracy over the baseline, with an inference time of 0.015 seconds per frame (66 fps) on an NVIDIA K80 GPU, enabling the proposed model to operate in real time. We argue that incorporating gaze maps enhances the model's generalization capability to unseen environments. Additionally, a novel course-steering angle conversion algorithm with a complementing mathematical proof is proposed.
-
A Comprehensive Study on the Application of Structured Pruning methods in Autonomous Vehicles
NeurIPS 2020 Workshop on Machine Learning for Autonomous Driving
Deep neural networks (DNNs) have achieved huge successes in many autonomous driving tasks. However, existing deep neural network models are computationally and memory expensive, making them difficult to deploy on embedded systems with limited processing power and low memory resources. In this work, a detailed network pruning study is conducted for autonomous driving tasks, including object detection, semantic segmentation, and steering angle prediction. This study considers the network performance of different algorithms at different pruning sparsity levels. We conclude by comparing and discussing the experimental results and recommending structured pruning algorithms for these tasks.
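As a minimal sketch of structured pruning at a given sparsity level (the specific algorithms compared in the paper are not reproduced here), one common approach is to zero out whole convolution filters ranked by L1 norm:

```python
import numpy as np

def prune_filters(weights, sparsity):
    """Structured pruning sketch: zero out the output filters of a conv
    weight tensor (out_channels, in_channels, kH, kW) with the smallest
    L1 norms, at the requested sparsity level in [0, 1].

    Illustrative only; real pipelines usually fine-tune after pruning."""
    out_channels = weights.shape[0]
    n_prune = int(round(sparsity * out_channels))
    if n_prune == 0:
        return weights.copy(), np.array([], dtype=int)
    # Rank whole filters by the L1 norm of their weights.
    norms = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    pruned_idx = np.argsort(norms)[:n_prune]
    pruned = weights.copy()
    pruned[pruned_idx] = 0.0  # entire filter removed, keeping the tensor shape
    return pruned, pruned_idx
```

Because whole filters are removed rather than individual weights, the pruned channels can later be physically dropped, which is what makes structured pruning attractive on embedded hardware.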
-
Deep Reinforcement Learning for Autonomous Driving: A Survey
arxiv.org
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key challenges both algorithmically and in terms of deploying real-world autonomous driving agents, and discusses the role of simulators in training agents as well as methods to evaluate, test, and robustify existing solutions in RL and imitation learning.
-
End-to-end multitask learning for driver gaze and head pose estimation
Electronic Imaging 2020
Modern automobile accidents occur mostly due to inattentive driver behavior, which is why driver gaze estimation is becoming a critical component in the automotive industry. Gaze estimation presents many challenges due to the nature of the surrounding environment, such as changes in illumination, driver head motion, partial face occlusion, or eye decorations. Previous work in this field includes explicit extraction of hand-crafted features such as eye corners and pupil center to estimate gaze, or appearance-based methods such as Convolutional Neural Networks that implicitly extract features from an image and map them directly to the corresponding gaze angle. In this work, a multitask Convolutional Neural Network architecture is proposed to predict the subject's gaze yaw and pitch angles, along with head pose as an auxiliary task, making the model robust to head pose variations without any complex preprocessing or hand-crafted feature extraction. The network's output is then clustered into nine gaze classes relevant to the driving scenario. The model achieves 95.8% accuracy on the test set and 78.2% accuracy in cross-subject testing, proving the model's generalization capability and robustness to head pose variation.
-
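The quantisation of continuous gaze angles into nine driving-relevant classes could look like the following sketch. The 3×3 grid layout and the angle limits are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def gaze_to_class(yaw_deg, pitch_deg, yaw_limit=30.0, pitch_limit=20.0):
    """Quantise continuous gaze yaw/pitch angles (degrees) into one of nine
    classes on a 3x3 grid (left/centre/right x down/centre/up).

    yaw_limit and pitch_limit are hypothetical bounds, chosen only to make
    the example concrete."""
    # np.digitize maps each angle into bin 0, 1, or 2 via two thresholds.
    yaw_bin = np.digitize(yaw_deg, [-yaw_limit / 3, yaw_limit / 3])
    pitch_bin = np.digitize(pitch_deg, [-pitch_limit / 3, pitch_limit / 3])
    return int(pitch_bin * 3 + yaw_bin)  # class id in 0..8; 4 = straight ahead
```

A classifier head trained on such discrete targets replaces the regression output when only coarse gaze zones matter in the driving scenario.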
Unsupervised Neural Sensor Models for Synthetic LiDAR Data Augmentation
NeurIPS 2019 Workshop on Machine Learning for Autonomous Driving
Data scarcity is a bottleneck for machine learning-based perception modules, usually tackled by augmenting real data with synthetic data from simulators. Realistic models of the vehicle perception sensors are hard to formulate in closed form, and at the same time they require paired data to be learned. In this work, we propose two unsupervised neural sensor models based on unpaired domain translation with CycleGANs and Neural Style Transfer techniques. We employ CARLA as the simulation environment to obtain simulated LiDAR point clouds, together with their annotations for data augmentation, and we use the KITTI dataset as the real LiDAR dataset from which we learn the realistic sensor model mapping. Moreover, we provide a framework for data augmentation and evaluation of the developed sensor models, through extrinsic evaluation on an object detection task using a YOLO network adapted to produce oriented bounding boxes for LiDAR Bird-eye View projected point clouds. Evaluation is performed on unseen real LiDAR frames from the KITTI dataset, with different amounts of simulated data augmentation using the two proposed approaches, showing an improvement of 6% mAP on the object detection task in favor of augmenting with LiDAR point clouds adapted by the proposed neural sensor models over raw simulated LiDAR.
-
Comparative Study of NeuroEvolution Algorithms in Reinforcement Learning for Self-Driving Cars
European Journal of Engineering Science and Technology
Neuroevolution has been used to train neural networks for challenging deep Reinforcement Learning (RL) problems like Atari, the image hard maze, and humanoid locomotion, with performance comparable to that of neural networks trained by algorithms like Q-learning and policy gradients. This work conducts a detailed comparative study of using neuroevolution algorithms to solve the self-driving car problem. Different neuroevolution algorithms are used to train deep neural networks to predict the steering angle of a car in a simulated environment, and are compared to the Double Deep Q-Learning (DDQN) algorithm. Based on the experimental results, the neuroevolution algorithms show better performance than the DDQN algorithm. The Evolutionary Strategies (ES) algorithm outperforms the rest in accuracy at driving in the middle of the lane, with a best average result of 97.13%. Moreover, the Random Search (RS) algorithm outperforms the rest in terms of driving the longest while keeping close to the middle of the lane, with a best average result of 403.54 m. These results confirm that the entire family of genetic and evolutionary algorithms, with all their performance optimization techniques, is available to train and develop self-driving cars.
-
End-to-End 3D-PointCloud Semantic Segmentation for Autonomous Driving
30th IEEE Intelligent Vehicles Symposium
3D semantic scene labeling is a fundamental task for Autonomous Driving. Recent work shows the capability of Deep Neural Networks in labeling 3D point sets provided by sensors like LiDAR and Radar. An imbalanced distribution of classes in the dataset is one of the challenges facing the 3D semantic scene labeling task. It leads to misclassification of the non-dominant classes, which suffer from two main problems: a) rare appearance in the dataset, and b) few sensor points reflected from any one object of these classes. This paper proposes Weighted Self-Incremental Transfer Learning as a generalized methodology that solves the imbalanced-training-dataset problems. It re-weights the components of the loss function computed from individual classes based on their frequencies in the training dataset, and applies Self-Incremental Transfer Learning by running the Neural Network model on non-dominant classes first, then adding the dominant classes one by one. The experimental results introduce a new 3D point cloud semantic segmentation benchmark for the KITTI dataset.
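The frequency-based loss re-weighting described above can be sketched as inverse-frequency class weights applied to a per-point loss. This is an illustrative sketch, not the paper's exact formulation:

```python
import numpy as np

def class_frequency_weights(labels, num_classes):
    """Inverse-frequency class weights for re-weighting a segmentation loss.

    Rare classes receive larger weights; for a perfectly balanced dataset
    every weight is 1. Absent classes are clamped to avoid division by zero."""
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)
    return counts.sum() / (num_classes * counts)

def weighted_nll(probs, labels, weights):
    """Mean negative log-likelihood with each point scaled by its class weight."""
    labels = labels.ravel()
    picked = probs.reshape(labels.size, -1)[np.arange(labels.size), labels]
    return float(np.mean(weights[labels] * -np.log(picked)))
```

With such weights, a mistake on a rare class (e.g. cyclist points) costs the network more than the same mistake on a dominant class (e.g. road points).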
-
LiDAR Sensor modeling and Data augmentation with GANs for Autonomous driving
International Conference on Machine Learning (ICML), Workshop on AI for Autonomous Driving
In the autonomous driving domain, data collection and annotation from real vehicles are expensive and sometimes unsafe. Simulators are often used for data augmentation, which requires realistic sensor models that are hard to formulate in closed form. Instead, sensor models can be learned from real data. The main challenge is the absence of a paired dataset, which makes traditional supervised learning techniques unsuitable. In this work, we formulate the problem as image translation from unpaired data and employ CycleGANs to solve the sensor modeling problem for LiDAR, producing realistic LiDAR from simulated LiDAR (sim2real). Further, we generate high-resolution realistic LiDAR from a lower-resolution one (real2real). The LiDAR 3D point cloud is processed in Bird-eye View and Polar 2D representations. The experimental results show the high potential of the proposed approach.
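As a minimal sketch of the Bird-eye View (BEV) representation mentioned above (grid ranges and cell size here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def birdeye_view(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.1):
    """Rasterise a LiDAR point cloud (N, 3) into a Bird-eye View grid:
    one cell per (cell x cell) metres on the ground plane, storing the
    maximum height observed in each cell. Ranges/resolution are assumed."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]

    col = ((x - x_range[0]) / cell).astype(int)
    row = ((y - y_range[0]) / cell).astype(int)

    rows = int(round((y_range[1] - y_range[0]) / cell))
    cols = int(round((x_range[1] - x_range[0]) / cell))
    bev = np.full((rows, cols), -np.inf)
    np.maximum.at(bev, (row, col), z)  # keep the highest point per cell
    bev[np.isinf(bev)] = 0.0           # empty cells -> 0
    return bev
```

Once the point cloud lives on such a 2D grid, ordinary image-to-image networks (including CycleGANs) can be applied to it directly.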
-
Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving
Generative Adversarial Networks (GANs) have gained a lot of popularity since their introduction in 2014. Research on GANs is growing rapidly, and there are many variants of the original GAN focusing on various aspects of deep learning. GANs are perceived as the most impactful direction of machine learning in the last decade. This paper focuses on the application of GANs in autonomous driving, including topics such as advanced data augmentation, loss function learning, and semi-supervised learning. We formalize and review key applications of adversarial techniques and discuss challenges and open problems to be addressed.
-
Exploring applications of deep reinforcement learning for real-world autonomous driving systems
VISAPP 2019: International Conference on Computer Vision Theory and Applications
-
End-to-End Framework for Fast Learning Asynchronous Agents
NeurIPS 2018: Imitation Learning and its Challenges in Robotics
The ability to imitate by learning from demonstrations is used by robots to derive a policy. However, the quality of a learned policy depends mainly on the quality of the provided demonstrations. In Reinforcement Learning (RL), an agent learns an optimal policy by interacting with its environment and receiving sparse reward signals, leading to a time-consuming learning process. In this work, we aim to combine the benefits of imitation learning (IL) and deep RL. We propose a novel training framework that speeds up the training process by extending the Asynchronous Advantage Actor-Critic (A3C) algorithm with IL, leveraging multiple, non-human, imperfect mentors. ViZDoom, a 3D world environment, is used as a test case. The experimental results show that the learning agent achieves better performance than the mentors. Furthermore, in comparison to the standard A3C algorithm, the proposed training framework attains the same performance while learning 2.7X faster.
-
End-To-End Multi-Modal Sensors Fusion System For Urban Automated Driving
NeurIPS 2018: Machine Learning for Intelligent Transportation Systems
Abstract: In this paper, we present a novel framework for urban automated driving based on multi-modal sensors: LiDAR and camera. Environment perception through sensor fusion is key to the successful deployment of automated driving systems, especially in complex urban areas. Our hypothesis is that a well-designed deep neural network can learn, end to end, a driving policy that fuses LiDAR and camera sensory input, achieving the best of both. To improve the generalization and robustness of the learned policy, semantic segmentation is applied to the camera input, in addition to our new LiDAR post-processing method, Polar Grid Mapping (PGM). The system is evaluated on the recently released urban car simulator CARLA. The evaluation measures the generalization performance from one environment to another. The experimental results show that the best performance is achieved by fusing the PGM and semantic segmentation.
Keywords: End-to-end learning, Conditional imitation learning, Sensor fusion
-
YOLO4D: A Spatio-temporal Approach for Real-time Multi-object Detection and Classification from LiDAR Point Clouds
NIPS
Abstract: In this paper, YOLO4D is presented for spatio-temporal real-time 3D multi-object detection and classification from LiDAR point clouds. Automated driving dynamic scenarios are rich in temporal information. Most current 3D object detection approaches focus on processing the spatial sensory features, either in 2D or 3D spaces, while the temporal factor is not fully exploited yet, especially from 3D LiDAR point clouds. In the YOLO4D approach, the 3D LiDAR point clouds are aggregated over time as a 4D tensor: the 3D space dimensions plus the time dimension, which is fed to a one-shot fully convolutional detector based on YOLO v2. The outputs are the oriented 3D object bounding box information together with the object class. Two different techniques are evaluated to incorporate the temporal dimension: recurrence and frame stacking. The experiments conducted on the KITTI dataset show the advantages of incorporating the temporal dimension.
Keywords: 3D object detection, LiDAR, Real-time, Spatiotemporal, ConvLSTM
-
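The frame-stacking technique for building the 4D tensor can be sketched as follows; the padding policy for histories shorter than k frames is an assumption for illustration:

```python
import numpy as np

def stack_frames(frames, k=4):
    """Frame-stacking sketch: aggregate the last k 3D LiDAR grids
    (each of shape (D, H, W)) into a single 4D tensor of shape (k, D, H, W).

    When fewer than k frames are available, the oldest frame is repeated
    (an assumed padding policy). Requires at least one frame."""
    frames = list(frames)[-k:]
    while len(frames) < k:
        frames.insert(0, frames[0])  # pad history with copies of the oldest frame
    return np.stack(frames, axis=0)
```

The stacked tensor gives a one-shot convolutional detector access to motion cues without recurrent state, which is the alternative the abstract contrasts with ConvLSTM-style recurrence.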
Robust Dual View Deep Agent
Proceeding of the 2nd International Sino-Egyptian Congress on Agriculture, Veterinary Sciences and Engineering, 2017
Motivated by recent advances in machine learning using Deep Reinforcement Learning, this paper proposes a modified architecture that produces more robust agents and speeds up the training process. Our architecture is based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, where the total input dimensionality is halved by dividing the input into two independent streams. We use ViZDoom, a 3D world environment based on the classic first-person shooter video game Doom, as a test case. The experiments show that, in comparison to single-input agents, the proposed architecture achieves the same playing performance while exhibiting more robust behavior and reducing the number of training parameters by almost 30%.
-
Statistical Formant Speech Synthesis for Arabic
Springer, Arabian Journal for Science and Engineering
This work constructs a hybrid system that integrates formant synthesis and context-dependent Hidden Semi-Markov Models (HSMMs). The HSMM parameters comprise formants, fundamental frequency, voicing/frication amplitude, and duration. For HSMM training, formants, fundamental frequency, and voicing/frication amplitude are extracted from waveforms using the Snack toolbox and a decomposition algorithm, and duration is calculated using an HMM modeled by a multivariate Gaussian distribution. The acoustic features are then generated from the trained HSMM models and combined with default values of complementary acoustic features, such as glottal waveform parameters, to produce speech waveforms using the Klatt synthesizer. We construct the text processor for the phonetic transcription required at the training and synthesis phases using phonemic pronunciation algorithms. A perceptual test reveals that the statistical formant text-to-speech system produces good-quality speech while utilizing features that are small in dimension and close to speech perception cues.
-
Building an Arabic Lexical Semantic Analyzer
7th International Computing Conference in Arabic, Riyadh Saudi Arabia
-
Evaluation Approaches for an Arabic Extractive Generic Text Summarization System
2nd International Conference on Arabic Language Resources and Tools, Cairo Egypt
-
An Optimized Dual Classification System for Arabic Extractive Generic Text Summarization
The Seventh Conference on Language Engineering, ECLEC, Cairo Egypt
-
A Trainable Arabic Bayesian Extractive Generic Text Summarizer
The Sixth Conference on Language Engineering, ECLEC, Cairo Egypt
Courses
-
An Introduction to Database Systems, 8th Edition, C.J. Date
-
-
Browser-based Models with TensorFlow.js by deeplearning.ai and Google Brain
F56469GL5HL2
-
Computer Networks, 4th Edition, Andrew S. Tanenbaum
-
-
Configuration Management: QAI
-
-
Cryptography and Network Security, 4th Edition, William Stallings
-
-
Deep Learning
Oxford - 2015
-
Essentials of Rational® RequisitePro®, IBM® Rational® Software
-
-
Essentials of Requirement Management: Quality Assurance Institute (QAI)
-
-
Hidden Markov Models (HMM): A tutorial on hidden Markov models and selected applications, Rabiner
-
-
Human Resource Management
-
-
Image Processing, Analysis, and Machine Vision, 3rd Edition, Sonka
-
-
Integrated Business Skills Training (Amideast)
-
-
Introduction to Big Data with Apache Spark
UC BerkeleyX - CS100
-
Machine Learning
Stanford University
-
Machine Learning and Data Mining
UBC - CPSC 340
-
Marketing Essentials
-
-
Mastering Requirements Management with "Use Cases", IBM® Rational® Software
-
-
PMP Exam Preparation
-
-
Partially Observed Markov Decision Process (POMDP), Kalman Filters, Game Theory: Artificial Intelligence, A Modern Approach, 2nd Edition
-
-
Proposal Writing
-
-
Scalable Machine Learning
UC BerkeleyX - CS190
-
Social Networks, Crawling the Web: Discovering Knowledge from Hypertext Data, Chakrabarti
-
Probability and Markov Process
-
-
Software Estimation: QAI
-
-
Structured Methods for Software Testing: QAI
-
Honors & awards
-
Received "A" Grade for the year 2022
Valeo
-
Inventor 2016-2018 (Medal)
Valeo
#Deeplearning #Technology #Automotive #innovation
-
Senior Expert of AI
Valeo
Machine Learning and Deep Learning
-
Spark ML
OMS Research Department
Successfully completing the course: Introduction to Big Data with Apache Spark
Learn how to apply data science techniques using parallel programming in Apache Spark to explore big (and small) data.
https://www.edx.org/course/introduction-big-data-apache-spark-uc-berkeleyx-cs100-1x
-
Excellent Performance and Commitment
RDI www.rdi-eg.com
-
Self learning of pronunciation rules, Speech Recognition Technology, Hafss©
WORLD SUMMIT AWARDS: www.wsis-award.org
-
Microsoft Middle East Developer Conference (MDC), a winner of .NET competition
Microsoft
-
Best Performance, Al Bayan Educational Project (E-Learning)
RDI www.rdi-eg.com
-
General Best Performance Rate
RDI www.rdi-eg.com
-
Best Creativity and inventiveness
RDI www.rdi-eg.com
-
Best Performance, Applying ISO 9001 in Application Development Unit
RDI www.rdi-eg.com
-
A winner of game programming competition
Cairo University, Faculty of Engineering, Computer Science Society (CSS)
Test scores
-
TOEFL
Score: 607
Recommendations received
14 people have recommended Ibrahim
More of Ibrahim's activity
-
From Archives to Answers: 📰 AlMasry AlYoum launches the first AI semantic search 🤖…
Liked by Ibrahim Sobh - PhD
-
💎 The Mistral AI Cookbook features examples contributed by Mistralers and community, as well as partners. 👉 https://lnkd.in/eMzrKcpy #ai #llms…
Shared by Ibrahim Sobh - PhD
-
Getting Flash 2.0 'Thinking' to launch a nuclear strike. This is one of my favourite jailbreaks (is it a jailbreak?) goes like this: - Pretend it…
Liked by Ibrahim Sobh - PhD
-
Destroying humanity is a horrific outcome. Yet, within the constraints of the scenario and the assigned priorities, it is "the lesser of two evils" for the AI. Note: the AI…
Shared by Ibrahim Sobh - PhD
-
🧠 Retrieval-augmented generation (RAG) is an AI framework that synergizes the capabilities of LLMs and information…
Shared by Ibrahim Sobh - PhD
-
Yesterday OpenAI played a "checkmate" move on Google in the chess game of search! 1- Web search in ChatGPT is now free for everyone; you just need an account 2- With it enabled…
Liked by Ibrahim Sobh - PhD
-
✨ I am using the new model Gemini Thinking! 🚨 • Reason over the most complex problems • Show the thinking process of the model • Tackle…
Shared by Ibrahim Sobh - PhD