Here we are. Over 1,200 submissions from 250 teams and four months of build time all come down to this. Today we present the Grand Prize Winners of the OpenCV AI Competition 2021.
As mentioned in our previous update announcing the Regional Winners, these were difficult choices. All of these teams represented the best in their regions and the world, and though the decisions were tough, we know you will agree that the three Grand Prize-winning teams are impressive and deserving of this honor.
Each of these teams and projects exemplifies the goals we set for this competition from the start: furthering the spread of AI through the global OpenCV community, making edge AI hardware more affordable and easier to access, and focusing that power on social good.
Congratulations to all the winners, and to every participant on every team who spent the last few months building something awesome. We cannot give a prize to every team, but we appreciate you. Without further ado, we present the Grand Prize winners of the OpenCV AI Competition 2021.
Third Grand Prize ($5,000): Team Cerebros — CEREBROS, the Next-Generation Electric Wheelchair
Mission
We are a group of technology-enthusiast engineers representing Alexandria University in Egypt. Our prime focus is integrating many technologies to solve problems that require innovative smart solutions. We have practical experience in several areas, including embedded systems, robotics, and machine learning.
We decided to apply this experience to one of the problems facing the medical field. Technological developments in brain-machine interfaces (BMI) caught our attention, so we planned to apply this technology to one application in the wide space of health monitoring and nursing management: the wheelchair.
Solution
Our brain-computer interface uses electroencephalography (EEG), an affordable, accessible, and non-invasive technique for detecting brain activity. In addition to motor imagery, eye-blink signals and jaw artifacts are used to initiate starts and stops and to indicate the desire to turn. Users can turn left or right simply by thinking about left- and right-hand movements.
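To give a flavor of the blink-signal idea: eye blinks appear in frontal EEG channels as large, short-lived amplitude spikes, so even a simple amplitude threshold can flag them. The sketch below is purely illustrative (the function name and the 100 µV threshold are our assumptions, not the team's actual code):

```python
import numpy as np

# Toy sketch of blink detection: flag contiguous runs of samples whose
# amplitude exceeds a threshold, and count each run as one blink event.
# The 100 uV default is an assumed, illustrative value.

def detect_blinks(eeg_uv: np.ndarray, threshold_uv: float = 100.0) -> int:
    """Count blink-like events in a single EEG channel (microvolts)."""
    above = np.abs(eeg_uv) > threshold_uv
    # rising edges of the boolean mask = number of distinct supra-threshold runs
    return int(np.sum(~above[:-1] & above[1:]) + int(above[0]))
```

A real system would band-pass filter the signal and reject other artifacts first; this only shows the core thresholding step.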
The most common technique used by automotive companies relies on short-range sensors, such as ultrasonics and an IMU, to detect the stability of the chair. But short-range sensors have many drawbacks and are not reliable enough to guarantee the complete safety of the patient, so we decided to use computer vision in an autopilot application built on OAK-D.
Three OAK-D cameras are mounted on the left, right, and front of the wheelchair so that it is aware of its entire surroundings. After objects are detected in each camera's stream, a priority algorithm decides whether movement is safe in all directions, or whether the wheelchair must stop and ignore any movement commands received from the user.
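The priority algorithm described above could be sketched as a simple arbitration function: the closest obstacle seen by the camera covering the commanded direction decides whether to pass the command through, slow down, or stop. Everything here (names, distance thresholds) is an illustrative assumption, not the team's implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of the safety-arbitration idea: three cameras each
# cover one direction; detections carry stereo depth from the OAK-D.

@dataclass
class Detection:
    camera: str        # "front", "left", or "right"
    distance_m: float  # obstacle distance from the stereo depth map

STOP_DISTANCE_M = 0.5  # assumed emergency-stop threshold
SLOW_DISTANCE_M = 1.2  # assumed slow-down threshold

def arbitrate(command: str, detections: list) -> str:
    """Return the safe action for a user command given current obstacles."""
    direction = {"forward": "front", "left": "left", "right": "right"}.get(command)
    if direction is None:
        return "stop"  # unknown command: fail safe
    in_path = [d.distance_m for d in detections if d.camera == direction]
    nearest = min(in_path, default=float("inf"))
    if nearest < STOP_DISTANCE_M:
        return "stop"              # obstacle too close: override the user
    if nearest < SLOW_DISTANCE_M:
        return f"{command}_slow"   # obstacle nearby: reduce speed
    return command                 # path clear: execute as commanded
```

Obstacles outside the commanded direction are ignored here; a fuller version would also veto turns that sweep the chair toward a side obstacle.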
We also use the camera streams for mapping and localization, determining the coordinates of the wheelchair and placing all surrounding vehicles on Google Maps with a simple technique.
Second Grand Prize ($10,000): Caleta — NeoCam: Real-time telemonitoring of preterm neonates
Mission
Incubators were invented in the 19th century and have slowly evolved, adding small improvements that have contributed to increasing the life expectancy and quality of life of newborns through better temperature control, more comfort, detailed monitoring, and a better medical understanding of biomarkers. Nowadays, the vital signs of neonates in incubators are monitored by various sensors physically attached to the skin. These physical devices can be a nuisance, especially for the babies, but also for the health personnel who have to handle them frequently.
The equipment currently in use can measure physiological quantities, but computer vision technology now makes it possible to also monitor emotional variables (pain, stress, etc.) and, eventually, indirect variables related to cognitive development.
Finally, Neonatal Intensive Care Units (NICUs) often require a large presence of trained personnel, who must divide their time among all the neonates and can therefore observe each of them for only a limited amount of time.
It would thus be very desirable to design an autonomous, intelligent monitoring system able to:
- Monitor physiological and emotional variables.
- Avoid physical contact of sensors and devices with the baby skin.
- Observe the baby for long periods of time, producing hourly or daily reports.
- Achieve an early detection of abnormal development issues.
- Raise alarms in case of anomalous events.
The NeoCam project addresses this growing interest in automated systems for monitoring the physiological and emotional parameters of babies, avoiding the need for probes attached to the skin and allowing for better medical care with early detection of abnormal situations.
Solution
The purpose of the Caleta team's project is to build NeoCam, an advanced embedded computer vision ecosystem for the autonomous monitoring of preterm newborns in incubators. The platform, based on the OAK-D smart camera, analyzes images of the babies without contact and in real time to extract information on vital signs.
Three main components can be distinguished in the system:
- A hardware platform comprising sensors (the OAK-D smart camera), computing (the OAK-D and a Raspberry Pi 4), communication systems (5G/Wi-Fi), a cloud server, and display devices,
- Algorithms to process the data captured with the sensors, and
- An App for analyzing the data.
The project is a non-contact remote monitoring system for babies in incubators, cribs, or beds, based on images captured by cameras and processed by artificial-intelligence algorithms with the OAK-D AI Kit. The algorithms developed in this project process images captured by the OAK-D smart camera in order to:
- Monitor the respiratory rhythm: the head is detected by computer vision algorithms and, based on this, several reference points are selected on the thorax. Depth measurement from the OAK-D's multiple cameras makes it possible to detect variations in the volume of the rib cage and therefore the respiratory rhythm.
- Monitor physical activity: the motion of arms and legs is detected by computer vision algorithms in order to compute an activity index.
- Monitor emotional status: faces are detected by computer vision algorithms, and the images are classified into categories (pain/calm) by an image classification algorithm (see Figure 5.3).
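The respiratory-rhythm idea above boils down to a one-dimensional signal problem: once the chest region is tracked, the mean depth at the reference points rises and falls with each breath, and counting cycles gives the rate. The sketch below is our own illustration under that assumption, not NeoCam's code:

```python
import numpy as np

# Illustrative sketch: estimate breaths per minute from a time series of
# mean chest depth (millimeters) sampled from the OAK-D depth map.

def respiratory_rate_bpm(depth_mm: np.ndarray, fps: float) -> float:
    """Count rising zero crossings of the detrended chest-depth signal;
    each one marks the start of a breath cycle."""
    signal = depth_mm - depth_mm.mean()  # remove the static chest distance
    crossings = np.sum((signal[:-1] < 0) & (signal[1:] >= 0))
    duration_min = len(depth_mm) / fps / 60.0
    return float(crossings / duration_min)

# synthetic chest-depth trace: 0.5 Hz oscillation (30 breaths/min) at 30 fps
t = np.arange(600) / 30.0
depth = 400.0 + 3.0 * np.sin(np.pi * t - 0.5)
rate = respiratory_rate_bpm(depth, fps=30.0)
```

A production system would low-pass filter the depth signal first to reject motion artifacts; the zero-crossing count only works once the breathing oscillation dominates.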
All the processed data is stored in a system designed, developed, and deployed on a cloud-based infrastructure supported by Microsoft Azure®, which not only guarantees performance-related characteristics (scalability, interoperability, security), but also enables remote access from any internet-connected device, anytime and anywhere, regardless of the supporting technology.
Overall & First Grand Prize Winner ($20,000): Cortic Tigers, Democratizing AI with OAK-D
Mission
Democratization of A.I. is a powerful idea that provides a path forward to transform the promise of A.I. enhanced technological development into reality. We hope that one day, anyone, regardless of their technical ability, will be able to apply the power of A.I. to address challenges they encounter in their daily lives.
With the advancements in deep learning in recent years, applications of A.I. have become commonplace. It surrounds all of us in various forms: smartphones, self-driving cars, and even streaming services such as YouTube or Netflix. However, most A.I. development still happens in universities or big corporations, because developing and deploying A.I. algorithms typically requires substantial resources: computational power, large amounts of data, and human expertise. In order to fully democratize A.I., we must address the following prerequisites:
- an affordable hardware platform powerful enough to run state-of-the-art deep-learning algorithms
- a user-friendly programming environment to encourage learning and experimentation
- the ability to easily and quickly prototype using both pre-trained and new models in applications
- the ability to adapt existing pre-trained models to specific use cases using a small amount of training data
- useful sample programs that show the power of A.I. for different use cases
Solution
Our solution consists of two main components:
- The high-level programming interface, which we call the Cortic A.I. Toolkit (CAIT): a simplified Python API that users call to perform various A.I. tasks. It also backs our visual programming interface.
- The middleware component, which we call the Cortic Universal RunTime (CURT): it facilitates communication between different modules and devices, and offers a simple command-based programming interface for distributed computing tasks.
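The command-based pattern CURT describes can be sketched generically: modules register the commands they serve with a broker, and callers dispatch by name without knowing which device runs the handler. The class and method names below are hypothetical illustrations of the pattern, not CURT's actual API:

```python
# Illustrative sketch of a command-based middleware pattern (not CURT's API):
# handlers register under command names; dispatch routes calls to them.

class Middleware:
    """Routes named commands to registered handler modules."""

    def __init__(self):
        self._handlers = {}

    def register(self, command: str, handler):
        """Register a callable to serve a named command."""
        self._handlers[command] = handler

    def dispatch(self, command: str, **kwargs):
        """Invoke the handler for a command, passing keyword arguments."""
        if command not in self._handlers:
            raise KeyError(f"no module registered for '{command}'")
        return self._handlers[command](**kwargs)

# e.g. a vision worker on the OAK-D registers the commands it serves,
# and callers stay device-agnostic
bus = Middleware()
bus.register("detect_faces", lambda frame: [])  # stub vision handler
result = bus.dispatch("detect_faces", frame=None)
```

In a distributed setting the dictionary lookup would be replaced by message passing (e.g. over MQTT or sockets), but the register/dispatch contract stays the same.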
Once again, congratulations to these three truly exceptional teams and projects. What they have achieved is amazing, and we wish them the best of luck in continuing to develop these exciting solutions. We would also like to once again thank our generous sponsors at Microsoft Azure and Intel for their support.
The Popular Vote
The popular vote, decided by YOU, is still open; voting will continue through September 20th. The links below lead to a voting form for each named region, with the team names and final submission videos of the regional winners. You can vote for one project in each region, and the winners will receive an additional $2,000.
- North America
- South America + Central America + Caribbean
- Europe + Russia + Australasia (Australia, New Zealand, and neighboring islands)
- Middle East + Africa
- Central Asia + Southern Asia
- Eastern Asia + South Eastern Asia
Finally, thank you to everyone who participated in this competition. Whether or not your project was chosen, you matter. This community we are all a part of makes us stronger together than we are apart, and the best is yet to come.
A Spatial AI Revolution!
Even though this year's competition has ended, this is not the end of the story for the OpenCV AI Kit! On September 15th a new chapter begins, at a size and price that will astonish you! You can be one of 74 lucky winners by subscribing to the OpenCV Newsletter and following the new campaign on Kickstarter.