An AI-based system photographs the embryos every five minutes, processes the developmental data, and flags any observed anomalies. This increases the likelihood of choosing the most viable and healthy early-stage embryo for IVF procedures.
Almost one in six couples faces infertility; about 48.5 million couples – 186 million individuals worldwide – are affected. Europe has one of the lowest birth rates in the world, with an average of just 1.55 children per woman.
The most effective form of assisted reproductive technology is in vitro fertilization (IVF) – a complex series of procedures used to treat infertility. However, the success of IVF depends on many biological and technical factors.
The interdisciplinary team of KTU researchers, led by Dr Raudonis, developed an automated method for early-stage embryo evaluation. The method is based on processing the visual data collected by photographing the developing embryo every five minutes, from seven different sides, for up to five days. Up to 20,000 images are generated during the image-capturing process. Evaluating them all manually would be an impossible task for the embryologist in charge of the procedure.
Automated embryo development assessment has advanced rapidly over the last six years, as the technical means to build more sophisticated AI methods and algorithms emerged. Strong teams of scientists from Israel, Australia, Denmark and other countries are working in this field, and more and more clinics all over the world are applying AI-based solutions to assist infertility treatment.
Cell detection and counting are of essential importance in evaluating the quality of an early-stage embryo. Full automation of this process remains challenging due to varying cell sizes and shapes, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm must process a large volume of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep-learning U-Net architecture is proposed in the paper, which reduces the amount of data up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared against two well-known techniques – Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: The image fusion time is substantially improved across different image resolutions, while ensuring high quality of the fused image.
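To illustrate what multi-focus fusion does – collapsing a stack of focal planes into one all-in-focus image – here is a minimal classical baseline sketch in NumPy. It uses a per-pixel variance-of-Laplacian focus measure rather than the paper's U-Net, and all function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian; high magnitude indicates sharp detail."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return lap

def box3(x):
    """3x3 mean filter (edge-padded) to smooth the focus measure."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(stack):
    """Fuse a (planes, H, W) focal stack: per pixel, keep the sharpest plane."""
    focus = np.stack([box3(laplacian(p) ** 2) for p in stack])
    idx = focus.argmax(axis=0)                      # index of sharpest plane
    return np.take_along_axis(stack, idx[None], axis=0)[0]

# Synthetic demo: a blurred (flat) plane vs. a sharp checkerboard plane.
H, W = 8, 8
sharp = (np.indices((H, W)).sum(0) % 2).astype(float)
blur = np.full((H, W), 0.5)
fused = fuse(np.stack([blur, sharp]))
```

A learned U-Net fusion, as in the paper, replaces this hand-crafted focus measure with features learned from embryo images, which is what makes the large speed and quality gains over classical pyramid- and correlation-based methods possible.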
SOURCES – Journal Sensors, KTU
Written By Brian Wang, Nextbigfuture.com
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.