The NMPA published an article on artificial intelligence software for process optimization, illustrating the clinical applications of such software and how software validation should be performed.
Artificial intelligence software for process optimization is currently used in fetal examinations in obstetrics and gynecology, in ultrasound and Doppler examinations of the heart, and in examinations of solid organs, the musculoskeletal system, and nerves. The output results of the software are for the doctor’s reference only, and they must be confirmed and, where necessary, modified by doctors based on professional knowledge.
Such software is usually embedded in imaging and ultrasound equipment. According to the “Medical Device Classification Catalog”, the classification code is 06-07, which is Class II or III. If it is registered as standalone software, the classification code is 21-02, which is Class II, based on the “Guideline for Classification and Definition of Artificial Intelligence Medical Devices”.
Applications of NMPA Artificial Intelligence Software for Process Optimization
Ultrasonic spectrum automatic identification
This function simplifies the diagnosis and treatment workflow. The artificial intelligence algorithm identifies the spectrum image category according to the up/down shift of the baseline and the position of the cursor in the spectrum, and then calls the related measurements.
Ultrasonic Spectrum Auto ID does not perform any measurements; it only sends commands to the system to initiate specific measurements, reducing user clicks. The user must confirm the automatic recognition result of the software. If the recognition result is wrong, the user can start the measurement manually.
The picture above shows the user interface of the automatic spectrum recognition function. When the function is used, a clear prompt appears in the upper right corner of the image area: “Spectrum Auto Recognized”, and the top of the operation interface reminds the user to review the automatic measurement results (“Review the results. Then press the Image Store button to approve visible measurements or select Cancel.”).
At the bottom of the measurement menu on the right, the user can choose Approve or Cancel. Clicking Approve makes the system accept the automatic measurement results; clicking Cancel lets the user perform manual measurements directly or select other measurement items.
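The recognize-then-confirm workflow above can be sketched as a small state model. This is an illustrative sketch only; the class and method names (`SpectrumAutoID`, `recognize`, `approve`, `cancel`) are assumptions, not the vendor's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Measurement:
    name: str
    value: float  # e.g. a velocity in cm/s

@dataclass
class SpectrumAutoID:
    recognized_type: Optional[str] = None
    pending: List[Measurement] = field(default_factory=list)
    approved: List[Measurement] = field(default_factory=list)

    def recognize(self, spectrum_type: str, measurements: List[Measurement]) -> None:
        # The algorithm only initiates measurements; nothing is final yet.
        self.recognized_type = spectrum_type
        self.pending = list(measurements)

    def approve(self) -> List[Measurement]:
        # User pressed Image Store / Approve: accept the automatic results.
        self.approved = self.pending
        self.pending = []
        return self.approved

    def cancel(self) -> None:
        # User pressed Cancel: discard results; the user measures manually.
        self.recognized_type = None
        self.pending = []
```

The point of the model is that automatic results stay in a pending state until the user explicitly approves them, mirroring the confirmation requirement in the text.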
Intelligent obstetric screening
The clinical need for the intelligent obstetric screening function comes from the “Practice Guidelines for Routine Ultrasound Scanning of Fetus in the Second Trimester” issued by ISUOG. This guideline defines a standard set of slices and a set of measurements to be taken during pregnancy. The clinical pain point this function solves is mainly that the fetus may constantly change its position during the ultrasound examination, making it difficult for the fetus to cooperate with the examination.
As a result, sonographers usually cannot perform slice scans in a fixed order and need to manually scroll through the scan list to select and confirm the appropriate scan items. In many cases this means the sonographer cannot conduct the scan according to the guidelines and therefore cannot guarantee the quality of the examination (e.g., slices may be missed).
The intelligent obstetric screening function includes two modules, “identification” and “quality control”. After the sonographer presses the freeze button, the “identification” module immediately analyzes the images in the cine loop acquired by the sonographer. If the module detects an image that matches a defined slice, and that slice is associated with a scan item in the smart navigation list, the system recommends the scan item to the user. The user can then press the “Quality Control” icon to examine the image details, and the identification results for the various feature structures of the scan (such as the fetal nose tip, nostrils, etc.) are displayed as “found” or “not found”.
This function is intended to give users reference information, helping them judge whether the image is usable, whether the slice is standard, and whether the image quality meets the requirements of the guidelines. The results are not actively stored in the report.
The image above is an example of the output of the intelligent screening function. After freezing, the “identification” module analyzed the image and identified the transcerebellar plane (TCP). Since the user added the TCP slice to the scan items when setting up smart navigation, the function “recognizes” the scan item “Cerebellum/CM” and displays a pink “SonoLyst” logo.
When the user presses the “Quality Control” icon, the “quality control” module fills in the evaluation result for each scanning criterion according to the identification results for the characteristic structures (including brain symmetry, the cerebellum, etc.): “found” or “not found”.
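The quality-control step can be pictured as mapping detected feature structures onto a checklist of criteria. The sketch below is purely illustrative: the criterion names and required structures for the TCP slice are assumptions, not the device's actual checklist.

```python
# Hypothetical checklist for the transcerebellar plane (TCP); the
# criteria and required structures are assumptions for illustration.
TCP_CRITERIA = {
    "Brain symmetry shown": ["midline", "symmetric hemispheres"],
    "Cerebellum shown": ["cerebellum"],
    "Cisterna magna shown": ["cisterna magna"],
}

def quality_control(detected_structures):
    """Map each scanning criterion to 'found' or 'not found', based on
    which feature structures the identification module detected."""
    detected = set(detected_structures)
    return {
        criterion: "found" if all(s in detected for s in required) else "not found"
        for criterion, required in TCP_CRITERIA.items()
    }
```

A criterion is reported as “found” only when every feature structure it depends on was detected, which matches the per-structure reporting described in the text.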
Nerve automatic recognition
Nerve block is a clinical anesthesia method that injects local anesthetics around nerve trunks, plexuses, and nodes to block their impulse conduction and anesthetize the innervated area. Brachial plexus block is the main anesthesia method for upper limb and shoulder surgery.
Since the 1990s, ultrasound has been widely used in the guidance of peripheral nerve blocks, and nerve visualization has greatly reduced the complications of nerve blocks compared with the blind puncture method of body surface positioning.
The nerve automatic recognition software adopts deep learning technology to enhance the image of the brachial plexus region, making the region easier to identify. In clinical use, the function can assist ultrasound imaging observation of the normal brachial plexus and positioning before a nerve block.
Following the operating procedure specified in the manual, the operator selects the linear array probe and the nerve examination mode, scans along the patient’s interscalene groove or supraclavicular area in B mode, and activates the nerve automatic recognition function after reaching the target anatomical area. The function enhances the image of the brachial plexus region so that the morphological characteristics of the brachial plexus are easier to identify. The results must be confirmed and, where necessary, modified by doctors based on professional knowledge.
Automatic recognition of heart structure
Two-dimensional echocardiography is currently one of the most important methods of cardiac ultrasound examination; it can display the cross-sectional anatomy, spatial position, and motor function status of the heart and great vessels.
Domestic and international expert consensus documents and guidelines have clearly defined the standard views of two-dimensional echocardiography, including acoustic windows, angles, characteristic structures, and their corresponding clinical significance. These standard view definitions give echocardiography teaching, training, and examination a unified standard.
However, the heart moves quickly and has a complex structure, so when using echocardiography, doctors must judge from their own experience whether a view meets the standard.
The automatic heart structure recognition function is used in clinical ultrasound diagnostic examinations. Based on the guidelines and standards of the American Society of Echocardiography, this function uses deep learning technology to identify the view type of two-dimensional echocardiographic images, that is, to determine in real time which standard view the current image belongs to (for example, apical four-chamber, parasternal long axis, parasternal short axis, subcostal four-chamber, and inferior vena cava views), and further to identify the characteristic structures appearing in the current image, such as the left ventricle, left atrium, mitral valve, and tricuspid valve. The deep learning technology obtains image features by learning from large sample data; it classifies the view type on the one hand and performs target detection on the other.
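The two outputs described above, a view classification and a set of detected structures, can be sketched as a combined result for one frame. This is a minimal sketch of the output handling, not the device's implementation; the data types, thresholds, and function names are assumptions, while the view names come from the text.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Detection:
    structure: str                   # e.g. "left ventricle"
    score: float                     # detector confidence in [0, 1]
    box: Tuple[int, int, int, int]   # (x, y, width, height) in pixels

def summarize_frame(view_scores: Dict[str, float],
                    detections: List[Detection],
                    view_thresh: float = 0.5,
                    det_thresh: float = 0.5) -> Tuple[Optional[str], List[str]]:
    """Pick the most likely standard view (or None if no view is confident
    enough) and the structures confident enough to display."""
    view, score = max(view_scores.items(), key=lambda kv: kv[1])
    recognized = view if score >= view_thresh else None
    structures = [d.structure for d in detections if d.score >= det_thresh]
    return recognized, structures
```

Returning `None` when no view is confident enough reflects the behavior the text implies: only standard views get a real-time label on screen.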
Following the operating procedure specified in the manual, the operator selects the phased array probe and the heart examination mode and, after adjusting the image parameters to obtain the best image, activates the automatic heart structure recognition function.
If the current image is a standard view, the view type is displayed on the screen in real time, together with the recognizable feature structures on the image. The result is for the doctor’s reference only, and the doctor can confirm and modify it based on professional knowledge.
Software validation
Software validation refers to confirming, through objective evidence, that the software meets user needs and its intended purpose. It includes activities such as software validation testing (user testing), clinical evaluation, and design review. Software validation testing is based on user needs and is carried out by the intended users in real or simulated usage scenarios.
Common software validation items for imaging and ultrasound process-optimization artificial intelligence software generally include the accuracy of automatic feature structure recognition and/or measurement accuracy.
The test samples used in software validation should be representative and should cover the population to which the software function applies. Coverage should consider factors such as abnormal physiological structures and the minimum identifiable target size. For example, if the automatic heart structure recognition function can be used on people with abnormal heart structure, corresponding samples should be included in the test set; if structural abnormalities are not verified, relevant warnings should be given in the instructions for use.
The representativeness of test samples should take into account the factors that affect the accuracy of feature structure recognition. For example, factors affecting the recognition accuracy of the nerve automatic recognition function include gender, age, and BMI; accordingly, test samples should be distributed across gender (male, female), age (<12, 12–20, 21–40, 41–65, >65), and BMI (<18.5, 18.5–24.9, 25–29.9).
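One way to operationalize this stratification is a simple coverage check: every combination of gender, age band, and BMI band should contain at least one test sample. The sketch below assumes a simple record format for samples; it is an illustration, not an NMPA-prescribed procedure.

```python
import itertools

# Strata taken from the factors listed in the text.
GENDERS = ["male", "female"]
AGE_BANDS = ["<12", "12-20", "21-40", "41-65", ">65"]
BMI_BANDS = ["<18.5", "18.5-24.9", "25-29.9"]

def missing_strata(samples):
    """Return every (gender, age band, BMI band) cell that contains no
    test sample, i.e. the gaps in coverage."""
    covered = {(s["gender"], s["age_band"], s["bmi_band"]) for s in samples}
    return [cell
            for cell in itertools.product(GENDERS, AGE_BANDS, BMI_BANDS)
            if cell not in covered]
```

With 2 genders, 5 age bands, and 3 BMI bands there are 30 cells in total, so an empty `missing_strata` result means the full grid is covered.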
For software involving multiple steps, the accuracy of each step should be verified. For example, the intelligent obstetric screening function includes the two steps of “identification” and “quality control”, and the applicant should verify the accuracy of “identification” and of “quality control” separately. For a software validation result, the applicant may discuss its clinical acceptability by comparing it with the recognition accuracy or measurement accuracy achieved when the clinician does not use the function.
For any additional questions about NMPA artificial intelligence software process optimization, please contact us today.