DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s argument on Pages 10-15 regarding the rejection of Claims 1 and 12 under 35 U.S.C. 103 over Aladahalli in view of Naidu has been fully considered but is not persuasive in view of the new grounds of rejection set forth below.
Regarding the rejection of all remaining corresponding claims, applicant’s argument submitted on Page 15 relies on the alleged deficiencies in the rejection of parent Claims 1 and 12. Applicant’s argument is therefore moot for the same reasons detailed above.
Claim Objections
Claim 1 is objected to because of the following informalities: minor grammatical errors. The claim should be amended to “[…] at least a portion of the fetal organ; and an image analysis module […] a valid images list; and provide the valid images list […]” in order to make sense grammatically. Appropriate correction is required.
Claim 3 is objected to because of the following informalities: minor error in antecedent basis. The claim should be amended to “[[the]] a fetal heart” in order to establish proper antecedent basis. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 12-14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mienkina et al. (US 20220071595) in view of Himsl et al. (US 20180129782).
Regarding Claims 1, 12, and 17, Mienkina teaches a system for guiding a user in ultrasound assessment of a fetal organ, ([0027] “ultrasound system 100” and [0058] “an operator of an ultrasound system 100 may select, via a user input device 130, an examination type, such as an obstetric fetal examination”), said ultrasound assessment being based on an ultrasound image sequence comprising multiple predefined required views, (Fig. 13 “1304: Acquire real-time ultrasound images and an instruction to freeze an acquired ultrasound image view”), of the fetal organ, ([0058] “For example, the protocol for a second trimester obstetric fetal examination may include a number of pre-defined views, such as a head transcerebellar plane view, a profile sagittal plane view, a face coronal plane view, a sagittal spine view, a four chamber heart view, and the like.”), said system comprising:
a) an input module, ([0036] “the signal processor 132 may comprise a view detection processor 140, an anatomical structure detection processor 150”), configured to receive in real time a sequence of two-dimensional ultrasound images comprising multiple predefined required views of the fetal organ, wherein each image comprises at least a portion of the fetal organ ([0058] “For example, the protocol for a second trimester obstetric fetal examination may include a number of pre-defined views, such as a head transcerebellar plane view, a profile sagittal plane view, a face coronal plane view, a sagittal spine view, a four chamber heart view, and the like. Each of the pre-defined views may include criteria for being protocol adherent, such as the presence of certain anatomical features, image features, and the like. As an example, a protocol adherent head transcerebellar plane view of a second trimester obstetric fetal examination may include anatomical features, such as a cerebellum, cavum septum pellucidum, cisterna magna, midline falx, and brain symmetry” and Fig. 13 “1304: Acquire real-time ultrasound images and an instruction to freeze an acquired ultrasound image view”); and
b) an image analysis module, ([0036] “the signal processor 132 may comprise a view detection processor 140, an anatomical structure detection processor 150”), configured to:
i) provide each image as input to a second classifier of the image analysis structure, said second classifier being configured to detect the presence in the image of predefined fetal anatomical landmarks ([0040] “the anatomical structure detection processor 150 may include, for example, artificial intelligence image analysis algorithms, one or more deep neural networks (e.g., a convolutional neural network such as u-net) and/or may utilize any suitable form of artificial intelligence image analysis techniques or machine learning processing functionality configured to determine the presence and absence of the features in the acquired ultrasound image view.”);
ii) whenever the second classifier identifies that a predefined number of fetal anatomical landmarks associated to the identified view's category are present in the image, ([0044] “the markers 224, 226 and list indicators 232, 234 may correspond with, for example, a cerebellum, cavum septum pellucidum, cisterna magna, midline falx, brain symmetry, and a particular magnification of the acquired ultrasound image view 210 as defined by a protocol for a head transcerebellar plane view of a second trimester obstetric fetal examination.”), adding said image to a valid images list ([0035] “the processed image data […] may be stored at the archive 138. The archive 138 may be a local archive, a Picture Archiving and Communication System (PACS), an enterprise archive (EA), a vendor-neutral archive (VNA), or any suitable device for storing images and related information.”); and
iii) provide the valid images list to be used to perform a diagnostic evaluation of the fetal organ development during a successive medical examination ([0055] “The display system 134 can be operable to display information from the signal processor 132 and/or archive 138” and [0056] “The archive 138 may include databases, libraries, sets of information, or other storage accessed by and/or incorporated with the signal processor 132, for example. The archive 138 may be able to store data temporarily or permanently, for example. The archive 138 may be capable of storing medical image data”).
Furthermore, the cited actions are computer-implemented, which necessitates associated computer-readable media, as in [0077] (“Various embodiments may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.”).
However, Mienkina does not explicitly teach an image analysis module configured to: provide each image as input to an image analysis structure comprising at least one first classifier, said first classifier being configured to identify if the image belongs to any view's category comprised in a predefined list of view's categories and, if so, to identify the view's category to which belongs the image among said predefined list of view's categories; wherein each view's category is associated to at least one predefined fetal anatomical landmark; whenever the first classifier identifies that the image corresponds to one view's category of the predefined list of view's categories, adding said image to a valid images list;
In an analogous field of endeavor, namely saving medical image data, Himsl teaches a system for guiding a user in ultrasound assessment, ([0011] “an imaging system, such as the ultrasound imaging system”), of a fetal organ, ([0037] “In […] fetal cardiology, a sequence of images may be used to detect periodic motion typical of the […] fetal heart, and as such a different or modified image recognition algorithm may be used.”), said system comprising: an image analysis module, ([0014] “The system controller 116 may include an image-processing module that receives image data”), configured to:
a) provide each image as input to an image analysis structure comprising at least one first classifier, (Fig. 2 and [0027] “Classifier 202 may be a module in the system controller (such as the classification module of the controller 116”), said first classifier being configured to identify if the image belongs to any view's category comprised in a predefined list of view's categories and, if so, to identify the view's category to which belongs the image among said predefined list of view's categories; wherein each view's category is associated to at least one predefined fetal anatomical landmark ([0021] “the image recognition module may use a model or algorithm stored within a memory of the controller, such as a shape detection algorithm, to recognize the anatomical feature and apply a corresponding tag” and [0028] “Based on the received medical images and exam context, classifier 202 may generate one or more tags corresponding to the identified features of the images and apply the tags to individual images and/or images in a cine”); and
b) whenever the first classifier identifies that the image corresponds to one view's category of the predefined list of view's categories, ([0036] “At 308, the method includes classifying the image(s) with one or more tags. […] a tag may specify an anatomical feature, as indicated at 310. The anatomical feature may be an organ (e.g., a heart) or a region of an organ (e.g., an aorta of a heart), for example. Such an analysis may be useful in obstetrics, for example, where features of the fetus (e.g., head, heart) may be tagged.”), adding said image to a valid images list ([0043]-[0044] “Returning to 320, if a selective save request is received, the method proceeds to 324 and includes receiving the selection of one or more tags to designate the images to be saved. […] the controller may automatically select relevant tags based on the image acquisition protocol performed. At 326, the method includes saving the images that have been tagged with tag(s) that match the selected tag(s) designated at 326. The images with the selected tag(s) may be saved to an internal memory of the ultrasound system or to a remote memory (for example, a PACS server). In one example, an image will be saved if it is tagged with at least one of the selected tags. In another example, an image may be saved only if the image is tagged with all of the selected tags.”).
Furthermore, the cited actions are computer-implemented, which necessitates associated computer-readable media, as in [0023] (“‘Systems,’ ‘units,’ or ‘modules’ may include or represent hardware and associated instructions (e.g., software stored on a tangible and non-transitory computer readable storage medium, such as a computer hard drive, ROM, RAM, or the like) that perform one or more operations described herein. The hardware may include electronic circuits that include and/or are connected to one or more logic-based devices, such as microprocessors, processors, controllers, or the like. These devices may be off-the-shelf devices that are appropriately programmed or instructed to perform operations described herein from the instructions described above. Additionally or alternatively, one or more of these devices may be hard-wired with logic circuits to perform these operations.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to modify the teachings of Mienkina with the image analysis module of Himsl because the modification allows for a seamless saving process, in which an operator may selectively save categorized (tagged) images. This reduces the amount of medical image data storage and increases operator efficiency, as taught by Himsl in [0055].
Regarding Claims 2 and 13, the modified system of Mienkina teaches all limitations of Claim 1, as discussed above. Furthermore, Mienkina teaches wherein the input module is configured to further receive the predefined list of view's categories, ([0034] “The user input device 130 may be utilized to […] select examination types” and [0058] “At step 1302, an ultrasound examination may be initiated at an ultrasound system 100. For example, an operator of an ultrasound system 100 may select, via a user input device 130, an examination type, such as an obstetric fetal examination”), wherein each view's category is associated to a view landmarks list comprising at least one predefined view fetal anatomical landmark that should be visible in a view belonging to the view's category, ([0058] “[T]he selected examination type may be associated with an examination protocol defining a number of specific target views and criteria for adherence of the target views based on the presence of certain anatomical features. For example, the protocol for a second trimester obstetric fetal examination may include a number of pre-defined views, such as a head transcerebellar plane view, a profile sagittal plane view, a face coronal plane view, a sagittal spine view, a four chamber heart view, and the like. Each of the pre-defined views may include criteria for being protocol adherent, such as the presence of certain anatomical features, image features, and the like. As an example, a protocol adherent head transcerebellar plane view of a second trimester obstetric fetal examination may include anatomical features, such as a cerebellum, cavum septum pellucidum, cisterna magna, midline falx, and brain symmetry, and image features, such as a particular magnification of the acquired ultrasound image view.”).
Additionally, the cited actions are computer-implemented, which necessitates associated computer-readable media, as in [0077] (“Various embodiments may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.”).
Moreover, Himsl teaches wherein the image analysis module is configured to:
a) verify that the first classifier has identified that the image corresponds to one view's category of the predefined list of view's categories, ([0021] “the image recognition module may use a model or algorithm stored within a memory of the controller, such as a shape detection algorithm, to recognize the anatomical feature and apply a corresponding tag” and [0028] “Based on the received medical images and exam context, classifier 202 may generate one or more tags corresponding to the identified features of the images and apply the tags to individual images and/or images in a cine”), and that a predefined number of the at least one predefined view fetal anatomical landmark, comprised in the view landmarks list associated to the view's category detected by the first classifier, corresponds to the predefined fetal anatomical landmarks detected by the second classifier in the image, so as to evaluate the quality of the image of the identified view category, ([0040] “the exam context may include the type of exam being performed (such as a fetal anatomy scan, echocardiogram, abdominal ultrasound, etc.). As such, the controller will be given a context for the types of features to identify and may therefore limit the possible tags to apply to the resulting images based on the context. For example, if an abdominal ultrasound exam is performed, the controller may be programmed to identify organs such as the liver, gallbladder, and pancreas in the resulting ultrasound images. Similarly, as an abdominal ultrasound exam does not involve imaging the heart, the controller may not attempt to recognize and tag the heart in the resulting ultrasound images.”), and
b) add said image to the valid images list if both conditions are verified ([0043]-[0044] “Returning to 320, if a selective save request is received, the method proceeds to 324 and includes receiving the selection of one or more tags to designate the images to be saved. […] the controller may automatically select relevant tags based on the image acquisition protocol performed. At 326, the method includes saving the images that have been tagged with tag(s) that match the selected tag(s) designated at 326. The images with the selected tag(s) may be saved to an internal memory of the ultrasound system or to a remote memory (for example, a PACS server). In one example, an image will be saved if it is tagged with at least one of the selected tags. In another example, an image may be saved only if the image is tagged with all of the selected tags.”).
Additionally, the cited actions are computer-implemented, which necessitates associated computer-readable media, as in [0023] (“‘Systems,’ ‘units,’ or ‘modules’ may include or represent hardware and associated instructions (e.g., software stored on a tangible and non-transitory computer readable storage medium, such as a computer hard drive, ROM, RAM, or the like) that perform one or more operations described herein. The hardware may include electronic circuits that include and/or are connected to one or more logic-based devices, such as microprocessors, processors, controllers, or the like. These devices may be off-the-shelf devices that are appropriately programmed or instructed to perform operations described herein from the instructions described above. Additionally or alternatively, one or more of these devices may be hard-wired with logic circuits to perform these operations.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to modify the teachings of Mienkina with the image analysis module of Himsl because it reduces the amount of feature recognition and tagging required, thereby allowing the system to operate more efficiently, as taught by Himsl in [0040].
Regarding Claim 3, the modified system of Mienkina teaches all limitations of Claim 1, as discussed above. Furthermore, Mienkina teaches wherein the fetal organ is the fetal heart ([0038] “during a second trimester obstetric fetal examination, an associated protocol may define that a number of views be acquired, such as […] a four chamber heart view”).
Regarding Claim 4, the modified system of Mienkina teaches all limitations of Claim 1, as discussed above. Furthermore, Mienkina teaches wherein in the image analysis module the image analysis structure comprises a first stage employing a convolutional neural network, ([0038] “the view detection processor 140 may include, for example, artificial intelligence image analysis algorithms, one or more deep neural networks (e.g., a convolutional neural network such as u-net) and/or may utilize any suitable form of artificial intelligence image analysis techniques or machine learning processing functionality configured to detect a view of the acquired ultrasound image view.”).
Moreover, Himsl teaches wherein the first classifier of the image analysis structure comprises a second stage employing a fully connected neural network receiving as input at least a portion of the output of the first stage of the image analysis structure ([0037] “The image recognition algorithms employed may be different for different types of imaging data. […] Conventional approaches such as convolutional neural networks may be adapted to the 3D data sets”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to modify the teachings of Mienkina with the first classifier of Himsl because the modification improves identification of complex and non-linear relationships in data and further enables automated analysis of the images.
Regarding Claim 5, the modified system of Mienkina teaches all limitations of Claim 1, as discussed above. Furthermore, Mienkina teaches wherein the system further comprises a manual input module configured to receive at least one image provided manually by the user and whenever said image is provided by the user manually, ([0034] “the user input device 130 may be operable to configure, manage and/or control operation of […] the archive 138”), the image analysis module is further configured to provide the image as input to the second classifier of the image analysis structure and whenever the second classifier identifies that a predefined number of fetal anatomical landmarks associated to the identified view's category are present in the image, ([0044] “the markers 224, 226 and list indicators 232, 234 may correspond with, for example, a cerebellum, cavum septum pellucidum, cisterna magna, midline falx, brain symmetry, and a particular magnification of the acquired ultrasound image view 210 as defined by a protocol for a head transcerebellar plane view of a second trimester obstetric fetal examination.”), adding said image to a valid images list ([0035] “the processed image data […] may be stored at the archive 138. The archive 138 may be a local archive, a Picture Archiving and Communication System (PACS), an enterprise archive (EA), a vendor-neutral archive (VNA), or any suitable device for storing images and related information.”).
Moreover, Himsl teaches wherein the system further comprises a manual input module configured to receive at least one image provided manually by the user and whenever said image is provided by the user manually, ([0018] “The system controller 116 is operably connected to a user interface 122 that enables an operator to control at least some of the operations of the system 100” and [0020] “The system controller 116 may also house an image-recognition module, which accesses stored images/videos (i.e., an image library) from either or both of the memory 114 and the memory 120”), the image analysis module is further configured to provide the image as input to the first classifier of the image analysis structure, ([0027] “Classifier 202 may be a module in the system controller (such as the classification module of the controller 116”), and whenever the first classifier identifies that the image corresponds to one view's category of the predefined list of view's categories, ([0036] “At 308, the method includes classifying the image(s) with one or more tags. […] a tag may specify an anatomical feature, as indicated at 310. The anatomical feature may be an organ (e.g., a heart) or a region of an organ (e.g., an aorta of a heart), for example. Such an analysis may be useful in obstetrics, for example, where features of the fetus (e.g., head, heart) may be tagged.”), adding said image to a valid images list ([0043]-[0044] “Returning to 320, if a selective save request is received, the method proceeds to 324 and includes receiving the selection of one or more tags to designate the images to be saved. […] the controller may automatically select relevant tags based on the image acquisition protocol performed. At 326, the method includes saving the images that have been tagged with tag(s) that match the selected tag(s) designated at 326. 
The images with the selected tag(s) may be saved to an internal memory of the ultrasound system or to a remote memory (for example, a PACS server). In one example, an image will be saved if it is tagged with at least one of the selected tags. In another example, an image may be saved only if the image is tagged with all of the selected tags.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to modify the teachings of Mienkina with the image analysis module of Himsl because the modification allows for a seamless saving process, in which an operator may selectively save categorized (tagged) images. This reduces the amount of medical image data storage and increases operator efficiency, as taught by Himsl in [0055].
Regarding Claim 6, the modified system of Mienkina teaches all limitations of Claim 5, as discussed above. Furthermore, Mienkina teaches wherein, when the at least one image provided manually by the user is not validated by the second classifier, the image analysis module is further configured to provide the image as input to an object detector of the image analysis structure comprising a fourth stage configured to receive as input at least a portion of the output of the first stage of the image analysis structure and comprising a region-based fully convolutional neural network architecture being configured to perform segmentation of the image, so as to classify and localize fetal anatomical landmarks in the image ([0038] “The neurons of a third layer may learn positions of the recognized shapes relative to landmarks in the image data” and [0060] “if the detected view is an unknown view or otherwise not one of the target views, the process 1300 may return to step 1304 to acquire a different ultrasound image view 210.”).
Regarding Claim 14, the modified method of Mienkina teaches all limitations of Claim 12, as discussed above. Furthermore, Mienkina teaches wherein, when the valid images list comprises all the predefined required views of the fetal organ, the method further comprises providing a message to inform the user that the valid images list comprises all the predefined required views of the fetal organ ([0060] “if the detected view is an unknown view or otherwise not one of the target views, the process 1300 may return to step 1304 to acquire a different ultrasound image view 210.”).
Claims 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Mienkina et al. (US 20220071595) in view of Himsl et al. (US 20180129782), as applied to Claim 1 above, and further in view of Canfield et al. (US 20210137416).
Regarding Claim 7, the modified system of Mienkina teaches all limitations of Claim 1, as discussed above. However, the modified system of Mienkina does not explicitly teach a diagnostic module that when the valid images list comprises all the predefined required views of the fetal organ is configured to: provide a stack of one image of the valid images list as input to a diagnostic structure, wherein the diagnostic structure comprises a first stage employing a convolutional neural network receiving as input the stack of images and providing an output and wherein said diagnostic structure comprises a first classifier of the diagnostic structure employing, at a second stage, a fully connected neural network receiving as input at least a portion of the output of the first stage of the diagnostic structure; the first classifier of the diagnostic structure being configured to discriminate between pathological development and physiological development of the fetal organ; whenever the output of the first classifier of the diagnostic structure categorizes the image as comprising a pathological development, providing the image as input to: a second classifier of the diagnostic structure comprising a third stage employing a fully connected neural network receiving as input at least a portion of the output of the first stage of the diagnostic structure and being configured to classify the pathological development into at least one pathology category; an object detector of the diagnostic structure comprising a fourth stage configured to receive as input at least a portion of the output of the first stage of the diagnostic structure and comprising a fully convolutional neural network being configured to perform segmentation of the image and localization of at least one pathological development region in the fetal organ; output module configured to: output to the user the at least one pathology category obtained from the second classifier of the diagnostic structure and the 
result of the image segmentation of the image and localization of the pathological development region in the fetal organ obtained from the object detector of the diagnostic structure; output to the user a message to end examination, whenever the output of the first classifier of the diagnostic structure categorizes the image as comprising a physiological development.
In an analogous field of endeavor, namely adaptive ultrasound scanning, Canfield teaches a system for guiding a user in ultrasound assessment of a fetal organ so as to perform a diagnostic evaluation of the fetal organ development during a medical examination, (Abstract “The present disclosure describes imaging systems configured to generate adaptive scanning protocols based on anatomical features and conditions identified during a prenatal scan of an object. Systems may include an ultrasound transducer configured to acquire echo signals responsive to ultrasound pulses transmitted toward a target region.”), said ultrasound assessment being based on an ultrasound image sequence comprising multiple predefined required views of the fetal organ, ([0027] “the scan protocol 138 may convey the presence of an anatomical feature within a current image frame 124, and in some embodiments, whether the image of such feature is sufficient to accurately measure the feature or whether additional images of the feature are needed” and [0032] “classify images into distinct categories, for example “full view,” “head,” “abdominal,” “chest,” or “extremities.” Sub-categories can include, for example, “stomach,” “bowel,” “umbilical cord,” “kidney,” “bladder,” “legs,” “arms,” “hands,” “femur,” “spine,” “heart,” “lungs,” “stomach,” “bowel,” “umbilical cord,” “kidney,” or “bladder.””), comprising:
a) a diagnostic module, ([0027] “data processor 126”), that, when the valid images list comprises all the predefined required views of the fetal organ, ([0027] “a stream of discrete ultrasound image frames 124”), is configured to:
i) provide a stack of one image of the valid images list as input to a diagnostic structure, ([0027] “The first neural network 128 can be configured to receive the image frames 124, […] via the data processor 126”), wherein the diagnostic structure comprises a first stage employing a convolutional neural network receiving as input the stack of images, ([0040] “the neural network 128 is a convolutional neural network (CNN)”), and providing an output and wherein said diagnostic structure comprises a first classifier of the diagnostic structure employing, at a second stage, a fully connected neural network receiving as input at least a portion of the output of the first stage of the diagnostic structure ([0033] “Output generated by the neural network 128 can be input into a second neural network 130, which in some examples, comprises a convolutional neural network (CNN) configured to receive multiple input types. For example, input to the second neural network 130 can include the organ/view classification received from the first neural network 128, along with binary classifications of motion detection, approximated fetal position data, and/or a list of measurements to be obtained in accordance with a stored scan protocol. 
From these inputs, the second neural network 130 can determine and output a suggested next measurement to be obtained in accordance with the required measurements of a prenatal assessment protocol.”); the first classifier of the diagnostic structure being configured to discriminate between pathological development, ([0041] “the neural network 128 can be configured to determine whether an abnormality is present in an image frame.”), and physiological development of the fetal organ ([0032] “the data processor 126 can be configured to implement a neural network 128, which can be configured to classify images into distinct categories, for example “full view,” “head,” “abdominal,” “chest,” or “extremities.” Sub-categories can include, for example, “stomach,” “bowel,” “umbilical cord,” “kidney,” “bladder,” “legs,” “arms,” “hands,” “femur,” “spine,” “heart,” “lungs,” “stomach,” “bowel,” “umbilical cord,” “kidney,” or “bladder.” Classification results determined by the neural network 128 can be adaptive to a current ultrasound region of interest and/or the completed measurements within the prenatal assessment protocol.”);
ii) whenever the output of the first classifier of the diagnostic structure categorizes the image as comprising a pathological development, providing the image as input to:
1) a second classifier of the diagnostic structure comprising a third stage employing a fully connected neural network, ([0027] “According to such embodiments, the output generated by the first neural network 128 may still be input into the second neural network 130, but the two networks may constitute sub-components of a larger, layered network, for example.”), receiving as input at least a portion of the output of the first stage of the diagnostic structure and being configured to classify the pathological development into at least one pathology category ([0048] “In the event that an abnormality is detected, the user interface 500 may provide an instruction to hold the transducer steady at one location, thereby allowing further analysis. Slight adjustments in the imaging angle may also be recommended to more thoroughly characterize a detected abnormality.”);
2) an object detector of the diagnostic structure comprising a fourth stage configured to receive as input at least a portion of the output of the first stage of the diagnostic structure and comprising a fully convolutional neural network, ([0027] “According to such embodiments, the output generated by the first neural network 128 may still be input into the second neural network 130, but the two networks may constitute sub-components of a larger, layered network, for example.”), being configured to perform segmentation of the image and localization of at least one pathological development region in the fetal organ ([0034] “an adaptive scan protocol 138 that includes a list of required fetal measurements, each of which may be accompanied by a status indicator showing whether or not each measurement has been obtained. The user interface 134 may be configured to display and update the adaptive scan protocol 138 in real time as an ultrasound scan is being performed. In some examples, the user interface 134 may be further configured to display instructions 139 for adjusting the data acquisition unit 110 in the manner necessary to obtain the next recommended measurements. The user input 140 received at the user interface 134 can be in the form of a manual confirmation that a particular measurement has been obtained. In some embodiments, the user input 140 may comprise agreement or disagreement with a next recommended measurement. In this manner, a user may override a recommended measurement. In some examples, the user input 140 can include instructions for implementing particular operational parameters necessary for imaging and/or measuring specific anatomical features, e.g., biparietal diameter, occipito-frontal diameter, head circumference, abdominal circumference, femur length, amniotic fluid index, etc.”);
b) output module, ([0045] “user interface 500”), configured to:
i) output to the user the at least one pathology category obtained from the second classifier of the diagnostic structure, ([0048] “In the event that an abnormality is detected, the user interface 500 may provide an instruction to hold the transducer steady at one location, thereby allowing further analysis. Slight adjustments in the imaging angle may also be recommended to more thoroughly characterize a detected abnormality.”), and the result of the image segmentation of the image and localization of the pathological development region in the fetal organ obtained from the object detector of the diagnostic structure ([0045] “a list of measurements 512 and calculations 514 may also be displayed.”);
ii) output to the user a message to end examination, whenever the output of the first classifier of the diagnostic structure categorizes the image as comprising a physiological development ([0046] “For example, completed measurements can be colored green in the worklist 510, while the next recommended measurements can be colored red, and the current measurement colored blue. In some embodiments, a confidence level associated with a current image view classification and/or the suggested next measurement may also be displayed, for example as a component of the current view description 504.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to further modify with the diagnostic module of Canfield because doing so eliminates the need for an operator to mentally keep track of incomplete and complete tasks while also diagnosing, a process which can be influenced by human error, as taught by Canfield in [0029].
Regarding Claim 8, the modified system of Mienkina teaches all limitations of Claim 7, as discussed above. Furthermore, Canfield teaches wherein the fully convolutional neural network of the fourth stage of the object detector is based on a region-based fully convolutional neural network architecture ([0033] “Output generated by the neural network 128 can be input into a second neural network 130, which in some examples, comprises a convolutional neural network (CNN) configured to receive multiple input types.” Where because the CNN of Canfield is utilized to detect objects, it employs a region-based fully convolutional neural network architecture.).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to further modify with the teachings of Canfield because an R-CNN provides improved accuracy and flexibility in object detection tasks.
Regarding Claim 9, the modified system of Mienkina teaches all limitations of Claim 7, as discussed above. Furthermore, Canfield teaches wherein the first classifier of the diagnostic structure, the second classifier of the diagnostic structure and the object detector of the diagnostic structure are configured to receive as input a stack of images comprising at least one image ([0027] “The first neural network 128 can be configured to receive the image frames 124”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to further modify with the teachings of Canfield for the same reasons as Claim 7.
Regarding Claim 10, the modified system of Mienkina teaches all limitations of Claim 7, as discussed above. Furthermore, Canfield teaches wherein the first stage convolutional neural networks of the image analysis structure and of the diagnostic structure have at least one common layer, defined during training ([0024] “the neural network(s) may be trained using any of a variety of currently known or later developed machine learning techniques to obtain a neural network (e.g., a machine-trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound image frames and identify certain features, including the presence and in some embodiments, the size, of one or more prenatal anatomical features” and [0037] “Extracted features may be input into a recurrent neural network, for example, which can be trained to determine a fetal position and/or orientation based on the features identified.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to further modify with the teachings of Canfield for the same reasons as Claim 7.
Regarding Claim 11, the modified system of Mienkina teaches all limitations of Claim 10, as discussed above. Furthermore, Canfield teaches wherein the image analysis structure and the diagnostic structure result from a simultaneous training, ([0024] “the neural network(s) may be trained using any of a variety of currently known or later developed machine learning techniques to obtain a neural network (e.g., a machine-trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound image frames and identify certain features, including the presence and in some embodiments, the size, of one or more prenatal anatomical features.”), notably semi-supervised ([0043] “the training may be supervised.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to further modify with the teachings of Canfield because the combination enhances the performance of the processors.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA CHRISTINA TALTY whose telephone number is (571)272-8022. The examiner can normally be reached M-Th 8:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mike Carey can be reached at (571) 270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARIA CHRISTINA TALTY/Examiner, Art Unit 3797
/MICHAEL J CAREY/Supervisory Patent Examiner, Art Unit 3795