Prosecution Insights
Last updated: April 19, 2026
Application No. 18/450,196

HYBRID 3D-TO-2D SLICE-WISE OBJECT LOCALIZATION ENSEMBLES

Final Rejection (§103)

Filed: Aug 15, 2023
Examiner: FITZPATRICK, ATIBA O
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: GE Precision Healthcare LLC
OA Round: 2 (Final)

Grant Probability: 88% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 8m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% — above average (775 granted / 881 resolved; +26.0% vs TC avg)
Interview Lift: +4.9% — minimal (~+5%) lift on resolved cases with an interview
Typical Timeline: 2y 8m avg prosecution; 27 currently pending
Career History: 908 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)

Tech Center averages are estimates, based on career data from 881 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

Considering claims 17-20, the broadest reasonable interpretation (BRI) of the limitation "computer-readable memory" (emphasis added) does not include transitory signals. The BRI of a "computer-readable memory" is the physical component where a computer stores data and instructions to be accessed by the CPU (central processing unit). Thus, claims 17-20 fall into one of the statutory categories under 35 U.S.C. § 101.

Response to Amendments

The amendments overcome the 35 U.S.C. § 101 (abstract idea) rejections. The amendments overcome the 35 U.S.C. § 102 rejections of claims 1, 2, 8, 9, 10, and 16, but necessitate 35 U.S.C. § 103 rejections.

Response to Arguments

Applicant's arguments with respect to the prior art rejections have been considered but are moot in view of the new ground(s) of rejection necessitated by amendment.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 8, 9, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0414882 A1 (Petrovich) in view of US 2022/0378395 A1 (Yang).

As per claim 9, Petrovich teaches a computer-implemented method, comprising: accessing, by a device operatively coupled to a processor, at least one three-dimensional voxel array (Petrovich: abstract: "receiving a 3D image of a body region of the patient"; para 20: "CT apparatus 3 acquires, in a per se known way, a 3D image 15 of a body region of a patient 11"; para 24: "In a step S01 of the identification method 20, the input data are acquired through the CT apparatus 3 and are received by the computer 5, as previously described."; Fig. 3: mainly S01); and localizing, by the device and via execution of a deep learning ensemble, an object depicted in the at least one three-dimensional voxel array, wherein the deep learning ensemble receives as input the at least one three-dimensional voxel array (Petrovich: abstract: "extracting 2D axial images of the 3D image taken along respective axial planes, 2D sagittal images of the 3D image taken along respective sagittal planes, and 2D coronal images of the 3D image taken along respective coronal planes; applying an axial neural network to each 2D axial image to generate a respective 2D axial probability map, a sagittal neural network to each 2D sagittal image to generate a respective 2D sagittal probability map, and a coronal neural network to each 2D coronal image to generate a respective 2D coronal probability map"; Fig. 3: mainly S03; Fig. 4: mainly 13, 13a-13b), and wherein the deep learning ensemble is configured to: (ii) process the voxel array in a manner that preserves spatial correspondence along a slicing axis (Petrovich: para 25; paras 28-32: the 3D voxel array is decomposed into individual slices; each 2D slice is independently fed into a 2D neural network; each network processes each slice independently to produce a per-slice 2D probability map; spatial correspondence along a slicing axis is preserved in that the same number of slices is maintained from start to finish, so that the probability maps have 1-to-1 correspondence with the input slices), and (iii) produce, as a primary output, a set of two-dimensional object location indicators each indexed to a respective two-dimensional slice of the three-dimensional voxel array (Petrovich: abstract: "applying an axial neural network to each 2D axial image to generate a respective 2D axial probability map, a sagittal neural network to each 2D sagittal image to generate a respective 2D sagittal probability map, and a coronal neural network to each 2D coronal image to generate a respective 2D coronal probability map; generating, based on the 2D probability maps, a 3D mask of the coronary sinus of the patient."; Fig. 3: mainly S05-S07; Fig. 4: mainly 22-30).

Petrovich does not teach (i) perform hybrid three-dimensional and two-dimensional convolutional processing on the at least one three-dimensional voxel array.

Yang teaches (i) perform hybrid three-dimensional and two-dimensional convolutional processing on the at least one three-dimensional voxel array (Yang: para 98), and (ii) process the voxel array in a manner that preserves spatial correspondence along a slicing axis (Yang: "[0059] The present invention relies upon a concept of dimensionally reducing 3D data (having dimensions width, height and depth) into 2D data (having only two of these three dimensions). This effectively results in the 3D data being projected into a 2D plane. The reduction may be performed along a single axis or dimension of the 3D data, e.g. along the width dimension/axis, along the height dimension/axis or along the depth dimension/axis."; "[0060] It will be understood that, if a 2D image is a projection of the 3D image along a particular direction/axis, the 2D image thereby provides information on the 3D image within the other two axes of the 3D image. For example, if a 2D image is produced by reducing a 3D image along a depth dimension, then the produced 2D image will represent the 3D image along the width and height axes (i.e. the dimensions of the 2D image will be 'width' and 'height')."; paras 74-77, 96: reducing along the depth axis while preserving the width/height dimension information preserves spatial correspondence along a slicing axis).

Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Yang into Petrovich, since both Petrovich and Yang suggest a practical solution and field of endeavor of reducing the computational burden in detecting and locating objects in 3D medical images using CNNs in general, and Yang additionally provides teachings that can be incorporated into Petrovich in that the system performs hybrid three-dimensional and two-dimensional convolutional processing on the at least one three-dimensional voxel array so as to "accelerate the detection efficiency and simplify the algorithm designation or hardware construction" (Yang: para 58). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
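For illustration only, the slice-wise scheme the rejection maps to Petrovich (decompose the 3D voxel array into 2D slices, run a 2D network on each slice independently, and keep a 1-to-1 map-to-slice correspondence) can be sketched as follows; `toy_2d_net` is a hypothetical stand-in for a trained 2D network, not anything disclosed in the cited references:

```python
import numpy as np

def slice_wise_localize(volume, slice_model):
    """Apply a 2D model independently to every slice of a 3D volume.

    Stacking the per-slice outputs keeps the number of slices unchanged,
    so each 2D probability map stays indexed to its source slice -- the
    1-to-1 correspondence relied on for limitation (ii).
    """
    return np.stack([slice_model(s) for s in volume])

def toy_2d_net(slice_2d):
    """Hypothetical stand-in for a 2D network: rescale one slice into a
    [0, 1] 'probability map' of the same height and width."""
    span = slice_2d.max() - slice_2d.min()
    return (slice_2d - slice_2d.min()) / (span if span else 1.0)

volume = np.random.rand(40, 64, 64)   # 40 axial slices of 64x64 voxels
maps = slice_wise_localize(volume, toy_2d_net)
assert maps.shape == volume.shape     # one 2D map per input slice
```

The point of the sketch is only the bookkeeping: the output stack has exactly as many 2D maps as the input has slices.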
As per claim 10, Petrovich in view of Yang teaches the computer-implemented method of claim 9, wherein: the deep learning ensemble comprises a first deep learning neural network, a second deep learning neural network, and a third deep learning neural network that are in parallel with each other; the at least one three-dimensional voxel array comprises a first three-dimensional voxel array made up of axial two-dimensional slices, a second three-dimensional voxel array made up of coronal two-dimensional slices, and a third three-dimensional voxel array made up of sagittal two-dimensional slices; the first deep learning neural network receives as input the first three-dimensional voxel array and produces as output, for each of the axial two-dimensional slices of the first three-dimensional voxel array, a respective one of the set of two-dimensional object location indicators; the second deep learning neural network receives as input the second three-dimensional voxel array and produces as output, for each of the coronal two-dimensional slices of the second three-dimensional voxel array, a respective one of the set of two-dimensional object location indicators; and the third deep learning neural network receives as input the third three-dimensional voxel array and produces as output, for each of the sagittal two-dimensional slices of the third three-dimensional voxel array, a respective one of the set of two-dimensional object location indicators (Petrovich: See arguments and citations offered in rejecting claim 9 above: mainly Fig. 4, paras 25, 28-30, 31).

As per claim 16, Petrovich in view of Yang teaches the computer-implemented method of claim 9, wherein the object is an anatomical structure of a medical patient (Petrovich: See arguments and citations offered in rejecting claim 9 above).

As per claim(s) 1, 2, and 8, arguments made in rejecting claim(s) 9, 10, and 16 are analogous, respectively.
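The three-orientation decomposition recited in claim 10 (axial, coronal, and sagittal stacks feeding three parallel 2D networks) amounts to re-indexing one volume along each of its axes. A minimal sketch, assuming volume axes ordered (z, y, x):

```python
import numpy as np

def reslice(volume):
    """Re-index a 3D volume (z, y, x) into axial, coronal, and sagittal
    slice stacks. Each stack is a transpose of the same voxel data, with
    the chosen slicing axis moved to the front."""
    axial    = volume                     # slices along z: (z, y, x)
    coronal  = volume.transpose(1, 0, 2)  # slices along y: (y, z, x)
    sagittal = volume.transpose(2, 0, 1)  # slices along x: (x, z, y)
    return axial, coronal, sagittal

vol = np.random.rand(30, 40, 50)
ax, co, sa = reslice(vol)
assert ax.shape == (30, 40, 50)
assert co.shape == (40, 30, 50)
assert sa.shape == (50, 30, 40)
```

Each stack would then be fed slice-by-slice to its own 2D network, yielding one location indicator per slice of that orientation.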
Petrovich also teaches a system, comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19).

Claim(s) 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Petrovich in view of Yang as applied to claims 1 and 9 above, and further in view of US 2022/0198213 A1 (Kim).

As per claim 12, Petrovich in view of Yang teaches the computer-implemented method of claim 9, wherein: the deep learning ensemble comprises a first deep learning neural network, a second deep learning neural network, and a third deep learning neural network that are in parallel with each other; the at least one three-dimensional voxel array comprises a single three-dimensional voxel array made up of axial two-dimensional slices; the first deep learning neural network receives as input the single three-dimensional voxel array and produces as output, for each of the axial two-dimensional slices of the single three-dimensional voxel array, a respective one of the set of two-dimensional object location indicators; and (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19).

Petrovich in view of Yang does not teach that the second deep learning neural network and the third deep learning neural network are idle.
Kim teaches that the deep learning ensemble comprises a first deep learning neural network, a second deep learning neural network, and a third deep learning neural network that are in parallel with each other, and that the second deep learning neural network and the third deep learning neural network are idle (Kim: para 133: "According to an exemplary embodiment of the present disclosure, the computing device 100 may determine one or more anomaly detection sub models for computing the input data among the plurality of generated anomaly detection sub models (420)."; para 138: "According to an exemplary embodiment of the present disclosure, the computing device 100 may include a logic 510 for generating an anomaly detection model including a plurality of anomaly detection sub models having a pre-learned network function through using a plurality of training data subsets included in a training data set, a logic 520 for determining one or more anomaly detection sub models for calculating an input data among the generated anomaly detection sub models, and a logic 530 for judging whether or not the anomaly is existed in the input data through using the one or more determined anomaly detection sub models.").
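The pattern attributed to Kim (select one sub-model to run while the remaining sub-models go unused) reduces to a simple dispatch. The sub-model names and call-counting stubs below are illustrative assumptions, not Kim's actual interface:

```python
def run_ensemble(inputs, models, active):
    """Invoke only the selected sub-models; non-selected ones stay idle."""
    return {name: model(inputs[name])
            for name, model in models.items()
            if name in active}

calls = {"axial": 0, "coronal": 0, "sagittal": 0}

def make_stub(name):
    def stub(x):
        calls[name] += 1          # record that this sub-model actually ran
        return f"maps-from-{name}"
    return stub

models = {n: make_stub(n) for n in calls}
inputs = {n: object() for n in calls}

out = run_ensemble(inputs, models, active={"axial"})
assert out == {"axial": "maps-from-axial"}
assert calls == {"axial": 1, "coronal": 0, "sagittal": 0}  # others idle
```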
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Kim into Petrovich in view of Yang, since both Petrovich in view of Yang and Kim suggest a practical solution and field of endeavor of an ensemble deep learning computer vision system for detecting anomalies in general, and Kim additionally provides teachings that can be incorporated into Petrovich in view of Yang in that only one sub-model of the ensemble deep learning system is used so as to "determine an optimal anomaly detection sub model" (emphasis added; Kim: para 73). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim(s) 4, arguments made in rejecting claim(s) 12 (and base claim 9) are analogous, respectively. Petrovich also teaches a system, comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19).

Claim(s) 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Petrovich in view of Yang as applied to claims 1 and 9 above, and further in view of US 2022/0222812 A1 (Li) and US 2021/0216822 A1 (Paik).
As per claim 13, Petrovich in view of Yang teaches the computer-implemented method of claim 9, wherein the deep learning ensemble generates a set of confidence scores respectively corresponding to the set of two-dimensional object location indicators, and further comprising:

Petrovich in view of Yang does not teach rendering, by the device and on an electronic display, a message indicating that the object is present in the at least one three-dimensional voxel array.

Li teaches rendering, by the device and on an electronic display, … in response to at least one of the set of confidence scores exceeding a threshold (Li: para 23: "classification information of other types of lesions"; para 51: "Each classification branch may produce an image classification result (e.g., a probability that the lung has pneumonia), and the image classification results are fused to yield the final prediction result"; para 52: "prediction result may be output from the input/output device 708 for presentation to a user such as clinician, patient"; para 29: "the prediction result may be a diagnosis result of pneumonia, e.g., the probability of the lung has pneumonia. According to an exemplary probability fusion algorithm, the average probability of the three classification branches is calculated, and a threshold may be set to 0.5. When the average probability is equal to or greater than 0.5, it is determined that the prediction result is that the patient has pneumonia. When the average probability is less than 0.5, it is determined that the prediction result is that the patient does not have pneumonia. In another example, the fusion unit 203 may also use, for example, a voting fusion algorithm to vote the prediction results of the three classification branches, and the one with the most votes is determined as the final classification"; para 25: "FIG. 1 shows transverse, sagittal and coronal images of a target area of lung of the same patient. As shown in FIG. 1, (a) is a transverse image, (b) is a sagittal image, and (c) is a coronal image. The white circles marked in the three images indicate the same pneumonia lesion area that is presented in the three different views, i.e., transverse, sagittal and coronal sectional views.").

Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Li into Petrovich in view of Yang, since both Petrovich in view of Yang and Li suggest a practical solution and field of endeavor of ensemble deep learning with parallel sub-models for each of axial, coronal, and sagittal orientations, respectively, for detecting anomalies in the different orientations in general, and Li additionally provides teachings that can be incorporated into Petrovich in view of Yang in that display is in response to at least one of the set of confidence scores exceeding a threshold, so that "When the average probability is equal to or greater than 0.5, it is determined that the prediction result is that the patient has pneumonia" (Li: para 29). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
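Li's exemplary probability-fusion algorithm (para 29: average the three branch probabilities and compare against a 0.5 threshold) reduces to a few lines. A minimal sketch, with the branch values chosen purely for illustration:

```python
def fused_prediction(branch_probs, threshold=0.5):
    """Average per-branch probabilities (Li, para 29) and report whether
    the fused score meets the positive-finding/display threshold."""
    avg = sum(branch_probs) / len(branch_probs)
    return avg, avg >= threshold

# Three branch probabilities (e.g., transverse, sagittal, coronal views)
avg, positive = fused_prediction([0.7, 0.4, 0.6])
assert abs(avg - 17 / 30) < 1e-9   # average is about 0.567
assert positive                    # meets the 0.5 threshold
```

Li's alternative voting-fusion scheme would replace the average with a majority vote over the per-branch binary decisions.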
Paik teaches a message indicating that the object is present in the at least one three-dimensional voxel array (Paik: para 4: “Medical image interpretation is a process in which the clinician receives as inputs, the set of medical images and associated clinical information (e.g., medical history, indication for imaging), and produces a textual report that includes findings”; Para 10: “a display configured to show a graphical user interface for evaluating a medical image; (c) a non-transitory computer readable storage medium encoded with a computer program that causes said processor to: (i) generate a medical report including a computer-generated finding related to said medical image when a user accepts inclusion of said computer-generated finding within said report.”; Para 62: “Medical images can be visualized on a display, and a user such as a radiologist is able to interpret the images using a streamlined process to efficiently generate findings for insertion into a medical report.”; Para 71: “analyzing, in response to an instruction from a user, a medical image using a machine learning software module, thereby generating a computer-finding; (b) providing said user an option to incorporate said computer-finding into a medical report”; Para 72: “Segmentation provides a representation of a medical image that can be medically relevant, and thus suitable for inclusion within an analysis or report.”; Para 127: “AI findings/diagnoses, with the associated medical text that are suggested for insertion into the diagnostic report.”). 
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Paik into Petrovich in view of Yang and Li, since both Petrovich in view of Yang and Li, and Paik, suggest a practical solution and field of endeavor of deep learning computer vision for detecting anomalies in medical tomographic images in general, and Paik additionally provides teachings that can be incorporated into Petrovich in view of Yang and Li in that a message is provided indicating that the object is present, since "the interpretation of medical images is a predominantly manual process with limits on speed and efficiency because of the reliance on a human to interpret the image and laboriously enter findings into the medical report" (Paik: para 2). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim(s) 5, arguments made in rejecting claim(s) 13 (and base claim 9) are analogous, respectively. Petrovich also teaches a system, comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19).

Claim(s) 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Petrovich in view of Yang as applied to claims 1 and 9 above, and further in view of Li.
As per claim 14, Petrovich in view of Yang teaches the computer-implemented method of claim 9, wherein the deep learning ensemble generates a set of confidence scores respectively corresponding to the set of two-dimensional object location indicators, and further comprising: (Petrovich: See arguments and citations offered in rejecting claim 9 above).

Petrovich in view of Yang does not teach rendering, by the device and on an electronic display.

Li teaches rendering, by the device and on an electronic display, one or more of the set of … object location indicators that have confidence scores exceeding a threshold (Li: para 23: "classification information of other types of lesions"; para 51: "Each classification branch may produce an image classification result (e.g., a probability that the lung has pneumonia), and the image classification results are fused to yield the final prediction result"; para 52: "prediction result may be output from the input/output device 708 for presentation to a user such as clinician, patient"; para 29: "the prediction result may be a diagnosis result of pneumonia, e.g., the probability of the lung has pneumonia. According to an exemplary probability fusion algorithm, the average probability of the three classification branches is calculated, and a threshold may be set to 0.5. When the average probability is equal to or greater than 0.5, it is determined that the prediction result is that the patient has pneumonia. When the average probability is less than 0.5, it is determined that the prediction result is that the patient does not have pneumonia. In another example, the fusion unit 203 may also use, for example, a voting fusion algorithm to vote the prediction results of the three classification branches, and the one with the most votes is determined as the final classification"; para 25: "FIG. 1 shows transverse, sagittal and coronal images of a target area of lung of the same patient. As shown in FIG. 1, (a) is a transverse image, (b) is a sagittal image, and (c) is a coronal image. The white circles marked in the three images indicate the same pneumonia lesion area that is presented in the three different views, i.e., transverse, sagittal and coronal sectional views."). See rationale for combining provided in rejecting claim 13 above.

As per claim(s) 6, arguments made in rejecting claim(s) 14 (and base claim 9) are analogous, respectively. Petrovich also teaches a system, comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19).

Claim(s) 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Petrovich in view of Yang in view of Li as applied to claims 6 and 14 above, and further in view of US 2021/0125707 (Rusko).

As per claim 15, Petrovich in view of Yang teaches the computer-implemented method of claim 14. Petrovich in view of Yang does not teach that the threshold is a variable based on user input. Rusko teaches these limitations (Rusko: [cited passages reproduced as images in the record; reproduction omitted]).
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Rusko into Petrovich in view of Yang, since both Petrovich in view of Yang and Rusko suggest a practical solution and field of endeavor of ensemble deep learning with parallel sub-models for each of axial, coronal, and sagittal orientations, respectively, for detecting anomalies in the different orientations, where display is in response to at least one of the set of confidence scores exceeding a threshold, and Rusko additionally provides teachings that can be incorporated into Petrovich in view of Yang in that the threshold is a variable based on user input, since "Using higher/lower threshold can make the result under/over-segmented" (Rusko: para 45). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim(s) 7, arguments made in rejecting claim(s) 15 (and base claim 9) are analogous, respectively. Petrovich also teaches a system, comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory, wherein the computer-executable components (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19).

Claim(s) 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Petrovich in view of Yang in view of US 2021/0216822 A1 (Paik). As per claim(s) 17 and 18, arguments made in rejecting claim(s) 9 and 10 are analogous, respectively.
Petrovich also teaches a computer program product for facilitating hybrid 3D-to-2D slice-wise object localization ensembles, the computer program product comprising a computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to.

Petrovich in view of Yang does not teach depicting a cervical spine of a medical patient; localize a fracture in the cervical spine; and a set of two-dimensional fracture location indicators.

Paik teaches these limitations (Paik: para 4: "Medical image interpretation is a process in which the clinician receives as inputs, the set of medical images and associated clinical information (e.g., medical history, indication for imaging), and produces a textual report that includes findings"; para 10: "a display configured to show a graphical user interface for evaluating a medical image; (c) a non-transitory computer readable storage medium encoded with a computer program that causes said processor to: (i) generate a medical report including a computer-generated finding related to said medical image when a user accepts inclusion of said computer-generated finding within said report."; para 99: "generates a finding (e.g., a sentence) that is inserted into the medical report for the X-ray image that states the L5 vertebra has a fracture."; para 107: "the output is a predicted or detected feature or pathology that is generated using an image analysis algorithm comprising a neural network architecture configured for progressive reasoning. As an illustrative example, a neural network is made up of a sequence of modules with classifiers that generate input based on an input medical image and the output generated by the previous classifier. In this example, the classifiers of this neural network carry out image segmentation, labeling of the segmented parts of the image, and then identifies pathologies for the labeled segments (e.g., a lesion, stenosis, fracture, etc.)
in sequence with the segmentation output being used in combination with the original image by a classifier that performs labeling of the identified image segments, and the classifier identifying pathologies using the labeled image segments and the original image”; Para 119: “MRI of the spine, there may be approximately 50 different abnormality categories that can be observed, and detection of these abnormality categories can be made using a variety of different machine learning architectures”; Para 209: “classifying raw data (e.g., identifying distinct segments within an image such as vertebrae in an X-ray of the spine)”; Para 222: “fracture… identification and severity prediction of a handful of orthopedic related findings across different anatomies including, but not limited to, the spine”; Para 226: “Non-limiting examples of predicted findings such as for spinal cases include but are not limited to: foraminal stenosis, central canal stenosis, disc bulging, disc herniation, disc desiccation, synovial cysts, nerve compression, Schmorl's nodes, vertebrae fractures, and scoliosis across the different vertebral bodies and discs”; Para 255: “A single FCN model is used for segmentation of cervical, thoracic, and lumbar spine imaging studies.” : Note that Paik teaches that the finding of the neural network segmentation and identification can be a vertebrae fracture and says that the vertebrae can be cervical spine with vertebrae within range of C1 to C7). 
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Paik into Petrovich in view of Yang, since both Petrovich in view of Yang and Paik suggest a practical solution and field of endeavor of deep learning computer vision for detecting anomalies in medical tomographic images in general, and Paik additionally provides teachings that can be incorporated into Petrovich in view of Yang in that the system localizes a fracture in the cervical spine, so that "the system generates a finding (e.g., a sentence) that is inserted into the medical report for the X-ray image that states the L5 vertebra has a fracture" (Paik: para 99). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

Claim(s) 20 is rejected under 35 U.S.C. 103 as being unpatentable over Petrovich in view of Yang and Paik as applied to claim 17 above, and further in view of Kim. As per claim(s) 20, arguments made in rejecting claim(s) 12 (and base claim 17 above) are analogous, respectively. Petrovich also teaches a computer program product for facilitating hybrid 3D-to-2D slice-wise object localization ensembles, the computer program product comprising a computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to (Petrovich: See arguments and citations offered in rejecting claim 9 above; Fig. 1 (mainly 5-9) and paras 17-19). See rationale for combining provided in rejecting claim 12 above.
Allowable Subject Matter

Claims 3, 11, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: Limitations pertaining to “a ResNet backbone of the modified RetinaNet architecture comprises three-dimensional convolutional kernels instead of two-dimensional convolutional kernels; downsampling operators of the modified RetinaNet architecture do not perform downsampling along a slicing axis; and a Feature Pyramid Network of the modified RetinaNet architecture comprises two-dimensional convolutional kernels instead of three-dimensional convolutional kernels and is applied, via shared weights, on a slice-wise basis”, in conjunction with other limitations present in the listed claims, base claims, and intervening claims, distinguish over the prior art.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Atiba Fitzpatrick, whose telephone number is (571) 270-5255.
The examiner can normally be reached Monday-Friday, 10:00 am - 6:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax number for Atiba Fitzpatrick is (571) 270-6255.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ATIBA O FITZPATRICK/
Primary Examiner, Art Unit 2677
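The allowable-subject-matter limitations describe a concrete architectural change: a 3D ResNet backbone whose downsampling never touches the slicing axis, feeding a 2D FPN applied per slice with shared weights. As a rough illustration only, and not the applicant's actual implementation, the following sketch propagates feature-map shapes through such a hybrid 3D-to-2D design (all function names and the stage count are hypothetical):

```python
def backbone_shapes(depth, height, width, stages=4):
    """Propagate a (D, H, W) volume through `stages` downsampling operators.

    Each stage halves H and W (stride-2 in-plane convolution) but leaves D
    intact, mirroring the claim language that downsampling operators "do not
    perform downsampling along a slicing axis".
    """
    shapes = []
    d, h, w = depth, height, width
    for _ in range(stages):
        h, w = h // 2, w // 2  # in-plane stride 2
        # d is unchanged: no downsampling along the slice axis
        shapes.append((d, h, w))
    return shapes


def slicewise_fpn(feature_shape):
    """Split a 3D feature map (D, H, W) into D two-dimensional slices.

    A 2D FPN with shared weights would process each (H, W) slice identically,
    so the per-slice output keeps the in-plane resolution.
    """
    d, h, w = feature_shape
    return [(h, w) for _ in range(d)]


# A 32-slice CT-like volume: the slice count survives every backbone stage,
# and the deepest feature map yields 32 slices for the shared-weight 2D FPN.
levels = backbone_shapes(depth=32, height=256, width=256)
slices = slicewise_fpn(levels[-1])
print(levels)      # [(32, 128, 128), (32, 64, 64), (32, 32, 32), (32, 16, 16)]
print(len(slices)) # 32
```

The design choice the claim turns on is visible in the shapes: because the slice axis is never pooled, every pyramid level retains one feature map per input slice, which is what makes slice-wise 2D localization possible downstream.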

Prosecution Timeline

Aug 15, 2023
Application Filed
Oct 09, 2025
Non-Final Rejection — §103
Dec 18, 2025
Interview Requested
Jan 13, 2026
Response Filed
Mar 10, 2026
Final Rejection — §103
Mar 23, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602854
SYSTEM AND METHOD FOR MEDICAL IMAGING
2y 5m to grant Granted Apr 14, 2026
Patent 12586195
OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12579649
RADIATION IMAGE PROCESSING APPARATUS AND OPERATION METHOD THEREOF
2y 5m to grant Granted Mar 17, 2026
Patent 12555237
CLOSEUP IMAGE LINKING
2y 5m to grant Granted Feb 17, 2026
Patent 12548221
SYSTEMS AND METHODS FOR AUTOMATIC QUALITY CONTROL OF IMAGE RECONSTRUCTION
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
88%
Grant Probability
93%
With Interview (+4.9%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 881 resolved cases by this examiner. Grant probability derived from career allow rate.
