Prosecution Insights
Last updated: April 19, 2026
Application No. 18/608,894

SYSTEMS AND METHODS FOR IMAGE PROCESSING

Status: Non-Final OA (§103)
Filed: Mar 18, 2024
Examiner: ISMAIL, OMAR S
Art Unit: 2635
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Healthcare Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 92% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (734 granted / 802 resolved; +29.5% vs TC avg) — above average
Interview Lift: +9.7% among resolved cases with interview (moderate, ~+10% lift)
Typical Timeline: 2y 2m avg prosecution; 24 applications currently pending
Career History: 826 total applications across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 66.3% (+26.3% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 802 resolved cases
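As a sanity check, the headline allowance figures follow directly from the raw counts shown above. This is a minimal sketch; the rounding convention and the additive reading of the "+29.5% vs TC avg" delta are assumptions, not the analytics vendor's actual method:

```python
# Hedged sketch: reproducing the dashboard's headline examiner statistics
# from the raw counts. Rounding/derivation conventions are assumptions.
granted, resolved = 734, 802

allow_rate = granted / resolved          # career allowance rate: 734/802
print(f"{allow_rate:.1%}")               # 91.5%, displayed rounded as 92%

# "+29.5% vs TC avg" read as a simple percentage-point delta implies a
# Tech Center average of roughly:
tc_avg = allow_rate - 0.295
print(f"{tc_avg:.1%}")                   # about 62.0%
```

Note that the 99% "with interview" figure is not the simple sum of the 92% base rate and the +9.7% lift, so the vendor presumably computes it from the interview-case subset rather than additively.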

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED OFFICE ACTION

Status of Claims

Claims 1-20 are pending examination.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

1. Claims 1-8, 12, 13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gogin et al. (USPUB 20180046758) in view of Amelia Jimenez-Sanchez et al. (NPL doc.: "Weakly-Supervised Localization and Classification of Proximal Femur Fractures," 27 September 2018, Computer Vision and Pattern Recognition, arXiv:1809.10692v1, pages 1-7).

As per claim 1, Gogin et al. teaches a computer-aided diagnosis method implemented on a computing device having one or more processors and one or more storage devices (FIG. 1 and Paragraph [0044]: “… the parameter generator 201 may store the parameters used to verify the region of interest to improve a local computer learning algorithm to determine regions of interest…”), the method comprising: obtaining a plurality of medical images related to a region of interest (ROI) of an object (Paragraphs [0021-0023]: “… the imaging device 110 generates a three dimension image of a part of a body. In such examples, the imaging device 110 may scan a body part multiple times (e.g., multiple image slices) and combine the multiple images to generate the three dimensional image… The image visualizer 114 prompts the clinician to identify and/or verify (e.g. segment) a region of interest within the medical image. The region of interest corresponds to the boundaries of an irregularity (e.g., lesion, tumor, stenosis, polyp, nodule, aneurysm, etc.). The image visualizer 114 incorporates segmentation data (e.g., data related to the region of interest)…”); causing an image list to be displayed for managing the plurality of medical images (Paragraph [0023]: “… The analysis recorder 116 anonymizes the local archive 118 by removing any identifying data in the extracted data and anonymize (e.g., downsample, encrypt, etc.)
areas of the annotated whole volume image that are outside the region of interest/bounding box prior to generating the local archive 118. Because each patient has unique features, a patient may be identified based on a high resolution medical image displaying such unique features…”).

Gogin et al. does not explicitly teach receiving a first instruction related to selecting an item related to the ROI, the first instruction being generated through the image list; and upon receiving the first instruction, causing at least one of the plurality of medical images corresponding to the selected item to be displayed. However, within analogous art, Jimenez-Sanchez et al. teaches receiving a first instruction related to selecting an item related to the ROI (ROI localization from images taught within Page 2, Col. 2: “… A popular method for ROI localization is Regions with CNN (R-CNN) [13], which however depends on an external region proposal system. In order to localize the ROI without the need of additional expert annotations, we model the problem in the framework of deep attention models [12], [13] capable of finding a ROI implicitly. In addition, we leverage STL [14], which optimizes simultaneously for the classification and localization tasks…”), the first instruction being generated through the image list (Figs. 3 and 4 teach multiple image lists); and upon receiving the first instruction, causing at least one of the plurality of medical images corresponding to the selected item to be displayed (displaying of the selected item (bone fracture image identified with a bounding box) taught within Figs. 1-4).

One of ordinary skill in the art would have been motivated to combine the teaching of Jimenez-Sanchez et al. with the modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al.
because the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. provides a method and system for classification of medical images of body structures and identification of a region of interest within the plurality of medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. within the modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al., for implementing a system and method for classification of medical images of body structures and identification of a region of interest within the plurality of medical images.

As per claim 2, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 1. Gogin et al. teaches wherein the plurality of medical images include at least one abnormality of the ROI (region of interest identification from medical images taught within Paragraph [0027]: “… FIG. 1, the annotated images include a high resolution medical image with a segmented region of interest identified by a user. The annotated image further includes volumetric data, segmentation data, contextual data, and metadata. In some examples, the receiver 200 receives extraction parameters from the remote system 122 to customize the generated local archive 118 of FIG.
1…”); and the at least one abnormality includes at least one of a fracture, a stenosis, a plaque, a tumor, a nodule, inflammation, or an abnormality in morphology, function, or metabolism (abnormality interpreted as the region of interest corresponding to the boundaries of an irregularity, taught within Paragraph [0022]: “… The image visualizer 114 prompts the clinician to identify and/or verify (e.g. segment) a region of interest within the medical image. The region of interest corresponds to the boundaries of an irregularity (e.g., lesion, tumor, stenosis, polyp, nodule, aneurysm, etc.). The image visualizer 114 incorporates segmentation data (e.g., data related to the region of interest)…”).

As per claim 3, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 2. Gogin et al. teaches further comprising: upon receiving the first instruction, causing an abnormality detection result of the at least one displayed medical image to be displayed (FIG. 7 and Paragraph [0064]: “… Both the annotated whole volume image 700 and the anonymized image 702 include a region of interest 704 which is segmented by a clinician. The whole volume image 700 includes features 705 displayed in high resolution. The anonymized image 702 includes a bounding box 706 defining a high resolution portion of the image…”).

As per claim 4, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 3. Gogin et al. teaches wherein the abnormality detection result includes at least one of a location of the selected abnormality in the at least one displayed medical image, or a type of the selected abnormality (FIG. 7, element 702, teaching the abnormality within the displayed medical image, and further taught within Paragraphs [0076-0077]: “… The annotated volume medical image includes a region of interest identified by the clinician. The region of interest corresponds to an irregularity in the whole volume image.
… wherein anonymize the annotated whole volume image by reducing the resolution and/or blurring the whole volume image outside the region of interest…”).

As per claim 5, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 4. Gogin et al. does not explicitly teach wherein causing the abnormality detection result of the at least one displayed medical image to be displayed includes: causing a marker indicating the location of the selected abnormality in the at least one displayed medical image to be displayed. Within analogous art, Jimenez-Sanchez et al. teaches wherein causing the abnormality detection result of the at least one displayed medical image to be displayed (Figs. 1 and 4 teach the display of a fracture (abnormality) within a medical image of a body part) includes: causing a marker indicating the location of the selected abnormality in the at least one displayed medical image to be displayed (Figs. 1 and 2 show a bounding box (marker) display of a fracture (abnormality) within a medical image of a body part, and Page 6, Col. 2: “… Bounding box and heatmap predictions by the different methods. The green bounding boxes correspond to ground truth localization. a bounding box predictor. We have verified the importance of localization in this task, by comparing against the lower- and upper-bound models…”).

As per claim 6, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 4. Gogin et al. does not explicitly teach wherein causing the abnormality detection result of the at least one displayed medical image to be displayed includes: causing a text describing the type of the selected abnormality to be displayed. Within analogous art, Jimenez-Sanchez et al. teaches wherein causing the abnormality detection result of the at least one displayed medical image to be displayed (Figs. 1 and 4 teach the display of a fracture (abnormality) within a medical image of a body part) includes: causing a text describing the type of the selected abnormality to be displayed (text description of abnormality detection within medical images taught within Page 6, Figs. 3 and 4).

As per claim 7, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 3. Gogin et al. does not explicitly teach wherein the abnormality detection result is generated by: obtaining a detection model generated based on a machine learning model; and generating the abnormality detection result by inputting the at least one displayed medical image into the fracture detection model. Within analogous art, Jimenez-Sanchez et al. teaches wherein the abnormality detection result is generated (Figs. 1 and 4 teach the display of a fracture (abnormality) within a medical image of a body part) by: obtaining a detection model generated based on a machine learning model (Page 3, Fig. 2 shows the convolutional network (machine learning)); and generating the abnormality detection result by inputting the at least one displayed medical image into the fracture detection model (Page 2, Col. 2: “… Convolutional Neural Network (ConvNet) trained to predict a probability map of fracture incidence. Also, Bar et al. [19] describe a method relying on a ConvNet followed by a Recurrent Neural Network (RNN) for compression fracture detection…”).

As per claim 8, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 1. Gogin et al. teaches wherein the item related to the ROI includes one of the at least one abnormality of the ROI (Paragraph [0022]: “… The region of interest corresponds to the boundaries of an irregularity (e.g., lesion, tumor, stenosis, polyp, nodule, aneurysm, etc.). The image visualizer 114 incorporates segmentation data (e.g., data related to the region of interest)…”).

As per claim 12, the combination of Gogin et al.
and Jimenez-Sanchez et al. teaches claim 8. Gogin et al. does not explicitly teach further comprising: upon receiving the first instruction, causing at least one historical image corresponding to the at least one of the plurality of medical images to be displayed. Within analogous art, Jimenez-Sanchez et al. teaches further comprising: upon receiving the first instruction, causing at least one historical image corresponding to the at least one of the plurality of medical images to be displayed (Page 6, Fig. 4 teaches multiple images considered as the historical images, taught within Page 2, Col. 2: “… Given N X-ray images with each image I ∈ ℝ^(H×W), our aim is to build a classification model f(·) that assigns to each image a class label y ∈ C, where C = {normal, A1, A2, A3, B1, B2, B3}, i.e. ŷ = f(I; ω_f), where ^ denotes a prediction and ω_f are the classification model parameters. In addition, we define a localization task g(·) that returns the position p of the ROI … within the X-ray image such …, where ω_g are the localization model parameters. p = {t_r, t_c, s} is a bounding box of scale s centered at (t_r, t_c). The ROI image … is obtained with I′ = W_p(I)…”).

As per claim 13, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 1. Gogin et al. teaches wherein the item related to the ROI includes one of at least one structure of the ROI (Paragraph [0022]: “… The region of interest corresponds to the boundaries of an irregularity (e.g., lesion, tumor, stenosis, polyp, nodule, aneurysm, etc.). The image visualizer 114 incorporates segmentation data (e.g., data related to the region of interest)…”).

As per claim 18, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 1. Gogin et al.
does not explicitly teach further comprising: receiving an instruction of selecting, for display, a first location in a first medical image of the plurality of medical images; and simultaneously displaying the first medical image, or a portion thereof, including the selected first location and a second medical image, or a portion thereof, of the plurality of medical images, the second medical image including a second location corresponding to the first location. Within analogous art, Jimenez-Sanchez et al. teaches further comprising: receiving an instruction of selecting, for display, a first location in a first medical image of the plurality of medical images (ROI localization from images taught within Page 2, Col. 2: “… A popular method for ROI localization is Regions with CNN (R-CNN) [13], which however depends on an external region proposal system. In order to localize the ROI without the need of additional expert annotations, we model the problem in the framework of deep attention models [12], [13] capable of finding a ROI implicitly. In addition, we leverage STL [14], which optimizes simultaneously for the classification and localization tasks…”); and simultaneously displaying the first medical image, or a portion thereof, including the selected first location and a second medical image (Page 6, Fig. 4 teaches the display of first and second medical images with the selected location identified with a bounding box, further taught within Page 6, Col. 2: “… localization in this task, by comparing against the lower- and upper-bound models. As hypothesized, a closer look onto the fractured bone does improve the performance of the models in every scenario. All our models give good results for 2 and 3 classes. However, the 6 class scenario is really challenging and our results are slightly below the inter-observer variability of 66-71%. We believe that increasing the size of the dataset is necessary, as for some classes it reaches a critical number (e.g. 15 for class A3), which is far from representative of the true intra-class variation. Other possibilities to deal with the 6 classes are the use of more complex architectures like a ConvNet cascade to fetch deeper features…”), or a portion thereof, of the plurality of medical images, the second medical image including a second location corresponding to the first location (displaying of the selected item (bone fracture image identified with a bounding box) taught within Figs. 1-4).

As per claim 19, Gogin et al. teaches a computer-aided diagnosis system (FIG. 1 and Paragraph [0044]: “… the parameter generator 201 may store the parameters used to verify the region of interest to improve a local computer learning algorithm to determine regions of interest…”), comprising: at least one storage device including a set of instructions (Paragraph [0005]: “… a computer readable medium comprising instructions which, when executed, cause a machine to record anonymized volumetric data from medical image visualization software…”); at least one processor in communication with the at least one storage device (Paragraph [0039]: “… the example processes of FIGS. 3 and 4 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable …”), wherein when executing the set of instructions, the at least one processor is directed to cause the system to perform operations including: obtaining a plurality of medical images related to a region of interest (ROI) of an object (Paragraphs [0021-0023]: “… the imaging device 110 generates a three dimension image of a part of a body.
In such examples, the imaging device 110 may scan a body part multiple times (e.g., multiple image slices) and combine the multiple images to generate the three dimensional image… The image visualizer 114 prompts the clinician to identify and/or verify (e.g. segment) a region of interest within the medical image. The region of interest corresponds to the boundaries of an irregularity (e.g., lesion, tumor, stenosis, polyp, nodule, aneurysm, etc.). The image visualizer 114 incorporates segmentation data (e.g., data related to the region of interest)…”), the plurality of medical images including at least one abnormality of the ROI (Paragraphs [0021-0023], cited above); causing an image list to be displayed for managing the plurality of medical images (Paragraph [0023]: “… The analysis recorder 116 anonymizes the local archive 118 by removing any identifying data in the extracted data and anonymize (e.g., downsample, encrypt, etc.) areas of the annotated whole volume image that are outside the region of interest/bounding box prior to generating the local archive 118. Because each patient has unique features, a patient may be identified based on a high resolution medical image displaying such unique features…”).

Gogin et al. does not explicitly teach receiving a first instruction related to selecting one of the at least one abnormality, the first instruction being generated through the image list; and upon receiving the first instruction, causing at least one of the plurality of medical images corresponding to the selected abnormality to be displayed. However, within analogous art, Jimenez-Sanchez et al. teaches receiving a first instruction related to selecting one of the at least one abnormality (ROI localization from images taught within Page 2, Col. 2: “… A popular method for ROI localization is Regions with CNN (R-CNN) [13], which however depends on an external region proposal system. In order to localize the ROI without the need of additional expert annotations, we model the problem in the framework of deep attention models [12], [13] capable of finding a ROI implicitly. In addition, we leverage STL [14], which optimizes simultaneously for the classification and localization tasks…”), the first instruction being generated through the image list (Figs. 3 and 4 teach multiple image lists); and upon receiving the first instruction, causing at least one of the plurality of medical images corresponding to the selected abnormality to be displayed (displaying of the selected item (bone fracture image identified with a bounding box) taught within Figs. 1-4).

One of ordinary skill in the art would have been motivated to combine the teaching of Jimenez-Sanchez et al. with the modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al. because the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. provides a method and system for classification of medical images of body structures and identification of a region of interest within the plurality of medical images.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. within the modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al., for implementing a system and method for classification of medical images of body structures and identification of a region of interest within the plurality of medical images.

As per claim 20, Gogin et al. teaches a computer-aided diagnosis method (FIG. 1 and Paragraph [0044]: “… the parameter generator 201 may store the parameters used to verify the region of interest to improve a local computer learning algorithm to determine regions of interest…”) implemented on a computing device having one or more processors and one or more storage devices (Paragraph [0039]: “… the example processes of FIGS. 3 and 4 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable …”), the method comprising: obtaining a plurality of medical images related to a region of interest (ROI) of an object (Paragraphs [0021-0023]: “… the imaging device 110 generates a three dimension image of a part of a body. In such examples, the imaging device 110 may scan a body part multiple times (e.g., multiple image slices) and combine the multiple images to generate the three dimensional image… The image visualizer 114 prompts the clinician to identify and/or verify (e.g. segment) a region of interest within the medical image. The region of interest corresponds to the boundaries of an irregularity (e.g., lesion, tumor, stenosis, polyp, nodule, aneurysm, etc.). The image visualizer 114 incorporates segmentation data (e.g., data related to the region of interest)…”).

Gogin et al. does not explicitly teach receiving an instruction of selecting, for display, a first location in a first medical image of the plurality of medical images; and simultaneously displaying the first medical image, or a portion thereof, including the selected first location and a second medical image, or a portion thereof, of the plurality of medical images, the second medical image including a second location corresponding to the first location. Within analogous art, Jimenez-Sanchez et al. teaches receiving an instruction of selecting, for display, a first location in a first medical image of the plurality of medical images (ROI localization from images taught within Page 2, Col. 2: “… A popular method for ROI localization is Regions with CNN (R-CNN) [13], which however depends on an external region proposal system. In order to localize the ROI without the need of additional expert annotations, we model the problem in the framework of deep attention models [12], [13] capable of finding a ROI implicitly. In addition, we leverage STL [14], which optimizes simultaneously for the classification and localization tasks…”); and simultaneously displaying the first medical image, or a portion thereof, including the selected first location and a second medical image (Page 6, Fig. 4 teaches the display of first and second medical images with the selected location identified with a bounding box, further taught within Page 6, Col. 2: “… localization in this task, by comparing against the lower- and upper-bound models. As hypothesized, a closer look onto the fractured bone does improve the performance of the models in every scenario. All our models give good results for 2 and 3 classes. However, the 6 class scenario is really challenging and our results are slightly below the inter-observer variability of 66-71%. We believe that increasing the size of the dataset is necessary, as for some classes it reaches a critical number (e.g. 15 for class A3), which is far from representative of the true intra-class variation. Other possibilities to deal with the 6 classes are the use of more complex architectures like a ConvNet cascade to fetch deeper features…”), or a portion thereof, of the plurality of medical images, the second medical image including a second location corresponding to the first location (displaying of the selected item (bone fracture image identified with a bounding box) taught within Figs. 1-4).

One of ordinary skill in the art would have been motivated to combine the teaching of Jimenez-Sanchez et al. with the modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al. because the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. provides a method and system for classification of medical images of body structures and identification of a region of interest within the plurality of medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. within the modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al., for implementing a system and method for classification of medical images of body structures and identification of a region of interest within the plurality of medical images.

2. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Gogin et al.
(USPUB 20180046758) in view of Amelia Jimenez-Sanchez et al. (NPL doc.: "Weakly-Supervised Localization and Classification of Proximal Femur Fractures," 27 September 2018, Computer Vision and Pattern Recognition, arXiv:1809.10692v1, pages 1-7) in further view of Giuliano Mariani et al. (NPL doc.: "A review on the clinical uses of SPECT/CT," 25 February 2010, Eur J Nucl Med Mol Imaging (2010) 37, pages 1959-1979).

As per claim 17, the combination of Gogin et al. and Jimenez-Sanchez et al. teaches claim 13. The combination of Gogin et al. and Jimenez-Sanchez et al. does not explicitly teach wherein the at least one reconstructed image includes at least one of a multiplanar reconstruction (MPR) image, a curved planar reconstruction (CPR) image, or a three-dimensional (3D) rendering image. Within analogous art, Mariani et al. teaches wherein the at least one reconstructed image includes at least one of a multiplanar reconstruction (MPR) image, a curved planar reconstruction (CPR) image, or a three-dimensional (3D) rendering image (Page 1960, Col. 2: “… performing attenuation correction based on individual patient-based tissue density data, the SPECT/CT workstations allow image reconstruction, three plane (transaxial, coronal, sagittal) and 3-D display, including maximum intensity projection and surface volume rendering. SPECT, CT and fused images are shown on the same screen…”).

One of ordinary skill in the art would have been motivated to combine the teaching of Mariani et al. with the combined modified teaching of the methods and apparatus for recording anonymized volumetric data from medical image visualization software mentioned by Gogin et al. and the Weakly-Supervised Localization and Classification of Proximal Femur Fractures mentioned by Jimenez-Sanchez et al. because the review on the clinical uses of SPECT/CT mentioned by Mariani et al. provides a method and system for SPECT/CT imaging that improves sensitivity and specificity in the imaging. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the review on the clinical uses of SPECT/CT mentioned by Mariani et al. within the combined modified teaching of Gogin et al. and Jimenez-Sanchez et al. for implementing a system and method for SPECT/CT imaging that improves sensitivity and specificity in the imaging.

It is noted that any citations to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.

Allowable Subject Matter

3. Claims 9, 10, 11, 14, 15, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

4.
The following is the examiner's statement of reasons for objecting to the claims as containing allowable subject matter:

As to claim 9, the prior art of record does not teach or suggest the limitation recited in claim 9: "…the image list includes at least one abnormality tag corresponding to the at least one abnormality, there is a mapping relationship between the at least one abnormality tag and the plurality of medical images, and the first instruction is generated by selecting one or more of the at least one abnormality tag in the image list."

As to claims 10 and 11, these claims depend from objected-to allowable claim 9 and are therefore likewise considered allowable over the prior art of record.

As to claim 14, the prior art of record does not teach or suggest the limitation recited in claim 14: "…the image list includes at least one structure tag corresponding to the at least one structure, there is a mapping relationship between the at least one structure tag and the plurality of medical images, and the second instruction is generated by selecting one or more of the at least one structure tag in the image list."

As to claim 15, this claim depends from objected-to allowable claim 14 and is therefore likewise considered allowable over the prior art of record.

As to claim 16, the prior art of record does not teach or suggest the limitation recited in claim 16: "…detecting one or more abnormality regions related to the at least one structure in at least one of the plurality of medical images; upon receiving the first instruction, causing the following to be displayed: at least one reconstructed image related to the selected structure; or a marker of the one or more detected abnormality regions related to the selected structure."

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee.
Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to the PTO-892, Notice of References Cited, for a listing of analogous art.

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMAR S. ISMAIL, whose telephone number is (571) 272-9799 and fax number is (571) 273-9799. The examiner can normally be reached M-F, 9:00 am-6:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David C. Payne, can be reached at (571) 272-3024. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/OMAR S ISMAIL/
Primary Examiner, Art Unit 2635

Prosecution Timeline

Mar 18, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603705: LATENCY EQUALIZATION FOR OPTICAL FILTER (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596911: METHOD AND APPARATUS WITH NEURAL NETWORK CONTROL (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594391: MODEL-GUIDED IMAGING FOR MECHANICAL VENTILATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586365: OBJECT CLASSIFICATION USING MULTIPLE LABELS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586359: SYNTHETIC-TO-REALISTIC IMAGE CONVERSION USING GENERATIVE ADVERSARIAL NETWORK (GAN) OR OTHER MACHINE LEARNING MODEL (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+9.7% lift)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 802 resolved cases by this examiner. Grant probability is derived from the career allow rate.
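As a rough illustration of how the displayed figures relate, the sketch below reproduces them from the examiner's career numbers (734 granted of 802 resolved). The dashboard's actual model is not disclosed; the additive interview lift and the 99% cap are assumptions made only to match the displayed values.

```python
# Illustrative sketch only; the dashboard's real model is not published.
granted, resolved = 734, 802
allow_rate = granted / resolved          # ~0.915, displayed as 92%

# Assumed: interview lift applied additively, capped at 99% (hypothetical).
interview_lift = 0.097
with_interview = min(allow_rate + interview_lift, 0.99)

print(round(allow_rate * 100))           # 92
print(round(with_interview * 100))       # 99
```

Under these assumptions the base grant probability is simply the career allow rate, and the interview-adjusted figure saturates at the cap.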
