Prosecution Insights
Last updated: April 19, 2026
Application No. 18/448,671

REAL-TIME ANALYSIS OF IMAGES CAPTURED BY AN ULTRASOUND PROBE

Final Rejection — §102, §103
Filed
Aug 11, 2023
Examiner
NELSON, COURTNEY J
Art Unit
2661
Tech Center
2600 — Communications
Assignee
Echo Mind AI Corp.
OA Round
2 (Final)
Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Grants 86% — above average
Career Allow Rate: 86% (217 granted / 252 resolved; +24.1% vs TC avg)
Interview Lift: +9.4% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 32 applications currently pending
Career History: 284 total applications across all art units
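How these figures fit together, as a minimal sketch: it assumes the tool computes the allow rate as grants over resolved cases and adds the interview lift directly to get the with-interview figure. Both are assumptions inferred from the tiles, not confirmed methodology.

```python
# Sketch of the headline arithmetic (assumed, not confirmed by the tool):
# allow rate = granted / resolved; "with interview" = base rate + lift.
granted, resolved = 217, 252
interview_lift = 0.094  # the +9.4% lift reported above

allow_rate = granted / resolved               # 0.8611... -> shown as 86%
with_interview = allow_rate + interview_lift  # 0.9551... -> shown as 96%

print(f"Career allow rate: {allow_rate:.1%}")     # 86.1%
print(f"With interview:    {with_interview:.1%}")  # 95.5%, rounded up to 96%
```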

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 252 resolved cases.
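Note that every delta above reconciles against a single baseline of 40.0%, which is presumably what the original chart's "TC average estimate" line encoded. A small sketch of that delta arithmetic; the 40.0% baseline is inferred from the displayed numbers, not reported directly:

```python
# The deltas shown above all reconcile against one baseline: 40.0%.
# That baseline is inferred from the displayed numbers (an assumption).
examiner_rates = {"§101": 5.4, "§103": 51.1, "§102": 20.5, "§112": 17.9}
TC_BASELINE = 40.0  # estimated Tech Center average per statute

for statute, rate in examiner_rates.items():
    print(f"{statute}: {rate:.1f}% ({rate - TC_BASELINE:+.1f}% vs TC avg)")
# §101: 5.4% (-34.6% vs TC avg), §103: 51.1% (+11.1% vs TC avg), ...
```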

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claim 1 has been amended, changing the scope and contents of the claim. Claims 2-20 have been newly added.

Response to Arguments

Applicant's arguments filed October 23, 2025 have been fully considered but they are not persuasive. Regarding independent claim 1, applicant argues, “Song is silent regarding any artificial intelligence system, let alone an artificial intelligence system that analyses images to determine characteristics in the images and identifying ‘at least one of a pathology shown in the plurality of images and a musculoskeletal area shown in the plurality of images’ as required by claim 1 (Remarks, 6).” Examiner respectfully disagrees. Specifically, with regard to the argued limitation, Song discloses at paragraph 0027, “If the detected structures meet certain criteria, CAD unit 122 may highlight them in the image for the radiologist, for example, using boundary contours or bounding boxes. This allows the radiologist to draw conclusions about the condition of the pathology. In some embodiments, CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.” With respect to identifying the pathology, Song is read as determining parameters associated with the medical condition such as, “the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor (paragraph 0027).” These features are determined using the CAD unit, which has performed segmentation of the image using a neural network (paragraph 0026). Additionally, with respect to identifying a musculoskeletal area, the actual segmentation process implemented via a neural network is read as identifying a musculoskeletal area (paragraphs 0026-0027). Further, the examiner notes that a neural network is read as a specific type of artificial intelligence. Thus, the examiner does believe that Song discloses all limitations of the independent claim as amended. The applicant is directed to the mapping below for each specific limitation.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.

The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Specifically, the examiner calls attention to claims 17-20 being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 9, 17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Publication No. 2020/0402237 to Song et al. (hereinafter Song).

Regarding independent claim 1, Song discloses A method (abstract, “Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient.”), comprising: receiving a plurality of images from an ultrasound probe (paragraph 0015, “Consistent with the present disclosure, diagnosis report generating system 100 may receive medical images 102 from image acquisition device 101;” paragraph 0016, “In some embodiments, image acquisition device 101 may acquire medical images 102 using any suitable imaging modalities, including, e.g., functional MRI (e.g., fMRI, DCE-MRI and diffusion MRI), Cone Beam CT (CBCT), Spiral CT, Positron Emission Tomography (PET), Single-Photon Emission Computed Tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, and radiotherapy portal imaging, etc.”); analyzing, using an artificial intelligence system, the plurality of images to determine one or more characteristics in each image of the plurality of images (paragraph 0026, “CAD unit 122 may further segment the images to identify different regions of interest (e.g., anatomical structures) in the image, e.g. heart, lung, ribcage, blood vessels, possible round lesions. Various segmentation methods may be used, including, e.g., matching with an anatomic databank, or using neural networks trained using sample images. The identified structures may be analyzed individually for special characteristics.”); identifying, by the artificial intelligence system, at least one of a pathology shown in the plurality of images and a musculoskeletal area shown in the plurality of images (paragraph 0027, “If the detected structures meet certain criteria, CAD unit 122 may highlight them in the image for the radiologist, for example, using boundary contours or bounding boxes. This allows the radiologist to draw conclusions about the condition of the pathology. In some embodiments, CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.”); automatically generating a report that includes the at least the one of the pathology and the musculoskeletal area (Figure 4, element S412, “automatically construct a diagnosis report;” paragraph 0052, “For example, the report may include a patient information section 212/312 showing patient name, gender, and age, as well as examination section 214 containing scan information derived from the patient's meta data. Report generation unit 124 may further generate diagnosis content of the report based on step S408. For example, the diagnosis report may include impression section 216/316 and findings section 218/318. The diagnosis sections in the report may include screenshots of images imported from the CAD analysis as well as text information indicating, e.g., the type of the detected object (i.e. bleeding type cerebral hemorrhage), the position of the detected object (i.e. left frontal lobe), and parameters calculated in step S410.”); and providing the report to a computing device (abstract, “The system also includes a display configured to display the diagnosis report;” paragraph 0035, “Processor 120 may render visualizations of user interfaces to display data on a display 130. Display 130 may include a Liquid Crystal Display (LCD), a Light Emitting Diode Display (LED), a plasma display, or any other type of display, and provide a Graphical User Interface (GUI) presented on the display for user input and data display.”).

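As an orientation aid, the five limitations of claim 1 that the examiner maps above can be restated as a pipeline. The sketch below is purely illustrative of the claim language as quoted in this action; every name in it is hypothetical, and it is neither the applicant's implementation nor Song's.

```python
# Illustrative restatement of claim 1's five steps. All names here are
# hypothetical; this is not the applicant's or Song's implementation.
from dataclasses import dataclass

@dataclass
class Finding:
    label: str         # e.g., a pathology or a musculoskeletal area
    confidence: float  # model confidence in the identification

def run_claimed_method(probe, model, device, threshold=0.5):
    images = probe.capture_frames()                  # (1) receive images from the probe
    analyses = [model.analyze(im) for im in images]  # (2) AI analysis per image
    findings = [Finding(a.label, a.score)            # (3) identify pathology / MSK area
                for a in analyses if a.score >= threshold]
    report = {"findings": findings, "images": images}  # (4) generate the report
    device.send(report)                              # (5) provide it to a computing device
    return report
```
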
Regarding independent claim 9, the rejection of claim 1 applies directly. Additionally, Song further discloses A system (abstract, “Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient.”), comprising: at least one processing unit (Figure 1, element 120, “processor”); and a memory operably coupled to the at least one processing unit (Figure 1, element 150, “memory”) and storing instructions that, when executed by the at least one processing unit, perform operations (paragraph 0008, “Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, causes the one or more processors to perform a method for generating a diagnosis report based on a medical image of a patient.”), comprising: receiving a plurality of images captured by an ultrasound probe (paragraph 0015, “Consistent with the present disclosure, diagnosis report generating system 100 may receive medical images 102 from image acquisition device 101;” paragraph 0016, “In some embodiments, image acquisition device 101 may acquire medical images 102 using any suitable imaging modalities, including, e.g., functional MRI (e.g., fMRI, DCE-MRI and diffusion MRI), Cone Beam CT (CBCT), Spiral CT, Positron Emission Tomography (PET), Single-Photon Emission Computed Tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, and radiotherapy portal imaging, etc.”); providing the plurality of images to an artificial intelligence system (paragraph 0026, “CAD unit 122 may further segment the images to identify different regions of interest (e.g., anatomical structures) in the image, e.g. heart, lung, ribcage, blood vessels, possible round lesions. Various segmentation methods may be used, including, e.g., matching with an anatomic databank, or using neural networks trained using sample images. The identified structures may be analyzed individually for special characteristics.”); causing the artificial intelligence system to analyze the plurality of images to identify at least one of a pathology shown in the plurality of images and a musculoskeletal area shown in the plurality of images (paragraph 0027, “If the detected structures meet certain criteria, CAD unit 122 may highlight them in the image for the radiologist, for example, using boundary contours or bounding boxes. This allows the radiologist to draw conclusions about the condition of the pathology. In some embodiments, CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.”); generate a report that includes the at least the one of the pathology and the musculoskeletal area (Figure 4, element S412, “automatically construct a diagnosis report;” paragraph 0052, “For example, the report may include a patient information section 212/312 showing patient name, gender, and age, as well as examination section 214 containing scan information derived from the patient's meta data.
Report generation unit 124 may further generate diagnosis content of the report based on step S408. For example, the diagnosis report may include impression section 216/316 and findings section 218/318. The diagnosis sections in the report may include screenshots of images imported from the CAD analysis as well as text information indicating, e.g., the type of the detected object (i.e. bleeding type cerebral hemorrhage), the position of the detected object (i.e. left frontal lobe), and parameters calculated in step S410.”); and provide the report to a computing device (abstract, “The system also includes a display configured to display the diagnosis report;” paragraph 0035, “Processor 120 may render visualizations of user interfaces to display data on a display 130. Display 130 may include a Liquid Crystal Display (LCD), a Light Emitting Diode Display (LED), a plasma display, or any other type of display, and provide a Graphical User Interface (GUI) presented on the display for user input and data display.”).

Regarding independent claim 17, the rejection of claim 1 applies directly. Additionally, Song further discloses A system (abstract, “Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient.”), comprising: means for receiving a plurality of images from an ultrasound probe (paragraph 0015, “Consistent with the present disclosure, diagnosis report generating system 100 may receive medical images 102 from image acquisition device 101;” paragraph 0016, “In some embodiments, image acquisition device 101 may acquire medical images 102 using any suitable imaging modalities, including, e.g., functional MRI (e.g., fMRI, DCE-MRI and diffusion MRI), Cone Beam CT (CBCT), Spiral CT, Positron Emission Tomography (PET), Single-Photon Emission Computed Tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, and radiotherapy portal imaging, etc;” Figure 1: the receiving is read as the interface between the image acquisition device (101) and the communication interface (110) and the processor (120)); an artificial intelligence means for analyzing the plurality of images to determine one or more characteristics in each image of the plurality of images (paragraph 0026, “CAD unit 122 may further segment the images to identify different regions of interest (e.g., anatomical structures) in the image, e.g. heart, lung, ribcage, blood vessels, possible round lesions. Various segmentation methods may be used, including, e.g., matching with an anatomic databank, or using neural networks trained using sample images. The identified structures may be analyzed individually for special characteristics;” the neural network is read as an artificial intelligence means); means for identifying at least one of a pathology shown in the plurality of images and a musculoskeletal area shown in the plurality of images (Figure 1, element 122, “computer aided diagnosis unit;” paragraph 0027, “If the detected structures meet certain criteria, CAD unit 122 may highlight them in the image for the radiologist, for example, using boundary contours or bounding boxes. This allows the radiologist to draw conclusions about the condition of the pathology. In some embodiments, CAD unit 122 may further determine one or more parameters that quantify the medical condition.
For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.”); means for generating a report that includes the at least the one of the pathology and the musculoskeletal area (Figure 4, element S412, “automatically construct a diagnosis report;” paragraph 0052, “For example, the report may include a patient information section 212/312 showing patient name, gender, and age, as well as examination section 214 containing scan information derived from the patient's meta data. Report generation unit 124 may further generate diagnosis content of the report based on step S408. For example, the diagnosis report may include impression section 216/316 and findings section 218/318. The diagnosis sections in the report may include screenshots of images imported from the CAD analysis as well as text information indicating, e.g., the type of the detected object (i.e. bleeding type cerebral hemorrhage), the position of the detected object (i.e. left frontal lobe), and parameters calculated in step S410;” Figure 1, element 124 “report generation unit”); and means for providing the report to a computing device (abstract, “The system also includes a display configured to display the diagnosis report;” paragraph 0035, “Processor 120 may render visualizations of user interfaces to display data on a display 130. Display 130 may include a Liquid Crystal Display (LCD), a Light Emitting Diode Display (LED), a plasma display, or any other type of display, and provide a Graphical User Interface (GUI) presented on the display for user input and data display;” Figure 1, the connection between the processor (120) and the display (130)).

Regarding dependent claim 20, the rejection of claim 17 is incorporated herein. Additionally, Song further discloses further comprising means (Figure 1, element 124, “report generation unit”) for adding color coding to the report (paragraph 0037, “A cerebral lesion is highlighted using a different color as well as marked by a boundary contour.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2-5, 10-13 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Song as applied to claims 1, 9 and 17 respectively above, and further in view of WO 2020079696 to Spillinger (hereinafter Spillinger).

Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, Song fails to explicitly disclose further comprising automatically selecting a subset of images from the plurality of images for the report. However, Spillinger discloses further comprising automatically selecting a subset of images from the plurality of images for the report (page 19, “According to some embodiments, images captured in vivo may be received and a plurality of these images may be automatically (e.g. by a processor shown in FIG.
1) selected for display. In some embodiments, a subset of these selected images may be identified automatically and/or by a user and a case report or a report may be generated which includes only images from the identified subset of images (e.g., one or more or all of the images in the subset).”). Song is directed toward “Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient (abstract).” Spillinger is directed toward “Systems and methods may display and/or provide analysis of a number of selected images of a patient's gastrointestinal tract collected in-vivo by a swallowable capsule. Images may be displayed for review (e.g., as a study) and/or for further analysis by a user. A subset of images representing the stream of images and automatically selected according to a first selection method may be displayed (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song and Spillinger are directed toward similar methods of endeavor of medical image analysis. Further, one of ordinary skill in the art before the effective filing date of the claimed invention would be aware that providing thousands of images to a reviewer would be overwhelming, time-consuming, and unnecessary. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a report can be reviewed quickly for diagnosis.

Regarding dependent claim 3, the rejection of claim 2 is incorporated herein. Additionally, Spillinger in the combination further discloses wherein the subset of images are selected using the artificial intelligence system (page 11, “Each image selection method described herein may include one or more filters or selection or detection rules and the selection according to each method may be performed in one or more stages. The selection or detection rules may be applied by utilizing algorithms, e.g., machine learning algorithms and deep learning algorithms in particular.”). It is well known to one of ordinary skill in the art before the effective filing date of the claimed invention that artificial intelligence systems are faster and more accurate than other processing methods. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a subset of images is determined accurately and efficiently.

Regarding dependent claim 4, the rejection of claim 2 is incorporated herein. Additionally, Song and Spillinger in the combination as a whole fail to explicitly disclose wherein the subset of images have a higher quality when compared to other images of the plurality of images. However, Spillinger does disclose at page 35, “The map screen or display of Fig. 3 may also include a graphical representation of, or an indication of, an estimated or determined cleansing level, e.g., a score for an automatically measured level of cleanliness of the respective segment, and/or an indication of the image quality during the capsule passage in that segment;” this cleansing value is read as a quality metric.
Additionally, Spillinger discloses at page 11, “Each image selection method described herein may include one or more filters or selection or detection rules and the selection according to each method may be performed in one or more stages. The selection or detection rules may be applied by utilizing algorithms, e.g., machine learning algorithms and deep learning algorithms in particular.” It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention that images of high quality are best suited for diagnosis; said differently, if a low quality image is reviewed by a clinician, the clinician may determine an inaccurate diagnosis because the image quality is so poor. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Song and Spillinger in order to ensure the best images for review are output for diagnosis, so that an accurate diagnosis can be made.

Regarding dependent claim 5, the rejection of claim 2 is incorporated herein. Additionally, Spillinger in the combination further discloses wherein the subset of images include one or more color codes that are associated with the at least one of the pathology and the musculoskeletal area (page 27, “"Heat" may refer to colors used to convey information on the map, such as colors assigned to lines signifying certain images along the bar. Different colors may represent, for example, different type of images, such as images of a first or second level. Such a heat map or bar may include lines indicating the segmentation of the GI portion (e.g., colored in a different color) etc. Colored sections of the map may provide certain information about those sections;” page 37, “Each selected image may be displayed with an indication of the identified area of interest within the image, e.g., by coloring the area of interest in a specific color.”). It is well known to one of ordinary skill in the art before the effective filing date of the claimed invention that color coding images for review makes it easier and faster for a reviewer to understand. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a subset of images are reviewed accurately and efficiently.

Regarding dependent claim 10, the rejection of claim 9 is incorporated herein. Additionally, Song fails to explicitly disclose further comprising instructions for selecting a subset of images from the plurality of images for the report. However, Spillinger discloses further comprising instructions for selecting a subset of images from the plurality of images for the report (page 19, “According to some embodiments, images captured in vivo may be received and a plurality of these images may be automatically (e.g. by a processor shown in FIG. 1) selected for display. In some embodiments, a subset of these selected images may be identified automatically and/or by a user and a case report or a report may be generated which includes only images from the identified subset of images (e.g., one or more or all of the images in the subset).”). Song is directed toward “Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient (abstract).” Spillinger is directed toward “Systems and methods may display and/or provide analysis of a number of selected images of a patient's gastrointestinal tract collected in-vivo by a swallowable capsule. Images may be displayed for review (e.g., as a study) and/or for further analysis by a user. A subset of images representing the stream of images and automatically selected according to a first selection method may be displayed (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song and Spillinger are directed toward similar methods of endeavor of medical image analysis. Further, one of ordinary skill in the art before the effective filing date of the claimed invention would be aware that providing thousands of images to a reviewer would be overwhelming, time-consuming, and unnecessary. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a report can be reviewed quickly for diagnosis.

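Claims 2-4 and 10-12 turn on automatically selecting a higher-quality subset of the captured images for the report. The sketch below illustrates the general shape of such ML-scored filtering; the quality model is a placeholder assumption, not the selection rule Spillinger actually discloses.

```python
# Hedged sketch of quality-based subset selection (claims 2-4, 10-12).
# quality_score stands in for any learned image-quality model; it is an
# assumption, not the selection rule Spillinger actually discloses.
from typing import Callable, List, Sequence

def select_report_subset(images: Sequence,
                         quality_score: Callable[[object], float],
                         keep: int = 8) -> List:
    ranked = sorted(range(len(images)),
                    key=lambda i: quality_score(images[i]), reverse=True)
    chosen = set(ranked[:keep])  # highest-quality frames only
    # preserve acquisition order so the report reads like the exam
    return [im for i, im in enumerate(images) if i in chosen]
```
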
Regarding dependent claim 11, the rejection of claim 10 is incorporated herein. Additionally, Spillinger in the combination further discloses wherein the subset of images are selected using the artificial intelligence system (page 11, “Each image selection method described herein may include one or more filters or selection or detection rules and the selection according to each method may be performed in one or more stages. The selection or detection rules may be applied by utilizing algorithms, e.g., machine learning algorithms and deep learning algorithms in particular.”). It is well known to one of ordinary skill in the art before the effective filing date of the claimed invention that artificial intelligence systems are faster and more accurate than other processing methods. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a subset of images is determined accurately and efficiently.

Regarding dependent claim 12, the rejection of claim 10 is incorporated herein. Additionally, Song and Spillinger in the combination as a whole fail to explicitly disclose wherein the subset of images have a higher quality when compared to other images of the plurality of images. However, Spillinger does disclose at page 35, “The map screen or display of Fig. 3 may also include a graphical representation of, or an indication of, an estimated or determined cleansing level, e.g., a score for an automatically measured level of cleanliness of the respective segment, and/or an indication of the image quality during the capsule passage in that segment;” this cleansing value is read as a quality metric. Additionally, Spillinger discloses at page 11, “Each image selection method described herein may include one or more filters or selection or detection rules and the selection according to each method may be performed in one or more stages.
The selection or detection rules may be applied by utilizing algorithms, e.g., machine learning algorithms and deep learning algorithms in particular.” It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention that images of high quality are best suited for diagnosis; said differently, if a low quality image is reviewed by a clinician, the clinician may determine an inaccurate diagnosis because the image quality is so poor. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Song and Spillinger in order to ensure the best images for review are output for diagnosis, so that an accurate diagnosis can be made.

Regarding dependent claim 13, the rejection of claim 10 is incorporated herein. Additionally, Spillinger in the combination further discloses further comprising instructions for adding one or more color codes that are associated with the at least one of the pathology and the musculoskeletal area (page 27, “"Heat" may refer to colors used to convey information on the map, such as colors assigned to lines signifying certain images along the bar. Different colors may represent, for example, different type of images, such as images of a first or second level. Such a heat map or bar may include lines indicating the segmentation of the GI portion (e.g., colored in a different color) etc. Colored sections of the map may provide certain information about those sections;” page 37, “Each selected image may be displayed with an indication of the identified area of interest within the image, e.g., by coloring the area of interest in a specific color.”). It is well known to one of ordinary skill in the art before the effective filing date of the claimed invention that color coding images for review makes it easier and faster for a reviewer to understand. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a subset of images are reviewed accurately and efficiently.

Regarding dependent claim 18, the rejection of claim 17 is incorporated herein. Additionally, Song fails to explicitly disclose further comprising means for selecting a subset of images from the plurality of images for the report. However, Spillinger discloses further comprising means for selecting a subset of images from the plurality of images for the report (page 19, “According to some embodiments, images captured in vivo may be received and a plurality of these images may be automatically (e.g. by a processor shown in FIG. 1) selected for display. In some embodiments, a subset of these selected images may be identified automatically and/or by a user and a case report or a report may be generated which includes only images from the identified subset of images (e.g., one or more or all of the images in the subset);” the algorithm for selection is read as the means). Song is directed toward “Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient (abstract).” Spillinger is directed toward “Systems and methods may display and/or provide analysis of a number of selected images of a patient's gastrointestinal tract collected in-vivo by a swallowable capsule. Images may be displayed for review (e.g., as a study) and/or for further analysis by a user.
A subset of images representing the stream of images and automatically selected according to a first selection method may be displayed (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song and Spillinger are directed toward similar methods of endeavor of medical image analysis. Further, one of ordinary skill in the art before the effective filing date of the claimed invention would be aware that providing thousands of images to a reviewer would be overwhelming, time-consuming, and unnecessary. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a report can be reviewed quickly for diagnosis.

Regarding dependent claim 19, the rejection of claim 18 is incorporated herein. Additionally, Spillinger in the combination further discloses wherein the means for selecting the subset of images is associated with the artificial intelligence means (page 11, “Each image selection method described herein may include one or more filters or selection or detection rules and the selection according to each method may be performed in one or more stages. The selection or detection rules may be applied by utilizing algorithms, e.g., machine learning algorithms and deep learning algorithms in particular.”). It is well known to one of ordinary skill in the art before the effective filing date of the claimed invention that artificial intelligence systems are faster and more accurate than other processing methods. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Spillinger in order to ensure a subset of images is determined accurately and efficiently.

Claim(s) 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Song further in view of Spillinger as applied to claims 5 and 13 respectively above, and further in view of U.S. Patent No. 11,676,701 to Carter et al. (hereinafter Carter).

Regarding dependent claim 6, the rejection of claim 5 is incorporated herein. Additionally, Song and Spillinger in the combination fail to explicitly disclose wherein an opacity of the one or more color codes indicates a confidence level of an accuracy of the identification of the at least one of the pathology and the musculoskeletal area. However, Carter discloses wherein an opacity of the one or more color codes indicates a confidence level of an accuracy of the identification of the at least one of the pathology and the musculoskeletal area (column 10, line 48, “At block 406, for the given image region currently being processed, the medical provider system 102 may determine one or more bounding shape display parameters (such as color, opacity and/or shape type) based at least in part on a label within the metadata for the given region. The label may represent or specify a specific pathology or other classification previously determined by a machine learning model and assigned as a classification label to the given region. In some embodiments, for instance, different pathologies may be assigned different bounding shapes, colors or other display parameters, which may be configurable by a user. In one example, at least one display parameter determined at block 406 may be based on a confidence level determined by one or more models.
For example, a specific color and/or opacity may be assigned to the bounding region based on its confidence score, as will be further discussed below.”). As noted above, Song and Spillinger are directed toward medical image analysis. Further, Carter is directed toward “Systems and methods are provided for automatically marking locations within a radiograph of one or more dental pathologies, anatomies, anomalies or other conditions determined by automated image analysis of the radiograph by a number of different machine learning models (abstract).” As can be seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song, Spillinger and Carter are directed toward similar methods of endeavor of medical image analysis. Further, Carter allows for correlating the display to confidence determinations. One of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that outputting confidence information aids a user in determining how much one should rely on a specific output; said differently, if the system has low confidence in the output, the user may not want to rely heavily on that determination. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Carter to ensure a user has context for how confident a system is, to further inform their own decision making.

Regarding dependent claim 14, the rejection of claim 13 is incorporated herein. Additionally, Song and Spillinger in the combination fail to explicitly disclose wherein an opacity of the one or more color codes indicates a confidence level of an accuracy of the identification of the at least one of the pathology and the musculoskeletal area. However, Carter discloses wherein an opacity of the one or more color codes indicates a confidence level of an accuracy of the identification of the at least one of the pathology and the musculoskeletal area (column 10, line 48, “At block 406, for the given image region currently being processed, the medical provider system 102 may determine one or more bounding shape display parameters (such as color, opacity and/or shape type) based at least in part on a label within the metadata for the given region. The label may represent or specify a specific pathology or other classification previously determined by a machine learning model and assigned as a classification label to the given region. In some embodiments, for instance, different pathologies may be assigned different bounding shapes, colors or other display parameters, which may be configurable by a user. In one example, at least one display parameter determined at block 406 may be based on a confidence level determined by one or more models. For example, a specific color and/or opacity may be assigned to the bounding region based on its confidence score, as will be further discussed below.”). As noted above, Song and Spillinger are directed toward medical image analysis.
Further, Carter is directed toward “Systems and methods are provided for automatically marking locations within a radiograph of one or more dental pathologies, anatomies, anomalies or other conditions determined by automated image analysis of the radiograph by a number of different machine learning models (abstract).” As can be seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song, Spillinger and Carter are directed toward similar methods of endeavor of medical image analysis. Further, Carter allows for correlating the display to confidence determinations. One of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that outputting confidence information aids a user in determining how much one should rely on a specific output; said differently, if the system has low confidence in the output, the user may not want to rely heavily on that determination. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Carter to ensure a user has context for how confident a system is, to further inform their own decision making.

Claim(s) 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Song.

Regarding dependent claim 7, the rejection of claim 1 is incorporated herein. Additionally, Song discloses wherein the one or more characteristics include one or more patterns identified within one or more images of the plurality of images (paragraph 0027, “CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.”), one or more shapes identified within the one or more images of the plurality of images (paragraph 0049, “In some embodiments, CAD unit 122 may be used to perform a CAD analysis to detect the conspicuous object;” detecting the object is read as detecting the shape); an echogenicity identified in one or more images of the plurality of images (paragraph 0027, “CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor;” pixel intensity and contrast characteristics correlate to brightness, which is an indication of echogenicity). However, Song fails to explicitly disclose an echotexture as further recited. However, at paragraph 0027, Song discloses, “CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.” These quantified features are read as exemplary. One of ordinary skill in the art before the effective filing date of the claimed invention would be aware that there are numerous alternative features that can be detected from images to quantify different disease states. Further, echotexture is well known by one of ordinary skill in the art before the effective filing date of the claimed invention to be a value characterizing the appearance of tissues/organ internal structures as being homogenous or heterogenous.
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Song in order to quantify additional features of the ultrasound images to quantify a wider variety of disease states.

Regarding dependent claim 15, the rejection of claim 9 is incorporated herein. Additionally, Song discloses wherein analyzing the one or more images includes identifying one or more patterns within one or more images of the plurality of images (paragraph 0027, “CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.”), identifying one or more shapes within the one or more images of the plurality of images (paragraph 0049, “In some embodiments, CAD unit 122 may be used to perform a CAD analysis to detect the conspicuous object;” detecting the object is read as detecting the shape); identifying an echogenicity in one or more images of the plurality of images (paragraph 0027, “CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor;” pixel intensity and contrast characteristics correlate to brightness, which is an indication of echogenicity). However, Song fails to explicitly disclose an echotexture as further recited. However, at paragraph 0027, Song discloses, “CAD unit 122 may further determine one or more parameters that quantify the medical condition. For example, the parameters may include size (such as diameter, length, width, depth, etc.), volume, pixel intensities, or the contrast characteristics of a tumor.” These quantified features are read as exemplary. One of ordinary skill in the art before the effective filing date of the claimed invention would be aware that there are numerous alternative features that can be detected from images to quantify different disease states. Further, echotexture is well known by one of ordinary skill in the art before the effective filing date of the claimed invention to be a value characterizing the appearance of tissues/organ internal structures as being homogenous or heterogenous. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Song in order to quantify additional features of the ultrasound images to quantify a wider variety of disease states.

Claim(s) 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Song as applied to claims 1 and 9 respectively above, and further in view of Carter.

Regarding dependent claim 8, the rejection of claim 1 is incorporated herein. Additionally, Song fails to explicitly disclose further comprising: generating a confidence threshold associated with the at least one of the pathology and the musculoskeletal area; and providing the confidence threshold in the report.
However, Carter discloses further comprising: generating a confidence threshold associated with the at least one of the pathology and the musculoskeletal area (column 12, line 53, “For example, a green bounding box may indicate a high confidence score (falling above a first threshold), gold may indicate a medium confidence score (falling above a second threshold) and red may indicate a low confidence score (falling above a third threshold). In other embodiments different shapes, line styles or other visual differences may be used to distinguish confidence scores instead of or in addition to color differences.”); and providing the confidence threshold in the report (column 12, line 53, “For example, a green bounding box may indicate a high confidence score (falling above a first threshold), gold may indicate a medium confidence score (falling above a second threshold) and red may indicate a low confidence score (falling above a third threshold). In other embodiments different shapes, line styles or other visual differences may be used to distinguish confidence scores instead of or in addition to color differences;” outputting the colors correlated to the threshold is read as providing the threshold in the report). Song is directed toward “systems and methods for generating a diagnosis report based on a medical image of a patient (abstract).” Carter is directed toward “Systems and methods are provided for automatically marking locations within a radiograph of one or more dental pathologies, anatomies, anomalies or other conditions determined by automated image analysis of the radiograph by a number of different machine learning models (abstract).” As can be seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song and Carter are directed toward similar methods of endeavor of medical image analysis. Further, Carter allows for correlating the display to confidence determinations. One of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that outputting confidence information aids a user in determining how much one should rely on a specific output; said differently, if the system has low confidence in the output, the user may not want to rely heavily on that determination. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Carter to ensure a user has context for how confident a system is, to further inform their own decision making.

Regarding dependent claim 16, the rejection of claim 9 is incorporated herein. Additionally, Song fails to explicitly disclose further comprising: generating a confidence threshold associated with the at least one of the pathology and the musculoskeletal area; and providing the confidence threshold in the report. However, Carter discloses further comprising: generating a confidence threshold associated with the at least one of the pathology and the musculoskeletal area (column 12, line 53, “For example, a green bounding box may indicate a high confidence score (falling above a first threshold), gold may indicate a medium confidence score (falling above a second threshold) and red may indicate a low confidence score (falling above a third threshold).
In other embodiments different shapes, line styles or other visual differences may be used to distinguish confidence scores instead of or in addition to color differences.”); and providing the confidence threshold in the report (column 12, line 53, “For example, a green bounding box may indicate a high confidence score (falling above a first threshold), gold may indicate a medium confidence score (falling above a second threshold) and red may indicate a low confidence score (falling above a third threshold). In other embodiments different shapes, line styles or other visual differences may be used to distinguish confidence scores instead of or in addition to color differences;” outputting the colors correlated to the threshold is read as providing the threshold in the report). Song is directed toward “systems and methods for generating a diagnosis report based on a medical image of a patient (abstract).” Carter is directed toward “Systems and methods are provided for automatically marking locations within a radiograph of one or more dental pathologies, anatomies, anomalies or other conditions determined by automated image analysis of the radiograph by a number of different machine learning models (abstract).” As can be seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Song and Carter are directed toward similar methods of endeavor of medical image analysis. Further, Carter allows for correlating the display to confidence determinations. One of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that outputting confidence information aids a user in determining how much one should rely on a specific output; said differently, if the system has low confidence in the output, the user may not want to rely heavily on that determination. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Carter to ensure a user has context for how confident a system is, to further inform their own decision making.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson, whose telephone number is (571) 272-3956. The examiner can normally be reached Monday - Friday, 8:00 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/COURTNEY JOAN NELSON/
Primary Examiner, Art Unit 2661

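For orientation on the color-coding and confidence limitations of claims 5-6, 13-14, 16, and 20, which the examiner maps to Carter's confidence-dependent display parameters: below is a minimal sketch of one way a confidence-to-color-and-opacity mapping could look. The colors mirror Carter's green/gold/red example, but the numeric thresholds and the opacity rule are illustrative assumptions only.

```python
# Sketch of a confidence-to-display mapping (claims 5-6, 13-14, 16, 20).
# Colors follow Carter's green/gold/red example; the numeric thresholds
# and the opacity rule are illustrative assumptions only.
def display_params(confidence: float) -> dict:
    if confidence >= 0.9:
        color = "green"   # high confidence (above a first threshold)
    elif confidence >= 0.7:
        color = "gold"    # medium confidence (above a second threshold)
    else:
        color = "red"     # low confidence
    # opacity tracks confidence, so faint marks flag uncertain findings
    return {"color": color, "opacity": round(confidence, 2)}

assert display_params(0.95) == {"color": "green", "opacity": 0.95}
assert display_params(0.60)["color"] == "red"
```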

Prosecution Timeline

Aug 11, 2023
Application Filed
Jul 21, 2025
Non-Final Rejection — §102, §103
Oct 23, 2025
Response Filed
Mar 18, 2026
Final Rejection — §102, §103 (current)
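
The reply clock for the final rejection runs from its Mar 18, 2026 mailing date: a three-month shortened statutory period, extendable under 37 CFR 1.136(a) up to the six-month statutory maximum. A simplified sketch of that date arithmetic (it ignores weekend/holiday rollovers and the two-month advisory-action rule quoted in the action):

```python
# Simplified reply-deadline arithmetic for the Mar 18, 2026 final rejection.
# Ignores weekend/holiday rollovers and the advisory-action nuance.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

mailed = date(2026, 3, 18)
ssp_due = mailed + relativedelta(months=3)        # 2026-06-18, no fee
statutory_cap = mailed + relativedelta(months=6)  # 2026-09-18, hard limit

for ext in range(4):  # 0-3 months of paid extensions under 37 CFR 1.136(a)
    due = min(ssp_due + relativedelta(months=ext), statutory_cap)
    print(f"{ext}-month extension: reply due {due}")
```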

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant Granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant Granted Mar 31, 2026
Patent 12592016
MATERIAL-SPECIFIC ATTENUATION MAPS FOR COMBINED IMAGING SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 96% (+9.4%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month