Prosecution Insights
Last updated: April 19, 2026
Application No. 18/273,959

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Jul 24, 2023
Examiner: WINDSOR, COURTNEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 7m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 86% — above average (217 granted / 252 resolved; +24.1% vs TC avg)
Interview Lift: +9.4% (moderate; resolved cases with interview)
Avg Prosecution: 2y 7m
Total Applications: 284 across all art units (32 currently pending)
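The headline numbers in this card can be reproduced from the raw counts (a quick sanity check; the Tech Center baseline is inferred from the stated delta, not reported directly):

```python
# Reproduce the examiner's career allow rate from the raw counts above.
granted, resolved = 217, 252
allow_rate = 100 * granted / resolved
print(f"career allow rate: {allow_rate:.1f}%")  # 86.1%, displayed as 86%

# The "+24.1% vs TC avg" delta implies a Tech Center baseline of roughly:
tc_baseline = round(allow_rate) - 24.1
print(f"implied TC baseline: {tc_baseline:.1f}%")  # 61.9%
```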

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 252 resolved cases
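Reading each row as a rate plus a signed delta against the Tech Center average, all four statutes imply the same baseline, which suggests the estimate is a single reference value rather than a per-statute average (a quick check, assuming delta = rate − baseline):

```python
# Back out the implied Tech Center baseline from each statute's rate and delta.
rates  = {"§101": 5.4, "§103": 51.1, "§102": 20.5, "§112": 17.9}
deltas = {"§101": -34.6, "§103": +11.1, "§102": -19.5, "§112": -22.1}
baseline = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baseline)  # every statute implies the same 40.0% baseline
```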

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1, 3-6 and 9-12 have been amended, changing the scope and contents of the claims. Claims 13-20 have been newly added. Claims 2 and 8 have been cancelled. Applicant's amendment filed October 14, 2025 overcomes the following objection/rejection(s) from the last Office Action of July 14, 2025: rejections of the claims under 35 U.S.C. § 102.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, and similarly claims 11-12, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5, 7, 9-14, 17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2018/0075599 to Tajbakhsh et al. (hereinafter Tajbakhsh), and further in view of U.S. Patent No. 9,424,644 to Sprencz et al. (hereinafter Sprencz).

Regarding independent claim 1, Tajbakhsh discloses an image processing device (abstract, “A system and methods for detecting polyps using optical images acquired during a colonoscopy;” paragraph 0002, “The present disclosure relates, generally, to systems and method for processing optical images. More particularly, the disclosure relates to automatic detection of polyps in optical images or video;” paragraph 0028, “relay the optical image data to the controller 104 for processing and analysis”), comprising: at least one memory configured to store instructions (paragraph 0035, “Alternatively, such data analysis may be carried out off-line by processing optical image data accessed from a data storage or memory. This would allow for automated verification of previously performed colonoscopies. To this end, the processor may read and execute software instructions from a non-transitory computer-readable medium, such as a hard drive, CD-ROM, DVD, internal, external or flash memory, and the like, as well as transitory computer-readable media”); and at least one processor configured to execute the instructions to (paragraph 0035, “To this end, the processor may read and execute software instructions from a non-transitory computer-readable medium, such as a hard drive, CD-ROM, DVD, internal, external or flash memory, and the like, as well as transitory computer-readable media.”): acquire a captured image acquired by photographing an inspection target by an endoscope (Figure 4, element 402, “receive a set of optical images;” paragraph 0028, “In some embodiments, the colonoscopy device 102 may include an endoscope (not shown in FIG.
1) configured to acquire optical image data, either continuously or intermittently, from a patient's colon, and relay the optical image data to the controller 104 for processing and analysis. By way of example, the endoscope may include a CCD camera, fiber optic camera, or other video recording device, as well as one or more light sources. ;” paragraph 0047, “The process 400 can begin at process block 402 with receiving one or more optical images for analysis. As described, such images can be obtained in substantially real-time from a live video feed or accessed from a data storage, memory, database, cloud, and the like.”); detect a lesion part, in which a lesion formation is suspected, in the captured image (Figure 4, element 404, “generate polyp candidates by analyzing the received set of optical images;” paragraph 0047, “The optical images may then be analyzed to generate polyp candidates, as indicated by process block 404. In the analysis, the images may be processed to construct one or more edge maps, for example, by applying Canny's method of edge detection, for example. Advantageously, different color channels associated with the received images may be analyzed to extract as many edge pixels as possible. As such, the received images may be filtered using various color filters, such as red, blue, an green color filters. Edge pixels obtained using the different color channels may then be combined to generate the edge maps. The edge maps may be further refined by applying a classification scheme based on patterns of intensity variation, and used in a voting scheme, as described, to identify polyp candidates. 
In some aspects, a bounding box is generated for each identified polyp candidate.”); output information based on the diagnosis result by a display device or audio output device (paragraph 0009, “The processor is further configured to process the images by identifying polyps using the computed probabilities, and generating a report indicating identified polyps. The system further includes an output for displaying the report;” paragraph 0040, “The output elements may take any shape or form, and may include various visual, audio and other systems for providing a report either intermittently or in substantially real time. For instance as shown, an output may be in the form of a display. Another output can include speakers. Yet another output can include one or more electronic connections through which signals, data, or reports can sent to a database, data storage, or a medical record, for example.”). Tajbakhsh fails to explicitly disclose as further recited. However, Sprencz discloses input a combined image of the captured image and a cutout image to a diagnosis model (Figure 6, element 252, “overlay lesion candidates on skeletal structure;” column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image generated from the segmented skeletal structure output from bone segmentation subroutine 210 of FIG. 
5.”), wherein the cutout image indicates the lesion part cut out from the captured image (column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image;” the lesion candidate overlay is the cutout image indicating lesion parts from the patient skeletal image) and the diagnosis model is trained to take an image as an input and output a diagnosis result relating to the lesion part in the image (Figure 6, element 262, “classify lesion candidates;” column 9, line 34, “However, if the lesion detection is acceptable 260, lesion detection subroutine 246 is prompted to move on to step 262 and classify the identified lesion candidates as being either bone lesions (i.e., lesions located within the skeletal structure) or non-bone lesions (i.e., lesions not located in the skeletal structure);” column 9, line 45, “To assist with classifying lesion candidates as being either bone lesions or non-bone lesions at step 262, lesion detection subroutine 246 uses the segmented CT volume of step 208. In particular, lesion detection subroutine 246 compares the location of the lesion candidates to the location of the identified skeletal structure. Any lesion candidates that fall within the location of the skeletal structure are automatically identified as bone lesions and any lesion candidates that are outside location of the skeletal structure are automatically identified as non-bone lesions. 
Because lesion detection subroutine 246 automatically classifies lesions as being bone or non-bone based on the identified location of the lesion candidate with respect to the identified location of the patient's skeletal structure, lesion candidates may be correctly classified as being bone lesions with high accuracy;” the definition of the algorithm the classifier follows is read as a method of training); acquire the diagnosis result relating to the lesion part in the combined image output by the diagnosis model in response to the input of the combined image (column 9, line 34, “However, if the lesion detection is acceptable 260, lesion detection subroutine 246 is prompted to move on to step 262 and classify the identified lesion candidates as being either bone lesions (i.e., lesions located within the skeletal structure) or non-bone lesions (i.e., lesions not located in the skeletal structure).”) Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Sprencz is directed toward “Methods and systems for evaluating bone lesions (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Tajbakhsh and Sprencz are directed toward similar methods of endeavor of detection and analysis of lesions in medical images. Further, one of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that providing additional information for diagnosis can aid in a more accurate output. Said differently, inputting just a segmentation of a lesion could lead to errors with diagnosing the lesion location; the rest of the image that would be excluded may contain key information for diagnosis. 
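The claim 1 limitation the examiner maps to Sprencz — inputting a combined image of the captured frame and a lesion cutout to a diagnosis model — can be sketched roughly as follows. This is a hypothetical illustration of one plausible reading (channel-wise stacking after resizing to the model's input format, which also covers the claim 5 variant); the function and parameter names are invented, not taken from the application:

```python
import numpy as np

def make_combined_input(frame, box, model_hw=(224, 224)):
    """Stack the resized captured frame with a resized cutout of the
    detected lesion box (top, left, bottom, right) into one 6-channel input."""
    def nearest_resize(img, hw):
        # Minimal nearest-neighbor resize to avoid external dependencies.
        h, w = img.shape[:2]
        rows = np.arange(hw[0]) * h // hw[0]
        cols = np.arange(hw[1]) * w // hw[1]
        return img[rows][:, cols]

    t, l, b, r = box
    cutout = frame[t:b, l:r]                     # lesion part cut out of the frame
    full = nearest_resize(frame, model_hw)       # frame resized to the model's input format
    cut = nearest_resize(cutout, model_hw)       # cutout resized the same way
    return np.concatenate([full, cut], axis=-1)  # channel-wise "combined image"

combined = make_combined_input(np.zeros((480, 640, 3)), box=(100, 120, 200, 260))
print(combined.shape)  # (224, 224, 6)
```

Side-by-side tiling instead of channel stacking would be an equally plausible reading of "combined image"; the claim language alone does not decide between them.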
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz in order to ensure all available data is considered in diagnosis determination so that accuracy is increased.

Regarding dependent claim 5, the rejection of claim 1 is incorporated herein. Additionally, Tajbakhsh in the combination further discloses wherein the at least one processor is configured to execute the instructions to input an image, into which the captured image resized based on an input format of the diagnosis model and the cutout image are combined, to the diagnosis model, and acquire the diagnosis result based on the diagnosis result outputted by the diagnosis model (abstract, “The method also includes generating a plurality of image patches around locations associated with each polyp candidate, applying a set of convolutional neural networks to the corresponding image patches, and computing probabilities indicative of a maximum response for each convolutional neural network;” each of the image patches input to the CNNs of the diagnosis model represent a "resized" captured image).

Regarding dependent claim 7, the rejection of claim 1 is incorporated herein. Additionally, Tajbakhsh in the combination further discloses wherein the inspection target is either a large bowel or a stomach (paragraph 0009, “In one aspect of the disclosure, a system for polyp detection using optical images acquired during a colonoscopy is provided. The system includes an input configured to receive a set of optical images acquired from a patient; during a colonoscopy;” colonoscopy is read as including the large bowel).

Regarding dependent claim 9, the rejection of claim 1 is incorporated herein.
Additionally, Sprencz discloses wherein the at least one processor is configured to execute the instructions to, in a case in which the presence of a specific type of a lesion by the diagnosis is detected, cause the display device or the audio output device to output information notifying at least the presence of the lesion as the information based on the diagnosis (column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image generated from the segmented skeletal structure output from bone segmentation subroutine 210 of FIG;” column 10, line 5, “Technique 200 also uses the results of the lesion classification in step 268 to calculate quantitative information regarding the skeletal structure and detected bone lesions of the patient. It is contemplated that quantitative information regarding the skeletal structure and detected bone lesions may be in regard to individual bone lesions or the total of all bone lesions, according to various embodiments. The quantitative information is displayed at step 270;” column 10, line 60, “ Image display region 282 may further be configured to display bone lesions and non-bone lesions in different colors or with different shading to visually distinguish detected bone lesions from non-bone lesions. As one example, a lesion candidate classified as a bone lesion, such as lesion candidate 296, may be displayed with a red contour, and a lesion candidate classified as a non-bone lesion, such as lesion candidate 298, may be displayed with a blue contour.”). It is well known by one of ordinary skill in the art before the effective filing date of the invention there are multiple different types of lesions. At the simplest form, there are benign and malignant lesions; malignant lesions needing treatment where benign do not. 
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz to ensure users are made aware of the more concerning lesion type (malignant).

Regarding dependent claim 10, the rejection of claim 1 is incorporated herein. Additionally, Sprencz discloses wherein the at least one processor is configured to execute the instructions to cause the display device or the audio output device to output information relating to at least one of a classification or risk of the lesion part based on the diagnosis as the information based on the diagnosis (column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image generated from the segmented skeletal structure output from bone segmentation subroutine 210 of FIG;” column 10, line 5, “Technique 200 also uses the results of the lesion classification in step 268 to calculate quantitative information regarding the skeletal structure and detected bone lesions of the patient. It is contemplated that quantitative information regarding the skeletal structure and detected bone lesions may be in regard to individual bone lesions or the total of all bone lesions, according to various embodiments. The quantitative information is displayed at step 270;” column 10, line 60, “Image display region 282 may further be configured to display bone lesions and non-bone lesions in different colors or with different shading to visually distinguish detected bone lesions from non-bone lesions. As one example, a lesion candidate classified as a bone lesion, such as lesion candidate 296, may be displayed with a red contour, and a lesion candidate classified as a non-bone lesion, such as lesion candidate 298, may be displayed with a blue contour.”).
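The bone/non-bone distinction quoted above reduces, as the examiner characterizes Sprencz, to a containment test against the segmented skeletal structure: candidates inside it are bone lesions, all others non-bone. A minimal sketch (the patent describes the location comparison but no numeric overlap threshold; the 0.5 cutoff and all names here are assumptions):

```python
import numpy as np

def classify_candidate(lesion_mask, skeleton_mask, min_overlap=0.5):
    """Label a lesion candidate by where it falls relative to the
    segmented skeletal structure (boolean masks of equal shape)."""
    inside = np.logical_and(lesion_mask, skeleton_mask).sum()
    frac = inside / max(lesion_mask.sum(), 1)  # fraction of the candidate inside bone
    return "bone" if frac >= min_overlap else "non-bone"

skeleton = np.zeros((10, 10), dtype=bool); skeleton[:5, :] = True
inside_bone = np.zeros((10, 10), dtype=bool); inside_bone[1:3, 1:3] = True
outside_bone = np.zeros((10, 10), dtype=bool); outside_bone[7:9, 7:9] = True
print(classify_candidate(inside_bone, skeleton))   # bone
print(classify_candidate(outside_bone, skeleton))  # non-bone
```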
It is well known by one of ordinary skill in the art before the effective filing date of the invention there are multiple different types of lesion classifications. At the simplest form, there are benign and malignant lesions, further different lesion stages, etc. One of ordinary skill in the art before the effective filing date would be aware that providing as much classification detail as possible helps inform the diagnosis, and thus the treatment options. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz to ensure users are given as much information as possible related to the lesion itself to ideally produce a more accurate lesion diagnosis and thus, more successful treatment.

Regarding dependent claim 11, the rejection of claim 1 applies directly. Additionally, Tajbakhsh further discloses an image processing method executed by a computer (abstract, “A system and methods for detecting polyps using optical images acquired during a colonoscopy;” paragraph 0002, “The present disclosure relates, generally, to systems and method for processing optical images. More particularly, the disclosure relates to automatic detection of polyps in optical images or video;” paragraph 0028, “relay the optical image data to the controller 104 for processing and analysis”), the image processing method comprising: acquiring a captured image acquired by photographing an inspection target by an endoscope (Figure 4, element 402, “receive a set of optical images;” paragraph 0028, “In some embodiments, the colonoscopy device 102 may include an endoscope (not shown in FIG. 1) configured to acquire optical image data, either continuously or intermittently, from a patient's colon, and relay the optical image data to the controller 104 for processing and analysis.
By way of example, the endoscope may include a CCD camera, fiber optic camera, or other video recording device, as well as one or more light sources. ;” paragraph 0047, “The process 400 can begin at process block 402 with receiving one or more optical images for analysis. As described, such images can be obtained in substantially real-time from a live video feed or accessed from a data storage, memory, database, cloud, and the like.”); detecting a lesion part, in which a lesion formation is suspected, in the captured image (Figure 4, element 404, “generate polyp candidates by analyzing the received set of optical images;” paragraph 0047, “The optical images may then be analyzed to generate polyp candidates, as indicated by process block 404. In the analysis, the images may be processed to construct one or more edge maps, for example, by applying Canny's method of edge detection, for example. Advantageously, different color channels associated with the received images may be analyzed to extract as many edge pixels as possible. As such, the received images may be filtered using various color filters, such as red, blue, an green color filters. Edge pixels obtained using the different color channels may then be combined to generate the edge maps. The edge maps may be further refined by applying a classification scheme based on patterns of intensity variation, and used in a voting scheme, as described, to identify polyp candidates. In some aspects, a bounding box is generated for each identified polyp candidate.”); outputting information based on the diagnosis result by a display device or audio output device (paragraph 0009, “The processor is further configured to process the images by identifying polyps using the computed probabilities, and generating a report indicating identified polyps. 
The system further includes an output for displaying the report;” paragraph 0040, “The output elements may take any shape or form, and may include various visual, audio and other systems for providing a report either intermittently or in substantially real time. For instance as shown, an output may be in the form of a display. Another output can include speakers. Yet another output can include one or more electronic connections through which signals, data, or reports can sent to a database, data storage, or a medical record, for example.”). Tajbakhsh fails to explicitly disclose as further recited. However, Sprencz discloses inputting a combined image of the captured image and a cutout image to a diagnosis model (Figure 6, element 252, “overlay lesion candidates on skeletal structure;” column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image generated from the segmented skeletal structure output from bone segmentation subroutine 210 of FIG. 
5.”), wherein the cutout image indicates the lesion part cut out from the captured image (column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image;” the lesion candidate overlay is the cutout image indicating lesion parts from the patient skeletal image) and the diagnosis model is trained to take an image as an input and output a diagnosis result relating to the lesion part in the image (Figure 6, element 262, “classify lesion candidates;” column 9, line 34, “However, if the lesion detection is acceptable 260, lesion detection subroutine 246 is prompted to move on to step 262 and classify the identified lesion candidates as being either bone lesions (i.e., lesions located within the skeletal structure) or non-bone lesions (i.e., lesions not located in the skeletal structure);” column 9, line 45, “To assist with classifying lesion candidates as being either bone lesions or non-bone lesions at step 262, lesion detection subroutine 246 uses the segmented CT volume of step 208. In particular, lesion detection subroutine 246 compares the location of the lesion candidates to the location of the identified skeletal structure. Any lesion candidates that fall within the location of the skeletal structure are automatically identified as bone lesions and any lesion candidates that are outside location of the skeletal structure are automatically identified as non-bone lesions. 
Because lesion detection subroutine 246 automatically classifies lesions as being bone or non-bone based on the identified location of the lesion candidate with respect to the identified location of the patient's skeletal structure, lesion candidates may be correctly classified as being bone lesions with high accuracy;” the definition of the algorithm the classifier follows is read as a method of training); acquiring the diagnosis result relating to the lesion part in the combined image output by the diagnosis model in response to the input of the combined image (column 9, line 34, “However, if the lesion detection is acceptable 260, lesion detection subroutine 246 is prompted to move on to step 262 and classify the identified lesion candidates as being either bone lesions (i.e., lesions located within the skeletal structure) or non-bone lesions (i.e., lesions not located in the skeletal structure).”). Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Sprencz is directed toward “Methods and systems for evaluating bone lesions (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Tajbakhsh and Sprencz are directed toward similar methods of endeavor of detection and analysis of lesions in medical images. Further, one of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that providing additional information for diagnosis can aid in a more accurate output. Said differently, inputting just a segmentation of a lesion could lead to errors with diagnosing the lesion location; the rest of the image that would be excluded may contain key information for diagnosis. 
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz in order to ensure all available data is considered in diagnosis determination so that accuracy is increased.

Regarding dependent claim 12, the rejection of claim 1 applies directly. Additionally, Tajbakhsh further discloses a non-transitory computer readable storage medium storing a program executed by a computer (paragraph 0035, “Alternatively, such data analysis may be carried out off-line by processing optical image data accessed from a data storage or memory. This would allow for automated verification of previously performed colonoscopies. To this end, the processor may read and execute software instructions from a non-transitory computer-readable medium, such as a hard drive, CD-ROM, DVD, internal, external or flash memory, and the like, as well as transitory computer-readable media”), the program causing the computer to: acquire a captured image acquired by photographing an inspection target by an endoscope (Figure 4, element 402, “receive a set of optical images;” paragraph 0028, “In some embodiments, the colonoscopy device 102 may include an endoscope (not shown in FIG. 1) configured to acquire optical image data, either continuously or intermittently, from a patient's colon, and relay the optical image data to the controller 104 for processing and analysis. By way of example, the endoscope may include a CCD camera, fiber optic camera, or other video recording device, as well as one or more light sources;” paragraph 0047, “The process 400 can begin at process block 402 with receiving one or more optical images for analysis.
As described, such images can be obtained in substantially real-time from a live video feed or accessed from a data storage, memory, database, cloud, and the like.”); detect a lesion part, in which a lesion formation is suspected, in the captured image (Figure 4, element 404, “generate polyp candidates by analyzing the received set of optical images;” paragraph 0047, “The optical images may then be analyzed to generate polyp candidates, as indicated by process block 404. In the analysis, the images may be processed to construct one or more edge maps, for example, by applying Canny's method of edge detection, for example. Advantageously, different color channels associated with the received images may be analyzed to extract as many edge pixels as possible. As such, the received images may be filtered using various color filters, such as red, blue, an green color filters. Edge pixels obtained using the different color channels may then be combined to generate the edge maps. The edge maps may be further refined by applying a classification scheme based on patterns of intensity variation, and used in a voting scheme, as described, to identify polyp candidates. In some aspects, a bounding box is generated for each identified polyp candidate.”); and output information based on the diagnosis result by a display device or audio output device (paragraph 0009, “The processor is further configured to process the images by identifying polyps using the computed probabilities, and generating a report indicating identified polyps. The system further includes an output for displaying the report;” paragraph 0040, “The output elements may take any shape or form, and may include various visual, audio and other systems for providing a report either intermittently or in substantially real time. For instance as shown, an output may be in the form of a display. Another output can include speakers. 
Yet another output can include one or more electronic connections through which signals, data, or reports can sent to a database, data storage, or a medical record, for example.”). Tajbakhsh fails to explicitly disclose as further recited. However, Sprencz discloses input a combined image of the captured image and a cutout image to a diagnosis model (Figure 6, element 252, “overlay lesion candidates on skeletal structure;” column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image generated from the segmented skeletal structure output from bone segmentation subroutine 210 of FIG. 5.”), wherein the cutout image indicates the lesion part cut out from the captured image (column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image;” the lesion candidate overlay is the cutout image indicating lesion parts from the patient skeletal image) and the diagnosis model is trained to take an image as an input and output a diagnosis result relating to the lesion part in the image (Figure 6, element 262, “classify lesion candidates;” column 9, line 34, “However, if the lesion detection is acceptable 260, lesion detection subroutine 246 is prompted to move on to step 262 and classify the identified lesion candidates as being either bone lesions (i.e., lesions located within the skeletal structure) or non-bone lesions (i.e., lesions not located in the skeletal structure);” column 9, line 45, “To assist with classifying lesion candidates as being either bone lesions or non-bone lesions at step 262, lesion detection subroutine 246 uses the segmented CT volume of step 208. In particular, lesion detection subroutine 246 compares the location of the lesion candidates to the location of the identified skeletal structure. 
Any lesion candidates that fall within the location of the skeletal structure are automatically identified as bone lesions and any lesion candidates that are outside location of the skeletal structure are automatically identified as non-bone lesions. Because lesion detection subroutine 246 automatically classifies lesions as being bone or non-bone based on the identified location of the lesion candidate with respect to the identified location of the patient's skeletal structure, lesion candidates may be correctly classified as being bone lesions with high accuracy;” the definition of the algorithm the classifier follows is read as a method of training); acquire the diagnosis result relating to the lesion part in the combined image output by the diagnosis model in response to the input of the combined image (column 9, line 34, “However, if the lesion detection is acceptable 260, lesion detection subroutine 246 is prompted to move on to step 262 and classify the identified lesion candidates as being either bone lesions (i.e., lesions located within the skeletal structure) or non-bone lesions (i.e., lesions not located in the skeletal structure).”). Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Sprencz is directed toward “Methods and systems for evaluating bone lesions (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, both Tajbakhsh and Sprencz are directed toward similar methods of endeavor of detection and analysis of lesions in medical images. Further, one of ordinary skill in the art before the effective filing date of the claimed invention would easily understand that providing additional information for diagnosis can aid in a more accurate output. 
Said differently, inputting just a segmentation of a lesion could lead to errors with diagnosing the lesion location; the rest of the image that would be excluded may contain key information for diagnosis. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz in order to ensure all available data is considered in diagnosis determination so that accuracy is increased.

Regarding dependent claim 13, the rejection of claim 1 is incorporated herein. Additionally, Tajbakhsh in the combination further discloses wherein the at least one processor is configured to execute the instructions to detect the lesion part in which the lesion formation is suspected by inputting the captured image into a detection model trained in advance by using machine learning (paragraph 0026, “Specifically, the present approach makes use of convolutional neural networks (“CNNs”) to learn and combine information from multiple image features, including color, shape, texture, and temporal features, into one consolidated framework;” paragraph 0057, “FIG. 6 illustrates the network layout applied for CNNs utilized this study. A common practice when training CNNs”).

Regarding dependent claim 14, the rejection of claim 1 is incorporated herein.
Additionally, Sprencz in the combination further discloses wherein the diagnosis result is classification information which indicates a classification of the lesion formation (abstract, “classification of the at least one lesion as a bone or non-bone lesion”), and wherein the information based on the diagnosis result includes at least one of: the classification information (abstract, “classification of the at least one lesion as a bone or non-bone lesion”), a lesion name based on the classification information (column 10, line 60, “Image display region 282 may further be configured to display bone lesions and non-bone lesions in different colors or with different shading to visually distinguish detected bone lesions from non-bone lesions;” the color corresponds to the name of the lesion type), or information on a risk based on the classification information (column 2, line 43, “automatically calculate a bone lesion metric based on the classification. The set of instructions further causes the computer to calculate a lesion burden as a ratio of the bone lesion metric and the patient skeletal metric.”). It is well known by one of ordinary skill in the art before the effective filing date of the invention that there are multiple different types of lesion classifications. In the simplest form, there are benign and malignant lesions; beyond that, different lesion stages, etc. One of ordinary skill in the art before the effective filing date would be aware that providing as much classification detail as possible helps inform the diagnosis, and thus the treatment options. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz to ensure users are given as much information as possible related to the lesion itself to ideally produce a more accurate lesion diagnosis and thus, more successful treatment. Regarding dependent claim 17, the rejection of claim 11 is incorporated herein.
Additionally, Tajbakhsh in the combination further discloses the image processing method comprising: inputting an image, into which the captured image resized based on an input format of the diagnosis model and the cutout image are combined, to the diagnosis model, and acquiring the diagnosis result based on the diagnosis result outputted by the diagnosis model (abstract, “The method also includes generating a plurality of image patches around locations associated with each polyp candidate, applying a set of convolutional neural networks to the corresponding image patches, and computing probabilities indicative of a maximum response for each convolutional neural network;” each of the image patches input to the CNNs of the diagnosis model represents a "resized" captured image). Regarding dependent claim 19, the rejection of claim 11 is incorporated herein. Additionally, Tajbakhsh in the combination further discloses wherein the inspection target is either a large bowel or a stomach (paragraph 0009, “In one aspect of the disclosure, a system for polyp detection using optical images acquired during a colonoscopy is provided. The system includes an input configured to receive a set of optical images acquired from a patient during a colonoscopy;” colonoscopy is read as including the large bowel). Regarding dependent claim 20, the rejection of claim 11 is incorporated herein.
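Claim 17 above recites combining the cutout image with a captured image resized to the diagnosis model's input format. The application's actual combination scheme is not reproduced in this record; the following is a minimal numpy sketch under assumed shapes, using nearest-neighbour index sampling for the resize and a channel-wise stack for the combination (function and variable names are hypothetical):

```python
import numpy as np

def combine_for_model(captured, cutout, model_hw=(224, 224)):
    """Resize `captured` to an assumed model input size, then combine
    it with `cutout` along a new channel axis (illustrative only)."""
    h, w = model_hw
    # Nearest-neighbour resize via integer index sampling (no deps).
    rows = np.arange(h) * captured.shape[0] // h
    cols = np.arange(w) * captured.shape[1] // w
    resized = captured[rows][:, cols]
    # One possible "combination": stack the two images channel-wise.
    return np.stack([resized, cutout], axis=-1)

captured = np.zeros((448, 640))   # illustrative grayscale frame
cutout = np.ones((224, 224))      # lesion cutout already at model size
combined = combine_for_model(captured, cutout)
print(combined.shape)  # (224, 224, 2)
```

The choice of nearest-neighbour sampling and channel stacking is only one way to read "resized ... and ... combined"; the claim language does not fix either detail.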
Additionally, Sprencz discloses the image processing method comprising: in a case in which the presence of a specific type of a lesion by the diagnosis is detected, causing the display device or the audio output device to output information notifying at least the presence of the lesion as the information based on the diagnosis (column 9, line 23, “Images of the lesion candidates are generated in step 252 and displayed as an overlay on the patient skeletal image generated from the segmented skeletal structure output from bone segmentation subroutine 210 of FIG;” column 10, line 5, “Technique 200 also uses the results of the lesion classification in step 268 to calculate quantitative information regarding the skeletal structure and detected bone lesions of the patient. It is contemplated that quantitative information regarding the skeletal structure and detected bone lesions may be in regard to individual bone lesions or the total of all bone lesions, according to various embodiments. The quantitative information is displayed at step 270;” column 10, line 60, “Image display region 282 may further be configured to display bone lesions and non-bone lesions in different colors or with different shading to visually distinguish detected bone lesions from non-bone lesions. As one example, a lesion candidate classified as a bone lesion, such as lesion candidate 296, may be displayed with a red contour, and a lesion candidate classified as a non-bone lesion, such as lesion candidate 298, may be displayed with a blue contour.”). It is well known by one of ordinary skill in the art before the effective filing date of the invention that there are multiple different types of lesions. In the simplest form, there are benign and malignant lesions; malignant lesions need treatment whereas benign ones do not.
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Sprencz to ensure users are made aware of the more concerning lesion type (malignant). Claim(s) 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Tajbakhsh further in view of Sprencz as applied to claims 1 and 11 respectively above, and further in view of U.S. Publication No. 2021/0052135 to Fu et al. (hereinafter Fu). Regarding dependent claim 3, the rejection of claim 1 is incorporated herein. Additionally, Tajbakhsh and Sprencz in the combination as a whole fail to explicitly disclose wherein the at least one processor is configured to execute the instructions to input an image, in which the captured image is overlaid on the cutout image in a channel direction, to the diagnosis model, and acquire the diagnosis result based on the diagnosis result outputted by the diagnosis model. However, Fu discloses wherein the at least one processor is configured to execute the instructions to input an image, in which the captured image is overlaid on the cutout image in a channel direction (paragraph 0043, “thereby reducing the quantity of the input feature images and fusing features of the channels. The 1×1 convolution operation in the transition layer may reduce the quantity of the input channels by half;” Figure 3, multiple images are input into the convolution layer and processed for prediction), to the diagnosis model, and acquire the diagnosis result based on the diagnosis result outputted by the diagnosis model (Figure 3, “classification layer” and “predict an organ category” is read as the diagnosis of the image). With regard to specifically overlaying the captured image and the cut-out image, Fu discloses inputting, alongside the original image, multiple different images that have been transformed from the original (see input layer in Figure 3).
The cutout image is read as a further transformation of the original image; thus, it would have been obvious to a person having ordinary skill in the art at the time of filing the claimed invention to modify the teaching of Fu to input multiple different images, here a captured image and a segmented image, into the neural network. As noted above, Tajbakhsh and Sprencz are directed toward lesion analysis in medical images. Further, Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Fu is directed toward “an endoscopic image processing method and system, and a computer device (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, Tajbakhsh, Sprencz, and Fu are directed toward a similar field of endeavor: image analysis. Further, Fu allows for an input containing multiple images (see input layer in Figure 3). This allows for increased efficiency, since each image need not be processed individually, and for better determination of relationships between images than analyzing any one image alone. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fu to ensure inter-image relationships can be determined, and allow for a more efficient system. Regarding dependent claim 15, the rejection of claim 11 is incorporated herein. Additionally, Tajbakhsh and Sprencz in the combination as a whole fail to explicitly disclose the image processing method comprising: inputting an image, in which the captured image is overlaid on the cutout image in a channel direction, to the diagnosis model, and acquiring the diagnosis result based on the diagnosis result outputted by the diagnosis model.
However, Fu discloses the image processing method comprising: inputting an image, in which the captured image is overlaid on the cutout image in a channel direction (paragraph 0043, “thereby reducing the quantity of the input feature images and fusing features of the channels. The 1×1 convolution operation in the transition layer may reduce the quantity of the input channels by half;” Figure 3, multiple images are input into the convolution layer and processed for prediction), to the diagnosis model, and acquiring the diagnosis result based on the diagnosis result outputted by the diagnosis model (Figure 3, “classification layer” and “predict an organ category” is read as the diagnosis of the image). With regard to specifically overlaying the captured image and the cut-out image, Fu discloses inputting, alongside the original image, multiple different images that have been transformed from the original (see input layer in Figure 3). The cutout image is read as a further transformation of the original image; thus, it would have been obvious to a person having ordinary skill in the art at the time of filing the claimed invention to modify the teaching of Fu to input multiple different images, here a captured image and a segmented image, into the neural network. As noted above, Tajbakhsh and Sprencz are directed toward lesion analysis in medical images. Further, Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Fu is directed toward “an endoscopic image processing method and system, and a computer device (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, Tajbakhsh, Sprencz, and Fu are directed toward a similar field of endeavor: image analysis. Further, Fu allows for an input containing multiple images (see input layer in Figure 3).
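The "channel direction" overlay recited in claims 3 and 15 can be pictured as concatenation along the channel axis, so the diagnosis model receives a single multi-channel tensor rather than two separate images. A minimal sketch with assumed, illustrative shapes (not taken from the application or from Fu):

```python
import numpy as np

captured = np.zeros((224, 224, 3))  # RGB captured image (illustrative)
cutout = np.ones((224, 224, 1))     # single-channel cutout of the lesion

# Overlay "in a channel direction": concatenate along the last axis,
# yielding one 4-channel input tensor for the diagnosis model.
combined = np.concatenate([captured, cutout], axis=-1)
print(combined.shape)  # (224, 224, 4)
```

Stacked this way, both images reach the network in a single forward pass instead of two.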
This allows for increased efficiency, since each image need not be processed individually, and for better determination of relationships between images than analyzing any one image alone. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fu to ensure inter-image relationships can be determined, and allow for a more efficient system. Claim(s) 4 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tajbakhsh further in view of Sprencz as applied to claims 1 and 11 respectively above, and further in view of Shi, Chenfei, Xue, Yan, Jiang, Chuan, Tian, Hui, Gastroscopic Panoramic View: Application to Automatic Polyps Detection under Gastroscopy, Computational and Mathematical Methods in Medicine, vol. 2019, Article ID 4393124, 8 pages, 2019, https://doi.org/10.1155/2019/4393124 (hereinafter Shi). Regarding dependent claim 4, the rejection of claim 1 is incorporated herein. Additionally, Tajbakhsh and Sprencz in the combination as a whole fail to explicitly disclose wherein the at least one processor is configured to execute the instructions to input an image, in which the captured image and the cutout image are connected in a vertical direction or a lateral direction of the images, to the diagnosis model, and acquire the diagnosis result based on the diagnosis result outputted by the diagnosis model. However, Shi discloses wherein the at least one processor is configured to execute the instructions to input an image, in which the captured image and the cutout image are connected in a vertical direction or a lateral direction of the images (Figure 5; section 4, “The panorama image is gradually constructed and unfolded, as well as the polyps in the panorama image are detected in real time, shown in Figure 5.”), to the diagnosis model, and acquire the diagnosis result based on the diagnosis result outputted by the diagnosis model (Figure 2).
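By contrast, the "vertical direction or a lateral direction" connection recited in claims 4 and 16 is a spatial joining of the two images. Shi's panorama construction registers and blends overlapping frames, which is more involved; the bare axis-wise connection the claims recite can be sketched as follows, with shapes that are illustrative assumptions:

```python
import numpy as np

captured = np.zeros((224, 224, 3))  # illustrative shapes; the cutout is
cutout = np.ones((224, 224, 3))     # assumed padded/resized to match

vertical = np.concatenate([captured, cutout], axis=0)  # top-to-bottom
lateral = np.concatenate([captured, cutout], axis=1)   # side-by-side
print(vertical.shape, lateral.shape)  # (448, 224, 3) (224, 448, 3)
```

Either result is still a single image, so it can be fed to the diagnosis model unchanged.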
With regard to specifically inputting stitched images of the captured image and the cut-out image, Shi makes clear that combining images in a variety of directions is well known. Stitching images together is not novel, and if there were a need to combine two images either laterally or vertically, it would have been understood by one of ordinary skill in the art to modify the teaching of Shi to combine a set of input images into one image. As noted above, Tajbakhsh and Sprencz are directed toward lesion analysis in medical images. Further, Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Shi is directed toward a “gastroscopic panorama reconstruction method (abstract).” It can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention that Tajbakhsh, Sprencz, and Shi are directed toward a similar field of endeavor: the image processing of medical images. Further, Shi allows for processing of stitched images containing different information. One of ordinary skill in the art would easily understand that different images or transformations of an image contain various important data; for example, contrast enhancement may provide specific information about a diseased area, while a cropped image may provide a specific ROI. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate Shi in order to ensure the system focuses on different formats of an input image, allowing it to focus on the most important parts and avoid processing unnecessary data. Regarding dependent claim 16, the rejection of claim 11 is incorporated herein.
Additionally, Tajbakhsh and Sprencz in the combination as a whole fail to explicitly disclose the image processing method comprising: inputting an image, in which the captured image and the cutout image are connected in a vertical direction or a lateral direction of the images, to the diagnosis model, and acquiring the diagnosis result based on the diagnosis result outputted by the diagnosis model. However, Shi discloses the image processing method comprising: inputting an image, in which the captured image and the cutout image are connected in a vertical direction or a lateral direction of the images (Figure 5; section 4, “The panorama image is gradually constructed and unfolded, as well as the polyps in the panorama image are detected in real time, shown in Figure 5.”), to the diagnosis model, and acquiring the diagnosis result based on the diagnosis result outputted by the diagnosis model (Figure 2). With regard to specifically inputting stitched images of the captured image and the cut-out image, Shi makes clear that combining images in a variety of directions is well known. Stitching images together is not novel, and if there were a need to combine two images either laterally or vertically, it would have been understood by one of ordinary skill in the art to modify the teaching of Shi to combine a set of input images into one image. As noted above, Tajbakhsh and Sprencz are directed toward lesion analysis in medical images. Further, Tajbakhsh is directed toward “A system and methods for detecting polyps using optical images acquired during a colonoscopy (abstract).” Shi is directed toward a “gastroscopic panorama reconstruction method (abstract).” It can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention that Tajbakhsh, Sprencz, and Shi are directed toward a similar field of endeavor: the image processing of medical images. Further, Shi allows for processing of stitched images containing different information.
One of ordinary skill in the art would easily understand that different images or transformations of an image contain various important data; for example, contrast enhancement may provide specific information about a diseased area, while a cropped image may provide a specific ROI. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate Shi in order to ensure the system focuses on different formats of an input image, allowing it to focus on the most important parts and avoid processing unnecessary data. Claim(s) 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tajbakhsh further in view of Sprencz as applied to claims 1 and 11 respectively above, and further in view of U.S. Publication No. 2020/0342598 to Shiratani (hereinafter Shiratani). Regarding dependent claim 6, the rejection of claim 1 is incorporated herein. Additionally, Tajbakhsh and Sprencz in the combination as a whole fail to explicitly disclose wherein the at least one processor is configured to execute the instructions to a

Prosecution Timeline

Jul 24, 2023
Application Filed
Jul 10, 2025
Non-Final Rejection — §103
Oct 14, 2025
Response Filed
Nov 25, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant Granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant Granted Mar 31, 2026
Patent 12592016
Material-Specific Attenuation Maps for Combined Imaging Systems
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
96%
With Interview (+9.4%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
