Prosecution Insights
Last updated: April 19, 2026
Application No. 18/578,337

COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR OBJECT DETECTION AND CHARACTERIZATION

Status: Non-Final Office Action (§101, §102, §103)
Filed: Jan 11, 2024
Examiner: SORRIN, AARON JOSEPH
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Cosmo Artificial Intelligence – AI Limited
OA Round: 1 (Non-Final)

Predictions
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (46 granted / 62 resolved; +12.2% vs Tech Center average, above average)
Interview Lift: +50.6% (allow rate in resolved cases with an interview vs. without)
Typical Timeline: 3y 5m average prosecution; 22 applications currently pending
Career History: 84 total applications across all art units

Statute-Specific Performance

§101: 20.4% (-19.6% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§103: 35.6% (-4.4% vs TC avg)
§112: 29.3% (-10.7% vs TC avg)
Tech Center averages are estimates. Figures are based on career data from 62 resolved cases.
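As an arithmetic cross-check of the examiner statistics above, the career allow rate follows directly from the granted/resolved counts. This is a toy illustration; the implied Tech Center average is an inference from the stated +12.2% delta, not a reported figure.

```python
# Cross-check of the examiner statistics. The implied Tech Center average
# is derived from the stated "+12.2% vs TC avg" delta, not reported directly.
granted, resolved = 46, 62

career_allow_rate = granted / resolved        # 0.7419..., shown as 74%
implied_tc_avg = career_allow_rate - 0.122    # "+12.2% vs TC avg" -> ~62%

print(f"Career allow rate:  {career_allow_rate:.1%}")   # 74.2%
print(f"Implied TC average: {implied_tc_avg:.1%}")      # 62.0%
```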

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 18578337, filed on 01/11/2024.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 1/11/24, 10/29/24, and 6/27/25 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 30-53 are rejected under 35 U.S.C. 101. Claim 30 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of analyzing medical images, aggregating and evaluating data, and making determinations, without significantly more.
The claim recites: “A computer-implemented system for processing real-time video, the system comprising at least one processor configured to: detect an object of interest in a plurality of frames received from a medical image device; characterize the object of interest, wherein characterization includes determining a plurality of features associated with the object of interest, the plurality of features including a determined location and a determined size of the object of interest; aggregate, when the object of interest persists over more than one of the plurality of frames, information associated with the determined location and determined size of the object of interest; evaluate, based on the aggregated information, the determined location and determined size of the object of interest; present, on a display device, when the determined location is in a first body region and the determined size is within a first range, the aggregated information for the object of interest; and present, on the display device, when the determined location is in a second body region and the determined size is within a second range, information indicating a status of the characterization of the object of interest.” The limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind. A person can identify objects in images, characterize the location and size of the object, aggregate information obtained across multiple images, evaluate location and size based on the aggregate information, and present information and characterization status (verbally or written) based on body region and size ranges. The obtaining of real-time medical images amounts to data collection (insignificant extra-solution activity). This judicial exception is not integrated into a practical application. 
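To make the examiner's step-by-step reading of claim 30 easier to follow, the recited pipeline (detect, characterize, aggregate, evaluate, present conditionally) can be sketched in Python. This is a purely illustrative sketch: the function names, region labels, and size thresholds are hypothetical and appear nowhere in the application or the Office Action.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectOfInterest:
    # "plurality of features": a determined location and size, per frame
    locations: list = field(default_factory=list)
    sizes: list = field(default_factory=list)

def process_frames(frames, detect, characterize):
    """Illustrative sketch of the claim-30 pipeline: detect an object,
    characterize it, aggregate across frames, evaluate, then present
    conditionally on body region and size range (hypothetical values)."""
    obj = ObjectOfInterest()
    for frame in frames:
        detection = detect(frame)                 # detect object of interest
        if detection is None:
            continue
        location, size = characterize(detection)  # determined location + size
        obj.locations.append(location)            # aggregate when the object
        obj.sizes.append(size)                    # persists over frames

    if not obj.sizes:
        return None
    # evaluate location and size based on the aggregated information
    location = max(set(obj.locations), key=obj.locations.count)
    size = sum(obj.sizes) / len(obj.sizes)

    # conditional presentation (regions and ranges are made up)
    if location == "first_region" and 0 < size <= 5:
        return ("aggregated_info", location, size)
    if location == "second_region" and size > 5:
        return ("characterization_status", location, size)
    return None
```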
In particular, the claim recites the additional elements of a computer-implemented system, a processor, a medical device, and a display device. These are recited at a level of generality such that they amount to generic elements for implementing the abstract idea (computer-implemented system, a processor), obtaining medical images (medical device), and outputting determinations (display). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality. The claim therefore recites a judicial exception that is not integrated into a practical application and does not include additional elements sufficient to amount to significantly more than the judicial exception. This claim is not patent eligible.

Claim 31 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of determining a medical guideline based on location and size, then presenting the medical guideline. This amounts to identifying an image as a result, without significantly more. This claim is not patent eligible.

Claim 32 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of classifying an object of interest by several means, each of which can be done mentally. This claim is not patent eligible.

Claim 33 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea specifying the location of the object as a physiological region such as rectum, sigmoid colon, etc.
This modification of claim 30 can be performed with the human mind, as the human mind can make a determination based on a body region. This claim is not patent eligible.

Claim 34 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of specifying the determined size of the object of interest as a number or size classification, which can be done by the human mind. This claim is not patent eligible.

Claim 35 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of detecting multiple objects in multiple frames, characterizing the objects, and presenting information, each of which may be done mentally/verbally. This claim is not patent eligible.

Claims 36-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of identifying the object as diminutive or non-diminutive and generating information about the object of interest based on the identification, which can be done mentally. The claims are not patent eligible.

Claims 38-45 and 46-53 are rejected under 35 U.S.C. 101 because they are directed to a computer-implemented method and non-transitory computer readable medium with instructions for a processor, which are analogous to the abstract idea recited in claims 30-37. The computer implementation, non-transitory computer readable medium, and processor are recited at a high level of generality such that they amount to generic computer components for the performance of the abstract idea. The claims are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claim(s) 30, 32, 34-35, 38, 40, 42-43, 46, 48, and 50-51 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ding (CN 107230198 A). Regarding claim 30, Ding teaches “A computer-implemented system for processing real-time video, (Ding, Page 4, Paragraphs 13-14, “In step S101, a first gastroscope image is dynamically acquired. In this step, the first gastroscope image may be an image to be processed obtained from a gastroscope, and obtained through optical processing to obtain a white light or NBI (NarrowBand Imaging endoscopic narrow-band imaging) gastroscope image. The gastroscope image is acquired dynamically in real time by the above-mentioned gastroscope device, specifically, dynamic images at preset time intervals can be acquired through a preset setting. The first gastroscope image can be pre-processed through operations including rotation and color difference and color temperature adjustment to complete local image processing with low performance requirements. 
In addition, the first gastroscope image can also receive information input from an interactive display device.”) “the system comprising at least one processor” (Ding, Figure 3, translated) “configured to: detect an object of interest in a plurality of frames received from a medical image device; characterize the object of interest, wherein characterization includes determining a plurality of features associated with the object of interest, the plurality of features including a determined location and a determined size of the object of interest;” (Ding, Page 4, Paragraphs 15-16, “In step S102, segment the segmented area with lesion features on the first gastroscope image, mark the first-level feature information and position information corresponding to the segmented area on the first gastroscope image as the second gastroscope image, and output the second gastroscopy image. In this step, the first gastroscope image is received, and a target detection algorithm is used to perform region division on the first gastroscope image, including lesion separation and feature extraction. For example, by scanning the region on the first gastroscope image that may have a feature of a lesion, the region on the first gastroscope image that may have a feature of a lesion is segmented, and the segmented area is marked as a suspicious area. Then, the image information data obtained by the frame-selected segmented area is used as input, the segmented area is selected from the first gastroscope image, the positional relationship between the segmented areas is recorded, and the primary feature information contained in the image of the segmented area is extracted, including the extraction of features such as boundary definition, color, surface smoothness, and shape. Finally, a preliminary annotation is performed on the first gastroscope image, so as to obtain a second gastroscope image.
In this step, the second gastroscope image can be interactively displayed on the display device.” Note that boundary definition and shape features amount to location and size). “aggregate, when the object of interest persists over more than one of the plurality of frames, information associated with the determined location and determined size of the object of interest; evaluate, based on the aggregated information, the determined location and determined size of the object of interest;” (Ding, Page 4, Paragraph 17, “In step S103, the second gastroscope image is received, and the primary feature information and position information in a plurality of the second gastroscope images are combined and analyzed to form secondary feature information and an area position corresponding to the secondary feature information, and output a third gastroscope image labeled with the secondary feature information and the region location information.” Note that the combining and analyzing of primary feature information and position information amounts to the aggregating and evaluating of the determined location and size.) 
“present, on a display device, when the determined location is in a first body region and the determined size is within a first range, the aggregated information for the object of interest; and present, on the display device, when the determined location is in a second body region and the determined size is within a second range, information indicating a status of the characterization of the object of interest.” (Ding, Page 4, Paragraph 17, “In step S103, the second gastroscope image is received, and the primary feature information and position information in a plurality of the second gastroscope images are combined and analyzed to form secondary feature information and an area position corresponding to the secondary feature information, and output a third gastroscope image labeled with the secondary feature information and the region location information.” Note that the display of the secondary feature information and region location information are mapped to the presentation of the aggregated information and a status of the characterization, respectively. Additionally, the first and second human body regions are not expressly defined as different, and under broadest reasonable interpretation, can be interpreted as any region inside the human body. Similarly, the first and second size are not expressly defined as different, and could therefore be interpreted as any size >0. Still, the “when” statement merely describes when the step is applicable thus it does not expressly provide a conditional requirement.) 
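The examiner's broadest-reasonable-interpretation point, namely that the claim does not expressly require the first and second body regions or size ranges to differ, can be illustrated with a toy check. All values below are hypothetical; nothing here comes from the claim itself.

```python
# Under BRI as the examiner reads it, the "first" and "second" region/range
# definitions may coincide, so one displayed output can satisfy both
# "present" limitations at once. All values are hypothetical.
def satisfies_both(location, size, first, second):
    in_first = location in first["regions"] and first["lo"] <= size <= first["hi"]
    in_second = location in second["regions"] and second["lo"] <= size <= second["hi"]
    return in_first and in_second

# Identical (overlapping) definitions are not excluded by the claim text.
spec = {"regions": {"colon"}, "lo": 0.0, "hi": 100.0}
print(satisfies_both("colon", 6.0, spec, spec))  # True
```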
Regarding claim 32, Ding teaches “The system of claim 30,” “wherein the plurality of features further includes a classification of the object of interest, the classification being based on at least one of a histological classification, a morphological classification, a structural classification, or a malignancy classification.” (Ding, Page 4, Paragraphs 15-16, “In step S102, segment the segmented area with lesion features on the first gastroscope image, mark the first-level feature information and position information corresponding to the segmented area on the first gastroscope image as the second gastroscope image, and output The second gastroscopy image. In this step, the first gastroscope image is received, and a target detection algorithm is used to perform region division on the first gastroscope image, including lesion separation and feature extraction. For example, by scanning the region on the first gastroscope image that may have a feature of a lesion, the region on the first gastroscope image that may have a feature of a lesion is segmented, and the segmented area is marked as a suspicious area. Then, the image information data obtained by the frame-selected segmented area is used as input, the segmented area is selected from the first gastroscope image, the positional relationship between the segmented areas is recorded, and the primary feature information contained in the image of the segmented area is extracted , including the extraction of features such as boundary definition, color, surface smoothness, and shape. Finally, a preliminary annotation is performed on the first gastroscope image, so as to obtain a second gastroscope image. In this step, the second gastroscope image can be interactively displayed on the display device.” Note that the features described such as boundary definition, color, smoothness, shape, amount to structural and morphological features.). 
Regarding claim 34, Ding teaches “The system of claim 30,” “wherein the determined size associated with the object of interest is a numeric value or a size classification.” (Ding, Page 3 last paragraph, “The second calculation output unit is used to receive the image with the second gastroscope, extract a plurality of feature information and position information in the second gastroscope, and associate a plurality of the first-level feature information according to the position information, Combining and analyzing the associated multiple first-level feature information to form second-level feature information, determining the area range of the second-level feature information on the image, retrieving the feature description information corresponding to the second-level feature information, and outputting the marked information The secondary feature information, the feature description information and the third gastroscope image of the region range.” Note that area range amounts to a numerical value associated with the size.)

Regarding claim 35, Ding teaches “The system of claim 30,” “wherein the at least one processor is further configured to: detect a plurality of objects of interest in the plurality of frames; characterize the plurality of objects of interest, the characterization including determining a plurality of sets of features associated with the plurality of objects of interest, wherein a set of features in the plurality of sets of features includes characterization and size information associated with a detected object of interest in the plurality of objects of interest; and present, on the display device, information associated with one or more sets of features in the plurality of sets of features.” (Note that this claim recites overlapping limitations with claim 30, therefore the rejection of claim 30 is applied here. This claim is distinct in that a plurality of objects are detected, whereas claim 30 specifies one object of interest.
Ding teaches the detection and subsequent analysis and characterization steps on a plurality of objects of interest: Ding, Page 5, Paragraph 8, “For example, after receiving the NBI gastroscope image delivered by the primary screening diagnosis module, the target detection algorithm is used to divide the NBI gastroscope image into regions and extract the first-level index (first-level feature information). The extraction process includes lesion segmentation and feature extraction. The lesion segmentation is to scan the areas where lesions may exist on the NBI gastroscope image, frame possible areas on the graph, and mark suspicious areas; feature extraction mainly completes suspicious areas In the process of feature extraction, the image information data obtained by segmentation is used as input, and the segmented area is selected from the segmented image, and the features such as boundary definition, color, surface smoothness, and shape are extracted, and a preliminary analysis is performed on the image. Annotate to form the second gastroscopy image. In this step, the second gastroscope image can also be interactively displayed on the display device for review by the doctor, which is convenient for the doctor to understand the image.”) Claims 38, 40, and 42-43 recite a computer implemented method with steps corresponding to the elements of the system recited in Claims 30, 32, 34-35. Therefore, the recited steps of these claims are mapped to the analogous elements in the corresponding system claims. Regarding claims 46, 48, and 50-51, these claims recite a non-transitory computer readable medium including instructions that when executed by at least one processor, cause the at least one processor to perform operations corresponding to the steps recited in Claims 38, 40, and 42-43. Therefore, the recited programming instructions of these claims are mapped to the analogous steps in the corresponding method claims. 
Ding discloses a processor (see Figure 3 in the above rejection of claim 30), and the recited non-transitory computer readable medium is a well known feature in the art and is not considered novel.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 31, 33, 36, 37, 39, 41, 44, 45, 47, 49, 52 and 53 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ding in view of Byrne (Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model).

Regarding claim 31, Ding teaches “The system of claim 30,” Ding does not expressly disclose “wherein the at least one processor is further configured to: identify, based on the determined location and size of the object of interest, a medical guideline; and present, on the display device, information associated with the identified medical guideline.” Byrne discloses “wherein the at least one processor is further configured to: identify, based on the determined location and size of the object of interest, a medical guideline; and present, on the display device, information associated with the identified medical guideline.” (Byrne, Figure 4 shows diagnostic determination (medical guideline) being displayed.)
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to incorporate the medical guideline identification and presentation taught by Byrne into diagnostic strategy of Ding. The motivation for doing so would have been to output more details on the lesion to better inform doctors. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ding in view of Byrne to fully disclose the invention of claim 31. Regarding claim 33, Ding teaches “The system of claim 30,” While Ding discloses endoscopic imaging using a gastroscope for lesion detection (Ding, Page 4, Paragraphs 13-14, “In step S101, a first gastroscope image is dynamically acquired. In this step, the first gastroscope image may be an image to be processed obtained from a gastroscope, and obtained through optical processing to obtain a white light or NBI (NarrowBand Imaging endoscopic narrow-band imaging) gastroscope image. The gastroscope image is acquired dynamically in real time by the above-mentioned gastroscope device, specifically, dynamic images at preset time intervals can be acquired through a preset setting. The first gastroscope image can be pre-processed through operations including rotation and color difference and color temperature adjustment to complete local image processing with low performance requirements. In addition, the first gastroscope image can also receive information input from an interactive display device.”), Ding does not expressly disclose endoscopic imaging using colonoscopy. Byrne discloses the endoscopic imaging using colonoscopy (imaging of colon regions) for lesion detection. 
(Byrne, Discussion, Paragraph 3, “In this study, we apply deep learning to the real-time challenge for polyp differentiation into NICE types 1 and 2, using non-magnification colonoscopy, and most importantly where computer decision support is provided in real-time on unaltered endoscopic video streams. Previous studies of computer decision support for colorectal polyps have used magnifying colonoscopes17 or endocytoscopy,16 both of which are rarely available in the USA or Europe, and while acknowledging the great work of these investigators in this field, our DCNN approach is very different. Our model works with unprocessed frames and can operate in quasi real-time, with a frame processing time of 50 ms on consumer-grade hardware. Our model also works regardless of the polyp location in the frame (the operator does not need to precisely locate the polyp in the middle of the frame). The DCNN is trained end-to-end, meaning that the complete image preprocessing and classification task is solved within the same learning procedure, resulting in a much more robust model than previous work16 17 that consisted of hand-specified preprocessing followed by a trainable classifier. In the broader computer vision community, the end-to-end training of DCNNs has been, since 2013, systematically overtaking hand-engineered features and support vector classification. Ongoing work will determine if such an AI-based clinical decision support system could aid in the widespread adoption of a ‘resect and discard’ strategy.”) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to incorporate the colon region imaging taught by Byrne into the endoscopic imaging of Ding. The motivation for doing so would have been to expand clinical utility of the lesion detection to more physiological systems. 
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ding in view of Byrne to fully disclose, “wherein the determined location associated with the object of interest is a location in at least one of a rectum, sigmoid colon, descending colon, transverse colon, ascending colon, or cecum.” Regarding claim 36, Ding teaches “The system of claim 30,” While Ding discloses identifying a lesion as suspicious (Ding, Page 5, Paragraph 8, “For example, after receiving the NBI gastroscope image delivered by the primary screening diagnosis module, the target detection algorithm is used to divide the NBI gastroscope image into regions and extract the first-level index (first-level feature information). The extraction process includes lesion segmentation and feature extraction. The lesion segmentation is to scan the areas where lesions may exist on the NBI gastroscope image, frame possible areas on the graph, and mark suspicious areas; feature extraction mainly completes suspicious areas In the process of feature extraction, the image information data obtained by segmentation is used as input, and the segmented area is selected from the segmented image, and the features such as boundary definition, color, surface smoothness, and shape are extracted, and a preliminary analysis is performed on the image. Annotate to form the second gastroscopy image. 
In this step, the second gastroscope image can also be interactively displayed on the display device for review by the doctor, which is convenient for the doctor to understand the image.”), and presenting lesion information (Ding, Page 4, Paragraph 17, “In step S103, the second gastroscope image is received, and the primary feature information and position information in a plurality of the second gastroscope images are combined and analyzed to form secondary feature information and an area position corresponding to the secondary feature information, and output a third gastroscope image labeled with the secondary feature information and the region location information.”), Ding does not expressly disclose identifying a lesion as non-diminutive and presenting that information. Byrne discloses identification and presentation of a lesion as non-diminutive (Byrne, Figures 1 and 4(B) shows lesion identification as Type 2 (non-diminutive).) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to incorporate the non-diminutive identification and presentation taught by Byrne into the lesion characterization and presentation of Ding. The motivation for doing so would have been to provide further information to the provider and patient. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine Ding in view of Byrne to fully disclose, “wherein presenting information indicating the status of the characterization of the object of interest further comprises: identify the object of interest as non-diminutive;” Ding in view of Byrne further disclose, “and generate aggregate information of the object of interest, wherein the aggregate information includes a location and a size of the object of interest in the plurality of frames.” (Ding, Page 4, Paragraphs 15-16, “In step S102, segment the segmented area with lesion features on the first gastroscope image, mark the first-level feature information and position information corresponding to the segmented area on the first gastroscope image as the second gastroscope image, and output The second gastroscopy image. In this step, the first gastroscope image is received, and a target detection algorithm is used to perform region division on the first gastroscope image, including lesion separation and feature extraction. For example, by scanning the region on the first gastroscope image that may have a feature of a lesion, the region on the first gastroscope image that may have a feature of a lesion is segmented, and the segmented area is marked as a suspicious area. Then, the image information data obtained by the frame-selected segmented area is used as input, the segmented area is selected from the first gastroscope image, the positional relationship between the segmented areas is recorded, and the primary feature information contained in the image of the segmented area is extracted , including the extraction of features such as boundary definition, color, surface smoothness, and shape. Finally, a preliminary annotation is performed on the first gastroscope image, so as to obtain a second gastroscope image. 
In this step, the second gastroscope image can be interactively displayed on the display device.” Note that the location and size determination is mapped to the boundary and size identification.) Regarding claim 37, Ding teaches “The system of claim 30,” While Ding discloses identifying suspicious vs. non-suspicious lesions (Ding, Page 5, Paragraph 8, “For example, after receiving the NBI gastroscope image delivered by the primary screening diagnosis module, the target detection algorithm is used to divide the NBI gastroscope image into regions and extract the first-level index (first-level feature information). The extraction process includes lesion segmentation and feature extraction. The lesion segmentation is to scan the areas where lesions may exist on the NBI gastroscope image, frame possible areas on the graph, and mark suspicious areas; feature extraction mainly completes suspicious areas In the process of feature extraction, the image information data obtained by segmentation is used as input, and the segmented area is selected from the segmented image, and the features such as boundary definition, color, surface smoothness, and shape are extracted, and a preliminary analysis is performed on the image. Annotate to form the second gastroscopy image. 
In this step, the second gastroscope image can also be interactively displayed on the display device for review by the doctor, which is convenient for the doctor to understand the image.” Note that the identification of a suspicious lesion requires that in the absence of this identification, a lesion is identified as non-suspicious.), and presenting lesion information (Ding, Page 4, Paragraph 17, “In step S103, the second gastroscope image is received, and the primary feature information and position information in a plurality of the second gastroscope images are combined and analyzed to form secondary feature information and an area position corresponding to the secondary feature information, and output a third gastroscope image labeled with the secondary feature information and the region location information.”), Ding does not expressly disclose identifying a lesion as diminutive and presenting that information. Byrne discloses identifying a lesion as diminutive and presenting that information (Byrne, Figures 1 and 4(A) shows lesion identification and presentation as Type 1 (diminutive).) It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to incorporate the diminutive identification and presentation taught by Byrne into the lesion characterization and presentation of Ding. The motivation for doing so would have been to provide further information to the provider and patient. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine Ding in view of Byrne to fully disclose "wherein presenting information indicating the status of the characterization of the object of interest further comprises: identify the object of interest as diminutive;".

Ding in view of Byrne further discloses "and generate information indicating the status of characterization of the object of interest, wherein the status of characterization includes non-aggregate information of classification of the object of interest." (Byrne, Figure 4(A), as incorporated above, displays a Type 1 (diminutive) lesion. This is mapped to the status of characterization of non-aggregate information of classification. Note that this combination was previously set forth above with motivation and rationale.)

Claims 39, 41, 44, and 45 recite a computer-implemented method with steps corresponding to the elements of the system recited in claims 31, 33, 36, and 37. The recited steps of these claims are therefore mapped to the analogous elements in the corresponding system claims. The rationale and motivation to combine the Ding and Byrne references apply here as well.

Regarding claims 47, 49, 52, and 53, these claims recite a non-transitory computer-readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations corresponding to the steps recited in claims 39, 41, 44, and 45. The recited programming instructions of these claims are therefore mapped to the analogous steps in the corresponding method claims. The rationale and motivation to combine the Ding and Byrne references apply here as well. Ding discloses a processor (see Figure 3 in the above rejection of claim 30), and the recited non-transitory computer-readable medium is a well-known feature in the art and is not considered novel.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tian (Localising and Classifying Polyps from Colonoscopy Videos using Deep Learning) teaches a deep learning method for detecting, localizing, and classifying polyps in colonoscopy videos. Hirasawa (US 20200337537 A1) teaches image-based diagnosis that assesses feature information about a lesion in the digestive system. Zur (US 20200387706 A1) teaches polyp tracking in endoscopic colon images, including image augmentation based on computed vectors. Nygaard (US 20220296081 A1) teaches real-time detection of anatomical landmarks in endoscopic images of the gastrointestinal tract. Hong (US 20220335599 A1) teaches image-based tumor detection in medical images, including tumor shape and property identification.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON JOSEPH SORRIN, whose telephone number is (703) 756-1565. The examiner can normally be reached Monday through Friday, 9am to 5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON JOSEPH SORRIN/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Jan 11, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592054
LOW-LIGHT VIDEO PROCESSING METHOD, DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12586245
ROBUST LIDAR-TO-CAMERA SENSOR ALIGNMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12566954
SOLVING MULTIPLE TASKS SIMULTANEOUSLY USING CAPSULE NEURAL NETWORKS
2y 5m to grant Granted Mar 03, 2026
Patent 12555394
IMAGE PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM FOR GENERATING DATA BASED ON A CAPTURED IMAGE
2y 5m to grant Granted Feb 17, 2026
Patent 12547658
RETRIEVING DIGITAL IMAGES IN RESPONSE TO SEARCH QUERIES FOR SEARCH-DRIVEN IMAGE EDITING
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+50.6%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
