Prosecution Insights
Last updated: April 19, 2026
Application No. 17/862,353

COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR OBJECT DETECTION AND CHARACTERIZATION

Final Rejection — §103, §112
Filed: Jul 11, 2022
Examiner: ZUBERI, MOHAMMED H
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cosmo Artificial Intelligence - AI Limited
OA Round: 4 (Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 70%, above average (306 granted / 437 resolved; +15.0% vs TC avg)
Interview Lift: +27.8%, strong (resolved cases with interview)
Typical Timeline: 3y 1m average prosecution; 23 currently pending
Career History: 460 total applications across all art units
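The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch, assuming the interview lift is additive percentage points on top of the career allow rate (one plausible reading of the dashboard), reproduces them:

```python
# Reproduce the Examiner Intelligence figures from the raw counts shown above.
granted = 306          # resolved cases that were allowed
resolved = 437         # total resolved cases
interview_lift = 27.8  # percentage points, per the dashboard

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # Career allow rate: 70.0%

# Assumed reading: "with interview" probability = career allow rate
# plus the additive interview lift.
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.0f}%")  # With interview: 98%
```

Under this reading, 70.0% + 27.8 points ≈ 98%, which matches the "With Interview" figure reported elsewhere on the page.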

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 437 resolved cases.
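The per-statute figures are internally consistent: subtracting each "vs TC avg" delta from the examiner's rate implies the same Tech Center baseline for every statute. A quick check, assuming each delta is simply examiner rate minus TC average:

```python
# Verify that each statute's rate and delta imply the same Tech Center average
# (implied TC avg = examiner rate - delta, in percentage points).
stats = {
    "§101": (11.3, -28.7),
    "§103": (53.6, +13.6),
    "§102": (20.8, -19.2),
    "§112": (12.7, -27.3),
}
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # round away float noise
    print(f"{statute}: implied TC average = {tc_avg}%")  # 40.0% for each statute
```

All four statutes imply a Tech Center average of 40.0%, so the overall success-rate gaps trace back to a single baseline rather than four independent ones.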

Office Action

Grounds of rejection: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to communication as filed on 6/9/2025. This action is made Non-Final. Claims 1-7, 9-24, 26-29 and 54 are pending in the case. Claims 1, 18, and 29 are independent claims. Claim 8 has been canceled and claims 1, 9, 18, 29 and 54 have been amended.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-7, 9-24, 26-29 and 54 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant is directed to the updated rejection of the claims necessitated by the amendment.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The rejection of claim 54 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, has been withdrawn as necessitated by the amendment.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-24, 26-29 and 54 is/are rejected under 35 U.S.C. 103 as being unpatentable over Michael F. Byrne et al., "Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model," Gut, vol. 68, no. 1, 1 January 2019, pages 94-100 (from IDS filed 7/11/2022; hereinafter Byrne), in view of Zur (USPUB 20180253839 A1) and further in view of Tegzes (USPUB 20210073987 A1).
Claim 1: Byrne teaches A computer-implemented system for processing real-time video (Abstract and Introduction sections), the system comprising at least one processor configured to: process a real-time video captured from a medical image device during a medical procedure, the real-time video comprising a plurality of frames (Abstract and Methods sections); detect an object of interest in the plurality of frames (Abstract and Methods sections); execute a plurality of neural networks, the plurality of neural networks including: a trained classification network configured to determine a classification of the object of interest; and a trained size network configured to determine a size associated with the object of interest (Abstract, Introduction and methods sections: “An AI model trained on endoscopic video can differentiate diminutive adenomas from hyperplastic polyps with high accuracy...the...NICE...classification, could enable differentiating hyperplastic from adenomatous polyps. The NICE classification scheme was designed to enable trained endoscopists to recognize visual cues such as colour, presence of vessels and surface patterns...virtual chromoendoscopy using NBI...is recommended to assess polyps of 5 mm or less during colonoscopy...to determine whether they are adenomatous or hyperplastic...The DCNN model allowed essentially real time analysis of endoscopic polyp videos and calculates a probability that a polyp is a conventional adenoma or a serrated class lesion. The probability of a hyperplastic or adenomatous polyp...is displayed immediately on each endoscopic video image”); identify, based on two or more of the classification, the location, and the size, a medical treatment guideline; and display, in real-time on a display device during the medical procedure, information indicating the identified medical treatment guideline (Title, Abstract and Introduction sections, page 99 col 2). 
Byrne, by itself, does not seem to completely teach execute a plurality of neural networks in parallel; a trained location network configured to determine a location associated with the object of interest. The Examiner maintains that these features were previously well-known as taught by Zur.

Zur teaches execute a plurality of neural networks in parallel; a trained location network configured to determine a location associated with the object of interest (0150-151, 0153: the system uses a first CNN which works on the RGB channels of the image, a second CNN which works on the first and second derivatives of the intensity channel of the image (for example, Sobel derivatives, Laplacian of Gaussian operator, and Canny edge detector), and a third CNN which works on three successive time occurrence of the patch or the image to let the network take in consideration the continuity of the polyp or suspicious region appearance. The results of the three networks are unified to one decision based on max operator, or any type of weighted average which its weight parameters can be learned together and in parallel with the learning process of the three CNN's... A specific CNN (in the architecture of FIG. 8 for example) can also be taught to detect and segment Lumen or Unfocused regions in the image. The Decision Trees, Random Forest and SVM, can also operate in parallel to the main three CNN's... The combination of the conventional (unsupervised) detectors and supervised learning detectors and classifiers actually decides if the system can cope with just notifying if a frame contain a polyp, or also locate and segment the polyp in the frame. At least two conventional detectors and one supervised machine learning method are needed to be integrated together in order to notify if there is a polyp or suspicious region in the frame, and at least three conventional detectors and two supervised machine learning methods are needed to be integrated together in order to locate and segment the polyp or suspicious regions in the frame).

Byrne and Zur are analogous art because they are from the same problem-solving area, utilizing neural networks to assist in the processing of images taken from medical imaging devices. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Byrne and Zur before him or her, to combine the teachings of Byrne and Zur. The rationale for doing so would have been to more efficiently process images taken from medical imaging devices. Therefore, it would have been obvious to combine Byrne and Zur to obtain the invention as specified in the instant claim(s).

Byrne and Zur do not seem to completely teach a trained location network configured to determine a location relative to a human organ within a human body associated with the object of interest. The Examiner maintains that these features were previously well-known as taught by Tegzes.

Tegzes teaches a trained location network configured to determine a location relative to a human organ within a human body associated with the object of interest (Fig 27, 0055, 0133, 0165, 0171: Certain examples use neural networks and/or other machine learning to implement a new workflow for image analysis including body detection in an image (e.g., a two-dimensional and/or three-dimensional computed tomography (CT), x-ray, etc., image), generation of a bounding box around a region of interest, and voxel analysis in the bounding box region. Certain examples facilitate a cloud-shaped stochastic feature-set of a fully-connected network (FCN) in conjunction with multi-layer input features of a CNN using innovative network architectures with gradient boosting machine (GBM) stacking over the FCN and CNN with associated feature sets to segment an image and identify organ(s) in the image... acquired image data can be analyzed and segmented to identify one or more organs and/or other region(s) of interest in the image data. At block 1902, the body is detected in the image data. As described above, irrelevant parts of the image are excluded using slice segmentation and analysis to identify the body in which the organ and/or other region of interest lies. At block 1904, a bounding box is formed around the organ or other region of interest identified in the image data. At block 1906, voxel-level segmentation is used to process image data within the bounding box... FIG. 27 shows example image labelling in which portions of each image 2702-2712 are associated with an appropriate label 2720 identifying the anatomy, organ, region, or object of interest (e.g., brain, head (and neck), chest, upper abdomen, lower abdomen, upper pelvis, center pelvis, lower pelvis, thigh, shin, foot, etc... From the body object(s), a trained binary classifier (e.g., a CNN binary classifier, etc.) identifies a particular organ and/or other item of interest using a plurality of slices in a plurality of directions (e.g., three slices, one each from the axial, coronal, and sagittal views, etc.) to be classified and combined to form a bounding box around the item from the plurality of views. Within the bounding box, voxels are analyzed to segment the voxels and determine whether or not the voxel forms part of the organ/item of interest).

Byrne and Tegzes are analogous art because they are from the same problem-solving area, utilizing neural networks to assist in the processing of images taken from medical imaging devices.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Byrne and Tegzes before him or her, to combine the teachings of Byrne and Tegzes. The rationale for doing so would have been to more efficiently process images taken from medical imaging devices. Therefore, it would have been obvious to combine Byrne and Tegzes to obtain the invention as specified in the instant claim(s).

Claim 2: Byrne discloses the medical procedure includes at least one of an endoscopy, a gastroscopy, a colonoscopy, or an enteroscopy (Methods section).

Claim 3: Byrne discloses the object of interest includes at least one of a formation on or of human tissue, a change in human tissue from one type of cell to another type of cell, an absence of human tissue from a location where the human tissue is expected, or a lesion (Abstract, Introduction and Methods sections).

Claim 4: Byrne discloses the information indicating the identified medical treatment guideline includes an instruction to leave or resect the object of interest (Abstract and Introduction sections).

Claim 5: Byrne discloses the information indicating the identified medical treatment guideline includes a type of resection (Abstract and Introduction sections).

Claim 6: Byrne discloses generate a confidence value for the identified medical treatment guideline (Methods section).

Claim 7: Byrne discloses the determined classification is based on at least one of a histological classification, a morphological classification, a structural classification, or a malignancy classification (Introduction section).

Claim 9: Byrne discloses the location in the human body is one of a location in a rectum, sigmoid colon, descending colon, transverse colon, ascending colon, or cecum (Discussion section).

Claim 10: Byrne discloses the determined size associated with the object of interest is a numeric value or a size classification (Methods section).
Claim 11: Byrne discloses apply one or more neural networks that implement a trained quality network configured to: determine a frame quality associated with at least one of the plurality of frames; and generate a confidence value associated with the determined frame quality (Methods section).

Claim 12: Byrne discloses aggregate data associated with the determined classification, location, and size when at least one of the determined frame quality or the confidence value is above a predetermined threshold; and display, on the display device, at least a portion of the aggregated data (Methods section).

Claim 13: Byrne discloses detect a plurality of objects of interest in the plurality of frames; determine a plurality of classifications and sizes associated with the plurality of objects of interest, wherein a classification and a size in the plurality of determined classifications and sizes are associated with a detected object of interest in the detected plurality of objects of interest; and display, on the display device, information associated with one or more determined classifications and sizes (Title, Abstract and Methods sections).

Claim 14: Byrne discloses apply one or more neural networks that implement an encoder network configured to encode the object of interest in the plurality of frames by processing an area surrounding the object of interest (Methods section).

Claim 15: Byrne discloses generate a latent representation of the object of interest encoded by the encoder network (Methods section).

Claim 16: Byrne discloses provide the latent representation of the object of interest with the classification network, the location network, and the size network (Methods section).

Claim 17: Byrne discloses track the object of interest in the plurality of images to determine temporal information of the object of interest (Abstract, Introduction and Methods sections).
Claim 18: Claim 18 essentially recites a computer implemented system for completing the steps of claim 1. As Byrne discloses A computer-implemented system for processing real-time video (Abstract and Introduction sections), claim 18 is rejected using the same rationale used above in the rejection of claim 1.

Claim 19: Byrne discloses the medical procedure includes at least one of an endoscopy, a gastroscopy, a colonoscopy, or an enteroscopy (Methods section).

Claim 20: Byrne discloses the object of interest includes at least one of a formation on or of human tissue, a change in human tissue from one type of cell to another type of cell, an absence of human tissue from a location where the human tissue is expected, or a lesion (Abstract, Introduction and Methods sections).

Claim 21: Byrne discloses the information indicating the medical treatment guideline includes an instruction to leave or resect the object of interest (Abstract and Introduction sections).

Claim 22: Byrne discloses the information indicating the identified medical treatment guideline includes a type of resection (Abstract and Introduction sections).

Claim 23: Byrne discloses generate a confidence value for the identified medical treatment guideline (Methods section).
Claim 24: Byrne discloses the trained characterization network comprises: a trained classification network configured to determine a classification associated with the object of interest and to generate a classification confidence value associated with the determined classification; a trained location network configured to determine a location associated with the object of interest and to generate a location confidence value associated with the determined location; and a trained size network configured to determine a size associated with the object of interest and to generate a size confidence value associated with the determined size (Abstract, Introduction and Methods sections: “An AI model trained on endoscopic video can differentiate diminutive adenomas from hyperplastic polyps with high accuracy...the...NICE...classification, could enable differentiating hyperplastic from adenomatous polyps. The NICE classification scheme was designed to enable trained endoscopists to recognize visual cues such as colour, presence of vessels and surface patterns...virtual chromoendoscopy using NBI...is recommended to assess polyps of 5 mm or less during colonoscopy...to determine whether they are adenomatous or hyperplastic...The DCNN model allowed essentially real time analysis of endoscopic polyp videos and calculates a probability that a polyp is a conventional adenoma or a serrated class lesion. The probability of a hyperplastic or adenomatous polyp...is displayed immediately on each endoscopic video image”).

Claim 26: Byrne discloses apply one or more neural networks that implement a trained quality network configured to: determine a frame quality associated with at least one of the plurality of frames; and generate a confidence value associated with the determined frame quality (Methods section).
Claim 27: Byrne discloses aggregate data associated with the plurality of features when at least one of the determined frame quality or the confidence value is above a predetermined threshold; and display, on the display device, at least a portion of the aggregated data (Methods section).

Claim 28: Byrne discloses detect a plurality of objects of interest in the plurality of frames; determine a plurality of sets of features associated with the plurality of objects of interest, wherein a set of features in the plurality of sets of features includes characterization and size information associated with a detected object of interest in the plurality of objects of interest; and display, on the display device, information associated with one or more sets of features in the plurality of sets of features (Title, Abstract and Methods sections).

Claim 29: Claim 29 essentially recites a method comprising the steps of claim 1, and is therefore rejected over Byrne, Zur and Tegzes using the same rationale used above in the rejection of claim 1.

Claim 54: Byrne teaches the medical treatment guideline is determined by the characterization network (Title, Abstract and Introduction sections, page 99 col 2).

Note

The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI whose telephone number is (571) 270-7761. The examiner can normally be reached on M-Th 8-6, Fri: 7-12/OFF.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steph Hong, can be reached on (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED H ZUBERI/
Primary Examiner, Art Unit 2178

Prosecution Timeline

Jul 11, 2022: Application Filed
Sep 07, 2024: Non-Final Rejection — §103, §112
Oct 09, 2024: Applicant Interview (Telephonic)
Dec 10, 2024: Response Filed
Mar 18, 2025: Final Rejection — §103, §112
Jun 04, 2025: Applicant Interview (Telephonic)
Jun 09, 2025: Request for Continued Examination
Jun 11, 2025: Response after Non-Final Action
Jun 12, 2025: Examiner Interview Summary
Jun 13, 2025: Non-Final Rejection — §103, §112
Sep 09, 2025: Examiner Interview Summary
Sep 09, 2025: Applicant Interview (Telephonic)
Sep 19, 2025: Response Filed
Dec 17, 2025: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585923: DESPARSIFIED CONVOLUTION FOR SPARSE ACTIVATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582478: SYSTEMS AND METHODS FOR INTEGRATING INTRAOPERATIVE IMAGE DATA WITH MINIMALLY INVASIVE MEDICAL TECHNIQUES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579650: IMPROVED SPINAL HARDWARE RENDERING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567496: METHOD AND APPARATUS FOR DISPLAYING AND ANALYSING MEDICAL SCAN IMAGES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547819: MODULAR SYSTEMS AND METHODS FOR SELECTIVELY ENABLING CLOUD-BASED ASSISTIVE TECHNOLOGIES (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 70%
With Interview: 98% (+27.8%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 437 resolved cases by this examiner. Grant probability derived from career allow rate.
