Prosecution Insights
Last updated: April 19, 2026
Application No. 18/008,572

METHOD AND SYSTEMS FOR DETERMINING AN OBJECT MAP

Status: Non-Final OA (§103), OA Round 3
Filed: Dec 06, 2022
Examiner: GOEBEL, EMMA ROSE
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: The United States Department of Veterans Affairs

Predictions:
Grant probability: 53% (Moderate)
Expected OA rounds: 3-4
Expected time to grant: 3y 0m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 53% of resolved cases (24 granted / 45 resolved; -8.7% vs TC avg)
Interview lift: strong, +47.0% (allowance rate of resolved cases with an interview vs. without)
Typical timeline: 3y 0m avg prosecution; 40 applications currently pending
Career history: 85 total applications across all art units

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 45 resolved cases.
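The headline examiner metrics above (career allow rate, interview lift) are simple ratios over resolved cases. A minimal sketch of how such figures could be computed, assuming a hypothetical list of per-case outcome records (the `ResolvedCase` fields are illustrative, not from any real data feed):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # resolved as allowed (True) or abandoned (False)
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases):
    """Fraction of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allowance-rate difference: cases with an interview minus cases without."""
    with_iv = [c for c in cases if c.had_interview]
    without = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without)

# Illustrative data matching the 24 granted / 45 resolved profile above.
cases = [ResolvedCase(granted=i < 24, had_interview=i % 3 == 0) for i in range(45)]
print(f"Career allow rate: {allow_rate(cases):.0%}")  # → 53%
```

The "vs TC avg" deltas would then just be this allow rate minus the same ratio computed over the Tech Center's resolved cases.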

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 6, 2026 has been entered.

Status of Claims

Claims 1-12, 21-22, 24-26, and 28-30 are pending. Claims 23 and 27 have been canceled. Claims 29-30 are newly added.

Priority

Acknowledgement is made of Applicant's claim of priority from PCT Application No. PCT/US2021/036420, filed June 8, 2021, and U.S. Provisional Application No. 63/036,280, filed June 8, 2020.

Response to Arguments

Applicant's arguments, see p. 7, filed December 3, 2025, with respect to the 35 USC 112(a) rejections have been fully considered and are persuasive. The previously rejected claims have been canceled, and the 35 USC 112(a) rejection has therefore been withdrawn.

Applicant's arguments, see pp. 7-13, filed December 3, 2025, with respect to the 35 USC 101 rejections have been fully considered and are persuasive. The amendment to the claims to include the recitations "wherein the first machine learning classifier is trained, using a first training set" and "wherein the second machine learning classifier is trained, using a second training set" has overcome the previous abstract idea rejections because training a machine learning classifier is not a mental process. Therefore, the previous 35 USC 101 rejections have been withdrawn.

Applicant's arguments, see pp. 13-15, filed December 3, 2025, with respect to the 35 USC 102(a)(1) rejections have been fully considered and are persuasive. The amendments overcome the previous 35 USC 102(a)(1) rejections, and they are therefore withdrawn. However, as described below, the independent claims are now rejected under 35 USC 103 as obvious over Lian and the newly presented Dassopoulos and Daerr references. Dassopoulos teaches a first machine learning classifier trained to detect and classify objects, and Daerr teaches a second machine learning classifier trained to identify and classify disease states in the images. Therefore, the claims are rejected under 35 USC 103.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 21-22 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1).

Regarding claim 1, Lian teaches a method comprising:

determining, based on the object, a boundary of the object (Para. [0031], the polyp detection process begins with an optical image, which undergoes edge detection and edge direction estimation, wherein a crude set of edge pixels is generated using an edge detector);

determining, based on the boundary of the object, a plurality of sub-images within the boundary (Para. [0031], an image patch is formulated around each edge pixel, henceforth identified as a central edge pixel); and

determining, based on the predicted disease states for each sub-image of the plurality of sub-images, a predicted disease state for the object (Para. [0031], a vote accumulation and polyp localization step is performed, wherein edges whose corresponding image patches are assigned to the polyp category will be used in the vote accumulation scheme for polyp localization; Para. [0038], in the ideal classification scenario, all non-polyp edge pixels are removed and the arrangement of polyp-edge pixels indicates the location of polyps).

Although Lian teaches performing polyp detection on an image (Lian, Para. [0030]), Lian does not explicitly teach "determining, based on a first machine learning classifier, an object in image data, wherein the first machine learning classifier is trained, using a first training set, to detect and classify objects within the image data". However, in an analogous field of endeavor, Dassopoulos teaches that the system can construct improved digital images of polyps, and that the digital information is further processed by utilizing machine learning techniques to identify and extract relevant features from the digital image data (i.e., determining, based on a first machine learning classifier, an object in image data) (Dassopoulos, Para. [0033]). The generated images of each polyp (i.e., first training set) can be used for training and testing purposes, where a machine learning system applies another set of rules and classifies polyps as one of two or more known types of polyps according to those rules, more specifically, known types of diminutive polyps (i.e., the first machine learning classifier is trained to detect and classify objects within the image data) (Dassopoulos, Para. [0020]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the method of Lian with the teaching of Dassopoulos by including a machine learning classifier trained using polyp images to detect and classify polyps in the image data. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for a real-time prediction of the histology of polyps, as recognized by Dassopoulos.

Although Lian in view of Dassopoulos teaches determining a polyp and non-polyp category for each sub-image (Lian, Para. [0031]), they do not explicitly teach "determining, for each sub-image of the plurality of sub-images and using a second machine learning classifier, a predicted disease state, wherein the second machine learning classifier is trained, using a second training set, to identify and classify disease states for images". However, in an analogous field of endeavor, Daerr teaches a second machine-learning process to generate a classification result for the subject, the classification result being representative of a state or progression of a disease or disability of the subject (Daerr, Para. [0038]). Daerr further teaches training the second machine learning process with classification data (i.e., second training set) representative of a state of a disease or disability of the training subject (Daerr, Para. [0026]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos with the teachings of Daerr by including a second machine-learning classifier trained using classification data to identify and classify a disease state in the image. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for an efficient and effective approach to assessing a state or progression of a disease of a subject, as recognized by Daerr. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 2, Lian in view of Dassopoulos further in view of Daerr teaches the method of claim 1, further comprising:

determining, based on the predicted disease states for each sub-image of the plurality of sub-images, a visual indication for each sub-image of the plurality of sub-images (Para. [0031], each image patch is then grouped into one of, for example, six categories according to the orientation of the corresponding central edge pixels. Each category of patches is fed to the corresponding classifier. The goal is to classify image patches into polyp and non-polyp categories, where the polyp category contains the patches whose central edge pixels lie on the boundary of polyps, and the non-polyp category contains the patches whose central edge pixels are found on, for example, vessels, folds, wrinkles and other objects with strong boundaries in the images; Para. [0037], image patches with classification confidence less than a threshold, for example 0.5, may be discarded, meaning that their central edge pixels are excluded from the vote accumulation stage. Only edge pixels whose corresponding image patches pass the classification threshold may participate in the vote accumulation stage);

determining, based on the visual indications for each sub-image of the plurality of sub-images and the image data, a composite image (Para. [0028], also configured, for example, to highlight and/or alert an operator of the polyp detection system upon identification of a polyp location with the requisite or desired features; Para. [0050], if a polyp is positively identified, then, at process block 208, an alert or signal may be provided to an operator, indicating a positive polyp identification. The alert or signal may take any shape, form, or sound. Subsequently, at process block 210, a report is generated, which may take any shape or form); and

outputting the composite image (Para. [0028], the output may take any shape or form, as desired, and may include a visual and/or audio system, configured for displaying, for example, acquired optical images as a result of a medical procedure, such as a colonoscopy, and also configured, for example, to highlight and/or alert an operator of the polyp detection system upon identification of a polyp location with the requisite or desired features).

Claims 21 and 22 recite apparatuses with elements corresponding to the steps recited in Claims 1 and 2, respectively. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claims. Additionally, the rationale and motivation to combine the Lian, Dassopoulos and Daerr references, presented in the rejection of Claim 1, apply to these claims. Finally, the combination of the Lian, Dassopoulos and Daerr references discloses a processor and a memory (Lian, Para. [0022]).

Claims 25 and 26 recite computer-readable storage mediums storing programs with instructions corresponding to the steps recited in Claims 1 and 2, respectively. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claims. Additionally, the rationale and motivation to combine the Lian, Dassopoulos and Daerr references, presented in the rejection of Claim 1, apply to these claims. Finally, the combination of the Lian, Dassopoulos and Daerr references discloses a computer readable storage medium (Lian, Para. [0022]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1), as applied to claims 1-2, 21-22 and 25-26 above, and further in view of Ngo Dinh et al. (US 2019/0385302 A1).
Regarding claim 3, Lian in view of Dassopoulos further in view of Daerr teaches the method of claim 2, as described above. Although Lian in view of Dassopoulos further in view of Daerr teaches discarding image patches with a classification confidence lower than a threshold (Lian, Para. [0037]), they do not explicitly teach "wherein the visual indication changes a hue of each of the one of the sub-images that is weighted according to a first sub-image and a second sub-image of the plurality of sub-images". However, in an analogous field of endeavor, Ngo Dinh teaches that the at least one processor may overlay a border comprising a two-dimensional shape around a region of the image as including the feature-of-interest, the border being rendered a first color. After an elapsed period of time, the processor may modify the border to appear in a second color if the feature-of-interest is a true positive, and to appear in a third color if the feature-of-interest is a false positive (Ngo Dinh, Para. [0121]). Additionally, or alternatively, the at least one processor may modify the border to the second color if the feature-of-interest is classified in a first category, and modify the border to the third color if the feature-of-interest is classified in a second category. For example, if the feature-of-interest is a lesion, the first category may comprise cancerous lesions and the second category may comprise non-cancerous lesions (Ngo Dinh, Para. [0122]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr with the teachings of Ngo Dinh by changing the hue of the image patches (i.e., border) based on the classification of the feature-of-interest. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for identifying and providing the location of a detected object, as recognized by Ngo Dinh. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1) and Ngo Dinh et al. (US 2019/0385302 A1), as applied to claim 3 above, and further in view of Dror Zur (US 2020/0387706 A1).

Regarding claim 4, Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh teaches the method of claim 3, as described above. Although Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh teaches image patches formed around each edge pixel (Lian, Para. [0031]), they do not explicitly teach "wherein the first sub-image comprises all pixels within the boundary". However, in an analogous field of endeavor, Zur teaches that the detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame (Zur, Para. [0099]). The volume of the polyp may be calculated automatically by taking the computed 3D values of the 2D pixels inside the polyp delineating contour and/or bounding box, and finding the best fitting 3D sphere (or circle if the polyp is flat) to the exposed 3D surface created by interpolating between the 3D values of the pixels. The radius of the sphere (or the circle) is the polyp size (Zur, Para. [0191]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh with the teachings of Zur by including a first sub-image (i.e., region of interest) that comprises all pixels within a boundary (i.e., boundary box). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for processing colon polyps automatically detected during a colonoscopy procedure, as recognized by Zur. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1) and Ngo Dinh et al. (US 2019/0385302 A1), as applied to claim 3 above, and further in view of Yao et al. (US 7,454,045 B2).

Regarding claim 5, Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh teaches the method of claim 3, as described above. Although Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh teaches creating image patches, for example of size 25x25, around each central edge pixel (Lian, Para. [0036]), they do not explicitly teach "wherein the first sub-image is a first 64-pixel-by-64-pixel contiguous block and the second sub-image is a second 64-pixel-by-64-pixel contiguous block". However, in an analogous field of endeavor, Yao teaches that candidate features of interest can be presented as regions associated with the detected features. For example, a digital representation of a region including and surrounding the feature, such as an n by n pixel region (for example, 64x64 pixels or some other size), can be submitted (Col. 9, lines 27-37). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh with the teachings of Yao by including the sub-images (i.e., image patches) as 64x64 pixel blocks. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for identifying features in anatomical structures and determining whether they are of interest or not of interest, as recognized by Yao. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.

Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1) and Ngo Dinh et al. (US 2019/0385302 A1), as applied to claim 3 above, and further in view of Ozcan et al. (US 2021/0264214 A1, with priority to PCT/US2019/025014, filed March 29, 2019).

Regarding claim 6, Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh teaches the method of claim 3, as described above. Although Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh teaches creating image patches, for example of size 25x25, around each central edge pixel (Lian, Para. [0036]), they do not explicitly teach "wherein a fourth of the first sub-image is equal to a first fourth of the second sub-image". However, in an analogous field of endeavor, Ozcan teaches that, for patch generation, data augmentation was applied by using 25% patch overlap for the kidney tissue images (Ozcan, Para. [0058]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr and Ngo Dinh with the teachings of Ozcan by including that the sub-images overlap by 25% (i.e., a fourth). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for more accurate object detection in images. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Regarding claim 7, Lian in view of Dassopoulos further in view of Daerr, Ngo Dinh and Ozcan teaches the method of claim 6, and further teaches wherein the plurality of sub-images includes a third sub-image, and a second fourth of the first sub-image is equal to the third sub-image and the first fourth and the second fourth overlap (Ozcan, Para. [0058], for patch generation, data augmentation was applied by using 25% patch overlap for the kidney tissue images). The proposed combination, as well as the motivation for combining the Lian, Dassopoulos, Daerr, Ngo Dinh, and Ozcan references presented in the rejection of Claim 6, apply to Claim 7 and are incorporated herein by reference. Thus, the method recited in Claim 7 is met by Lian in view of Dassopoulos further in view of Daerr, Ngo Dinh and Ozcan.

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1), as applied to claims 1-2, 21-22 and 25-26 above, and further in view of the American Society for Gastrointestinal Endoscopy (ASGE) ("PIVI on Real-time Endoscopic assessment of the histology of diminutive colorectal polyps", included with Applicant's IDS).
Regarding claim 8, Lian in view of Dassopoulos further in view of Daerr teaches the method of claim 1, as described above. Although Lian in view of Dassopoulos further in view of Daerr teaches that an alert or signal may be provided to an operator indicating a positive polyp identification (Lian, Para. [0050]), they do not explicitly teach "resecting a portion of tissue defined by the plurality of sub-images based on the predicted disease state; and discarding the portion". However, in an analogous field of endeavor, the American Society for Gastrointestinal Endoscopy (ASGE) teaches a resect-and-discard approach applied by fully trained endoscopists, which applies only to lesions with a typical benign appearance (ASGE, p. 4, para. 4). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr with the teachings of the ASGE by performing a resect-and-discard approach when the lesions have a typical benign appearance (i.e., disease state). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for appropriate use of technologies to improve digestive health outcomes, as recognized by the ASGE. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Regarding claim 9, Lian in view of Dassopoulos further in view of Daerr teaches the method of claim 1, as described above. Although Lian in view of Dassopoulos further in view of Daerr teaches that an alert or signal may be provided to an operator indicating a positive polyp identification (Lian, Para. [0050]), they do not explicitly teach "resecting a portion of tissue defined by the plurality of sub-images based on the predicted disease state; and analyzing the portion for malignancy". However, in an analogous field of endeavor, the ASGE teaches that lesions that are hard, firm, ulcerated or otherwise atypical, which may rarely occur in the less than or equal to 5mm size category, should be resected and submitted for pathology (ASGE, p. 4, para. 4). The proposed combination, as well as the motivation for combining the Lian in view of Dassopoulos further in view of Daerr and ASGE references presented in the rejection of Claim 8, apply to Claim 9 and are incorporated herein by reference. Thus, the method recited in Claim 9 is met by Lian in view of Dassopoulos further in view of Daerr and the ASGE.

Claims 10, 24 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1), as applied to claims 1-2, 21-22 and 25-26 above, and further in view of Chen et al. (US 2021/0012459 A1, with priority to PCT/CN2018/093522, filed June 29, 2018; the USPGPub is used herein as a translation and for mapping purposes).

Regarding claim 10, Lian in view of Dassopoulos further in view of Daerr teaches the method of claim 1, as described above. Although Lian in view of Dassopoulos further in view of Daerr teaches that a crude set of edge pixels is generated using an edge detector (Lian, Para. [0031]), they do not explicitly teach "resizing a portion of the image data according to the boundary, the resizing an upsample of the portion to a dimension". However, in an analogous field of endeavor, Chen teaches that an upsample region can be a region including the border region or a region within the border region (Chen, Para. [0060]), and teaches performing an upsampling operation in the upsample region (Chen, Para. [0081]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr with the teachings of Chen by resizing the image data by upsampling the region within the boundary. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for performing upsampling in a desired region, as recognized by Chen. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.

Claim 24 recites an apparatus with elements corresponding to the steps recited in Claim 10. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Lian, Dassopoulos, Daerr and Chen references, presented in the rejection of Claim 10, apply to this claim. Finally, the combination of the Lian, Dassopoulos, Daerr and Chen references discloses a processor and a memory (Lian, Para. [0022]).

Claim 28 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 10. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Lian, Dassopoulos, Daerr and Chen references, presented in the rejection of Claim 10, apply to this claim. Finally, the combination of the Lian, Dassopoulos, Daerr and Chen references discloses a computer readable storage medium (Lian, Para. [0022]).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1), as applied to claims 1-2, 21-22 and 25-26 above, and further in view of Zhang et al. (US 8,208,700 B2).

Regarding claim 11, Lian in view of Dassopoulos further in view of Daerr teaches the method of claim 1, as described above. Although Lian in view of Dassopoulos further in view of Daerr teaches that a crude set of edge pixels is generated using an edge detector (Lian, Para. [0031]), they do not explicitly teach "resizing a portion of the image data according to the boundary, the resizing a downsample of the portion to a dimension". However, in an analogous field of endeavor, Zhang teaches extracting a ROI (region of interest) from the mass candidate, where the ROI is down-sampled to a fixed dimension (such as, for example, 256x256) (Zhang, Col. 2, lines 45-50). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr with the teachings of Zhang by downsampling the image data according to the boundary (i.e., region of interest) to a fixed dimension. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for downsampling a specific region of an image, as recognized by Zhang. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1) and Zhang et al. (US 8,208,700 B2), as applied to claim 11 above, and further in view of Jun et al. ("Automated diagnosis of pneumothorax using an ensemble of convolutional neural networks with multi-sized chest radiography images").
Regarding claim 12, Lian in view of Dassopoulos further in view of Daerr and Zhang teaches the method of claim 11, as described above. Although Lian in view of Dassopoulos further in view of Daerr and Zhang teaches downsampling a specific region of an image to a fixed dimension of 256x256 (Zhang, Col. 2, lines 45-50), they do not explicitly teach "wherein the dimension is 384 pixels by 384 pixels, wherein the boundary is squarer than oblong."

However, in an analogous field of endeavor, Jun teaches resizing 1024x1024 original radiography images into three different sized images of 512x512, 384x384, and 256x256 (Jun, p. 3).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Lian in view of Dassopoulos further in view of Daerr and Zhang with the teachings of Jun by including resizing the portion of the image data to a dimension of 384x384 pixels. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for efficiently classifying relatively large images by resizing, as recognized by Jun. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claims 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Lian et al. (US 2016/0217573 A1) in view of Dassopoulos et al. (US 2020/0265275 A1) further in view of Daerr et al. (US 2020/0253548 A1), as applied to claims 1-2, 21-22 and 25-26 above, and further in view of Dror Zur (US 2020/0387706 A1).

Regarding claim 29, Lian in view of Dassopoulos further in view of Daerr teaches the one or more non-transitory computer-readable media of claim 26, as described above. Although Lian in view of Dassopoulos further in view of Daerr teaches highlighting and/or alerting an operator of the polyp detection system upon identification of a polyp location (Para. [0028]), they do not explicitly teach "wherein the visual indication changes a hue of each of the one of the sub-images that is weighted according to a first sub-image and a second sub-image of the plurality of sub-images."

However, in an analogous field of endeavor, Zur teaches that instructions are generated for augmenting the image with the location of the detected polyp, for example, marking the ROI (e.g., boundary box) delineating the detected polyp, and/or color coding the polyp and/or boundary box, and/or an arrow pointing to the polyp (Zur, Para. [0208]). The alert may be generated, for example, as a coloring of the polyp and/or the border of the ROI delineating the polyp on the image using a unique color denoting a suggestion to remove (e.g., red for removal and green to leave in) (Zur, Para. [0216]).

Therefore, it would have been obvious to one having ordinary skill in the art to modify the computer-readable media of Lian in view of Dassopoulos further in view of Daerr with the teachings of Zur by including changing the hue of the sub-image (i.e., color coding the polyp). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for processing colon polyps automatically detected during a colonoscopy procedure, as recognized by Zur. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 30 recites a system with elements corresponding to the elements recited in claim 29. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding computer-readable medium claim. Additionally, the rationale and motivation to combine the Lian, Dassopoulos, Daerr and Zur references, presented in the rejection of claim 29, apply to this claim. Finally, the combination of the Lian, Dassopoulos, Daerr and Zur references teaches a processor and a memory (Lian, Para.
[0022]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel, whose telephone number is (703) 756-5582. The examiner can normally be reached Monday - Friday, 7:30-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
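As a technical aside to the claim 29 rejection, the color-coded visual indication cited from Zur (coloring a detected polyp or its bounding box, e.g., red to suggest removal, green to leave in place) can be sketched as follows. This is a minimal illustration only: the function name, the (top, left, bottom, right) box layout, and the alpha-blending approach are assumptions for the sketch, not details taken from Zur.

```python
import numpy as np

def mark_region_hue(image_rgb, box, color=(255, 0, 0), alpha=0.5):
    """Blend a solid `color` over the sub-image inside `box` =
    (top, left, bottom, right), shifting its hue toward that color
    (e.g., red for a suggested removal, green to leave in place).
    Pixels outside the box are left unchanged."""
    out = image_rgb.astype(np.float32).copy()
    top, left, bottom, right = box
    overlay = np.array(color, dtype=np.float32)
    # Weighted average of the original pixels and the overlay color.
    out[top:bottom, left:right] = (
        (1 - alpha) * out[top:bottom, left:right] + alpha * overlay
    )
    return out.astype(np.uint8)
```

For example, marking a box on an all-black RGB frame with the default red at `alpha=0.5` leaves pixels outside the box black and pushes the red channel inside the box halfway toward 255.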

Prosecution Timeline

Dec 06, 2022
Application Filed
May 20, 2025
Non-Final Rejection — §103
Aug 29, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103
Dec 03, 2025
Response after Non-Final Action
Jan 06, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Mar 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236
FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING
2y 5m to grant; granted Apr 07, 2026
Patent 12597129
METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES
2y 5m to grant; granted Apr 07, 2026
Patent 12597093
UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME
2y 5m to grant; granted Apr 07, 2026
Patent 12597124
DEBRIS DETERMINATION METHOD
2y 5m to grant; granted Apr 07, 2026
Patent 12588885
FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM
2y 5m to grant; granted Mar 31, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
53%
Grant Probability
99%
With Interview (+47.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
