Prosecution Insights
Last updated: April 19, 2026
Application No. 18/601,806

SYSTEMS AND METHODS FOR AUTOMATIC CONTEXT-BASED ANNOTATION

Status: Non-Final OA (§103)
Filed: Mar 11, 2024
Examiner: BITAR, NANCY
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Nationstar Mortgage LLC d/b/a Mr. Cooper
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 83% (786 granted / 946 resolved; +21.1% vs TC avg; above average)
Interview Lift: +8.2% (moderate lift, based on resolved cases with interview)
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 978 total applications across all art units

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 946 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mukhopadhyay (US 2020/0074169 A1) in view of Gong et al. (US 2016/0239711).

As to claim 1, Mukhopadhyay teaches a method for automatic context-based document annotation, comprising: receiving, by a computing system, a candidate image of a document for annotation identification (202, 905, para. [0046], [0081]; Mukhopadhyay teaches inputting an image document for subsequent analysis); selecting, by the computing system from a plurality of template images, a template image (915, para. [0081]; Mukhopadhyay teaches matching the determined layout against a plurality of document templates to determine a measure of similarity between the document and each of the templates) having a highest correlation between structural features of the candidate image and structural features of the template image ("In step 920, one of the document templates may be selected as a matched template, based on a determination that the determined measure of similarity between the determined skeletal layout and the selected document template exceeds a predetermined threshold similarity value," paragraph [0081]).

While Mukhopadhyay teaches the limitations above, Mukhopadhyay fails to teach "populating, by the computing system, the candidate image with one or more annotation labels according to a corresponding one or more annotation labels of the selected template image." However, Gong et al. teaches the correlation coefficient comprising a weighted combination of luminance, contrast, and structure comparisons for each of a plurality of portions of the template image ("Where the target object information contains semantic attributes (perhaps provided by the user) or even where this is the only target object information provided, then the candidate object search is carried out on these semantic attributes. Some semantic attributes may be more reliable at providing results or matches and the matching process may include assigning various weights to each attribute to take this into account (i.e. more reliable semantic attributes may have a greater weighting)," paragraph [0017]). Gong teaches that the semantic attributes are interpreted as the label, or descriptive text term, that adds increased robustness to viewing condition covariates compared to existing LLF re-identification systems, while the LLF term adds increased robustness to changes of attributes; this in turn facilitates summarization features mentioned earlier, such as summarizing attribute changes (see Summarization Feature 3 and item 104 in FIG. 1; paragraphs [0149], [0152], [0155]). It would have been obvious to one skilled in the art before the filing of the claimed invention to add the annotation labels in order to improve identification accuracy. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

As to claim 2, Gong et al. teaches the method of claim 1, wherein the correlation is based on a luminance comparison between the candidate image and the template image ("Generating semantic attributes for images (i.e. of a user supplied image or of candidate targets) may be achieved by taking the image (or only the portion of the image that displays the object) and extracting from it low-level features (e.g. color, luminance or histogram). These low-level features may be compared with low-level features extracted from images having known semantic attributes (e.g. by user input or preferably from machine learning techniques)," paragraph [0012]).

As to claim 3, Gong et al. teaches the method of claim 1, wherein the correlation is based on a contrast comparison between the candidate image and the template image (citing the same disclosure at paragraph [0012]).

As to claim 7, Mukhopadhyay teaches the method of claim 1, wherein the computing system comprises a plurality of computing devices, and wherein selecting the template image further comprises receiving, by a first computing device from a second one or more computing devices of the plurality of computing devices, a correlation score between structural features of the candidate image and a template image of the plurality of template images (915, 920, para. [0081]; matching the determined layout against a plurality of document templates to determine a measure of similarity between the document and each of the templates).

As to claim 8, Mukhopadhyay teaches the method of claim 7, further comprising providing, by the first computing device to the second one or more computing devices, the candidate image and an identification of one or more template images to be compared by the respective computing device (102, 104, 116, paragraph [0030]; the processor informs the template matcher located on one or more physical devices).

As to claim 9, Mukhopadhyay teaches the method of claim 1, wherein populating the candidate image with one or more annotation labels further comprises, for each of the one or more annotation labels, retrieving coordinates and dimensions of the annotation label within the selected template image (125, 925, paragraphs [0061], [0081]; information related to annotations includes coordinates and segmentation regions from the image document according to the matched template).

As to claim 10, Mukhopadhyay teaches the method of claim 9, further comprising, for each of the one or more annotation labels: extracting alphanumeric text from the candidate image within the retrieved coordinates and dimensions of the annotation label; and adding the extracted alphanumeric text to metadata of the candidate image in association with an identification of the annotation label (940-950, paragraph [0082]).

As to claim 11, Mukhopadhyay teaches the method of claim 9, wherein extracting alphanumeric text comprises applying optical character recognition to a portion of the candidate image within the retrieved coordinates and dimensions (930, para. [0082]; Mukhopadhyay teaches applying optical character recognition to the extracted regions of the document image).

As to claim 12, Mukhopadhyay teaches the method of claim 9, further comprising receiving a modification to the extracted alphanumeric text; and storing the modified alphanumeric text in metadata of the candidate image (935, paragraph [0082]; processing the OCR text to correct errors made by the OCR prior to generating the document report).

The limitations of claims 13-15 and 17-20 have been addressed in claims 1-12 above.

Claims 4, 6, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mukhopadhyay (US 2020/0074169 A1) in view of Gong et al. (US 2016/0239711) and further in view of Tian (U.S. Patent Pub. No. 2014/0072219).

As to claim 4, while Mukhopadhyay and Gong teach the limitations above, they fail to teach "the correlation is based on an edge comparison between the candidate image and the template image." Tian teaches edge-preserving filtering to remove noise in order to later extract horizontal and vertical lines within the document. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Mukhopadhyay by incorporating the edge filtering taught by Tian, to make an invention that compares input documents to a template database (Mukhopadhyay) and determines features by extracting line segments within the document image (Tian); thus, one of ordinary skill in the art would be motivated to combine the references, since this process would provide an accurate and reliable method of segmenting document images that contain text and non-text contents (Tian, para. [0006]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art at the effective filing date of the claimed invention.

As to claim 6, Tian teaches the method of claim 1, wherein detecting the set of structural features comprises filtering noise from the candidate image according to a predetermined window (S1, S2, para. [0024]; Tian teaches edge-preserving filtering to remove noise to later extract horizontal and vertical lines within the document).

The limitation of claim 16 has been addressed in claim 4 above.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Mukhopadhyay (US 2020/0074169 A1) in view of Gong et al. (US 2016/0239711) and further in view of Berard (U.S. Patent Pub. No. 2009/0092320 A1).

As to claim 5, while Mukhopadhyay and Gong teach the limitations above, they fail to teach "scaling the candidate image to a size corresponding to a size of the template images." Berard is also in the field of document recognition systems. Berard teaches a method and system comprising scaling the candidate image to a size corresponding to a size of the template images (102, para. [0038]; Berard teaches scaling the document image to match the document template). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Mukhopadhyay by incorporating the document scaling taught by Berard, to make an invention that compares input documents to a template database (Mukhopadhyay) and determines feature matching by scaling the documents to match; thus, one of ordinary skill in the art would be motivated to combine the references, since scale corrections would provide a user with a faster, more efficient, and adaptive method of document recognition (Berard, para. [0004], [0006]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art at the effective filing date of the claimed invention.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR, whose telephone number is (571) 270-1041. The examiner can normally be reached Monday through Friday from 8:00 a.m. to 5:00 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NANCY BITAR/
Primary Examiner, Art Unit 2664
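The "weighted combination of luminance, contrast, and structure comparisons" recited against claims 1-3 is the general form of the SSIM index. As a hypothetical illustration only (not the applicant's or any cited reference's actual implementation), a block-wise SSIM-style correlation between a candidate document image and a template could be sketched as:

```python
import numpy as np

def ssim_blocks(candidate, template, block=8, C1=6.5025, C2=58.5225):
    """Mean SSIM-style score over non-overlapping blocks of two
    equally sized grayscale images.

    C1 and C2 are the conventional (K*L)^2 stabilizers for 8-bit
    images (K1=0.01, K2=0.03, L=255).
    """
    scores = []
    h, w = candidate.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = candidate[y:y+block, x:x+block].astype(np.float64)
            b = template[y:y+block, x:x+block].astype(np.float64)
            mu_a, mu_b = a.mean(), b.mean()
            var_a, var_b = a.var(), b.var()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            # Luminance, contrast, and structure comparisons folded
            # into the standard two-factor SSIM form.
            score = ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
                    ((mu_a**2 + mu_b**2 + C1) * (var_a + var_b + C2))
            scores.append(score)
    return float(np.mean(scores))
```

A template-selection step could then score the candidate against every stored template and keep the one with the highest result, consistent with the "highest correlation" limitation of claim 1; the function name and parameters above are invented for this sketch.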

Prosecution Timeline

Mar 11, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599437
PRE-PROCEDURE PLANNING, INTRA-PROCEDURE GUIDANCE FOR BIOPSY, AND ABLATION OF TUMORS WITH AND WITHOUT CONE-BEAM COMPUTED TOMOGRAPHY OR FLUOROSCOPIC IMAGING
2y 5m to grant Granted Apr 14, 2026
Patent 12597132
IMAGE PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12597240
METHOD AND SYSTEM FOR AUTOMATED CENTRAL VEIN SIGN ASSESSMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12597189
METHODS AND APPARATUS FOR SYNTHETIC COMPUTED TOMOGRAPHY IMAGE GENERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591982
MOTION DETECTION ASSOCIATED WITH A BODY PART
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview (+8.2%): 91%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 946 resolved cases by this examiner. Grant probability derived from career allow rate.
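The projection figures follow directly from the examiner's career counts shown above. A hypothetical recomputation, assuming the dashboard takes granted/resolved as the base rate and simply adds the reported interview lift:

```python
# Career counts from the Examiner Intelligence panel.
granted, resolved = 786, 946

base_rate = granted / resolved       # career allow rate
interview_lift = 0.082               # reported +8.2% lift
with_interview = base_rate + interview_lift

print(f"{base_rate:.0%}")            # prints "83%"
print(f"{with_interview:.0%}")       # prints "91%"
```

How the dashboard actually combines these numbers is an assumption here; a multiplicative or regression-based adjustment would give slightly different values.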
