Prosecution Insights
Last updated: April 19, 2026
Application No. 18/697,170

METHOD FOR EXTRACTING AND STRUCTURING INFORMATION

Non-Final OA: §101, §103, §112

Filed: Mar 29, 2024
Examiner: CHAN, CAROL WANG
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Faculdades Catolicas
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (292 granted / 351 resolved), +21.2% vs TC avg — above average
Interview Lift: +36.2% (allowance rate for resolved cases with vs. without interview) — strong
Avg Prosecution: 2y 6m typical; 19 applications currently pending
Total Applications: 370, across all art units
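The headline figures are mutually consistent under a simple reading: the allow rate is the raw grant fraction, and the 99% with-interview figure matches the base rate plus the quoted lift, capped. The capping rule is our assumption about how the tool derives the number, not something the page states. A minimal sketch:

```python
# Reproducing the dashboard arithmetic (assumed formulas, not documented
# by the tool): career allow rate from raw counts, and the with-interview
# probability as base rate plus the quoted interview lift, capped at 99%.
granted, resolved = 292, 351
allow_rate = granted / resolved                 # 0.8319... -> displayed "83%"
interview_lift = 0.362                          # "+36.2%" quoted by the page
with_interview = min(allow_rate + interview_lift, 0.99)  # capped -> "99%"
print(f"{allow_rate:.1%}, {with_interview:.0%}")          # 83.2%, 99%
```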

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 24.1% (-15.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 351 resolved cases
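One detail worth noting: whatever the underlying metric, backing the Tech Center average out of each displayed delta gives exactly 40.0% for every statute, consistent with the page's caveat that the TC baseline is an estimate rather than a per-statute figure. A quick check (values copied from the list above):

```python
# Back out the implied Tech Center average for each statute:
# displayed rate minus its "vs TC avg" delta.
rates = {"§101": (10.8, -29.2), "§103": (38.7, -1.3),
         "§102": (19.9, -20.1), "§112": (24.1, -15.9)}
for statute, (rate, delta) in rates.items():
    print(statute, round(rate - delta, 1))   # every statute -> 40.0
```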

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/29/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: Lines 3-8 recite “(1) PDF page separator, (2) block detection and segmentation model, (3) table extractor, (4) image extractor, (5) image classification model, (6) text extractor, (7) computer vision model for improving the image quality of the texts, (8) optical character recognition model, (09) model for spelling correction, (10) models for semantic enrichment of the text, (11) output file organizer and (12) metadata aggregator for information enrichment, algorithm for generating synthetic documents and Artificial Intelligence models”, which Examiner suggests amending to “(1) a PDF page separator, (2) a block detection and segmentation model, (3) a table extractor, (4) an image extractor, (5) an image classification model, (6) a text extractor, (7) a computer vision model for improving the image quality of the texts, (8) an optical character recognition model, (09) a model for spelling correction, (10) models for semantic enrichment of the text, (11) an output file organizer and (12) a metadata aggregator for information enrichment, an algorithm for generating synthetic documents, and artificial intelligence models”. Appropriate correction is required.

Claim 2 is objected to because of the following informalities: Line 3 recites “a) Transform all pages…images (1)”, which Examiner suggests amending to “a) transforming all pages…images” (deleting “(1)”). Line 4 recites “b) Use the (2) block detection model”, which Examiner suggests amending to “b) using the block detection and segmentation model” (deleting “(2)”). Line 6 recites “c) Extract (3) table”, which Examiner suggests amending to “c) extracting a table” (deleting “(3)”). Line 7 recites “CSV format”, which Examiner suggests amending to “Comma Separated Values (CSV) format”. Line 8 recites “d) Extract (4) images”, which Examiner suggests amending to “d) extracting images” (deleting “(4)”). Line 9 recites “processed by one (5) image classification model)”, which Examiner suggests amending to “processed by one image classification model” (deleting “(5)”). Line 11 recites “e) Extract (6) content if it is text, list or equation, but if it is not possible…”, which Examiner suggests amending to “e) extracting content if it is text, a list, or an equation, wherein if it is not possible…” (deleting “(6)”). Line 12 recites “by (7) computer vision models”, which Examiner suggests amending to “by computer vision models” (deleting “(7)”). Line 13 recites “one (8) optical character recognition”, which Examiner suggests amending to “one optical character recognition” (deleting “(8)”). Lines 15-16 recite “f) For text format blocks, the textual content is also subjected to steps of (9) spelling correction… and (10) enrichment”, which Examiner suggests amending to “f) for text format blocks, subjecting the textual content to steps of spelling correction… and enrichment” (deleting “(9)” and “(10)”). Lines 17-18 recite “metadata (including processes…and Part of Speech Tagging), being stored in XML files”, which Examiner suggests amending to “metadata, including processes…and part of speech tagging, being stored in Extensible Markup Language (XML) files” (deleting the parentheses after “metadata” and “tagging”). Lines 19-20 recite “g) All extracted information is (11) organized in the output file organizer and (12) new information is aggregated to enrich metadata”, which Examiner suggests amending to “g) organizing all extracted information in the output file organizer and aggregating new information to enrich metadata” (deleting “(11)” and “(12)”). Appropriate correction is required.

Claim 5 is objected to because of the following informalities: Line 2 recites “4”, which Examiner suggests deleting. Line 3 recites “a) Generation of synthetic documents (1),”, which Examiner suggests amending to “a) generating synthetic documents” (deleting “(1)” and “,”). Line 4 recites “b) Training/Tuning of…classification models (2)”, which Examiner suggests amending to “b) training/tuning…classification models” (deleting “of” and “(2)”). Line 5 recites “c) Quality control…and real sets (3)”, which Examiner suggests amending to “c) controlling quality…and real sets” (deleting “(3)”). Line 6 recites “d) Assessment of extraction results…domain (4)”, which Examiner suggests amending to “d) assessing extraction results…domain” (deleting “of” and “(4)”). Line 7 recites “e) Identification of new formats…formats (5)”, which Examiner suggests amending to “e) identifying new formats…formats” (deleting “of” and “(5)”). Line 8 recites “f) Adjustment of parameters / Configuration of new synthetic formats (6)”, which Examiner suggests amending to “f) adjusting parameters / configuring new synthetic formats” (deleting “of” and “(6)”). Appropriate correction is required.

Claim 6 is objected to because of the following informalities: Line 3 recites “(2)”, which Examiner suggests deleting. Line 4 recites “(5)” and “(7)”, which Examiner suggests deleting. Line 5 recites “(8)” and “(09)”, which Examiner suggests deleting. Line 6 recites “(10)”, which Examiner suggests deleting. Lines 6-7 recite “models for semantic enrichment of the text (including processes for recognizing named entities, identifying relations and Part of Speech Tagging)”, which Examiner suggests amending to “models for semantic enrichment of the text, including processes for recognizing named entities, identifying relations and part of speech tagging” (deleting the parentheses). Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites a method for extracting and structuring information; however, there are no method steps disclosed in the claim. Thus, it is unclear whether the claim is actually a method claim or a different type of claim. Examiner suggests amending the claim to include actual method steps for extracting and structuring information.

Claim 1 recites the limitations "the image quality of the texts" in Line 5 and “the text” in Line 6. There is insufficient antecedent basis for these limitations in the claim, as there is no earlier mention of an image quality of texts and it is unclear which text is being referred to, whether it is the texts disclosed in Line 5 or another text. Examiner suggests amending the limitations to “an image quality of texts” (deleting “the”) and “the texts”, respectively, and has interpreted the limitations as such.

Claim 2 recites the limitations "the document" in Line 3, “the main elements of each page” in Line 4, “the block” and “the information” in Line 6, “the textual information” in Lines 11-12, “the main file” in Line 12, “the textual content” in Line 15, and “the oil and gas (O&G) domain vocabulary” in Line 16. There is insufficient antecedent basis for these limitations in the claim, as there is no earlier mention of a document, main elements of each page, a block, information, textual information, a main file, textual content, or oil and gas (O&G) domain vocabulary. Examiner suggests amending the limitations to “a document" in Line 3, “main elements of each page” (deleting “the”) in Line 4, “a block” and “information” (deleting “the”) in Line 6, “textual information” (deleting “the”) in Lines 11-12, “a main file” in Line 12, “the textual information” in Line 15, and “oil and gas (O&G) domain vocabulary” (deleting “the”) in Line 16, and has interpreted the limitations as such.

Claim 3 recites the limitations "the synthetic document generation algorithm" in Line 2, “the oil and gas (O&G) industry” in Line 4, and “the synthetic document generator” in Lines 4-5. There is insufficient antecedent basis for these limitations in the claim, as there is no earlier mention of a synthetic document generation algorithm, an oil and gas (O&G) industry, or a synthetic document generator. Examiner suggests amending the limitations to "the algorithm for generating synthetic documents" in Line 2, “an oil and gas (O&G) industry” in Line 4, and “a synthetic document generator” in Lines 4-5, and has interpreted the limitations as such.

Claim 4 recites the limitation "the artificial intelligence models used in the main process of extracting information" in Lines 2-3. There is insufficient antecedent basis for this limitation in the claim, as there is no earlier mention of artificial intelligence models used in a main process of extracting information, or of a main process of extracting information. Examiner suggests amending to “artificial intelligence models used in a main process of extracting information” (deleting “the”), and has interpreted the limitation as such.

Claim 5 recites the limitations "the models" in Line 5 and “the oil and gas (O&G) domain”. There is insufficient antecedent basis for these limitations in the claim, as it is unclear which models are being referred to (the models disclosed in claim 1 or the computer vision and classification models disclosed in Line 4 of claim 5) and there is no earlier mention of an oil and gas (O&G) domain (only an oil and gas (O&G) industry). Examiner suggests amending to “the computer vision and classification models” and “the oil and gas (O&G) industry”, respectively, and has interpreted the limitations as such.

Claim 6 recites the limitation "the training and updating" in Line 2. There is insufficient antecedent basis for this limitation in the claim, as there is no earlier mention of training and updating. Examiner suggests amending the limitation to “training and updating” (deleting “the”) and has interpreted the limitation as such.

Claim 6 recites a method for extracting and structuring information (dependent on claim 1); however, there are no method steps disclosed in the claim. Thus, it is still unclear whether the claim is actually a method claim or a different type of claim. Examiner suggests amending the claim to include actual method steps.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claims 1 and 6 recite a method without disclosing any method steps and disclose only software components, thus constituting a program per se. Computer programs, per se, are not in one of the statutory categories of invention. See MPEP § 2106. Thus, claims 1 and 6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 and 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Pillai et al. (US 2024/0153299) in view of Zhu et al. (US 2019/0236132).

With regards to claim 1, Pillai et al. discloses a method for extracting and structuring information, characterized in that it comprises: (1) PDF page separator (Para. 0025 lines 1-7, "pages" "separate input image"), (2) block detection and segmentation model (Para. 0017 lines 31-33, 0028 lines 4-13, Fig. 6.2, "entities"), (3) table extractor (Para. 0017 lines 31-33, 0028 lines 4-13, "tables"), (4) image extractor (Para. 0017 lines 31-33, 0028 lines 4-13, "figures"), (5) image classification model (Para. 0017 lines 19-21, 0030 lines 11-17, "secondary entity type of figure"), (6) text extractor (Para. 0017 lines 31-33, 0028 lines 4-13, "text"), (7) computer vision model for improving the image quality of the texts (Para. 0017 lines 28-31, 0026 lines 1-9, "preprocessing"), (8) optical character recognition model (Para. 0017 lines 33-36, 0029 lines 7-10, 0033 lines 18-20, "OCR"), (11) output file organizer (Para. 0033 lines 1-10, Fig. 6.3, "document layout structure") and (12) metadata aggregator for information enrichment, algorithm for generating synthetic documents and Artificial Intelligence models (Para. 0039 lines 3-10, 0044 lines 1-16, 0046 lines 1-13, "synthetic training data" "machine learning module 1"). Pillai et al. does not explicitly teach comprising (09) model for spelling correction and (10) models for semantic enrichment of the text. However, Zhu et al. discloses the concept of comprising a model for spelling correction and models for semantic enrichment of the text (Para. 0016 lines 1-4, 0017 lines 4-8, 0026 lines 1-38, 0055 lines 1-7, 0057 lines 1-8, "spelling correction" "robust domain-specific lexicon") in order to enhance the quality and accuracy of the output text (Para. 0014 lines 1-12, 0016 lines 1-25, "quality" "consequence"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of comprising a model for spelling correction and models for semantic enrichment of the text as taught by Zhu et al. into the method of Pillai et al. The motivation for this would be to enhance the quality and accuracy of the text.

With regards to claim 3, the combination of Pillai et al. and Zhu et al. discloses the method according to claim 1, characterized in that the synthetic document generation algorithm creates a training base made up of millions of synthetic documents, which emulate real documents commonly used by the oil and gas (O&G) industry in different variations of layouts by means of the synthetic document generator (Pillai et al.: Para. 0039 lines 3-10, 0040 lines 1-6, 0044 lines 1-16, 0046 lines 1-13, 0047 lines 1-7, "synthetic training data" "machine learning module 1").

With regards to claim 4, the combination of Pillai et al. and Zhu et al. discloses the method according to claim 3, characterized in that synthetic documents are used to train and update the artificial intelligence models used in the main process of extracting information (Pillai et al.: Para. 0017 lines 31-33, 0028 lines 4-13 and 28-34, 0039 lines 3-10, 0040 lines 1-6, 0044 lines 1-16, 0046 lines 1-13, 0047 lines 1-7, 0048 lines 1-16, "synthetic training data" "machine learning module").

With regards to claim 5, the combination of Pillai et al. and Zhu et al. discloses the method according to claim 3 4, characterized in that it comprises the following steps: a) Generation of synthetic documents (1), in different layout configurations (Pillai et al.: Para. 0039 lines 3-10, 0044 lines 1-16, 0046 lines 1-13, "page layout formats"); b) Training/Tuning of computer vision and classification models (2) (Pillai et al.: Para. 0039 lines 3-10, 0044 lines 1-16, 0048 lines 1-16, "training"); c) Quality control of the models under synthetic and real sets (3) (Pillai et al.: Para. 0039 lines 3-10, 0044 lines 1-16, 0048 lines 1-16, 0049 lines 1-8, "training" "loss function"); d) Assessment of extraction results in the oil and gas (O&G) domain (4) (Pillai et al.: Para. 0017 lines 1-8 and 31-33, 0028 lines 4-13, 0030 lines 1-17, 0032 lines 1-11, "oil and gas" "type of figure" "postprocessing"); e) Identification of new formats or alterations to existing formats (5) (Pillai et al.: Para. 0044 lines 1-16, 0046 lines 1-13, 0047 lines 1-7, "varied document formats" "repeated"); f) Adjustment of parameters / Configuration of new synthetic formats (6) (Pillai et al.: Para. 0046 lines 1-13, 0047 lines 1-7, "varied document formats" "repeated").

Claim(s) 2 is rejected under 35 U.S.C. 103 as being unpatentable over Pillai et al. (US 2024/0153299) in view of Zhu et al. (US 2019/0236132) and further in view of Wang et al. (US 2023/0139831), Al-Gharaibeh et al. (US 10,832,046), and Ackermann et al. (US 2019/0163781).

With regards to claim 2, the combination of Pillai et al. and Zhu et al. discloses the method according to claim 1, characterized in that it comprises the following steps: a) Transform all pages of the document into images (1) (Pillai et al.: Para. 0025 lines 1-7, "pages" "separate input image"); b) Use the (2) block detection model to identify the main elements of each page, segmenting them into blocks of texts, images and tables (Pillai et al.: Para. 0017 lines 31-33, 0028 lines 4-13, Fig. 6.2, "entities" "tables" "figures" "text"); c) Extract (3) table if the block is classified as a table (Pillai et al.: Para. 0017 lines 31-33, 0028 lines 4-13 and 25-28, 0034 lines 1-6, 0035 lines 12-16, "tables"); d) Extract (4) images and their respective captions, if the block is identified as an image (Pillai et al.: Para. 0017 lines 31-33, 0028 lines 4-13 and 25-28, 0034 lines 1-6, "figures" "captions"), recorded in individual files and processed by one (5) image classification model to aggregate additional metadata (Pillai et al.: Para. 0017 lines 19-21, 0030 lines 11-17, 0033 lines 1-10, 0059 lines 1-13, Fig. 6.3, "secondary entity type of figure"); e) Extract (6) content if it is text, list or equation (Pillai et al.: Para. 0017 lines 31-33, 0028 lines 4-13 and 25-28, 0034 lines 1-6, "text"), and subsequently extracted from one (8) optical character recognition (OCR) model (Pillai et al.: Para. 0033 lines 18-20, "OCR"); f) For text format blocks, the textual content is also subjected to steps of (9) spelling correction considering the oil and gas (O&G) domain vocabulary (Zhu et al.: Para. 0016 lines 1-4, 0017 lines 4-8, 0026 lines 1-38, 0037 lines 26-28, 0055 lines 1-7, 0057 lines 1-8, "spelling correction" "oil and gas") and (10) enrichment with semantic metadata (Zhu et al.: Para. 0026 lines 1-38, 0055 lines 1-7, 0057 lines 1-8, "robust domain-specific lexicon"), being stored in XML files (Pillai et al.: Para. 0033 lines 1-10, 0060 lines 1-3, 0094 lines 13-15, 0096 lines 1-5, "data repository" "XML"); g) All extracted information is (11) organized in the output file organizer and (12) new information is aggregated to enrich metadata (Pillai et al.: Para. 0033 lines 1-10, 0039 lines 3-10, 0044 lines 1-16, 0046 lines 1-13, Fig. 6.3, "document layout structure").
The combination of Pillai et al. and Zhu et al. does not explicitly teach extracting a table so that the information contained therein is structured and stored in a file in CSV format. However, Wang et al. discloses the concept of extracting a table so that the information is structured and stored in a file in CSV format in order to convert documents into a form that is interpretable by information retrieval engines (Para. 0003 lines 4-5, 0027 lines 1-3, 0046 lines 7-12, 0055 lines 1-7, "CSV" "interpretable"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of extracting a table so that the information is structured and stored in a file in CSV format as taught by Wang et al. into the method of the combination of Pillai et al. and Zhu et al. The motivation for this would be to convert table information into a form that is interpretable by information retrieval engines.

The combination of Pillai et al., Zhu et al., and Wang et al. does not explicitly teach that, if it is not possible to retrieve the textual information directly from the main file, it is pre-processed by (7) computer vision models to improve image quality. However, Al-Gharaibeh et al. discloses the concept of preprocessing image data to improve image quality when it is not possible to retrieve textual information (Col. 3 lines 57-66, Col. 4 lines 1-9 and 21-26, Col. 5 lines 6-20, "degraded" "pre-processes" "clean document") in order to allow for high OCR accuracy (Col. 3 lines 60-67, Col. 4 lines 1-2, "accuracy"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of preprocessing image data to improve image quality when it is not possible to retrieve textual information as taught by Al-Gharaibeh et al. into the method of the combination of Pillai et al., Zhu et al., and Wang et al. The motivation for this would be to allow for high OCR accuracy.

The combination of Pillai et al., Zhu et al., Wang et al., and Al-Gharaibeh et al. does not explicitly teach enrichment with semantic metadata including processes for recognizing named entities, relation identification and Part of Speech Tagging. However, Ackermann et al. discloses the concept of performing text enrichment with semantic metadata including processes for recognizing named entities, relation identification and Part of Speech Tagging in order to obtain more information and context on the text (Para. 0001 lines 1-3, 0032 lines 4-9 and 13-14, 0036 lines 1-6, 0038 lines 4-18, 0039 lines 1-4, "name recognition" "semantic relationships" "part of speech"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of performing text enrichment with semantic metadata including processes for recognizing named entities, relation identification and Part of Speech Tagging as taught by Ackermann et al. into the method of the combination of Pillai et al., Zhu et al., Wang et al., and Al-Gharaibeh et al. The motivation for this would be to obtain more information on the text.

Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pillai et al. (US 2024/0153299) in view of Zhu et al. (US 2019/0236132) and further in view of Al-Gharaibeh et al. (US 10,832,046) and Ackermann et al. (US 2019/0163781).

With regards to claim 6, the combination of Pillai et al. and Zhu et al. discloses the method according to claim 1, characterized in that the training and updating of all artificial intelligence models used in the method are included in the steps of (2) block detection and segmentation model (Pillai et al.: Para. 0017 lines 31-33, 0028 lines 4-13 and 28-34, Fig. 6.2, "entities"), (5) image classification model (Pillai et al.: Para. 0017 lines 19-21, 0030 lines 11-24, "secondary entity type of figure"), and (09) model for spelling correction (Zhu et al.: Para. 0026 lines 1-38, "spelling correction"). The combination of Pillai et al. and Zhu et al. does not explicitly teach that the training and updating of all artificial intelligence models used in the method are included in the steps of (7) computer vision model for improving the image quality of the texts and (8) optical character recognition OCR model. However, Al-Gharaibeh et al. discloses the concept of training and updating artificial intelligence models used in a computer vision model for improving the image quality of the texts and in the optical character recognition OCR model (Col. 3 lines 57-66, Col. 4 lines 27-32, Col. 10 lines 2-5 and 23-35, "clean document" "CNN" "OCR neural network" "training") in order to allow for high OCR accuracy (Col. 3 lines 60-67, Col. 4 lines 1-2, "accuracy"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of training and updating artificial intelligence models used in a computer vision model for improving the image quality of the texts and in the optical character recognition OCR model as taught by Al-Gharaibeh et al. into the method of the combination of Pillai et al. and Zhu et al. The motivation for this would be to allow for high OCR accuracy.

The combination of Pillai et al., Zhu et al., and Al-Gharaibeh et al. does not explicitly teach that the training and updating of all artificial intelligence models used in the method are included in the step of (10) models for semantic enrichment of the text (including processes for recognizing named entities, identifying relations and Part of Speech Tagging). However, Ackermann et al. discloses the concept of including training and updating of artificial intelligence models in text enrichment with semantic metadata including processes for recognizing named entities, relation identification and Part of Speech Tagging in order to obtain more information and context on the text (Para. 0001 lines 1-3, 0032 lines 1-14, 0036 lines 1-6, 0038 lines 4-18, 0039 lines 1-4, "natural language processor" "name recognition" "semantic relationships" "part of speech"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of including training and updating of artificial intelligence models in text enrichment with semantic metadata including processes for recognizing named entities, relation identification and Part of Speech Tagging as taught by Ackermann et al. into the method of the combination of Pillai et al., Zhu et al., and Al-Gharaibeh et al. The motivation for this would be to obtain more information on the text.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicants are directed to consider additional pertinent prior art included on the Notice of References Cited (PTOL-892) attached herewith.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROL W CHAN, whose telephone number is (571) 272-5766. The examiner can normally be reached 9:30-3:30 M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROL W CHAN/
Primary Examiner, Art Unit 2672
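For orientation, the claim 2 steps that the §103 rejection maps against Pillai, Zhu, Wang, Al-Gharaibeh, and Ackermann describe a sequential extraction pipeline: pages are rendered to images, blocks are detected and routed by type (table to CSV, image to classification metadata, text to an OCR fallback followed by spelling correction and semantic enrichment), and the results are organized with aggregated metadata. Below is a minimal sketch of that control flow only; every name is hypothetical and all models are stubbed, since the office action recites the pipeline's structure, not any implementation:

```python
# Illustrative sketch of the control flow recited in claim 2.
# All names are hypothetical placeholders; each claimed model
# (block detector, image classifier, CV preprocessor, OCR,
# spell corrector, semantic enricher) is a stub.
from dataclasses import dataclass, field

@dataclass
class Block:
    kind: str                 # "table", "image", "text", "list", or "equation"
    content: str              # raw content of the block
    metadata: dict = field(default_factory=dict)

def detect_blocks(page_image: str) -> list[Block]:
    """Step b: segment a page image into text/image/table blocks (stub)."""
    return [Block("text", page_image)]

def extract_table_to_csv(block: Block) -> str:
    """Step c: structure table content for CSV storage (stub)."""
    return block.content

def classify_image(block: Block) -> dict:
    """Step d: image classification model aggregates metadata (stub)."""
    return {"image_class": "figure"}

def improve_image_quality(block: Block) -> Block:
    """Step e fallback, model (7): CV preprocessing before OCR (stub)."""
    return block

def ocr(block: Block) -> str:
    """Step e fallback, model (8): OCR on the preprocessed block (stub)."""
    return block.content

def correct_spelling(text: str) -> str:
    """Step f, model (9): spelling correction vs. an O&G vocabulary (stub)."""
    return text

def enrich(text: str) -> dict:
    """Step f, model (10): NER, relation identification, POS tagging (stub)."""
    return {"entities": [], "relations": [], "pos_tags": []}

def process_document(page_images: list[str]) -> list[Block]:
    """Steps a-g: route each detected block through the claimed handlers."""
    output = []
    for page in page_images:                       # step a: pages as images
        for block in detect_blocks(page):          # step b: block detection
            if block.kind == "table":
                block.content = extract_table_to_csv(block)   # step c: CSV
            elif block.kind == "image":
                block.metadata.update(classify_image(block))  # step d
            else:
                text = block.content
                if not text:                       # direct extraction failed
                    block = improve_image_quality(block)      # step e: CV
                    text = ocr(block)                         # step e: OCR
                text = correct_spelling(text)                 # step f
                block.metadata.update(enrich(text))           # step f (XML)
                block.content = text
            output.append(block)                   # step g: organize output
    return output

if __name__ == "__main__":
    print(process_document(["page-1-image"]))
```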

Prosecution Timeline

Mar 29, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579803: TOTAGRAPHY FOR SUPERRESOLUTION IMAGING AND SIGNAL PROCESSING OF POSITIVE, REAL-VALUED IMAGES AND SIGNALS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573205: ELECTRONIC DEVICE AND METHOD FOR VEHICLE WHICH ENHANCES DRIVING ENVIRONMENT RELATED FUNCTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573240: LIGHT SOURCE SPECTRUM AND MULTISPECTRAL REFLECTIVITY IMAGE ACQUISITION METHODS AND APPARATUSES, AND ELECTRONIC DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573206: BIRD’S-EYE VIEW ADAPTIVE INFERENCE RESOLUTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567237: OBJECT EVALUATION METHOD, OBJECT EVALUATION DEVICE, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+36.2%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 351 resolved cases by this examiner. Grant probability derived from career allow rate.
