Prosecution Insights
Last updated: April 19, 2026
Application No. 17/835,813

TABLE EXTRACTION FROM IMAGE-BASED DOCUMENTS

Final Rejection (§102, §103)
Filed: Jun 08, 2022
Examiner: RODGERS, ALEXANDER JOHN
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Oracle International Corporation
OA Round: 4 (Final)

Grant Probability: 70% (Favorable)
OA Rounds: 5-6
To Grant: 3y 2m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 70% (23 granted / 33 resolved; +7.7% vs TC avg) — grants above average
Interview Lift: +7.0% (moderate), based on resolved cases with interview
Avg Prosecution: 3y 2m (typical timeline; 12 currently pending)
Total Applications: 45 (career history, across all art units)

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 26.0% (-14.0% vs TC avg)
§112: 19.8% (-20.2% vs TC avg)

Based on career data from 33 resolved cases; Tech Center averages are estimates.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 18 September 2025 with regard to the rejection of Claim 1 have been fully considered, but they are not persuasive. Specifically, applicant points to the light vs. dark boxes cited in the past action from Davis et al (WO Publication 2018175686 A1) in Figure 5, which in past actions was cited in conjunction with specification paragraph 0079 to show that table regions and non-table regions are being segmented. It appears paragraph 0079 was not taken in conjunction with Figure 5 as cited in the past action. To improve the clarity of the Office action and in the interest of advancing prosecution, additional citations from this portion of the text have been included in the rejections below, specifically noting paragraph 0080 as well, where the table region disclosed in paragraph 0079 is further cropped from the image, which would inherently segment a non-table region in the image as well. The light and dark gray boxes in paragraph 0079 and Figure 5 have also been cited more clearly (they are directly mentioned in paragraph 0079, whereas in the figure they may appear as dashed lines vs. bolder lines rather than light vs. dark boxes).

Finally, regarding the newly introduced limitation in Claim 1 of “wherein the set of text portions located within the table region in the image-based document are associated with different formats,” additional material from Davis has been introduced to teach this limitation. In light of the definition made in application specification paragraph 0036 — “The techniques and systems described in this disclosure can be used to extract various different types and formats of tables from image-based documents. For example, the tables can have various different numbers of rows and columns, different sizes of rows and columns, tables with different styles (e.g., with borders, without borders, without demarcation of rows and columns), and with different formats (e.g., ambiguous structures with different alignment of the table cells).” — it appears Davis also teaches these different formats, as shown in Figure 5, where the numbers within the dark gray boxes are of different digit widths and the columns of those numbers also have different alignments due to the different digit widths (see in Figure 5 the first column of the table area, which shows a 6-digit decimal column alongside columns with only 2 or 3 digits). There is also a column which includes numbers and letters for direction instead of just numbers for degrees or feet, which would also for other reasons read as different formats; especially when considering the broadest possible scope of the claim language, the classical programming sense of the printf or scanf functions in the C standard library, where numbers and letters are scanned or printed as separate formats or data types, would be a definition of different formats an examiner must consider as the broadest possible definition of the claim language, because these functions are so ubiquitous.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6-11, 17, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Davis et al (WO Publication 2018175686 A1).
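The printf/scanf notion of “different formats” invoked in the response above can be illustrated with a short sketch (illustrative only, not part of the prosecution record; the classifier and its category names are hypothetical):

```python
import re

def cell_format(text):
    """Coarsely classify a table-cell string, in the scanf-like sense:
    pure integers, decimals, and digit/letter mixes (e.g. a bearing
    such as 'N45E') each read as a distinct format."""
    if re.fullmatch(r"-?\d+", text):
        return "integer"
    if re.fullmatch(r"-?\d+\.\d+", text):
        return "decimal"
    if re.search(r"\d", text) and re.search(r"[A-Za-z]", text):
        return "alphanumeric"
    return "text"

# A 6-digit decimal, a 2-digit number, and a direction cell, as in
# the Figure 5 discussion, map to three different formats.
print({c: cell_format(c) for c in ["123.456", "87", "N45E"]})
```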
Regarding Claim 1, Davis discloses A computer-implemented method performed by a computer system (Reference “Computer-Implemented” and “data extraction method”, see Specification paragraph 0047 where the data extraction method is computer implemented), the method comprising: extracting a plurality of text portions (Reference “OCR”, see Specification paragraph 0047 where OCR is performed on the whole page; also note Figure 5 showing a plurality of text portions on an example page being processed) from an image-based document, the image-based document including one or more table regions and one or more non-table regions (Reference “light gray boxes for text items” and “dark gray boxes for table candidates”, see Figure 5 showing an example image document and Specification paragraph 0079 describing said figure, where table candidate regions are marked darker than non-table candidate regions such as text blocks), each text portion corresponding to a portion of text content in the image-based document, the image-based document comprising a plurality of pixels (Reference “pixels”, see Specification paragraph 0066 describing approximate pixel counts of a typical document layout); after extracting the plurality of text portions from the image-based document, detecting a table region within the image-based document (Reference “OCR” and “detect any table”, see Specification paragraph 0047 for a high-level description where text extraction or OCR is performed on the entire document followed by table detection, and the text portions of the table are extracted to generate an output file); identifying, from the plurality of text portions extracted from the image-based document (Reference “light gray boxes”, see Specification paragraph 0079, which describes Figure 5 and specifically notes the light gray boxes are for all text items identified in Figure 5), a set of text portions located within the table region in the image-based document (Reference “dark gray boxes”, see Specification paragraph 0079, which describes the dark gray boxes as table candidates and further describes the box that encloses the tabular data of these dark gray boxes as the table region; see also Specification paragraph 0080, where a table region is cropped from the rest of the image and the text portions which belong to it are identified), wherein the set of text portions located within the table region in the image-based document are associated with different formats (Examiner’s Note from application specification paragraph 0036: “The techniques and systems described in this disclosure can be used to extract various different types and formats of tables from image-based documents. For example, the tables can have various different numbers of rows and columns, different sizes of rows and columns, tables with different styles (e.g., with borders, without borders, without demarcation of rows and columns), and with different formats (e.g., ambiguous structures with different alignment of the table cells)”. Returning to Davis, see Figure 5: the first column of the table area shows a 6-digit decimal column alongside columns with only 2 or 3 digits. There is also a column which includes numbers and letters for direction instead of just numbers for degrees or feet, which would also for other reasons read as different formats; especially when considering the broadest possible scope of the claim language, the classical programming sense of the printf or scanf functions in the C standard library, where numbers and letters are scanned or printed as separate formats or data types, would be a definition of different formats an examiner must consider as the broadest possible definition of the claim language), wherein a bounding box associated with the identified set of text portions are within boundaries of the table region (Reference “text bounding rectangle”, see Specification paragraph 0070 where the bounding rectangles are associated with portions of text; further note Specification paragraph 0080, where these bounding rectangles are associated with the boundaries of a table region); assigning a row index and a column index to each text portion in the set of one or more text portions located within the table region (Reference “row and column”, see Specification paragraph 0081 where the text portions are stored with labels identifying row and column, such as A to Z or integer values beginning at 0, which both read as an index); and generating a machine-readable representation table based upon the set of one or more text portions and the row index and the column index assigned to each of the text portions in the set of one or more text portions (Reference “Table object”, see Specification paragraph 0086 where a table object is generated; an object is a machine-readable representation, and it is generated from the text portions identified within the table and output to CSV, for example), wherein each text portion in the set of one or more text portions corresponds to a cell of the generated machine-readable representation of the table (Reference “cell”, see Specification paragraph 0081 where the bounding boxes used to generate the table were specifically cell bounding boxes and their values were used to generate this table object).

Regarding Claim 2, Davis discloses The method of claim 1, wherein extracting the plurality of text portions comprises using an optical character recognition (OCR) technique to extract the plurality of text portions from the image-based document (Reference “OCR”, see Specification paragraph 0047 where OCR is performed on the whole page; also note Figure 5 showing a plurality of text portions on an example page being processed).

Regarding Claim 3, Davis discloses The method of claim 1, wherein, for a text portion from the set of one or more text portions, the row index and the column index assigned to the text portion indicates a position of the text portion within the table (Reference “row and column”, see Specification paragraph 0080 describing the organization of the rows and columns, where for example the column numbers increase to the right and therefore indicate both horizontal and vertical position of the portion within the table).

Regarding Claim 4, Davis discloses The method of claim 1, further comprising deriving a number of rows and a number of columns for the table based upon the set of one or more text portions and positions of the text portions within the table region in the image-based document (Reference “row and column”, see Specification paragraph 0080 describing the organization of the rows and columns, where for example the column numbers increase to the right and therefore indicate both horizontal and vertical position of the portion within the table; also note the increasing numbers would be a count of the number of columns, for example) to derive an alignment for cells in the table (Reference “alignment”, see Specification paragraph 0071 where the vertical alignment derived should agree with the values between rows; also see Specification paragraph 0074 where an alignment for rows is found).
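The row/column derivation mapped to Davis in Claims 4 and 6 — counting rows and columns from the positions and alignment of text bounding boxes — can be sketched as follows (a minimal illustration under assumed pixel coordinates; `group_positions` and its tolerance are hypothetical, not taken from Davis or the application):

```python
def group_positions(coords, tol=5.0):
    """Group 1-D coordinates (e.g. the top edges of text bounding
    boxes) into clusters whose neighbors lie within `tol` pixels.
    The cluster count is the inferred number of rows (or, applied
    to left edges, columns)."""
    groups = []
    for c in sorted(coords):
        if groups and c - groups[-1][-1] <= tol:
            groups[-1].append(c)   # same visual line as previous box
        else:
            groups.append([c])     # start a new row/column cluster
    return groups

# Six text boxes whose y-coordinates fall on three visual lines:
rows = group_positions([10, 11, 12, 40, 41, 70])
print(len(rows))  # -> 3
```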
Regarding Claim 6, Davis discloses The method of claim 1, further comprising: computing a number of rows and a number of columns for the table based upon the set of one or more text portions and positions of the text portions within the table region in the image-based document (Reference “row and column”, see Specification paragraph 0080 describing the organization of the rows and columns, where for example the column numbers increase to the right and therefore indicate both horizontal and vertical position of the portion within the table; also note the increasing numbers would be a count of the number of columns, for example); and aligning the set of one or more text portions resulting in each text portion in the set of one or more text portions being aligned to a row from the number of rows and a column from the number of columns (Reference “align” and “row”, see Specification paragraph 0071 where each row has the same vertical alignment, for example rows B and C; also note the row/column numbers A0, A1, etc.), wherein assigning the row index and the column index to each text portion in the set of one or more text portions comprises assigning the row index and the column index to each text portion in the set of one or more text portions based upon a position of the text portion after the aligning (Note that Specification paragraph 0081, where the specific row and column numbers are assigned, occurs after the alignment process described in paragraphs 0071-0079; once this is performed, the method continues at step 212 where the text portions specific to the detected tables are extracted).
Regarding Claim 7, Davis discloses The method of claim 1, further comprising providing the generated machine-readable representation of the table to a table processing system for subsequent processing (Reference “export”, see Specification paragraph 0096 where the table dictionary can be output to a Pandas dataframe, a data structure used in Python to depict tables; for further reference, see the documentation of Pandas, an open-source library, for components such as “pandas.DataFrame” for definitions and example functionalities of such table processing systems).

Regarding Claim 8, Davis discloses The method of claim 1, wherein the machine-readable representation of the table is an editable table, a spreadsheet, a JSON (JavaScript Object Notation) object, an XML file, or an XML object (Reference “JSON”, see Specification paragraph 0059 where JSON is specifically an output format type; also see Specification paragraph 0096 further describing data structures which can be output).

Regarding Claim 9, Davis discloses The method of claim 1, wherein assigning the row index and the column index to each text portion in the set of one or more text portions comprises: clustering the set of one or more text portions based upon positions of the set of one or more text portions within the table region (Reference “aligning”, see Specification paragraph 0071 describing the cluster or alignment rules for grouping text portions within the table region, where for example the rows are expected to be vertically aligned along their right boundaries to show numeric values, and the values in the same line should also agree in alignment); and assigning the row index and the column index to each text portion in the set of one or more text portions based upon the clustering (Reference “A0, A1…”, see Specification paragraph 0071 and also paragraph 0081, where the indices of both column and row are described based upon these alignment rules, from which the rows and columns are grouped/clustered and identified).

Regarding Claim 10, Davis discloses A table identification and extraction system comprising (Reference “Computing system”, see Specification paragraph 0097 where the computing system implements the table extraction methods described above; see the rejection of Claim 1): a processor (Reference “processor”, see Specification paragraph 0097); and a computer-readable medium including instructions stored thereon that, when executed by the processor, cause the processor to perform processing comprising (Reference “memory” and “instructions”, see Specification paragraph 0097): obtaining an image-based document, the image-based document including one or more table regions and one or more non-table regions (Reference “light gray boxes for text items” and “dark gray boxes for table candidates”, see Figure 5 showing an example image document and Specification paragraph 0079 describing said figure, where table candidate regions are marked darker than non-table candidate regions such as text blocks); processing the image-based document to: extract a series of text portions in the image-based document (Reference “OCR”, see Specification paragraph 0047 where OCR is performed on the whole page; also note Figure 5 showing a plurality of text portions on an example page being processed); and after extracting the text portions in the image-based document, detect a region within the image-based document comprising a table (Reference “OCR” and “detect any table”, see Specification paragraph 0047 for a high-level description where text extraction or OCR is performed on the entire document followed by table detection, and the text portions of the table are extracted to generate an output file); identifying, from the plurality of text portions extracted from the image-based document (Reference “light gray boxes”, see Specification paragraph 0079, which describes Figure 5 and specifically notes the light gray boxes are for all text items identified in Figure 5), a set of text portions located within the table region in the image-based document (Reference “dark gray boxes”, see Specification paragraph 0079, which describes the dark gray boxes as table candidates and further describes the box that encloses the tabular data of these dark gray boxes as the table region; see also Specification paragraph 0080, where a table region is cropped from the rest of the image and the text portions which belong to it are identified), wherein the set of text portions located within the table region in the image-based document are associated with different formats (Examiner’s Note from application specification paragraph 0036: “The techniques and systems described in this disclosure can be used to extract various different types and formats of tables from image-based documents. For example, the tables can have various different numbers of rows and columns, different sizes of rows and columns, tables with different styles (e.g., with borders, without borders, without demarcation of rows and columns), and with different formats (e.g., ambiguous structures with different alignment of the table cells)”. Returning to Davis, see Figure 5: the first column of the table area shows a 6-digit decimal column alongside columns with only 2 or 3 digits. There is also a column which includes numbers and letters for direction instead of just numbers for degrees or feet, which would also for other reasons read as different formats; especially when considering the broadest possible scope of the claim language, the classical programming sense of the printf or scanf functions in the C standard library, where numbers and letters are scanned or printed as separate formats or data types, would be a definition of different formats an examiner must consider as the broadest possible definition of the claim language), wherein a bounding box associated with the identified text portions are within boundaries of the region comprising the table (Reference “text bounding rectangle”, see Specification paragraph 0070 where the bounding rectangles are associated with portions of text; further note Specification paragraph 0080, where these bounding rectangles are associated with the boundaries of a table region); deriving a set of clusters within the region comprising the table, each cluster grouping text portions that are part of a row or a column (Reference “row and column”, see Specification paragraph 0080 describing the organization of the rows and columns, where for example the column numbers increase to the right; also note the alignment rules which create these groups or clusters of rows or columns, as previously noted in Specification paragraph 0071); assigning, for each of the subset of the series of text portions located within the region comprising the table, a row identifier and a column identifier according to a number of rows and columns for the table and the derived set of clusters (Reference “row and column”, see Specification paragraph 0081 where the text portions are stored with labels identifying row and column, such as A to Z or integer values beginning at 0, which both read as an index); generating a machine-readable version of the table that includes each of the subset of the series of text portions arranged according to the assigned row identifiers and column identifiers (Reference “row and column”, see Specification paragraph 0081 where the text portions are stored with labels identifying row and column, such as A to Z or integer values beginning at 0, which both read as an index; note this is described as a table dictionary, which is a computer-implemented object for storing keys and values); and providing the generated table to a table processing system for subsequent processing (Reference “export”, see Specification paragraph 0096 where the table dictionary can be output to a Pandas dataframe, a data structure used in Python to depict tables; for further reference, see the documentation of Pandas, an open-source library, for components such as “pandas.DataFrame” for definitions and example functionalities of such table processing systems).

Regarding Claim 11, Davis discloses The table identification and extraction system of claim 10, wherein the image-based document is received by a client device configured to generate the image-based document via a scanning module of the client device (Reference “client device”, see Specification paragraphs 0048-0050 describing how the client devices are configured to perform data extraction; also note in Specification paragraphs 0055 and 0056 the documents are specifically noted as being scanned images or produced by scanning).

Regarding Claim 17, claim 17 is considered a computer-readable medium (CRM) claim corresponding to claim 1; please see the discussion of claim 1 above. Furthermore, Davis discloses a non-transitory computer-readable medium (machine-readable storage medium, see paragraph 0103) including stored thereon a plurality of instructions (instructions, see paragraph 0103), which when executed by a processor (processor, see paragraph 0103) causes the processor to execute the claimed method steps.
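The export step cited for Claims 7 and 10 — a table dictionary of indexed cells written out as a machine-readable table — can be sketched with the Python standard library alone (an illustration; the `{(row, col): text}` layout is an assumption for the sketch, not Davis's actual data structure):

```python
import csv
import io

def cells_to_csv(cells):
    """Serialize a {(row_index, col_index): text} mapping into CSV,
    one of the machine-readable output formats discussed above.
    Missing cells become empty strings."""
    n_rows = max(r for r, _ in cells) + 1
    n_cols = max(c for _, c in cells) + 1
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for (r, c), text in cells.items():
        grid[r][c] = text
    buf = io.StringIO()
    csv.writer(buf).writerows(grid)
    return buf.getvalue()

table = {(0, 0): "Bearing", (0, 1): "Distance",
         (1, 0): "N45E", (1, 1): "123.45"}
print(cells_to_csv(table))
```

A real pipeline could instead hand the same mapping to a library such as Pandas (e.g. `pandas.DataFrame`) for the subsequent processing the claims recite.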
Regarding Claim 18, Davis discloses The non-transitory computer-readable medium of claim 17, wherein extracting the series of text portions includes performing an optical character recognition (OCR) process to generate the series of text portions into a machine-readable format (Reference “OCR”, see Specification paragraph 0047 where OCR is performed on the whole page; also note Figure 5 showing a plurality of text portions on an example page being processed) and wherein deriving the number of rows and columns for the table includes allocating a row index and a column index to each of the subset of the series of text portions (Reference “row and column”, see Specification paragraph 0081 where the text portions are stored with labels identifying row and column, such as A to Z or integer values beginning at 0, which both read as an index; also note these are created in a dictionary, which requires allocation of key/value pairs for instantiation).

Regarding Claim 21, Davis discloses The method according to claim 1, wherein the different formats comprise different alignments of table cells (Reference “alignment”, see Figure 5 where each of the different columns has a different alignment due to the different text formats described in the rejection of Claim 1; see also Specification paragraph 0077).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 12, 14-16, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Davis et al (WO Publication 2018175686 A1) in view of Wang et al (US Publication 20220076012 A1).

Regarding Claim 5, Davis discloses The method of claim 4, but fails to disclose wherein deriving the number of rows and the number of columns for the table based upon the set of one or more text portions comprises: for at least one text portion in the set of one or more text portions, expanding a bounding box of the text portion until coordinates of the bounding box correspond with coordinates of another bounding box of another text portion from the set of one or more text portions. Instead, Wang discloses wherein deriving the number of rows and the number of columns for the table based upon the set of one or more text portions comprises: for at least one text portion in the set of one or more text portions, expanding a bounding box of the text portion until coordinates of the bounding box correspond with coordinates of another bounding box of another text portion from the set of one or more text portions (Reference “bounding box”, see Specification paragraph 0004, where the bounding box of each cell is expanded until it overlaps with another cell boundary, which reads as the coordinates of another bounding box; also note in Specification paragraph 0020 the extraction of cell boundaries is performed via K-means clustering on cell bounding box coordinates to define row and column locations). Motivation for this modification, to utilize coordinates in a bounding-box expansion system like Wang, is shown (see Specification paragraph 0021) where the cell expansion can merge a cell with neighboring empty rows and columns to create hierarchical cells, as an example.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Davis in view of Wang with a coordinate-based bounding box expansion.

Regarding Claim 12, Davis discloses The table identification and extraction system of claim 10, but fails to disclose wherein the region within the image-based document comprising the table is bounded by a bounding box including two-dimensional coordinates specifying a position of the table in the image-based document. Instead, Wang discloses wherein the region within the image-based document comprising the table is bounded by a bounding box including two-dimensional coordinates specifying a position of the table in the image-based document (Reference “cells” and “image coordinates”, see paragraph 0003 where the cell coordinates on the image, describing their bounding box on that image, are noted; further, these coordinates have vertical and horizontal components for rows and columns and are therefore two-dimensional). Motivation for this modification, to utilize coordinates in a bounding-box expansion system like Wang, is shown (see Specification paragraph 0021) where the cell expansion can merge a cell with neighboring empty rows and columns to create hierarchical cells, as an example.

Regarding Claim 14, Davis discloses The table identification and extraction system of claim 10, but fails to disclose wherein the instructions further cause the processor to perform processing comprising: identifying an alignment of each of the subset of text portions by deriving a coordinate position of each of the subset of text portions, wherein the row and column identifiers are assigned to each cell based on the derived coordinate positions, and wherein the row and column identifiers denote a cell position of the text portion within the generated table. Instead, Wang discloses wherein the instructions further cause the processor to perform processing comprising: identifying an alignment of each of the subset of text portions by deriving a coordinate position of each of the subset of text portions (Reference “cells”, see Specification paragraph 0003 where the coordinates of each cell are noted; also see Specification paragraph 0004 where a number of columns and rows are determined prior to aligning those rows and columns), wherein the row and column identifiers are assigned to each cell based on the derived coordinate positions (Reference “K-means clustering”, see Specification paragraph 0020 where the cell coordinates are used in clustering the cells to identify the rows and columns), and wherein the row and column identifiers denote a cell position of the text portion within the generated table (Reference “GTE”, “generate” and “cell structure”, see Specification paragraphs 0017-0018 where the detection of the cell columns and rows is for generating a cell data structure for the global table extractor, or GTE). Motivation for this modification, to utilize coordinates in a bounding-box expansion system like Wang, is shown (see Specification paragraph 0021) where the cell expansion can merge a cell with neighboring empty rows and columns to create hierarchical cells, as an example.
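The K-means clustering Wang is cited for in Claims 14 and 15 — clustering cell coordinates to recover row and column positions — reduces in one dimension to a few lines (a self-contained sketch for illustration; it does not reproduce Wang's actual implementation):

```python
def kmeans_1d(points, k, iters=20):
    """Minimal 1-D k-means: cluster cell x-coordinates so that each
    centroid marks a column position (or y-coordinates for rows)."""
    pts = sorted(points)
    # Seed centroids spread across the sorted range (k >= 2 assumed).
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pts:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            buckets[i].append(p)
        # Recompute centroids; an empty bucket keeps its old centroid.
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return sorted(centroids)

# Cell x-coordinates drawn from three columns:
print(kmeans_1d([10, 12, 11, 50, 52, 51, 90, 91, 89], k=3))
# -> [11.0, 51.0, 90.0]
```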
Regarding Claim 15, Davis discloses The table identification and extraction system of claim 10, but fails to disclose wherein deriving each of the set of clusters includes identifying a grouping of text portions with at least one coordinate being within a threshold similarity, indicating that the grouping of text portions are part of a single row or column. Instead, Wang discloses wherein deriving each of the set of clusters includes identifying a grouping of text portions with at least one coordinate being within a threshold similarity, indicating that the grouping of text portions are part of a single row or column (Reference “K-means clustering”, see Specification paragraph 0029 where K-means clustering is performed to identify row and column positions; further note that K-means clustering is an algorithm which groups points into clusters by minimizing the distance from the center of each cluster to its points, which reads as grouping them by a threshold similarity). Motivation for this modification, to utilize coordinates in a bounding-box expansion system like Wang, is shown (see Specification paragraph 0021) where the cell expansion can merge a cell with neighboring empty rows and columns to create hierarchical cells, as an example.

Regarding Claim 16, Davis discloses The table identification and extraction system of claim 10, but fails to disclose wherein deriving the number of rows and columns for the table includes expanding bounding boxes for each of the subset of the series of text portions until any of the bounding boxes reach another bounding box. Instead, Wang discloses The table identification and extraction system of claim 10, wherein deriving the number of rows and columns for the table includes expanding bounding boxes for each of the subset of the series of text portions until any of the bounding boxes reach another bounding box (Reference “bounding box”, see Specification paragraph 0004, where the bounding box of each cell is expanded until it overlaps with another cell boundary). Motivation for this modification, to utilize coordinates in a bounding-box expansion system like Wang, is shown (see Specification paragraph 0021) where the cell expansion can merge a cell with neighboring empty rows and columns to create hierarchical cells, as an example.

Claim 20 is rejected for containing similar limitations to the already rejected Claim 16; please see above.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Davis et al (WO Publication 2018175686 A1) in view of Bhuyan (US Publication 20220036063 A1).

Regarding Claim 13, Davis discloses The table identification and extraction system of claim 10, but fails to disclose wherein the instructions further cause the processor to perform processing comprising: comparing text portions in the machine-readable version of the table with a known output for the image-based document to derive an accuracy in extracting text in the table, wherein the derived accuracy is used to train a model for detecting the region within the image-based document comprising the table. Instead, Bhuyan discloses wherein the instructions further cause the processor to perform processing comprising: comparing text portions in the machine-readable version of the table with a known output for the image-based document to derive an accuracy in extracting text in the table, wherein the derived accuracy is used to train a model for detecting the region within the image-based document comprising the table.
(Reference “supervised learning” and “accuracy”, see Specification paragraph 0037 where supervised learning is used to improve accuracy of the model. Further, note supervised learning is the practice of passing known data into a machine learning model for training purposes). The motivation for performing such a practice is to improve the accuracy of the model (see Specification paragraph 0037). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Davis with an accuracy evaluation and to potentially retrain or reparametrize a detection model for better accuracy as taught by Bhuyan. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER JOHN RODGERS whose telephone number is (703)756-1993. The examiner can normally be reached 5:30AM to 2:30PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached on (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALEXANDER JOHN RODGERS/Examiner, Art Unit 2661 /JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661
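For context on the Claim 15 rejection: the clustering step the examiner cites from Wang groups text portions into a row or column when a coordinate falls within a threshold similarity. The sketch below is a deliberately simplified stand-in, not Wang's actual K-means implementation; it uses a greedy running-mean threshold on y-coordinates, and all names and values are hypothetical.

```python
# Hypothetical sketch: group OCR text boxes into rows when their
# y-coordinates fall within a threshold of a cluster's running mean.
# This is a greedy simplification of the K-means step cited from Wang.

def cluster_rows(boxes, threshold=5.0):
    """Group text boxes into rows by y-coordinate similarity."""
    clusters = []  # each cluster: {"ys": [...], "boxes": [...]}
    for box in sorted(boxes, key=lambda b: b["y"]):
        for cluster in clusters:
            center = sum(cluster["ys"]) / len(cluster["ys"])
            if abs(box["y"] - center) <= threshold:
                # Within the threshold: same row as this cluster.
                cluster["ys"].append(box["y"])
                cluster["boxes"].append(box)
                break
        else:
            # No cluster close enough: start a new row.
            clusters.append({"ys": [box["y"]], "boxes": [box]})
    return [c["boxes"] for c in clusters]

boxes = [
    {"text": "Name", "x": 10, "y": 100},
    {"text": "Qty",  "x": 80, "y": 102},  # near y=100: same row
    {"text": "Bolt", "x": 10, "y": 140},
    {"text": "12",   "x": 80, "y": 141},  # near y=140: same row
]
rows = cluster_rows(boxes)  # two rows: [Name, Qty] and [Bolt, 12]
```

Running the same grouping on x-coordinates instead of y-coordinates would yield column clusters, which is how a row/column grid can be recovered without ruling lines.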
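For context on the Claim 16 rejection: the expansion step cited from Wang grows each cell's bounding box until it reaches a neighboring box. A minimal one-dimensional sketch of that idea, with hypothetical names and a hypothetical page width, might look like this:

```python
# Hypothetical sketch: expand each box's right edge until it reaches
# the next box's left edge (or the page edge), in the spirit of the
# bounding-box expansion cited from Wang. 1-D simplification only.

def expand_right(boxes, page_right=200):
    """Expand each (left, right) interval up to its right neighbor."""
    ordered = sorted(boxes, key=lambda b: b[0])  # sort by left edge
    out = []
    for i, (left, right) in enumerate(ordered):
        if i + 1 < len(ordered):
            stop = ordered[i + 1][0]  # stop where the next box begins
        else:
            stop = page_right         # last box grows to the page edge
        out.append((left, max(right, stop)))
    return out

cells = [(0, 20), (50, 70), (120, 150)]
expanded = expand_right(cells)  # boundaries now tile the full row
```

After expansion the intervals partition the row, so the shared edges directly give the column boundaries for the derived table grid.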
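For context on the Claim 13 rejection: the supervised-learning step cited from Bhuyan compares extracted table text against a known output to derive an accuracy figure that can drive retraining. A minimal sketch of deriving such an accuracy, with hypothetical names and data, is:

```python
# Hypothetical sketch: score an extracted table against ground truth,
# as in the supervised accuracy evaluation cited from Bhuyan. The
# derived accuracy could then feed a retraining loop (not shown).

def cell_accuracy(extracted, ground_truth):
    """Fraction of ground-truth cells reproduced exactly.

    Both arguments map (row, col) positions to cell text.
    """
    if not ground_truth:
        return 1.0
    matches = sum(
        1 for pos, text in ground_truth.items()
        if extracted.get(pos) == text
    )
    return matches / len(ground_truth)

truth = {(0, 0): "Name", (0, 1): "Qty", (1, 0): "Bolt", (1, 1): "12"}
got   = {(0, 0): "Name", (0, 1): "Qty", (1, 0): "Bo1t", (1, 1): "12"}
acc = cell_accuracy(got, truth)  # 3 of 4 cells match, so 0.75
```

A low score on labeled documents would then be the signal to retrain or reparametrize the table-region detection model, which is the feedback loop the limitation describes.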

Prosecution Timeline

Jun 08, 2022
Application Filed
Aug 24, 2024
Non-Final Rejection — §102, §103
Nov 05, 2024
Applicant Interview (Telephonic)
Nov 05, 2024
Examiner Interview Summary
Nov 26, 2024
Response Filed
Feb 20, 2025
Final Rejection — §102, §103
Apr 17, 2025
Interview Requested
Apr 28, 2025
Examiner Interview Summary
Apr 28, 2025
Applicant Interview (Telephonic)
May 27, 2025
Request for Continued Examination
May 28, 2025
Response after Non-Final Action
Jun 14, 2025
Non-Final Rejection — §102, §103
Aug 27, 2025
Interview Requested
Sep 05, 2025
Applicant Interview (Telephonic)
Sep 05, 2025
Examiner Interview Summary
Sep 18, 2025
Response Filed
Jan 24, 2026
Final Rejection — §102, §103
Mar 18, 2026
Interview Requested
Mar 26, 2026
Examiner Interview Summary
Mar 26, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548181
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM
2y 5m to grant Granted Feb 10, 2026
Patent 12541961
INFORMATION EXTRACTION METHOD OF OFFSHORE RAFT CULTURE BASED ON MULTI-TEMPORAL OPTICAL REMOTE SENSING IMAGES
2y 5m to grant Granted Feb 03, 2026
Patent 12494058
RELATIONSHIP MODELING AND KEY FEATURE DETECTION BASED ON VIDEO DATA
2y 5m to grant Granted Dec 09, 2025
Patent 12453511
SYSTEMS AND METHODS FOR CONFIRMATION OF INTOXICATION DETERMINATION
2y 5m to grant Granted Oct 28, 2025
Patent 12430771
LIGHT FIELD RECONSTRUCTION METHOD AND APPARATUS OF A DYNAMIC SCENE
2y 5m to grant Granted Sep 30, 2025
Based on the 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
77%
With Interview (+7.0%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
