Prosecution Insights
Last updated: April 19, 2026
Application No. 18/611,661

SYSTEM AND METHOD FOR COMPARING DOCUMENTS USING AN ARTIFICIAL INTELLIGENCE (AI) MODEL

Non-Final OA: §101, §103
Filed: Mar 20, 2024
Examiner: MARLOW, ALEXANDER G
Art Unit: 2658
Tech Center: 2600 (Communications)
Assignee: UST Global (Singapore) Pte. Limited
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 77% (59 granted / 77 resolved; +14.6% vs TC avg; above average)
Interview Lift: +20.8% on resolved cases with interview (strong)
Typical Timeline: 2y 9m average prosecution; 9 applications currently pending
Career History: 86 total applications across all art units
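The headline figures above are internally consistent with the raw counts; a minimal arithmetic sketch in Python, assuming the dashboard rounds rates to whole percentage points (the rounding convention is an assumption):

```python
# Reproducing the examiner dashboard's headline statistics from the raw
# counts shown above (59 granted of 77 resolved cases).
granted, resolved = 59, 77

# Career allowance rate: 59/77 = 76.6..., displayed as 77%.
allow_rate_pct = round(100 * granted / resolved)

# The "+14.6% vs TC avg" delta implies a Tech Center allowance baseline
# of roughly 77 - 14.6 = 62.4%.
implied_tc_baseline = round(allow_rate_pct - 14.6, 1)

print(allow_rate_pct, implied_tc_baseline)  # 77 62.4
```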

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 77 resolved cases.
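As a quick consistency check, subtracting each row's signed delta from the examiner's rate recovers the Tech Center average estimate; every statute row implies the same 40.0% baseline (derived arithmetic, not a figure stated by the source):

```python
# Each row pairs the examiner's statute-specific allowance rate with its
# signed delta versus the Tech Center average; rate - delta recovers
# the TC average the deltas were computed against.
rows = {
    "§101": (16.0, -24.0),
    "§103": (50.3, +10.3),
    "§102": (15.0, -25.0),
    "§112": (9.8, -30.2),
}

tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in rows.items()}
print(tc_avg)  # all four statutes imply the same 40.0% TC average
```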

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Introduction

This office action is in response to communications filed 03/20/2024. Claims 1-16 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-11 and 13-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent Claims 1 and 9 recite the limitations “preprocessing a plurality of input documents into a structured, standardized format for comparison;”, “generating a layout indicative of an arrangement of one or more components in each of the plurality of input documents using the AI model, based on parsing the one or more components;”, “generating a canonical representation based on the layout, wherein the canonical representation indicates relationships between the one or more components of each of the plurality of input documents;”, “determining a plurality of sections within each of the plurality of input documents by segmenting the layout based on the canonical representation, wherein the plurality of sections indicates logical units in the corresponding input document of the plurality of input documents;”, “and determining a textual difference matrix using the AI model, based on a comparison of the plurality of sections within each of the plurality of input documents, wherein the textual difference matrix indicates an alteration in one or more expressions in each of the plurality of sections.” Claim 1 also recites “A method for comparing documents using an artificial intelligence (AI) model, the method comprising:”. Claim 9 also recites “A system for comparing documents using an artificial intelligence (AI) model, the system comprising: a memory; at least one processor in communication with the memory, the at least one processor is configured to:”.

These limitations, as drafted, cover a mental process, as each step could be performed mentally or by hand with pen and paper. This judicial exception is not integrated into a practical application. Claim 1 recites “A method for comparing documents using an artificial intelligence (AI) model, the method comprising:”. Claim 9 recites “A system for comparing documents using an artificial intelligence (AI) model, the system comprising: a memory; at least one processor in communication with the memory, the at least one processor is configured to:”.
These limitations are directed towards using a computer for the method, and do not impose any meaningful limits on practicing the abstract idea. Claims 1 and 9 do not contain any additional limitations. The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The addition of the generic computer components recited above with regard to Claims 1 and 9 does not amount to more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims, as drafted, are not patent eligible.

Dependent Claims 2 and 10 recite the additional limitations “highlighting the alteration in one or more expressions;”, “and displaying the highlighted alteration in each of the plurality of input documents such that the highlighted alteration indicates differences between the plurality of input documents”. These limitations cover mental processes, as they could be done mentally or by hand with pen and paper. These judicial exceptions are not integrated into a practical application. Claim 10 recites “the at least one processor configured to”. These limitations are directed towards using a computer for the method, and do not impose any meaningful limits on practicing the abstract idea. Claims 2 and 10 do not contain any additional limitations. The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The addition of the generic computer components recited above with regard to Claim 10 does not amount to more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
The claims, as drafted, are not patent eligible.

Dependent Claims 3 and 11 recite the additional limitations “wherein generating the layout comprises: predicting a positional coordinate corresponding to each of the one or more components for each of the plurality of input documents;”, “parsing each of the one or more components in each of the plurality of input documents based on the positional coordinate to each of the one or more components;”, “and generating the layout for each of the plurality of input documents based on the parsing.”. These limitations cover mental processes, as they could be done mentally or by hand with pen and paper. These judicial exceptions are not integrated into a practical application. Claim 11 recites “the at least one processor configured to”. These limitations are directed towards using a computer for the method, and do not impose any meaningful limits on practicing the abstract idea. Claims 3 and 11 do not contain any additional limitations. The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The addition of the generic computer components recited above with regard to Claim 11 does not amount to more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims, as drafted, are not patent eligible.
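Read in order, the five limitations recited in independent Claims 1 and 9 describe a processing pipeline. The following is a minimal structural sketch in Python; every name, type, and data shape below is an illustrative assumption, since neither the claims nor the cited art specifies an implementation:

```python
# Hypothetical skeleton of the claimed comparison pipeline (steps 1-5).
# All names and data shapes here are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str    # e.g. "heading" or "paragraph"
    text: str

@dataclass
class Document:
    components: list = field(default_factory=list)

def preprocess(raw_docs):
    # Step 1: normalize inputs into a structured, standardized format.
    return [Document([Component("paragraph", line) for line in raw.splitlines()])
            for raw in raw_docs]

def generate_layout(doc):
    # Step 2: an arrangement of components; here, simple reading order.
    return list(enumerate(doc.components))

def canonical_representation(layout):
    # Step 3: relationships between components (position -> component kind).
    return {pos: comp.kind for pos, comp in layout}

def determine_sections(layout, canonical):
    # Step 4: segment the layout into logical units at heading components.
    sections, current = [], []
    for pos, comp in layout:
        if canonical[pos] == "heading" and current:
            sections.append(current)
            current = []
        current.append(comp)
    if current:
        sections.append(current)
    return sections

def textual_difference_matrix(sections_a, sections_b):
    # Step 5: compare sections pairwise; True marks an alteration.
    return [[" ".join(c.text for c in sa) != " ".join(c.text for c in sb)
             for sb in sections_b] for sa in sections_a]
```

With two one-section documents the matrix is a single cell: True when the section text differs between the documents, False when it matches.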
Dependent Claims 5 and 13 recite the additional limitations “wherein determining the plurality of sections within each of the plurality of input documents comprises: identifying a heading component among the one or more components based on an iteration for each of the one or more components;”, “determining a section-start value upon identifying the heading component;”, “and determining the plurality of sections within each of the plurality of input documents based on the section-start value to be one of true or false.”. These limitations cover mental processes, as they could be done mentally or by hand with pen and paper. These judicial exceptions are not integrated into a practical application. Claim 13 recites “the at least one processor configured to”. These limitations are directed towards using a computer for the method, and do not impose any meaningful limits on practicing the abstract idea. Claims 5 and 13 do not contain any additional limitations. The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The addition of the generic computer components recited above with regard to Claim 13 does not amount to more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims, as drafted, are not patent eligible.

Dependent Claims 6 and 14 recite the additional limitation “wherein when the section-start value is true indicates that a section is in progress, and when the section-start value is false indicates start of another section.”. These limitations cover mental processes, as they could be done mentally or by hand with pen and paper. These judicial exceptions are not integrated into a practical application, as Claims 6 and 14 do not contain any additional limitations.
The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as Claims 6 and 14 do not contain any additional limitations. The claims, as drafted, are not patent eligible.

Dependent Claims 7 and 15 recite the additional limitations “wherein determining the textual difference matrix comprises: obtaining a sequence of text from each of the plurality of sections within each of the plurality of input documents;”, “comparing the sequence of text to identify the alteration between the sequence of text between each of the corresponding plurality of sections, wherein the alteration indicates at least one of an insertion and a deletion in one or more expressions in the sequence of text;”, “creating a change log indicative of a list of alterations between the sequence of text on each page and on each of the plurality of sections within each of the plurality of input documents;”, “determining a set of coordinates associated with the plurality of sections within each of the plurality of input documents based on the change log and comparison of the sequence of text;”, “and determining the textual difference matrix based on the set of coordinates, such that the textual difference matrix provides location of alteration in each of the plurality of sections.”. These limitations cover mental processes, as they could be done mentally or by hand with pen and paper. These judicial exceptions are not integrated into a practical application. Claim 15 recites “the at least one processor configured to”. These limitations are directed towards using a computer for the method, and do not impose any meaningful limits on practicing the abstract idea. Claims 7 and 15 do not contain any additional limitations. The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The addition of the generic computer components recited above with regard to Claim 15 does not amount to more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Claims 7 and 15 do not contain any additional limitations. The claims, as drafted, are not patent eligible.

Dependent Claims 8 and 16 recite the additional limitation “generating an identification boundary based on the set of coordinates surrounding the identified altered sequence of text in each of the plurality of sections within each of the plurality of input documents.”. These limitations cover mental processes, as they could be done mentally or by hand with pen and paper. These judicial exceptions are not integrated into a practical application. Claim 16 recites “the at least one processor configured to”. These limitations are directed towards using a computer for the method, and do not impose any meaningful limits on practicing the abstract idea. Claims 8 and 16 do not contain any additional limitations. The Claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The addition of the generic computer components recited above with regard to Claim 16 does not amount to more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims, as drafted, are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-11 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Abraham et al. (US 20220414336 A1), and further in view of Peng et al. (US 20220108556 A1).

Regarding Claim 1: Abraham teaches a method for comparing documents using an artificial intelligence (AI) model, the method comprising:……generating a layout indicative of an arrangement of one or more components in each of the plurality of input documents using the AI model, based on parsing the one or more components(Para [0033], Ln 1-9, A content parser 120 receives the documents 110 and 115 and parses each document into distinct instances of types of content, such text, images, and tables, as well as different types of text. The content parser 120 may utilize a machine learning model. Para [0034], Ln 1-11, One or more models may be used to identify the instances of content and may also be used to identify a semantic category 125 to identify the different types or semantic categories of content, such as a table model, an image model, and a text model, or via one or more models trained on various combinations of the types of models. Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc.
The output of the content parser 120 provides instances of content for each document within bounding boxes.); generating a canonical representation based on the layout, wherein the canonical representation indicates relationships between the one or more components of each of the plurality of input documents(Para [0035], Ln 1-7, The instances of content within the bounding boxes are classified into semantic categories 125 by the content parser 120. The semantic categories of the instances of content are used to select one of multiple category matching algorithms. Para [0034], Ln 1-11, Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. The output of the content parser 120 provides instances of content for each document within bounding boxes); determining a plurality of sections within each of the plurality of input documents by segmenting the layout based on the canonical representation, wherein the plurality of sections indicates logical units in the corresponding input document of the plurality of input documents(Para [0035], Ln 1-7, The instances of content within the bounding boxes are classified into semantic categories 125 by the content parser 120. The semantic categories of the instances of content are used to select one of multiple category matching algorithms. Para [0034], Ln 1-11, Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. 
The output of the content parser 120 provides instances of content for each document within bounding boxes); and determining a textual difference matrix using the AI model, based on a comparison of the plurality of sections within each of the plurality of input documents, wherein the textual difference matrix indicates an alteration in one or more expressions in each of the plurality of sections(Para [0037], Ln 1-13, The degree of similarity or difference between content instances can be determined using embeddings or vector representations produced by machine learning or deep learning models. These embeddings may come from, for instance, the output of the second-to-last layer of the content parser model that identifies the content instances themselves. The embeddings can numerically represent properties both of the instances and of their surrounding context…..determine that two instances with similar embeddings are similar enough to be considered the same. Para [0039], Ln 1-8, Once category matching has occurred and a diff has been computed at the semantic level, for each occurrence of a modification from one instance to another, the matching algorithms generate a characterization 150 of the difference between the two instances. Para [0074], Ln 1-13, by comparing each instance of content of the specific category in the first document to each instance of content of the specific category in the second document. A similarity score is generated at operation 720 for each pair of respective instances of content. At operation 730, a greedy or bi-partite matching scheme determines the set of matching pairs with the most combined similarity. Para [0052], Ln 1-7, FIG. 3 is a visualization 300 of matching to determine changes between two documents by the semantic difference analyzer 145. The two documents are represented as pages in a first column 310 and a second column 315).
Abraham does not specifically teach preprocessing a plurality of input documents into a structured, standardized format for comparison. In the same field of document comparison, Peng teaches preprocessing a plurality of input documents into a structured, standardized format for comparison (Para [0052], Ln 1-9, the two documents to be compared are both converted into PDF format documents, and then the operations 101 to 103 are performed for the content comparison). It would have been obvious for one skilled in the art, at the effective time of filing, to modify Abraham with the document conversion of Peng, as it allows the system to function independent of the computer operating system, improving user convenience (Para [0052], Ln 1-9).

Regarding Claim 2: The combination of Abraham and Peng teaches the method as claimed in claim 1, and Abraham teaches comprising: highlighting the alteration in one or more expressions; and displaying the highlighted alteration in each of the plurality of input documents such that the highlighted alteration indicates differences between the plurality of input documents (Para [0062], Ln 1-5, FIG. 5 is a representation of a user interface 500 for use in viewing differences between documents according to an example embodiment. Para [0063], Ln 1-5, summary of differences 515 shows several icons that may be color coded to correspond to colors of instances of content in the document. Para [0064], Ln 1-8, icons may also be color coded with corresponding colors highlighting corresponding instances of content in the document).
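Dependent claims 2 and 7 recite detecting and highlighting insertions and deletions between text sequences. As a generic illustration only (not the applicant's method and not Abraham's embedding-based matching), Python's standard difflib performs the same kind of insert/delete comparison and can populate a simple change log; the change-log shape is an assumption:

```python
# Generic insert/delete comparison between two text sequences using the
# Python standard library; the change-log tuple shape is an assumption.
import difflib

old = "The quick brown fox jumps over the lazy dog".split()
new = "The quick red fox leaps over the dog".split()

matcher = difflib.SequenceMatcher(a=old, b=new)
# Keep only the alterations: replacements, insertions, and deletions.
change_log = [(op, old[a0:a1], new[b0:b1])
              for op, a0, a1, b0, b1 in matcher.get_opcodes()
              if op != "equal"]

for entry in change_log:
    print(entry)
```

Replaying the opcodes against the old sequence reconstructs the new one, which is what makes the opcode list a complete record of the alterations.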
Regarding Claim 3: The combination of Abraham and Peng teaches the method as claimed in claim 1, and Abraham teaches wherein generating the layout comprises: predicting a positional coordinate corresponding to each of the one or more components for each of the plurality of input documents(Para [0033], Ln 1-9, A content parser 120 receives the documents 110 and 115 and parses each document into distinct instances of types of content, such text, images, and tables, as well as different types of text. The content parser 120 may utilize a machine learning model that has been trained on documents having such types of content identified by labeled bounding boxes. Parsing creates bounding boxes around different instances of content. A bounding box simply identifies a perimeter around an instance of content. See Fig 4. Para [0041], Ln 1-7, Each of the instances of content is shown at a particular location on the page 200. The location can be identified via the use of an x,y cartesian coordinate system having an origin at one corner of the page); parsing each of the one or more components in each of the plurality of input documents based on the positional coordinate to each of the one or more components(Para [0033], Ln 1-9, A content parser 120 receives the documents 110 and 115 and parses each document into distinct instances of types of content, such text, images, and tables, as well as different types of text. The content parser 120 may utilize a machine learning model that has been trained on documents having such types of content identified by labeled bounding boxes. Parsing creates bounding boxes around different instances of content. 
A bounding box simply identifies a perimeter around an instance of content); and generating the layout for each of the plurality of input documents based on the parsing(Para [0034], Ln 1-11, One or more models may be used to identify the instances of content and may also be used to identify a semantic category 125 to identify the different types or semantic categories of content, such as a table model, an image model, and a text model, or via one or more models trained on various combinations of the types of models. Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. The output of the content parser 120 provides instances of content for each document within bounding boxes. Para [0033], Ln 1-9, A content parser 120 receives the documents 110 and 115 and parses each document into distinct instances of types of content, such text, images, and tables, as well as different types of text. The content parser 120 may utilize a machine learning model that has been trained on documents having such types of content identified by labeled bounding boxes. Parsing creates bounding boxes around different instances of content. A bounding box simply identifies a perimeter around an instance of content).

Regarding Claim 7: The combination of Abraham and Peng teaches the method as claimed in claim 1, and Abraham teaches wherein determining the textual difference matrix comprises: obtaining a sequence of text from each of the plurality of sections within each of the plurality of input documents(Para [0055], Ln 1-8, A number of similarities and semantic differences 145 have been identified by category matching 130, 135, 140, and are represented by lines pointing to where instances of content in the first document appear in the second document.
Categories of the change, such as added, modified, removed, same, and re-ordered are also included); comparing the sequence of text to identify the alteration between the sequence of text between each of the corresponding plurality of sections, wherein the alteration indicates at least one of an insertion and a deletion in one or more expressions in the sequence of text(A number of similarities and semantic differences 145 have been identified by category matching 130, 135, 140, and are represented by lines pointing to where instances of content in the first document appear in the second document. Categories of the change, such as added, modified, removed, same, and re-ordered are also included); creating a change log indicative of a list of alterations between the sequence of text on each page and on each of the plurality of sections within each of the plurality of input documents(Para [0052], Ln 1-5, FIG. 3 is a visualization 300 of matching to determine changes between two documents by the semantic difference analyzer 145.); determining a set of coordinates associated with the plurality of sections within each of the plurality of input documents based on the change log and comparison of the sequence of text(Para [0041], Ln 1-7, Each of the instances of content is shown at a particular location on the page 200. The location can be identified via the use of an x,y cartesian coordinate system having an origin at one corner of the page. Para [0052], Ln 1-7, FIG. 3 is a visualization 300 of matching to determine changes between two documents by the semantic difference analyzer 145. The two documents are represented as pages in a first column 310 and a second column 315.); and determining the textual difference matrix based on the set of coordinates, such that the textual difference matrix provides location of alteration in each of the plurality of sections(Para [0052], Ln 1-7, FIG. 
3 is a visualization 300 of matching to determine changes between two documents by the semantic difference analyzer 145. The two documents are represented as pages in a first column 310 and a second column 315).

Regarding Claim 8: The combination of Abraham and Peng teaches the method as claimed in claim 7, and Abraham teaches comprising: generating an identification boundary based on the set of coordinates surrounding the identified altered sequence of text in each of the plurality of sections within each of the plurality of input documents(Para [0064], Ln 1-8, The summary of differences….These icons may also be color coded with corresponding colors highlighting corresponding instances of content in the document. See Fig 4. Para [0041], Ln 1-7, Each of the instances of content is shown at a particular location on the page 200. The location can be identified via the use of an x,y cartesian coordinate system having an origin at one corner of the page).

Regarding Claim 9: Abraham teaches a system for comparing documents using an artificial intelligence (AI) model, the system comprising: a memory; at least one processor in communication with the memory, the at least one processor is configured to(Para [0016], Ln 1-12, storage device….microprocessor): generate a layout indicative of an arrangement of one or more components in each of the plurality of input documents using the AI model, based on parsing the one or more components(Para [0033], Ln 1-9, A content parser 120 receives the documents 110 and 115 and parses each document into distinct instances of types of content, such text, images, and tables, as well as different types of text. The content parser 120 may utilize a machine learning model.
Para [0034], Ln 1-11, One or more models may be used to identify the instances of content and may also be used to identify a semantic category 125 to identify the different types or semantic categories of content, such as a table model, an image model, and a text model, or via one or more models trained on various combinations of the types of models. Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. The output of the content parser 120 provides instances of content for each document within bounding boxes); generate a canonical representation based on the layout, wherein the canonical representation indicates relationships between the one or more components of each of the plurality of input documents(Para [0035], Ln 1-7, The instances of content within the bounding boxes are classified into semantic categories 125 by the content parser 120. The semantic categories of the instances of content are used to select one of multiple category matching algorithms. Para [0034], Ln 1-11, Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. The output of the content parser 120 provides instances of content for each document within bounding boxes); determine a plurality of sections within each of the plurality of input documents by segmenting the layout based on the canonical representation, wherein the plurality of sections indicate logical units in the corresponding input document of the plurality of input documents(Para [0035], Ln 1-7, The instances of content within the bounding boxes are classified into semantic categories 125 by the content parser 120. The semantic categories of the instances of content are used to select one of multiple category matching algorithms. 
Para [0034], Ln 1-11, Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. The output of the content parser 120 provides instances of content for each document within bounding boxes); and determine a textual difference matrix using the AI model, based on a comparison of the plurality of sections within each of the plurality of input documents, wherein the textual difference matrix indicates an alteration in one or more expressions in each of the plurality of sections(Para [0037], Ln 1-13, The degree of similarity or difference between content instances can be determined using embeddings or vector representations produced by machine learning or deep learning models. These embeddings may come from, for instance, the output of the second-to-last layer of the content parser model that identifies the content instances themselves. The embeddings can numerically represent properties both of the instances and of their surrounding context…..determine that two instances with similar embeddings are similar enough to be considered the same. Para [0039], Ln 1-8, Once category matching has occurred and a diff has been computed at the semantic level, for each occurrence of a modification from one instance to another, the matching algorithms generate a characterization 150 of the difference between the two instances. Para [0074], Ln 1-13, by comparing each instance of content of the specific category in the first document to each instance of content of the specific category in the second document. A similarity score is generated at operation 720 for each pair of respective instances of content. At operation 730, a greedy or bi-partite matching scheme determines the set of matching pairs with the most combined similarity. Para [0052], Ln 1-7, FIG. 3 is a visualization 300 of matching to determine changes between two documents by the semantic difference analyzer 145.
The two documents are represented as pages in a first column 310 and a second column 315).

Abraham does not specifically teach preprocess a plurality of input documents into a structured, standardized format for comparison. In the same field of document comparison, Peng teaches preprocess a plurality of input documents into a structured, standardized format for comparison (Para [0052], Ln 1-9, the two documents to be compared are both converted into PDF format documents, and then the operations 101 to 103 are performed for the content comparison). It would have been obvious for one skilled in the art, at the effective time of filing, to modify Abraham with the document conversion of Peng, as it allows the system to function independent of the computer operating system, improving user convenience (Para [0052], Ln 1-9).

Regarding Claim 10: Claim 10 contains similar limitations as Claim 2 and is therefore rejected for the same reasons.

Regarding Claim 11: Claim 11 contains similar limitations as Claim 3 and is therefore rejected for the same reasons.

Regarding Claim 15: Claim 15 contains similar limitations as Claim 7 and is therefore rejected for the same reasons.

Regarding Claim 16: Claim 16 contains similar limitations as Claim 8 and is therefore rejected for the same reasons.

Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Abraham and Peng as applied to claim 1 above, and further in view of Rajanigandha et al. (US 20220215012 A1).
Regarding Claim 5: The combination of Abraham and Peng teaches the method as claimed in claim 1, and Abraham teaches wherein determining the plurality of sections within each of the plurality of input documents comprises: identifying a heading component among the one or more components based on an iteration for each of the one or more components (Para [0034], Ln 1-11, One or more models may be used to identify the instances of content and may also be used to identify a semantic category 125 to identify the different types or semantic categories of content, such as a table model, an image model, and a text model, or via one or more models trained on various combinations of the types of models. Specific types or semantic categories of text may include section headings, sections, headers, footers, titles, authors, references, captions, etc. The output of the content parser 120 provides instances of content for each document within bounding boxes).

The combination of Abraham and Peng does not teach determining a section-start value upon identifying the heading component; and determining the plurality of sections within each of the plurality of input documents based on the section-start value to be one of true or false. In the same field of document comparison, Rajanigandha teaches determining a section-start value upon identifying the heading component; and determining the plurality of sections within each of the plurality of input documents based on the section-start value to be one of true or false (Para [0032], Ln 1-24, Each template defines one or more sections, each section storing some columns of data (i.e., a single label for an attribute associated with a single value for that attribute, such as the name and address 205) and/or some tables of data … Each section also specifies header or start values/delimiters and footer or end values/delimiters to be used in identifying where one section ends and another begins while traversing the contents of the PDF document. Some sections may be repeated, and if so, the template will specify this property with a Boolean flag so that the extraction process will check for start delimiters of a new section even if that section type has been seen before.). It would have been obvious to one skilled in the art, at the effective time of filing, to modify the combination of Abraham and Peng with the document processing of Rajanigandha, as it can help improve efficiency, removing the need for human labor (Para [0005], Ln 1-5).

Regarding Claim 13: Claim 13 contains similar limitations as Claim 5 and is therefore rejected for the same reasons.

Allowable Subject Matter

Claims 4, 6, 12 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as having the rejections under 35 U.S.C. 101 overcome, in the case of claims 6 and 14. The following is a statement of reasons for the indication of allowable subject matter:

Regarding Claim 4: The combination of Abraham and Peng teaches the method as claimed in claim 3, but does not teach wherein predicting the positional coordinate comprises: training the AI model with an annotated plurality of training documents based on at least one of a precision value, a recall value, and a Mean Average Precision (mAP) value associated with the AI model; determining hyperparameters associated with the annotated plurality of training documents based on the training; learning the positional coordinate corresponding to the one or more components in the plurality of training documents based on the determined hyperparameters; and predicting the positional coordinate corresponding to the one or more components for each of the plurality of input documents based on the learning.
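As background to the claim 5 discussion above, heading-driven segmentation of parsed components (iterate the components in order, raise a section-start signal when a heading is identified, and open a new section there) might look like the sketch below. The component schema and flag handling are illustrative assumptions; in particular, claim 6's specific convention, under which a true section-start value indicates a section in progress and false indicates the start of another section, is exactly what the examiner found absent from the prior art and is not reproduced here.

```python
def segment_sections(components):
    """Split parsed layout components into sections at heading boundaries.

    `components` is a list of dicts such as {"type": "heading", "text": ...}
    in document order; each identified heading raises a section-start
    signal that closes the current section and opens a new one.
    """
    sections, current = [], []
    for comp in components:
        section_start = comp.get("type") == "heading"
        if section_start and current:
            sections.append(current)  # close the section in progress
            current = []
        current.append(comp)
    if current:
        sections.append(current)
    return sections
```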
While Abraham does teach using a machine learning model to predict bounding boxes on the input data (Para [0033]), this does not teach the entirety of the above limitations. For this reason, the prior art of record, alone or in combination, does not teach the claimed limitations.

Regarding Claim 6: The combination of Abraham, Peng and Rajanigandha teaches the method as claimed in claim 5, but does not teach wherein the section-start value being true indicates that a section is in progress, and the section-start value being false indicates the start of another section. The prior art of record, alone or in combination, does not teach the claimed limitations.

Claim 12 contains similar limitations as Claim 4 and therefore also contains allowable subject matter. Claim 14 contains similar limitations as Claim 6 and therefore also contains allowable subject matter.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Bajaj et al. (US 20240202435 A1): determining document similarity using machine learning models.
Religa et al. (US 20220382728 A1): determining differences between revisions using machine learning models.
Roman et al. (US 20190347284 A1): determining document differences using machine learning for semantic similarity comparison.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER G MARLOW, whose telephone number is (571) 272-4536. The examiner can normally be reached Monday through Thursday, 10:00 am to 8:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richmond Dorvil, can be reached at (571) 272-7602.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER G MARLOW/
Assistant Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Mar 20, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548568
INTERFACING WITH APPLICATIONS VIA DYNAMICALLY UPDATING NATURAL LANGUAGE PROCESSING
2y 5m to grant Granted Feb 10, 2026
Patent 12511483
NAME ENTITY RECOGNITION WITH DEEP LEARNING
2y 5m to grant Granted Dec 30, 2025
Patent 12488802
METHOD AND APPARATUS FOR ERROR RECOVERY IN PREDICTIVE CODING IN MULTICHANNEL AUDIO FRAMES
2y 5m to grant Granted Dec 02, 2025
Patent 12482460
METHOD AND SYSTEM OF ENVIRONMENT-SENSITIVE WAKE-ON-VOICE INITIATION USING ULTRASOUND
2y 5m to grant Granted Nov 25, 2025
Patent 12462799
VOICE CONTROL METHOD, SERVER APPARATUS, AND UTTERANCE OBJECT
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
97%
With Interview (+20.8%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
