Prosecution Insights
Last updated: April 19, 2026
Application No. 18/511,111

MERGING MISIDENTIFIED TEXT STRUCTURES IN A DOCUMENT

Status: Final Rejection (§103)
Filed: Nov 16, 2023
Examiner: SMITH, BENJAMIN J
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (260 granted / 408 resolved; +8.7% vs TC avg)
Interview Lift: +55.3% in resolved cases with an interview (strong)
Avg Prosecution: 3y 11m (typical timeline)
Currently Pending: 27 applications
Total Applications: 435 (across all art units)

Statute-Specific Performance

§101: 11.7% allow rate (-28.3% vs TC avg)
§103: 52.9% allow rate (+12.9% vs TC avg)
§102: 9.2% allow rate (-30.8% vs TC avg)
§112: 18.1% allow rate (-21.9% vs TC avg)
TC averages are estimates; based on career data from 408 resolved cases.
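The per-statute deltas above are internally consistent: each is the examiner's allow rate minus a single Tech Center baseline. A quick sketch (plain Python, using only the figures shown above) recovers that baseline:

```python
# Per-statute allow rates and "vs TC avg" deltas from the panel above.
rates = {"101": 11.7, "103": 52.9, "102": 9.2, "112": 18.1}
deltas = {"101": -28.3, "103": 12.9, "102": -30.8, "112": -21.9}

# The implied Tech Center average estimate for each statute:
# allow rate minus delta.
tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}
print(tc_avg)  # every statute resolves to the same 40.0% baseline
```

Every statute backs out to the same 40.0% figure, so the dashboard appears to compare against one TC-wide estimate rather than per-statute averages.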

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Applicant's Response

In Applicant's Response dated 12/22/2025, Applicant amended the claims (and specification) and argued against all objections and rejections set forth in the previous Office Action. All objections and rejections not reproduced below are withdrawn. The prior-art rejections of the claims under 35 U.S.C. 102 and 103 previously set forth are withdrawn. The examiner appreciates the applicant noting where support for the amendments is described in the specification.

The application was filed on 11/16/2023. Claims 1-9 and 11-20 are pending for examination. Claims 1, 11, and 18 are independent.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 11-14, 15-17, 18, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Pito; Richard Anthony, US Pub. No. 2021/0319177 (Pito), in view of Cohen; Scott et al., US Pub. No. 2018/0322339 (Cohen).
Claim 1: Pito teaches: A method comprising:

receiving a document including a plurality of text elements [¶ 0014, 66-67, 77-80] (OCR input; receiving an input body of text containing a plurality of chunks of text; identifying a set of features of each chunk; classifying each text chunk as a potential header depending on whether the chunk includes a mark or title text);

determining, by a machine learning model [¶ 0124, 152] (supervised learning; grouping titles into related sequences using only values of m, thereby avoiding thresholds that might otherwise be needed with a supervised learning approach), a merge classification based on a likelihood of merging a first text element of the plurality of text elements with a second text element of the plurality of text elements based on structure data and context data associated with the first and second text elements [¶ 0078, 101, 114, 153, 159, 168] (titles or headers can then be merged into a hierarchy or structure) [¶ 0111, 116, 174-176] (edges of G′ are thus enhanced with a score indicating the similarity of each pair of headings);

determining whether the likelihood of merging the first text element with the second text element satisfies a threshold [¶ 0021, 23, 150] (comparing an average number of characters in a group of potential headers with similar features to a threshold); and

responsive to determining that the likelihood of merging the first text element with the second text element satisfies the threshold, merging the first text element with the second text element [¶ 0078, 101, 114, 153, 159, 168] (titles or headers can then be merged into a hierarchy or structure) [¶ 0111, 116, 174-176] (edges of G′ are thus enhanced with a score indicating the similarity of each pair of headings) [¶ 0044, 80-85] (overcoming OCR errors, which would mean the elements were "misidentified").

Pito does not appear to explicitly disclose "positive pairs of training data and negative pairs of training data".
However, the disclosure of Cohen teaches: wherein the machine learning model is trained to determine the merge classification using positive pairs of training data and negative pairs of training data [¶ 0065, 90-92, 103] (training data (a) may include positive or negative truth data for the elements within a data set; training data 122(b) for the classifier neural network may be generated, for example, by dividing a bounding box that contains a single paragraph so that it is missing some lines or part of the paragraph; training data 122(b) is also generated by combining two paragraphs into one bounding box).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of header extraction in Pito with the method of page segmentation in Cohen, with a reasonable expectation of success. The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way (see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(D)). The known technique of positive and negative training data in Cohen could be applied to the header extraction in Pito. Cohen and Pito are similar because each identifies headings. One of ordinary skill in the art would have recognized that applying the known technique would improve the similar devices, resulting in an improved system, with a reasonable expectation of success, to "improve the accuracy of the page segmentation" [Cohen: ¶ 0023, 46].
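Claim 1's core loop (score a pair of text elements with a model, compare the likelihood to a threshold, merge on success) can be sketched as follows. Everything here is hypothetical and illustrative only: `TextElement`, the hand-rolled `merge_likelihood` heuristic standing in for the trained model, and the 0.6 threshold are my own stand-ins, not the applicant's or any cited reference's implementation.

```python
from dataclasses import dataclass

@dataclass
class TextElement:
    text: str
    font_size: float   # "structure data" in the claim's terms
    y_pos: float       # vertical position on the page

def merge_likelihood(a: TextElement, b: TextElement) -> float:
    """Stand-in for the claimed ML model: likelihood that two elements
    were misidentified as separate and should be merged."""
    same_font = 1.0 if a.font_size == b.font_size else 0.0
    proximity = max(0.0, 1.0 - abs(a.y_pos - b.y_pos) / 50.0)
    return 0.5 * same_font + 0.5 * proximity

def maybe_merge(a: TextElement, b: TextElement, threshold: float = 0.6):
    # Merge only when the likelihood satisfies the threshold.
    if merge_likelihood(a, b) >= threshold:
        return TextElement(a.text + " " + b.text, a.font_size, a.y_pos)
    return None

# A heading split across two lines by OCR: same font, vertically adjacent.
h1 = TextElement("MERGING MISIDENTIFIED", 14.0, 100.0)
h2 = TextElement("TEXT STRUCTURES", 14.0, 112.0)
merged = maybe_merge(h1, h2)  # likelihood 0.88 >= 0.6, so merged
```

The structure of the sketch tracks the claim's four steps; the real dispute in the rejection is about how the model is trained (positive/negative pairs), not about this control flow.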
Claim 2: Pito teaches: The method of claim 1, wherein the plurality of text elements include a plurality of headings, and wherein the first text element is a candidate heading and the second text element is a target heading [¶ 0078, 101, 114, 153, 159, 168] (titles or headers can then be merged into a hierarchy or structure) [¶ 0111, 116, 174-176] (edges of G′ are thus enhanced with a score indicating the similarity of each pair of headings).

Claim 3: Pito teaches: The method of claim 2, wherein the structure data associated with the first and second text elements includes a font of the candidate heading and a font of the target heading [¶ 0013, 31, 73, 79-80, 87, 116, 170] (similarity between two titles will depend on things like their font style, indentation, format, and marks) [¶ 0007] (prior art that teaches identifying the logical parts of scientific documents using rules based on font characteristics).

Claim 4: Pito teaches: The method of claim 2, wherein the structure data associated with the first and second text elements includes a distance between the candidate heading and the target heading [¶ 0145-149] (Levenshtein distance between the text of two headings).
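The distance feature Pito is cited for in claim 4 (¶ 0145-149) is a Levenshtein distance between heading texts. For reference, the standard dynamic-programming formulation of that distance is:

```python
def levenshtein(s: str, t: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn s into t."""
    prev = list(range(len(t) + 1))  # distances from "" to prefixes of t
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(
                prev[j] + 1,               # delete cs
                curr[j - 1] + 1,           # insert ct
                prev[j - 1] + (cs != ct),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

# Two heading fragments: "1. Intro" is a prefix of "1. Introduction",
# so the distance is the 7 trailing characters of "duction".
print(levenshtein("1. Introduction", "1. Intro"))  # 7
```

Note the contrast with claim 5 below, where "distance" is geometric (between bounding-box centers) rather than textual.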
Claim 6: Pito teaches: The method of claim 2, wherein the context data associated with the first and second text elements includes: a candidate heading embedding generated by a language machine learning model based on the candidate heading; and a target heading embedding generated by the language machine learning model based on the target heading [¶ 0111, 116, 174-175] (a score indicating the similarity of each pair of headings; a score could be considered an "embedding") [¶ 0110-115, 134-143] (maximum weight perfect matching problem, which can be cast as a linear sum assignment problem (LSAP); considering the weight on each edge (X, Y) as the value of m(X, Y), the problem is to find a matching that maximizes the cumulative sum of the weights of the edges in the matching) [¶ 0124, 152] (supervised learning; grouping titles into related sequences using only values of m, thereby avoiding thresholds that might otherwise be needed with a supervised learning approach).

Claim 21: The combination of Pito and Cohen discloses the limitations recited in the parent claim(s) for the reasons discussed above.
In addition, the present claim would be further obvious, using the same reason, rationale, and/or motivation as above, over the disclosure of Cohen, which teaches: The method of claim 1, wherein a positive pair of the positive pairs of training data comprises a first portion of a first paragraph and a second portion of the first paragraph, or a first portion of a first heading and a second portion of the first heading, and wherein a negative pair of the negative pairs of training data comprises the first portion or the second portion of the first paragraph and a portion of a second paragraph, or the first portion or the second portion of the first heading and a portion of a second heading [¶ 0065, 90-92, 103] (training data (a) may include positive or negative truth data for the elements within a data set; training data 122(b) for the classifier neural network may be generated, for example, by dividing a bounding box that contains a single paragraph so that it is missing some lines or part of the paragraph; training data 122(b) is also generated by combining two paragraphs into one bounding box).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Pito; Richard Anthony, US Pub. No. 2021/0319177 (Pito), in view of Cohen; Scott et al., US Pub. No. 2018/0322339 (Cohen), further in view of Minagawa; Akihiro et al., US Pub. No. 2009/0112797 (Minagawa).

Claim 5: Pito and Cohen teach all the elements of the claims as shown above. Pito and Cohen do not appear to explicitly disclose "distance between a center of the candidate bounding box".
However, the disclosure of Minagawa teaches: The method of claim 4, further comprising: receiving a candidate bounding box including the candidate heading [¶ 0055, 173, 185-189] (rectangle circumscribing a heading candidate; a rectangle could be a "box"); receiving a target bounding box including the target heading [¶ 0055, 173, 185-189] (rectangle circumscribing a heading candidate; a rectangle could be a "box"); and determining a distance between a center of the candidate bounding box and a center of the target bounding box to obtain the distance between the candidate heading and the target heading [¶ 0173, 212-213, 223] (distance between the centers of the heading word candidate and the data word candidate as shown in FIG. 29; the evaluation shown in FIG. 34).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of header extraction in Pito, the method of page segmentation in Cohen, and the method of structure analysis in Minagawa, with a reasonable expectation of success. The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way (see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(D)). The known technique of centered heading identification in Minagawa could be applied to the header extraction in Pito and the positive and negative training data in Cohen. Minagawa, Cohen, and Pito are similar because each identifies headings. One of ordinary skill in the art would have recognized that applying the known technique would improve the similar devices, resulting in an improved system, with a reasonable expectation of success, because "higher accuracy can be achieved in the logical structure analysis of a form" [Minagawa: ¶ 0009, 190, 231].

Claims 7-9, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pito; Richard Anthony, US Pub. No. 2021/0319177 (Pito), in view of Cohen; Scott et al., US Pub. No. 2018/0322339 (Cohen), further in view of Radakovic; Bogdan et al., US Pub. No. 2011/0222773 (Radakovic).

Claim 7: Pito teaches all the elements of the claims as shown above, including [¶ 0124, 152] (supervised learning; grouping titles into related sequences using only values of m, thereby avoiding thresholds that might otherwise be needed with a supervised learning approach). Pito and Cohen do not appear to explicitly disclose "candidate incomplete paragraph".

However, the disclosure of Radakovic teaches: The method of claim 1, wherein the plurality of text elements include a plurality of paragraphs, and wherein the first text element is a candidate incomplete paragraph and the second text element is a target incomplete paragraph [¶ 0002, 19, 27, 39-41, 45] (determined using training patterns to establish various combinations of feature values that characterize a beginning paragraph line and a continuation paragraph line; a paragraph includes all lines located between two successive beginning paragraph lines) [¶ 0044, 47, 51] (training; the paragraph alignment classification component can employ a machine learning technique such as a neural network, decision tree, or Bayesian framework) [¶ 0019, 37, 49-51] (justification and alignment could be considered "structure").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of header extraction in Pito, the method of page segmentation in Cohen, and the method of paragraph identification in Radakovic, with a reasonable expectation of success. The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way (see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(D)). The known technique of paragraph identification in Radakovic could be applied to the header extraction in Pito and the positive and negative training data in Cohen. Radakovic, Cohen, and Pito are similar because each identifies data in OCR images. One of ordinary skill in the art would have recognized that applying the known technique would improve the similar devices, resulting in an improved system, with a reasonable expectation of success, to "improve the accuracy of the classification process" [Radakovic: ¶ 0044, 47, 51].

Claim 8: Pito teaches: The method of claim 7, wherein the candidate incomplete paragraph and the target incomplete paragraph share one or more structural attributes [¶ 0002, 19, 27, 39-41, 45] (determined using training patterns to establish various combinations of feature values that characterize a beginning paragraph line and a continuation paragraph line; a paragraph includes all lines located between two successive beginning paragraph lines) [¶ 0044, 47, 51] (training; the paragraph alignment classification component can employ a machine learning technique such as a neural network, decision tree, or Bayesian framework) [¶ 0019, 37, 49-51] (justification and alignment could be considered "structure").
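The claim 5 feature mapped to Minagawa above (the distance between the centers of two bounding boxes) reduces to a few lines of geometry. The `(x0, y0, x1, y1)` box representation here is a hypothetical convention for illustration, not from the application or the references:

```python
import math

def center(box):
    """Center point of an axis-aligned box given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def center_distance(box_a, box_b):
    """Euclidean distance between the centers of two bounding boxes,
    i.e. the claimed distance between candidate and target headings."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    return math.hypot(ax - bx, ay - by)

candidate = (0, 0, 10, 4)   # candidate-heading box, center (5, 2)
target = (30, 0, 40, 4)     # target-heading box, center (35, 2)
d = center_distance(candidate, target)  # 30.0
```

Center-to-center distance is insensitive to the boxes' sizes, which is presumably why it pairs naturally with OCR output where box extents are noisy.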
Claim 9: Pito teaches: The method of claim 7, wherein the context data associated with the first and second text elements includes: a candidate incomplete paragraph embedding generated by the machine learning model based on the candidate incomplete paragraph [¶ 0111, 116, 174-175] (a score indicating the similarity of each pair of headings; a score could be considered an "embedding"), and a target incomplete paragraph embedding generated by the machine learning model based on the target incomplete paragraph [¶ 0002, 19, 27, 39-41, 45] (determined using training patterns to establish various combinations of feature values that characterize a beginning paragraph line and a continuation paragraph line; a paragraph includes all lines located between two successive beginning paragraph lines) [¶ 0044, 47, 51] (training; the paragraph alignment classification component can employ a machine learning technique such as a neural network, decision tree, or Bayesian framework) [¶ 0019, 37, 49-51] (justification and alignment could be considered "structure").

ALTERNATE REJECTION: Claims 1, 2, 4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Roy Chowdhury; Sujoy Kumar et al., US Pub. No. 2023/0059946 (Chowdhury), in view of Cohen; Scott et al., US Pub. No. 2018/0322339 (Cohen).
Claim 1: Chowdhury teaches: A method comprising:

receiving a document including a plurality of text elements;

determining, by a machine learning model [¶ 0014, 38-44] (train machine learning model, natural language processing, SoftMax), a merge classification based on a likelihood of merging a first text element of the plurality of text elements with a second text element of the plurality of text elements based on structure data and context data associated with the first and second text elements [¶ 0028-30, 35] (determines a similarity score indicating an amount of similarity between a first heading or subheading);

determining whether the likelihood of merging the first text element with the second text element satisfies a threshold [¶ 0028-30, 35] (determines a similarity score indicating an amount of similarity between a first heading or subheading) [¶ 0028] (Euclidean distance-based similarity score indicating a similarity between (i) the heading or subheading and (ii) the name of the process block); and

responsive to determining that the likelihood of merging the first text element with the second text element satisfies the threshold, merging the first text element with the second text element [¶ 0033, 66-68] (combines the reformatted chunks into a common layout).

Chowdhury does not appear to explicitly disclose "positive pairs of training data and negative pairs of training data". However, the disclosure of Cohen teaches: wherein the machine learning model is trained to determine the merge classification using positive pairs of training data and negative pairs of training data [¶ 0065, 90-92, 103] (training data (a) may include positive or negative truth data for the elements within a data set; training data 122(b) for the classifier neural network may be generated, for example, by dividing a bounding box that contains a single paragraph so that it is missing some lines or part of the paragraph; training data 122(b) is also generated by combining two paragraphs into one bounding box).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of header extraction in Chowdhury with the method of page segmentation in Cohen, with a reasonable expectation of success. The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way (see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(D)). The known technique of positive and negative training data in Cohen could be applied to the header extraction in Chowdhury. Cohen and Chowdhury are similar because each identifies headings. One of ordinary skill in the art would have recognized that applying the known technique would improve the similar devices, resulting in an improved system, with a reasonable expectation of success, to "improve the accuracy of the page segmentation" [Cohen: ¶ 0023, 46].

Claim 2: Chowdhury teaches: The method of claim 1, wherein the plurality of text elements include a plurality of headings, and wherein the first text element is a candidate heading and the second text element is a target heading [¶ 0028-30, 35] (determines a similarity score indicating an amount of similarity between a first heading or subheading).

Claim 4: Chowdhury teaches: The method of claim 2, wherein the structure data associated with the first and second text elements includes a distance between the candidate heading and the target heading [¶ 0028] (Euclidean distance-based similarity score indicating a similarity between (i) the heading or subheading and (ii) the name of the process block).
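Chowdhury's ¶ 0028 describes a Euclidean-distance-based similarity score. One conventional way to turn the distance between two vectors into a bounded similarity, shown here as a generic construction and not necessarily Chowdhury's exact formula, is:

```python
import math

def euclidean_similarity(u, v):
    """Map the Euclidean distance between two equal-length vectors to a
    similarity score in (0, 1]; identical vectors score 1.0."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return 1.0 / (1.0 + dist)

# Hypothetical 2-D feature vectors for a heading and a subheading.
heading_vec = [0.0, 0.0]
subhead_vec = [3.0, 4.0]   # Euclidean distance 5 from heading_vec
score = euclidean_similarity(heading_vec, subhead_vec)  # 1/6
```

This is the sense in which a "distance" can serve as the claimed similarity/likelihood signal: smaller distance, higher score, and the score can then be compared to a merge threshold.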
Claim 6: Chowdhury teaches: The method of claim 2, wherein the context data associated with the first and second text elements includes: a candidate heading embedding generated by a language machine learning model based on the candidate heading [¶ 0028-30, 35] (determines a similarity score indicating an amount of similarity between a first heading or subheading; a similarity score could be considered an "embedding"); and a target heading embedding generated by the language machine learning model based on the target heading [¶ 0014, 38-44] (train machine learning model, natural language processing, SoftMax).

Claims 11-20: Claims 11 and 18 are substantially similar to claim 1 and are rejected using the same art and the same rationale. Claim 1 is a "method" claim, claim 18 is a "system" claim, and claim 11 is a "medium" claim, but the steps or elements of each claim are essentially the same. Claim 11 also recites a "tag" as the mechanism to merge; Pito discloses classifying when merging, which could be read as a "tag" [¶ 0101-103, 710] (classify).

NOTE: Claims 2-6, 12-14, and 19 are directed toward "headings". Claim 12 is substantially similar to claims 6 and 3 or 4 and is rejected using the same art and the same rationale. Claim 13 is substantially similar to claim 3 and is rejected using the same art and the same rationale. Claim 14 is substantially similar to claim 4 and is rejected using the same art and the same rationale.

NOTE: Claims 7-9, 15-17, and 20 are directed toward "paragraphs". Claim 15 is substantially similar to claims 7 and 8 and is rejected using the same art and the same rationale; the claim also recites a "language machine learning model", which Radakovic teaches. Claim 16 is substantially similar to claims 6 and 7 and is rejected using the same art and the same rationale. Claim 17 is substantially similar to claims 7, 8, and 9 and is rejected using the same art and the same rationale.
Claim 19 is substantially similar to claim 2 and is rejected using the same art and the same rationale. Claim 20 is substantially similar to claim 7 and is rejected using the same art and the same rationale.

Response to Arguments

Applicant's arguments with respect to claims 1-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

35 USC 103 Rejection for Claims 6 and 12: The applicant argues that Pito fails to teach an "embedding" (response page 8). The examiner respectfully disagrees. Pito teaches a score indicating the similarity of each pair of headings, and this score could be considered an "embedding"; see [¶ 0111, 116, 174-175].

35 USC 103 Rejection for Claims 9 and 17: The applicant argues that Radakovic fails to teach an "embedding" (response page 8). The examiner respectfully disagrees. Radakovic may not teach this, but Pito teaches a score indicating the similarity of each pair of headings, and this score could be considered an "embedding"; see [¶ 0111, 116, 174-175].

35 USC 103 Rejection for Claims 6 and 12: The applicant argues that Chowdhury fails to teach an "embedding" (response page 10). The examiner respectfully disagrees. Chowdhury teaches a score indicating the similarity of each pair of headings, and this score could be considered an "embedding"; see [¶ 0028-30, 35] (determines a similarity score indicating an amount of similarity between a first heading or subheading) [¶ 0028] (Euclidean distance-based similarity score indicating a similarity between (i) the heading or subheading and (ii) the name of the process block).

Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see PTO-892: Notice of References Cited.
Evidence of the level of skill of an ordinary person in the art for Claim 1:

LIU, GANG, CN 117473034, teaches training an OCR model with positive and negative examples; Docpoint refers to an end-to-end document structure analysis scheme that can perform structure extraction on a document (scanned version, picture version, and so on), including entity identification (an entity refers to any element to be detected, including text, row, column, cell, and so on) and relation classification.

Zhang; Jun et al., US 20150095769, teaches physical position and line logic: performing cluster analysis on all final line units according to a layout physical position relationship and a matching degree of line logical text character strings and logical text character strings in a target logical paragraph, combining final line units clustered into the same category, and performing layout analysis and sequencing thereon to generate a paragraph unit.

Evidence of the level of skill of an ordinary person in the art for Claim 3:

Agarwalla; Lalit et al., US 10049270, teaches that a Visual Similarity Measure may be utilized to compute the closeness score between two content block categories based on color, font size, and font type.

Evidence of the level of skill of an ordinary person in the art for Claim 5:

Rodriguez; Antonio Foncubierta et al., US 20200159820, discloses that a Euclidean distance is computed between the centroids of the bounding boxes to match them with the correct text; headers usually follow the same format (similar font, font size, etc.).

Harrington, Steven J. et al., US 20050028099, teaches a center of visual weight; distance can be the distance between content borders or, alternatively, the distance between content centers.

Evidence of the level of skill of an ordinary person in the art for Claim 7:

Thompson; Stephen M. et al., US 20220318224, teaches paragraph identification [¶ 0300-305]; deep learning applied to paragraphs; and a confidence score associated with the predicted/identified character component(s).

Morariu; Vlad et al., US 20200175095, teaches paragraphs that continue across columns; deep learning.

Prebble; Tim, US 20240111942, teaches identifying a pair of candidate paragraphs spanning columns from among a set of candidate paragraphs, using natural language processing.

Richardson; Joshua et al., US 9098471, teaches that semantic analysis may also be applied to relate syntactic structures of phrases and sentences, so that meaningful paragraphs can be formed.

Goodwin; Robert L. et al., US 9229911, teaches a paragraph of text that continues from a first location to a second location and is separated by an intervening feature such as a page break, column break, image, or other intervening feature; heuristics, training, and learning.

Citations to Prior Art

A reference to specific paragraphs, columns, pages, or figures in a cited prior art reference is not limited to preferred embodiments or any specific examples. It is well settled that a prior art reference, in its entirety, must be considered for all that it expressly teaches and fairly suggests to one having ordinary skill in the art. Stated differently, a prior art disclosure reading on a limitation of Applicant's claim cannot be ignored on the ground that other disclosed embodiments were not cited. Therefore, the Examiner's citation to a specific portion of a single prior art reference is not intended to be exclusive, but rather to demonstrate an exemplary disclosure commensurate with the specific limitations being addressed. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); In re Fritch, 972 F.2d 1260, 1264, 23 USPQ2d 1780, 1782 (Fed. Cir. 1992); Merck & Co. v. Biocraft Labs., Inc., 874 F.2d 804, 807, 10 USPQ2d 1843, 1846 (Fed. Cir. 1989); In re Fracalossi, 681 F.2d 792, 794 n.1, 215 USPQ 569, 570 n.1 (CCPA 1982); In re Lamberti, 545 F.2d 747, 750, 192 USPQ 278, 280 (CCPA 1976); In re Bozek, 416 F.2d 1385, 1390, 163 USPQ 545, 549 (CCPA 1969).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN J SMITH, whose telephone number is (571) 270-3825. The examiner can normally be reached Monday-Friday, 11:00-7:30 EST. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ADAM QUELER, can be reached at (571) 272-4140.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Benjamin Smith/
Primary Examiner, Art Unit 2172
Direct Phone: 571-270-3825
Direct Fax: 571-270-4825
Email: benjamin.smith@uspto.gov

Prosecution Timeline

Nov 16, 2023: Application Filed
Sep 30, 2025: Non-Final Rejection (§103)
Dec 11, 2025: Applicant Interview (Telephonic)
Dec 11, 2025: Examiner Interview Summary
Dec 22, 2025: Response Filed
Mar 23, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602378: Document Processing and Response Generation System (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591351: Unified Document Surface (granted Mar 31, 2026; 2y 5m to grant)
Patent 12566916: Generative Collaborative Publishing System (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566544: Page Sliding Processing Method and Related Apparatus (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566804: Sorting Documents According to Comprehensibility Scores Determined for the Documents (granted Mar 03, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 99% (+55.3%)
Median Time to Grant: 3y 11m
PTA Risk: Moderate
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
