Prosecution Insights
Last updated: April 19, 2026
Application No. 18/944,504

QUERY PROCESSING

Non-Final OA (§103)
Filed: Nov 12, 2024
Examiner: WU, YICUN
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (486 granted / 598 resolved), above average (+26.3% vs TC avg)
Interview Lift: +17.3% among resolved cases with interview (a strong lift)
Typical Timeline: 3y 3m average prosecution; 16 applications currently pending
Career History: 614 total applications across all art units

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 26.3% (-13.7% vs TC avg)
§112: 3.7% (-36.3% vs TC avg)
Tech Center averages are estimates; based on career data from 598 resolved cases.
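One detail worth checking: all four "vs TC avg" deltas above are consistent with a single flat Tech Center average estimate. The 40% figure below is inferred from the table's numbers, not stated by the tool; this sketch simply back-solves each implied average.

```python
# Statute-specific rates and their "vs TC avg" deltas from the table above.
stats = {
    "§101": (11.5, -28.5),
    "§103": (47.5, +7.5),
    "§102": (26.3, -13.7),
    "§112": (3.7, -36.3),
}

# Implied Tech Center average = rate - delta for each statute.
implied = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% TC average
```

Each entry resolves to 40.0%, suggesting the tool compares every statute against one flat estimate rather than per-statute Tech Center data.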

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are presented for examination. Priority filed under CHINA 202410137529.1, 01/31/2024, has been accepted.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 11/12/2024 and 4/3/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-12 and 14-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Krishnapura Subbaraya et al., U.S. 20190303663 (KS hereinafter), in view of Azarmi, U.S. 20250086190.

As to claim 1, KS discloses a method of query processing, comprising: segmenting a target document into a plurality of document segments (fig. 3, item 302), the segmenting being at least based on structural information of the target document ("structure," [0005]) and a semantic analysis result of the target document ("a semantic," [0006]), the structural information at least indicating a hierarchical structure of the target document ("hierarchical," [0006]).

KS does not explicitly teach generating a respective document vectorized representation of each of the plurality of document segments; and using the document vectorized representations to perform data retrieval against the target document.

Azarmi teaches generating a respective document vectorized representation of each of the plurality of document segments ([0035]); and using the document vectorized representations to perform data retrieval against the target document ("vectors (documents) ... closest (in the vector space) to the vector representation of the query," [0035]).

It would have been obvious to a person having ordinary skill in the art at the time the invention was made to have modified KS by the teaching of Azarmi to include generating a respective document vectorized representation of each of the plurality of document segments, and using the document vectorized representations to perform data retrieval against the target document, with the motivation to increase the effectiveness of context data as taught by Azarmi ([0002]).

As to claim 3, KS as modified teaches further comprising: in response to receiving a user query, generating a query vectorized representation corresponding to the user query (KS fig. 3, item 302); determining a plurality of match degrees between the query vectorized representation and respective document vectorized representations of the plurality of document segments (Azarmi, "matching," [0035]); selecting, from the plurality of document segments, at least one first document segment matching the user query based on the determined plurality of match degrees (Azarmi [0035]); and generating a query response to the user query based at least on the at least one first document segment (Azarmi [0035]).

As to claim 4, KS as modified teaches the method of claim 3, wherein generating the query response to the user query based on the at least one document segment comprises: determining, from the plurality of document segments, at least one second document segment having semantic relevance with the at least one first document segment based on the structural information of the target document and a semantic context of the at least one first document segment in the target document (Azarmi, "semantic relevance," [0067]); and generating a query response to the user query based on the at least one first document segment and the at least one second document segment (Azarmi [0067]).

As to claim 5, KS as modified teaches the method of claim 4, wherein determining the at least one second document segment comprises at least one of: for each of the at least one first document segment, in response to determining that a document portion of a predetermined granularity in which the first document segment is located (KS fig. 3) comprises at least one further document segment, determining the at least one further document segment as the at least one second document segment (Azarmi [0067]; KS fig. 3); or for each of the at least one first document segment, determining at least one further document segment of the plurality of document segments as the at least one second document segment based on a semantic relevance between the first document segment and a further document segment of the plurality of document segments (Azarmi [0067]).

As to claim 6, KS as modified teaches the method of claim 3, wherein generating the query response to the user query based at least on the at least one first document segment comprises: generating a prompt input for a target model based at least on the at least one first document segment and the user query (Azarmi [0068]); providing the prompt input to the target model to obtain an output of the target model (Azarmi [0068]); and generating the query response to the user query based on the output of the target model (Azarmi [0068]).

As to claim 7, KS as modified teaches the method of claim 1, wherein the structural information further indicates a data type in the target document ("data type," Azarmi [0131]), and wherein segmenting the target document into a plurality of document segments comprises: in response to detecting that at least partial data of the target document contains data of a first data type and data of a second data type, segmenting the at least partial data into a first document segment and a second document segment, the first document segment comprising the data of the first data type, and the second document segment comprising the data of the second data type (Azarmi [0131]).
As to claim 8, KS as modified teaches the method of claim 1, wherein segmenting the target document into a plurality of document segments comprises: segmenting the target document into a plurality of document segments further based on a dimension of the document vectorized representations to be generated ("dimension," Azarmi [0027]).

As to claim 9, KS as modified teaches the method of claim 1, wherein the hierarchical structure of the target document comprises a document tree structure of the target document, and wherein segmenting the target document into a plurality of document segments comprises: for data corresponding to respective leaf nodes in the document tree structure, segmenting the target document into a plurality of document segments based at least on a semantic analysis result of the data corresponding to the respective leaf nodes (KS fig. 3).

As to claim 10, KS as modified teaches the method of claim 1, further comprising: generating enhancement data for at least a portion of the target document, wherein the at least a portion of the target document comprises at least one document segment of the plurality of document segments, the enhancement data comprising at least one of: a reference question and answer pair constructed based on the at least a portion of the target document ("relevant answers," Azarmi [0125]), or summary information extracted from the at least a portion of the target document ("summarize," Azarmi [0126]); generating an enhancement vectorized representation of the enhancement data (Azarmi [0126]); and storing the enhancement vectorized representation in association with at least one respective document vectorized representation of the at least one document segment (Azarmi [0126]).
As to claim 11, KS as modified teaches the method of claim 10, further comprising: in response to one or more of the at least one document segment being determined as matching a user query, determining a query response to the user query based on the one or more document segments and the enhancement data ("matching," Azarmi [0035]).

As to claims 12 and 14-20, the limitations of these claims have been noted in the rejection above. They are therefore rejected as set forth above.

Claims 2 and 13 are rejected under 35 U.S.C. 103(a) as being unpatentable over Krishnapura Subbaraya et al., U.S. 20190303663 (KS hereinafter), in view of Azarmi, U.S. 20250086190, further in view of Zhang et al., US 12050636.

As to claims 2 and 13, the teachings of KS as modified have been discussed above. KS does not teach a length of each document segment. Zhang teaches a length of each document segment ("length of the text," col. 6, lines 30-45). It would have been obvious to a person having ordinary skill in the art at the time the invention was made to have modified KS by the teaching of Zhang to include a length of each document segment, with the motivation to increase the efficiency as taught by Zhang (col. 1, lines 17-32).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yicun Wu, whose telephone number is 571-272-4087. The examiner can normally be reached 8:00 am to 4:30 pm, Monday-Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kavita Stanley, can be reached at (571) 272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist, whose telephone number is 571-272-2100.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YICUN WU/
Yicun Wu
Primary Examiner, Art Unit 2153
Technology Center 2100
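The claim 1 and claim 3 pipeline that the rejection maps onto KS and Azarmi (segment the document, generate a vectorized representation per segment, then rank segments by match degree against a query vector) can be sketched as follows. This is a minimal illustration only: the hashing-based `embed` is a toy stand-in for a real embedding model, and every function name and data structure here is invented for the sketch, not taken from the application or the cited references.

```python
import math
import zlib

def segment_document(sections):
    """Split a document into segments; each (heading, paragraphs) pair
    stands in for the structural/semantic analysis recited in claim 1."""
    return [f"{heading}: {p}" for heading, paragraphs in sections for p in paragraphs]

def embed(text, dim=256):
    """Toy 'vectorized representation': unit-normalized bag-of-words
    via a hashing trick (a real system would use a learned model)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

def match_degree(a, b):
    """Cosine similarity between two unit vectors (the 'match degree' of claims 3-5)."""
    return sum(x * y for x, y in zip(a, b))

def retrieve(query, segments, top_k=2):
    """Rank document segments against the query vector and keep the best."""
    q = embed(query)
    scored = sorted(((match_degree(q, embed(s)), s) for s in segments), reverse=True)
    return scored[:top_k]

segments = segment_document([
    ("Intro", ["the target document is split into segments"]),
    ("Retrieval", ["each segment vector is matched against the query vector"]),
])
top = retrieve("query vector matching", segments, top_k=1)
```

The selected segments would then be assembled into a prompt input for a target model (claim 6); that generation step is omitted here.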

Prosecution Timeline

Nov 12, 2024
Application Filed
Nov 29, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602351
Methods and Systems for Archiving File System Data Stored by a Networked Storage System
2y 5m to grant; granted Apr 14, 2026

Patent 12547643
UNIFIED CONTEXT-AWARE CONTENT ARCHIVE SYSTEM
2y 5m to grant; granted Feb 10, 2026

Patent 12541693
GENERATING AND UPGRADING KNOWLEDGE GRAPH DATA STRUCTURES
2y 5m to grant; granted Feb 03, 2026

Patent 12536239
METHODS AND SYSTEMS FOR REFRESHING CURRENT PAGE INFORMATION
2y 5m to grant; granted Jan 27, 2026

Patent 12511491
SYSTEM AND METHOD FOR MANAGING AND OPTIMIZING LOOKUP SOURCE TEMPLATES IN A NATURAL LANGUAGE UNDERSTANDING (NLU) FRAMEWORK
2y 5m to grant; granted Dec 30, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 99% (+17.3%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 598 resolved cases by this examiner. Grant probability derived from career allow rate.
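The headline projections can be reproduced from the career figures given earlier, assuming (as the footnote suggests) that the grant probability is simply the career allow rate and that the with-interview figure adds the +17.3 point lift directly. Both assumptions are inferences from the page's numbers, not documented methodology.

```python
granted, resolved = 486, 598               # examiner's career record
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.1%}")    # 81.3%, displayed as 81%

interview_lift = 0.173                     # +17.3 point lift with interview
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"With interview: {with_interview:.1%}")   # 98.6%, displayed as 99%
```

The computed 98.6% rounds to the displayed 99%, which supports the additive-lift reading of the dashboard.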
