Prosecution Insights
Last updated: April 19, 2026
Application No. 18/262,508

TEXT CHAIN GENERATION METHOD AND APPARATUS, DEVICE, AND MEDIUM

Final Rejection (§101, §103)
Filed: Jul 21, 2023
Examiner: MASTERS, KRISTEN MICHELLE
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
OA Round: 2 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 62% (25 granted / 40 resolved; +0.5% vs TC avg)
Interview Lift: +24.7% for resolved cases with interview (strong)
Avg Prosecution (typical timeline): 3y 2m; 36 currently pending
Total Applications (career history): 76, across all art units

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 40 resolved cases

Office Action

§101 §103
Detailed Action

This communication is in response to the Arguments and Amendments filed on 12/11/2025. Claims 1-7 and 9-21 are pending and have been examined. Claims 1-7 and 9-21 are rejected. Claim 8 has been cancelled. Claims 1, 9 and 10 are independent and are parallel method, device, and storage medium claims. Any previous objection/rejection not mentioned in this Office Action has been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/12/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Applicant has amended the claims to include “implemented by software and/or hardware in an electronic device, and performed by one or more processors of the electronic device executing a computer program stored in a memory of the electronic device” and “generated by a neural network model, and the phrase chain set is pre-stored in the memory,” and has amended claim 16 to recite the “non-transitory computer storage medium of claim 10”. As a result, the Examiner has withdrawn the previous objections.

Regarding the 35 U.S.C. 101 rejection, Applicant notes that, as amended, claim 1 specifies: the method relies on the unique computing and storage capabilities of electronic devices, including processing large volumes of phrase chains generated by a neural network model and calculating the largest common subsequence efficiently, which cannot be achieved through human cognition, logic, or pen-and-paper operations.
Thus, claim 1 does not fall within the scope of judicial exceptions and satisfies the requirements of Step 2A, Prong One of the subject matter eligibility analysis.

Examiner notes the question under Step 2A, Prong One is whether the claim, read as a whole, is directed to an abstract idea, and high-level cognitive/data manipulations (e.g., generate/align/average embeddings) are within the scope of judicial exceptions unless the claim ties them to particular technical implementations or practical improvements. Naming neural components without structure or constraints does not automatically transform an abstract data processing claim into a patent-eligible technological improvement. Examiner further notes claim 1 recites a sequence of data transformations and computations: processing large volumes of phrase chains generated by a neural network model and calculating the largest common subsequence. These steps are paradigmatic mathematical/data-processing operations (tokenization, encoding, alignment, computing) and therefore fall within the “mathematical concepts” exception recognized by the USPTO and Federal Circuit (see, e.g., Digitech, SAP America, Electric Power Group).

Applicant notes the claimed text chain generation method is integrated into a concrete technical application through three key components: (1) the electronic device as the hardware carrier, (2) the computer program as the executable logic, and (3) phrase chains generated by a neural network model as specialized input data. This integration directly addresses the technical problem of logical defects in phrases generated by neural networks identified in the specification, generating syntactically consistent text chains to enrich phrase corpus resources. Thus, claim 1 satisfies the requirements of Step 2A, Prong Two.
Examiner notes that, on the present claim wording, the limitations are largely functional and outcome-oriented (selecting, updating, connecting) without concrete computational detail or a recitation of how the arrangements materially improve the functioning of the computer system itself (e.g., speed/latency reductions, memory or computational efficiency, novel data representations that reduce error by a measurable metric, or specific unconventional network architectures constrained in a way that produces the improvement).

Applicant notes claim 1 further includes additional technical elements that amount to "significantly more than the judicial exception itself", for example: (1) the phrase chain set generated by a neural network model (a specialized technical input tailored to the problem to be solved); (2) the specific technical means of using the largest common subsequence as a common node to form branches and update the initial phrase chain; and (3) the synergistic combination of the processor executing the program and pre-stored data in the memory to achieve iterative integration of phrase chains. These elements are not conventional computer functions but targeted technical designs to solve the identified technical problem. Thus, claim 1, as amended, satisfies the requirements of Step 2B.

Examiner notes that where claim limitations recite an abstract idea, the additional elements must supply an “inventive concept.” The claim recites known functional components (software, hardware, neural networks) and data transformations. Without claim specificity tying those components to particular unconventional architectures, constrained parameterizations, training-regimen steps, or demonstrable improvements, the recited elements appear to be routine, conventional uses of neural networks and generic software components, and therefore fail to supply an inventive concept (see Alice; under Berkheimer, a factual showing may rebut this with evidence).
The Applicant’s arguments and amendments do not overcome the 35 U.S.C. 101 rejection. Applicant’s arguments with respect to claims 1-7 and 9-21 have been considered but are moot because the new ground of rejection does not rely on the primary reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Hence, new grounds of rejection have been made in view of Kelsey (US Patent Application Publication No. 2018/0260472 A1), in view of McGreevy (US Patent Application Publication No. 2002/0188587 A1).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 and 9-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding independent Claim 1, the claim recites “1.
A text chain generation method, implemented by software and/or hardware in an electronic device, and performed by one or more processors of the electronic device executing a computer program stored in a memory of the electronic device, comprising:

selecting a to-be-matched phrase chain from a phrase chain set to match an initial phrase chain and determining a largest common subsequence between the to-be-matched phrase chain and the initial phrase chain, wherein the phrase chain set comprises a plurality of phrase chains, generated by a neural network model, and the phrase chain set is pre-stored in the memory, each of the plurality of phrase chains refers to a text chain formed by nodes connected in a phrase order, and all words in at least one phrase constitute the nodes;

updating the initial phrase chain by forming a branch of the initial phrase chain by adding a word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain, wherein the largest common subsequence serves as a common node;

using an updated initial phrase chain as a new initial phrase chain and repeating previous steps until traversing all phrase chains in the phrase chain set to obtain an updated phrase chain; and

connecting a left node located in each branch of the updated phrase chain and not connected to any node to a preset common start node and connecting a right node located in each branch of the updated phrase chain and not connected to any node to a preset common end node to obtain a final phrase chain.”

The limitations of “selecting…”, “updating…”, “using…”, and “connecting…”, as drafted, cover a human mental activity or process.
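Read as an algorithm, the recited steps amount to merging word sequences into a lattice. A minimal, hypothetical Python sketch, for orientation only: the function names and the word-identity node convention are illustrative assumptions, not from the record, and "largest common subsequence" is taken here to be the classic dynamic-programming longest common subsequence.

```python
START, END = "<s>", "</s>"  # preset common start/end nodes (illustrative markers)

def largest_common_subsequence(a, b):
    """Classic dynamic-programming longest common subsequence over two word lists."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n  # backtrack to recover the subsequence words
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def chain_edges(words, edges=None):
    """Record a phrase chain as word -> successor-word edges (nodes are words)."""
    edges = {} if edges is None else edges
    for left, right in zip(words, words[1:]):
        edges.setdefault(left, set()).add(right)
    return edges

def merge_chain(edges, initial, candidate):
    """Fold `candidate` into the graph: words on the common subsequence coincide
    with existing nodes (the claimed common node), the rest form a new branch."""
    common = largest_common_subsequence(initial, candidate)
    chain_edges(candidate, edges)
    return common

def finalize(edges, start=START, end=END):
    """Connect dangling left nodes to the common start node and dangling
    right nodes to the common end node to obtain the final phrase chain."""
    nodes = set(edges) | {w for succ in edges.values() for w in succ}
    has_pred = {w for succ in edges.values() for w in succ}
    has_succ = set(edges)
    for w in nodes - has_pred:
        edges.setdefault(start, set()).add(w)
    for w in nodes - has_succ:
        edges.setdefault(w, set()).add(end)
    return edges
```

Under the word-identity convention, common-subsequence words automatically coincide with existing nodes, which is one plausible reading of "serves as a common node"; the claimed iteration would call merge_chain once per chain in the phrase chain set before finalize.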
More specifically: A human is capable of selecting a to-be-matched phrase chain from a phrase chain set to match an initial phrase chain and determining a largest common subsequence between the to-be-matched phrase chain and the initial phrase chain, using pen and paper and logic and reasoning to match phrases and determine a common sequence. A human is capable of updating the initial phrase chain by forming a branch of the initial phrase chain by adding a word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain, wherein the largest common subsequence serves as a common node, by using pen and paper to update the phrase chain. A human is capable of using an updated initial phrase chain as a new initial phrase chain and repeating previous steps until traversing all phrase chains in the phrase chain set to obtain an updated phrase chain, in the human mind through natural language understanding and logic and reasoning. A human is capable of connecting a left node located in each branch of the updated phrase chain and not connected to any node to a preset common start node and connecting a right node located in each branch of the updated phrase chain and not connected to any node to a preset common end node to obtain a final phrase chain, using pen and paper.

Regarding independent Claim 9, Claim 9 is a device claim with limitations similar to those of claim 1 and is rejected under the same rationale.

Regarding independent Claim 10, Claim 10 is a storage medium claim with limitations similar to those of claim 1 and is rejected under the same rationale.

This judicial exception is not integrated into a practical application. In particular, claims 9 and 10 recite the additional elements of a “processor” and “memory” as per the independent claims.
For example, page 18, paragraph 5 of the as-filed specification describes that the computer-readable storage medium may include, but is not limited to, an electrical connection with one or more wires, a portable computer magnetic disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination thereof, and a processing apparatus 601 (such as a central processing unit or a graphics processing unit). The processing apparatus 601 may perform various types of appropriate operations and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 606 to a random-access memory (RAM) 603.

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor and memory amount to a generic computer, as noted. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed toward insignificant extra-solution activity. The claims are not patent eligible.
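For context on the dependent claims addressed below, the window traversal recited in claims 7, 15 and 20 can likewise be sketched. A hypothetical Python fragment, assuming the final phrase chain is stored as an acyclic word -> successors mapping with "<s>"/"</s>" marker nodes; enumerate_paths and windowed_phrases are illustrative names, not from the record.

```python
def enumerate_paths(edges, node="<s>", path=()):
    """Yield every start-to-end path (branch) through the chain graph.
    Assumes the graph is acyclic, as a merged phrase lattice would be."""
    path = path + (node,)
    succs = edges.get(node, ())
    if not succs:          # no successors: this branch is complete
        yield path
    for nxt in succs:
        yield from enumerate_paths(edges, nxt, path)

def windowed_phrases(edges, lengths):
    """Move a window of each given length along every branch from the common
    start node, skipping marker nodes, and collect the constructed phrases."""
    phrases = set()
    for path in enumerate_paths(edges):
        words = [w for w in path if w not in ("<s>", "</s>")]
        for k in lengths:
            for i in range(len(words) - k + 1):
                phrases.add(tuple(words[i:i + k]))
    return phrases
```

Varying the entries of `lengths` across calls corresponds to the claim language that "the length of the window has different values in different traversal processes"; filtering the returned phrases by word-order tags would be a separate step not shown here.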
With respect to claims 2, 11 and 16, the claims relate to selecting phrases of a preset length from a text database to generate the phrase chain set, wherein the phrase chain set comprises the plurality of phrase chains; and adding at least one of a word class tag or a word order tag to a word in each of the plurality of phrase chains in the phrase chain set. This relates to a human using logic and reasoning to select a phrase and pen and paper to generate a phrase chain set. No additional limitations are present.

With respect to claims 3, 12 and 17, the claims relate to adding the word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain, wherein the largest common subsequence serves as the common node comprises: determining whether the largest common subsequence of the to-be-matched phrase chain and the largest common subsequence of the initial phrase chain have a consistent word class tag; and in response to determining that a first word class tag of the largest common subsequence of the to-be-matched phrase chain and a second word class tag of the largest common subsequence of the initial phrase chain are identical, adding the word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain. This relates to a human adding a word using pen and paper and determining a common subsequence using logic and reasoning and adding a word using pen and paper. No additional limitations are present.

With respect to claims 4, 13, 18 and 21, the claims relate to, in response to determining that the to-be-matched phrase chain and the initial phrase chain have no common subsequence, the method further comprising: connecting a first node of the to-be-matched phrase chain to the preset common start node; and connecting a last node of the to-be-matched phrase chain to the preset common end node. This relates to a human connecting nodes using pen and paper.
No additional limitations are present.

With respect to claim 5, the claim relates to removing a function word from the largest common subsequence. This relates to a human removing a word using pen and paper. No additional limitations are present.

With respect to claims 6, 14 and 19, the claims relate to traversing the final phrase chain and constructing and selecting a target phrase. This relates to a human constructing a phrase chain using pen and paper. No additional limitations are present.

With respect to claims 7, 15 and 20, the claims relate to traversing the final phrase chain and constructing and selecting the target phrase comprises: constructing phrases by selecting nodes whose quantity is equal to a length of a window by moving the window along nodes of each branch of the final phrase chain from the common start node, wherein the length of the window has different values in different traversal processes; and selecting phrases of the preset length from constructed phrases; and selecting a phrase from the phrases of the preset length to serve as the target phrase, wherein each word of a selected phrase has a word order and a word order tag that are consistent with each other. This relates to a human constructing a phrase chain using pen and paper and selecting nodes using logic and reasoning to match words within a window length. No additional limitations are present.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 9-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kelsey (US Patent Application Publication No. 2018/0260472 A1), in view of McGreevy (US Patent Application Publication No. 2002/0188587 A1).

Regarding independent Claim 1, Kelsey teaches 1. A text chain generation method, implemented by software and/or hardware in an electronic device, and performed by one or more processors of the electronic device executing a computer program stored in a memory of the electronic device (see Kelsey [0044]: “The method can be embodied as machine-readable instructions on non-transitory storage media which, when executed by one or more computer processors, cause the method to be performed.”) comprising:

selecting a to-be-matched phrase chain from a phrase chain set to match an initial phrase chain (see Kelsey [0006]: “In some examples, the source document can be a machine-readable facsimile of human-readable text organized in sentences and paragraphs.
The criterion for a given passage to be selected can be that the given passage has a similarity at least equal to a content relevancy threshold, relative to at least one subject matter descriptor of the source document…”)

and determining a largest common subsequence between the to-be-matched phrase chain and the initial phrase chain, wherein the phrase chain set comprises a plurality of phrase chains, (see Kelsey [0006]: “A single text fragment can correspond to less than an entire sentence of the source document, or a single sentence, or multiple sentences. A combined semantic-syntactic pattern can include a plurality of nodes representing respective syntactic parts of a text fragment. At least one node can have an associated semantic attribute, and at least one pair of nodes can be coupled by a relationship attribute. A degree of a combined semantic-syntactic pattern can be defined as a number counting at least all nodes, all semantic attributes, and all relationship attributes of the combined semantic-syntactic pattern. The matching between a given text fragment and a given pattern can include comparing the nodes, their corresponding semantic attributes, and the relationship attributes, to determine a matching score. If the matching score is at least equal to a matching threshold, then the given text fragment can be selected. A combined semantic-syntactic pattern can include an emphasis attribute or non-local semantic content. Combined semantic-syntactic patterns or selectors can be chosen from a library based on classifications of desired questions. The classifications can be based on Bloom's taxonomy or a similar educational taxonomy.”)

generated by a neural network model, and the phrase chain set is pre-stored in the memory, (see Kelsey [0131]: “Process block 760 is a second transformation phase. Previously identified fragments can be transformed into questions. In examples, this transformation can be dependent on the pattern which matched a given fragment.
The generated questions 761 can be outputted to a computer-readable storage medium. In some examples, such a transformation can be performed by a neural network (e.g. mapping a text sequence of the fragment to a text sequence of the question, or simply “seq-to-seq”), while in other examples the transformation can be effected using procedural or functional software code, along with libraries of transformation templates organized according to pattern.”)

each of the plurality of phrase chains refers to a text chain formed by nodes connected in a phrase order, and all words in at least one phrase constitute the nodes; (see Kelsey [0050]: “After termination of the first transformation phase 130, the method proceeds to second selection phase 150 where fragments 151 suitable for question generation can be identified and extracted. A portion 135B, with resolved coreferences, can be treated as a collection of fragments 151A-151F. In varying examples, a fragment 151A-151F can be a single-sentence or a multi-sentence fragment. At process block 152, fragments 151A-151F can be read from storage 103 and matched against combined semantic-syntactic patterns 147A-147F read from pattern library 149 on storage 108. In brief, a combined semantic-syntactic pattern can be a collection of nodes having attributes and relationships. The nodes can represent respective syntactic parts of text (e.g. subject, predicate, verb phrase, object, adjective, dependent clause). Nodes can have attributes (e.g. a semantic attribute classifying a name as a male person or a female person, or an emphasis attribute distinguishing e.g. “doesn't” from “never does”, or “we won't lose” from “we will not lose”). A semantic attribute can indicate a category of a noun, verb, or other part of speech. Pairs of nodes can have relationships, such as placement order (e.g. subject before or after predicate), or relationships defined by conjunctions (e.g. “and” or “or”) or prepositions (e.g.
“to”, “after”, or “in”).”)

updating the initial phrase chain by forming a branch of the initial phrase chain by adding a word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain, (see Kelsey [0165]: “The matching can be an aggregate of multiple probabilistic calculations. The code can be visualized as a recursive tree structure where, at multiple steps, a decision can be made based on probabilistic data. An example of this is determining whether or not a given word or expression in the input text meets the criteria for a semantic sub-constraint in a given pattern. For example, what is the probability that a verb, given a context in a sentence or document, represents an action involving physical motion? The answer to this question will be a probability from one or more ML models, based on all the examples of this verb being used in a very large document corpus that was used to train the model. The cutoff in each model can be set differently, and the results of those decisions can be aggregated. If a decision is ambiguous, both branches of the decision tree may be maintained for a limited number of operations. The question of whether or not the overall pattern is matched depends on a composite of the probabilities determined at each of multiple nodes of the recursive tree (which are decision points). As a further feature, blended decision-making techniques can be used, using a combination of probabilistic data from ML models and data from ontologies (e.g. WordNet). In such examples, the ML and non-ML data can be weighted to derive a composite or final decision. Multiple syntactic parsing methods can be used in parallel to determine if there are multiple possible higher probability parses of a given text input. A given parse can be denoted “high probability” if its probability is within a cutoff factor of the most probable parse. In examples, the cutoff factor can range from 1.5 to 10, for example 2.
If the models do not converge on a single parse pattern, then semantic or other data can be used to distinguish between the high probability parses.”)

wherein the largest common subsequence serves as a common node; (see Kelsey [0111]: “As with other patterns, ByGerundsBeard can be matched to a text fragment degree by degree. Each comparison of a node, attribute, or relationship results in a numerical score, which can be one for a perfect match, zero for a perfect mismatch, or varying scores in between. Non-binary scores can be due to different factors, including (i) indeterminacy of resolution or usage of a word or expression being compared, or (ii) a reference attribute of the pattern is non-binary, for example “bright color” would give red or orange a better score than gray or brown. The matching scores of each degree can be combined to derive a composite matching score for a candidate text fragment when compared to a given combined semantic-syntactic pattern. For example, matching scores for each degree can be added or multiplied to obtain a composite matching score. The individual or composite matching scores can be rated against the maximum possible match score. If the matching score is at least equal to a matching threshold, then the ByGerundsBeard processing can continue.”)

and connecting a left node located in each branch of the updated phrase chain and not connected to any node to a preset common start node and connecting a right node located in each branch of the updated phrase chain and not connected to any node to a preset common end node to obtain a final phrase chain. (see Kelsey [0113-0114]: “The ByGerundsBeard pattern contains sub-selectors for handling additional information (if any) that may be encoded in the source sentence.
For example, if the source sentence encodes information that implies that relationship3 connects the tuple to other doubles (that could be encoded elsewhere in the source text), then the extraction results from that sentence can be stored in a separate array with a tag “possible one-of-several”. [0114] Turning to FIG. 6A, the code for the “sentence” function selects unique sentences from a given portion of text, after one or more iterations of filtering and transformation that can include correction of spelling, grammar, or punctuation errors (e.g. commas in the wrong place, using a comma to connect two independent clauses, etc.), and separation of complex sentences into simpler separate sentences. Thus, the sentences extracted by the sentence function have a high likelihood of being grammatically correct, and simplified to the extent that simplification can be done without loss of meaning. Also in FIG. 6A is function clauseSelector which defines a sub-selector (a part of a selector) for a specific semantic-syntactic pattern. The pattern must start with one of a specified list of words (e.g. “while as . . . ”), which must then be directly followed by a list of one or more semantic and syntactic elements, and the combination of these elements must form a discrete dependent clause in the sentence where it is found.”)

Kelsey does not specifically teach using an updated initial phrase chain as a new initial phrase chain and repeating previous steps until traversing all phrase chains in the phrase chain set to obtain an updated phrase chain.

However, McGreevy does teach this limitation (see McGreevy [0346]: “The phrase search in block 2214 outputs a ranked list of subsets from the database and a selected number of the ranked list of subsets are then designated as the relevant text and input to the extract phrases process described in FIG. 20 in block 1904.
The phrases extracted from the extract phrases process in block 1904 are then input to the process of culling the extracted phrases described in FIG. 21 in block 1906. The phrases output from the process of culling the extracted phrases in block 1906 are then ranked at block 2204 and the process repeats, until the number in the phrase search counter is greater than the pre-selected number of phrase searches.”)

Kelsey in view of McGreevy are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kelsey to incorporate the teachings of McGreevy to include using an updated initial phrase chain as a new initial phrase chain and repeating previous steps until traversing all phrase chains in the phrase chain set to obtain an updated phrase chain. Doing so allows for longer phrase chains, as recognized by McGreevy in [0254].

Regarding independent Claim 9, Claim 9 is a parallel device claim with limitations similar to those of Claim 1 and is rejected under the same rationale. Additionally, Kelsey teaches 9. An electronic device, comprising: one or more processors; and a memory configured to store one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors perform a text chain generation method, wherein the text chain generation method comprises: (see Kelsey [0168]: “With reference to FIG. 12, computing environment 1210 includes one or more processing units 1222 and memory 1224. In FIG. 12, this basic configuration 1220 is included within a dashed line.
Processing unit 1222 executes computer-executable instructions, such as for implementing components of a question generation tool (e.g., components shown in FIG. 11), any of the methods described herein (e.g., illustrated in context of FIG. 7-2, 1, or 9-10), or various other architectures, components, data structures, handlers, managers, modules, or repositories described herein. Processing unit 1222 can be a general-purpose central processing unit (CPU), a processor in an application-specific integrated circuit (ASIC), or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. Computing environment 1210 can also include a graphics processing unit or co-processing unit 1230. Tangible memory 1224 can be volatile memory (e.g., registers, cache, or RAM), non-volatile memory (e.g., ROM, EEPROM, or flash memory), or some combination thereof, accessible by processing units 1222, 1230. The memory 1224 stores software 1280 implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s) 1222, 1230.”)

Regarding independent Claim 10, Claim 10 is a parallel storage medium claim with limitations similar to those of Claim 1 and is rejected under the same rationale. Additionally, Kelsey teaches 10. A non-transitory computer storage medium, storing a computer program which, when executed by a processor, causes the processor to perform a text chain generation method wherein the text chain generation method comprises: (see Kelsey [0012]: “The innovations can be implemented as part of one or more methods, as part of one or more computing systems adapted to perform an innovative method, or as part of non-transitory computer-readable media storing computer-executable instructions for causing a computing system to perform the innovative method(s).”)

Regarding claim 2, Kelsey in view of McGreevy teaches 2.
The method of Claim 1. Furthermore, Kelsey teaches, before matching the to-be-matched phrase chain to the initial phrase chain, the method further comprising: selecting phrases of a preset length from a text database to generate the phrase chain set, wherein the phrase chain set comprises the plurality of phrase chains; and adding at least one of a word class tag or a word order tag to a word in each of the plurality of phrase chains in the phrase chain set. (see Kelsey [0093]: “In examples, nodes of a tree structure representing parsed text can have attributes or tags indicating word order.”)

Regarding claim 3, Kelsey in view of McGreevy teaches 3. The method of Claim 2. Furthermore, Kelsey teaches, wherein adding the word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain, wherein the largest common subsequence serves as the common node comprises: determining whether the largest common subsequence of the to-be-matched phrase chain and the largest common subsequence of the initial phrase chain have a consistent word class tag; and in response to determining that a first word class tag of the largest common subsequence of the to-be-matched phrase chain and a second word class tag of the largest common subsequence of the initial phrase chain are identical, (see Kelsey [0115]: “FIG. 6B shows code for function sentence<b>2</b>, which selects for a sub-pattern where the gerund clause (which may be a candidate to be replaced by a proform) can be qualified as possibly one of multiple possible answers in the source text. In the code below, the qualifier can be removed, and, in the function “sentence” shown in FIG. 6C, the sentence reconstructed without the qualifier. The reconstructed sentence can be stored in a separate array with the (removed) qualifier now represented by a tag, so that the qualifier remains available, if needed later to form a question.
As with other analyses described herein, structured or unstructured forms of the reconstructed sentence can be stored. In the illustration of Table 2, row 1, the “such as by” phrase can be matched and the dependent clause can be split off by sentence2. The simplified sentence (without “such as by . . . ” clause) can be regenerated using the “sentence” function of FIG. 6C.”) adding the word from the to-be-matched phrase chain and other than the largest common subsequence into the initial phrase chain. (see Kelsey [0116] “FIGS. 6D-6E provide functions for identification and extraction of prepositional phrases. These functions can extract any prepositional phrases from a larger clause containing the targeted gerund (part of the pattern's double). These functions can also extract adverbial phrases, and can store the resulting simplified sentences in tagged arrays (i.e. structured text) with the removed clauses stored separately so that they can be available later for building the question or answer, as needed. For example, prepositionlessSentenceTrimmedNP creates a separate array that selects and stores the first noun phrase (“distance of a star”) from the action (“measuring the parallax”) that can be accomplished by the gerund (“measuring”). Similarly, prepositionlessSentenceTrimmedVP creates a separate array that selects and stores the core verb phrase (“is known”) associated with the noun phrase (“distance of a star”) in the previous operation, while removing subordinate conjunctions, if any.”)

Regarding Claim 4, Kelsey in view of McGreevy teaches 4.
The method of Claim 1. Furthermore, Kelsey teaches, in response to determining that the to-be-matched phrase chain and the initial phrase chain have no common subsequence, the method further comprising: connecting a first node of the to-be-matched phrase chain to the preset common start node; and connecting a last node of the to-be-matched phrase chain to the preset common end node. (see Kelsey [0113]-[0114] “The ByGerundsBeard pattern contains sub-selectors for handling additional information (if any) that may be encoded in the source sentence. For example, if the source sentence encodes information that implies that relationship3 connects the tuple to other doubles (that could be encoded elsewhere in the source text), then the extraction results from that sentence can be stored in a separate array with a tag “possible one-of-several”. [0114] Turning to FIG. 6A, the code for the “sentence” function selects unique sentences from a given portion of text, after one or more iterations of filtering and transformation that can include correction of spelling, grammar, or punctuation errors (e.g. commas in the wrong place, using a comma to connect two independent clauses, etc.), and separation of complex sentences into simpler separate sentences. Thus, the sentences extracted by the sentence function have a high likelihood of being grammatically correct, and simplified to the extent that simplification can be done without loss of meaning. Also in FIG. 6A is function clauseSelector which defines a sub-selector (a part of a selector) for a specific semantic-syntactic pattern. The pattern must start with one of a specified list of words (e.g. “while as . . . ”), which must then be directly followed by a list of one or more semantic and syntactic elements, and the combination of these elements must form a discrete dependent clause in the sentence where it is found.”)

Regarding Claim 5, Kelsey in view of McGreevy teaches 5.
The method of Claim 4. Furthermore, Kelsey teaches, further comprising: removing a function word from the largest common subsequence. (see Kelsey [0121] “Some analysis can be performed prior to distractor generation. FIG. 6I shows code for answerKeywords, which removes stop words from the double, and code for answerKeywordNouns. The latter, together with filterTags, gerundFilter, and gerundFilterIf, shown in FIGS. 6J-6L, identify the core semantic-syntactic profile of the answer, which can be used later to identify appropriate distractors.”)

Regarding Claim 6, Kelsey in view of McGreevy teaches 6. The method of Claim 2. Furthermore, Kelsey teaches, further comprising: traversing the final phrase chain and constructing and selecting a target phrase. (see Kelsey [0114] “Turning to FIG. 6A, the code for the “sentence” function selects unique sentences from a given portion of text, after one or more iterations of filtering and transformation that can include correction of spelling, grammar, or punctuation errors (e.g. commas in the wrong place, using a comma to connect two independent clauses, etc.), and separation of complex sentences into simpler separate sentences. Thus, the sentences extracted by the sentence function have a high likelihood of being grammatically correct, and simplified to the extent that simplification can be done without loss of meaning. Also in FIG. 6A is function clauseSelector which defines a sub-selector (a part of a selector) for a specific semantic-syntactic pattern. The pattern must start with one of a specified list of words (e.g. “while as . . . ”), which must then be directly followed by a list of one or more semantic and syntactic elements, and the combination of these elements must form a discrete dependent clause in the sentence where it is found.”)

As to Claim 7, Kelsey in view of McGreevy teaches 7.
The method of Claim 6. Furthermore, Kelsey teaches selecting phrases of the preset length from constructed phrases; and selecting a phrase from the phrases of the preset length to serve as the target phrase, wherein each word of a selected phrase has a word order and a word order tag that are consistent with each other. (see Kelsey [0064] “Multi-sentence fragments 237 can be generated at process block 230 by detecting combined syntactic-semantic patterns at the paragraph level. Each paragraph can be used to generate the full set of possible multi-sentence frames (e.g. a paragraph of three sentences A, B, C would have 3 possible multi-sentence frames: <A, B>, <B, C> and the entire paragraph <A, B, C>). (Examiner interprets “length” as “frame”.) Each possible multi-sentence frame can be evaluated to see if it is a match or near match for a multi-sentence pattern. Multi-sentence patterns can be similar to those used for single sentences in that they combine both syntactic and semantic features in a single pattern and incorporate features spanning multiple layers. However, multi-sentence patterns can also incorporate features or tests that span multiple sentences. Examples of this include: (a) the same noun-phrase occurs as the subject of successive sentences, and the same or similar verbs appear as the primary verb in the predicates, (b) several sentences in a row contain predicates with close semantic relationships, or (c) specific sequences of adverbs, such as from a family {“first”, “then”, “finally”}.
In other respects, the selection of multi-sentence fragments 237 can be similar to selection of single-sentence fragments 235.”)

Furthermore, McGreevy teaches wherein traversing the final phrase chain and constructing and selecting the target phrase comprises: constructing phrases by selecting nodes whose quantity is equal to a length of a window by moving the window along nodes of each branch of the final phrase chain from the common start node, wherein the length of the window has different values in different traversal processes; (see McGreevy [0093] “A model of a database or subset includes summation relations and each summation relation includes several types of the relational summation metrics (RSMs) for each term pair. A model of a database or subset can be represented in a variety of forms including, but not limited to, a list of relations, a matrix of relations, and a network of relations. An example of a list representation of relations is shown in Table 1.5. An example of a matrix representation of the relations of Table 1.5 is shown in Table 1.6. An example of a network representation of the relations in Tables 1.5 and 1.6 is shown in FIG. 6A.”) (see McGreevy [0092] “The context window used to calculate the above-described metric values can have any one of a number of sizes. A context window can have a pre-selected number of terms. Typically, a context window is equal to a level of context desired by the user. Examples include: an average sentence length, or an average paragraph length, or an average phrase length, or a similar relationship to the text or the database. For an alternative embodiment, the context window can be entirely independent from any relation to the database being analyzed, such as a pre-selected number chosen by a user or a default process setting.
Alternatively, the context window can vary as a function of the position of the context window within the text, or the contents of the context window.”)

Kelsey and McGreevy are in the same field of endeavor of speech processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kelsey to incorporate the teachings of McGreevy to include traversing the final phrase chain and constructing and selecting the target phrase comprises: constructing phrases by selecting nodes whose quantity is equal to a length of a window by moving the window along nodes of each branch of the final phrase chain from the common start node, wherein the length of the window has different values in different traversal processes. Doing so allows for longer phrase chains, as recognized by McGreevy in [0254].

As to Claim 11, claim 11 is a parallel device claim with limitations similar to those of Claim 2 and is rejected under the same rationale. As to Claim 12, claim 12 is a parallel device claim with limitations similar to those of Claim 3 and is rejected under the same rationale. As to Claim 13, claim 13 is a parallel device claim with limitations similar to those of Claim 4 and is rejected under the same rationale. As to Claim 14, claim 14 is a parallel device claim with limitations similar to those of Claim 6 and is rejected under the same rationale. As to Claim 15, claim 15 is a parallel device claim with limitations similar to those of Claim 7 and is rejected under the same rationale. As to Claim 16, claim 16 is a parallel storage medium claim with limitations similar to those of Claim 2 and is rejected under the same rationale. As to Claim 17, claim 17 is a parallel storage medium claim with limitations similar to those of Claim 3 and is rejected under the same rationale.
As to Claim 18, claim 18 is a parallel storage medium claim with limitations similar to those of Claim 4 and is rejected under the same rationale. As to Claim 19, claim 19 is a parallel storage medium claim with limitations similar to those of Claim 6 and is rejected under the same rationale. As to Claim 20, claim 20 is a parallel storage medium claim with limitations similar to those of Claim 7 and is rejected under the same rationale. As to Claim 21, claim 21 is a parallel method claim with limitations similar to those of Claim 4 and is rejected under the same rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRISTEN MICHELLE MASTERS, whose telephone number is (703) 756-1274. The examiner can normally be reached M-F, 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KRISTEN MICHELLE MASTERS/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659
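For orientation, the two algorithmic steps disputed in this action can be sketched in Python. The first function is the classic dynamic-programming longest-common-subsequence computation corresponding to the "largest common subsequence" matching recited in claim 1; the second illustrates the window-based phrase construction mapped to claim 7, where a window of varying length is moved along the nodes of a branch. All function names and the sample word chains are illustrative assumptions, not taken from the application or the cited Kelsey and McGreevy references.

```python
def largest_common_subsequence(a, b):
    """Dynamic-programming LCS over two word sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack through the table to recover the subsequence itself.
    seq, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            seq.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return seq[::-1]

def construct_phrases(branch, window_lengths):
    """Slide windows of each given length along a branch's nodes,
    collecting every contiguous run of nodes as a candidate phrase."""
    phrases = []
    for length in window_lengths:
        for start in range(len(branch) - length + 1):
            phrases.append(tuple(branch[start:start + length]))
    return phrases

# Per claim 3, words of the to-be-matched chain outside the common
# subsequence would be added to the initial chain at the shared nodes.
initial = ["the", "star", "distance", "is", "known"]
candidate = ["a", "star", "distance", "was", "measured"]
print(largest_common_subsequence(initial, candidate))  # ['star', 'distance']

# A three-node branch yields the same frames as the paragraph example
# quoted from Kelsey [0064]: <A, B>, <B, C>, and <A, B, C>.
print(construct_phrases(["A", "B", "C"], [2, 3]))
# [('A', 'B'), ('B', 'C'), ('A', 'B', 'C')]
```

A practical implementation would carry the claimed word class and word order tags on each node; plain strings are used here only to keep the sketch short.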

Prosecution Timeline

Jul 21, 2023
Application Filed
Sep 04, 2025
Non-Final Rejection — §101, §103
Dec 11, 2025
Response Filed
Mar 22, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592219
Hearing Device User Communicating With a Wireless Communication Device
2y 5m to grant Granted Mar 31, 2026
Patent 12548569
METHOD AND SYSTEM OF DETECTING AND IMPROVING REAL-TIME MISPRONUNCIATION OF WORDS
2y 5m to grant Granted Feb 10, 2026
Patent 12548564
SYSTEM AND METHOD FOR CONTROLLING A PLURALITY OF DEVICES
2y 5m to grant Granted Feb 10, 2026
Patent 12547894
ENTROPY-BASED ANTI-MODELING FOR MACHINE LEARNING APPLICATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12547840
MULTI-STAGE PROCESSING FOR LARGE LANGUAGE MODEL TO ANSWER MATH QUESTIONS MORE ACCURATELY
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
87%
With Interview (+24.7%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
