Prosecution Insights
Last updated: April 19, 2026
Application No. 17/720,431

TECHNICAL SPECIFICATION MATCHING

Final Rejection: §101, §103, §112
Filed: Apr 14, 2022
Examiner: AGRAWAL, SHISHIR
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Laboratories America Inc.
OA Round: 2 (Final)

Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 13 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 31
Total Applications: 44 (career history, across all art units)

Statute-Specific Performance

§101: 26.9% (-13.1% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 29.9% (-10.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 13 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Status of Claims

This Office action is responsive to communications filed on 2025-09-04. Claims 1-20 are pending and are examined herein. Claims 1-20 are objected to. Claims 4-14 and 18-20 are rejected under 35 USC 112(b). Claims 1-20 are rejected under 35 USC 101. Claims 1-20 are rejected under 35 USC 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after 2013-03-16, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding objections for informalities and rejections under 35 USC 112(b), the applicant’s amendments resolve some, but not all, of the concerns raised in the previous Office action. The amendments also introduce further concerns. Unresolved and newly introduced concerns are described below.

Regarding rejections under 35 USC 101, the applicant’s remarks have been fully considered. Regarding the previously filed claims, the applicant “believes that the claim as originally drafted recited patent-eligible subject matter” [remarks, page 11] but provides no rationale in support of this belief. This unsupported statement fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the previously filed claims may define a patentable invention without specifically pointing out any reasons therefor.

Regarding the amended claims, the applicant argues that the step of “identifying noun phrases and generating vector embeddings of the noun phrases” renders the claims eligible [remarks, pages 10-11]. The examiner respectfully disagrees. It is clear that a human mind can identify noun phrases.
A human mind can also perform actions falling under the broadest reasonable interpretation of “generating vector embeddings” of noun phrases (the broadest reasonable interpretation of a vector encompasses any sequence of elements, so even, for example, an act of transcribing noun phrases as sequences of characters could be regarded as an act of “generating vector embeddings of the noun phrases” as recited by the claim). In other words, the limitation as recited is merely another mental process and does not help render the claim eligible. The complete 101 analysis, updated in view of the applicant’s amendments, is given below.

Regarding rejections under 35 USC 103, the applicant’s remarks have been fully considered. Regarding the previously filed claims, the applicant asserts that they are “not conceding that the subject matter encompassed by the claims prior to this Amendment is unpatentable over the art cited by the Examiner” [remarks, page 9] but provides no rationale in support of this statement of non-concession. This unsupported statement fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the previously filed claims define a patentable invention without specifically pointing out how the language of the previously filed claims patentably distinguishes them from the references.

Regarding the amended claims, the applicant argues that Huetle in view of Sun fails to disclose segmentation of text into sentences [remarks, pages 12-13]. The argument is moot in view of the new grounds of rejection as given below. The applicant also attempts to argue that “Huetle should be removed as a reference” because it “expressly removes segmentation by eliminating punctuation and capitalization”, thereby teaching away from sentence segmentation since “text without capitalization and punctuation cannot be segmented into sentences” [remarks, page 13].
This line of argumentation is unpersuasive for at least the following reasons.

First, the claim itself includes no limitations regarding punctuation and/or capitalization, so the fact that Huetle removes punctuation and/or capitalization is unconvincing as a point of contrast against the claimed invention. (In fact, the applicant’s specification also makes no explicit remarks regarding either punctuation or capitalization.)

Second, nothing explicitly recited in the pending claims necessitates that, when comparing the claimed invention against the disclosures of Huetle, the sentence segmentation recited by the claim must be performed after the removal of capitalization and punctuation described in Huetle. The claims recite merely a step of segmenting text into sentences without indicating anything about the stage at which the segmentation is to be performed. The original documents described in Huetle have both capitalization and punctuation (cf. [Huetle, figure 1]), which means that a person of ordinary skill in the art could apply even a standard method of sentence segmentation (using, e.g., NLTK as cited in the conclusion of this Office action) to the documents as disclosed in Huetle.

Third, in the combination as proposed in the previous Office action (and below), the keyword extractor of Huetle is in any case replaced by the keyword extractor of Sun, and Sun does not explicitly describe a step of removing punctuation and/or capitalization. In other words, the combination of Huetle and Sun may not even necessitate the removal of punctuation and/or capitalization described in Huetle.

Fourth, the examiner respectfully disagrees with the applicant’s assertion that it is impossible to perform sentence segmentation without capitalization and punctuation. A person listening to another’s speech is able to segment that speech into sentences, despite the fact that speech is not marked with capitalization and punctuation. Everyone who has transcribed speech into properly capitalized and punctuated text has performed this purportedly impossible act of sentence segmentation. Even if one restricts the scope of the discussion only to writing, neither capitalization nor punctuation is actually essential for sentence segmentation. For example, Japanese orthography has no system of capitalization, and it included almost no punctuation up until the Meiji period, but practiced readers of pre-Meiji Japanese texts can nonetheless segment these texts into sentences and understand their contents. Moreover, it is not just human beings that can perform this purportedly impossible act of sentence segmentation, since general-purpose computers implementing modern methods of natural language processing can emulate the ability of human beings to perform sentence segmentation without proper capitalization and/or punctuation. The applicant is invited to consult, for example, Matusov as cited in the conclusion of this Office action.

The complete prior art rejection, updated in view of the applicant’s amendments, is given below.

Examiner’s Remarks

Claims 6, 13, and 20 recite an entity importance model H(v_e). In view of the originally filed disclosure, H(v_e) is equal to w_e, i.e., to the importance of the entity e represented by the vector embedding v_e provided as input to the model (cf. [specification, 0050; claims 6, 13, and 20 as originally filed]). In other words, the broadest reasonable interpretation, in view of the specification, of the “entity importance model H(v_e)” recited in these dependent claims encompasses at least the “neural network model” and/or the “trained importance calculator” recited in their respective parent claims. These claim elements are interpreted accordingly herein. The examiner suggests replacing “an entity importance model” with either “the neural network” or “the trained importance calculator” for consistency of nomenclature and clarity of the claimed subject matter.
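[Editor's note] As context for the segmentation discussion above: a standard segmenter such as NLTK's sent_tokenize (the tool cited in the conclusion of this Office action) splits punctuated text into sentences. The sketch below is a deliberately simplified stand-in using only a regular expression on terminal punctuation, not the cited tool, and it assumes conventional punctuation is present:

```python
import re

def naive_sentence_split(text: str) -> list[str]:
    """Split text into sentences on terminal punctuation.

    A simplified illustration, not NLTK's sent_tokenize: it splits on
    whitespace that follows '.', '!', or '?', so it requires the
    conventional punctuation that a real segmenter also exploits.
    """
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

sentences = naive_sentence_split(
    "The device includes a sensor. Does it include a display? It does!"
)
```

As the Office action notes, modern neural methods (e.g., those discussed in Matusov) can recover sentence boundaries even without punctuation or capitalization; this naive splitter cannot.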
Claim Objections

Claims 1-20 are objected to because of the following informalities:

Claims 1 and 15 recite “segmenting text from the specification sheet and segmenting text from the plurality of descriptive sheets into sentences” [emphasis added], but this should be “segmenting text from the specification sheet into sentences and segmenting text from the plurality of descriptive sheets into sentences”. Dependent claims 2-7 and 16-20 inherit the objection.

Claims 1, 8, and 15 recite “each identified technical feature in the first set of technical features represented by the vector embeddings and each identified technical feature in the second set of technical features represented by the vector embeddings” [emphasis added], but this should be “each identified technical feature in the first set of technical features […]”.

Claim 8 recites “a feature importance calculator”. This should be “the trained importance calculator” for proper antecedent basis. Dependent claims 9-14 inherit the objection. (The applicant is also invited to consult the suggestion regarding the indefiniteness rejection of claim 8 below.)

Claim 15 recites “causes the computer to perform the steps of:” [emphasis added], but “the steps” lacks antecedent basis. The examiner suggests “causes the computer to perform steps comprising:” for proper antecedent basis. Dependent claims 16-20 inherit the objection.

Appropriate correction is required.

Claim Rejections - 35 USC 112(b)

The following is a quotation of 35 USC 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 USC 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-14 and 18-20 are rejected under 35 USC 112(b) or 35 USC 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 USC 112, the applicant) regards as the invention.

Claims 4, 11, and 18 are indefinite for at least the following reasons. In order for a formula appearing in a claim to be well-defined, every variable appearing therein must be clearly defined in the claim. However, the variables E_q and E_c are undefined in the claim. These undefined variables are interpreted in view of the specification as referring, respectively, to the identified features in the specification sheet and to the identified features in one of the plurality of descriptive sheets. The claims also recite that “v_e denotes the vector embedding for each feature/entity e, and w_e is the importance for each identified technical feature/entity, e”. For proper antecedent basis and consistency of nomenclature, the examiner suggests “v_e denotes the vector embedding for each identified technical feature e, and w_e is the importance for each identified technical feature e”. Dependent claims 5-7, 12-14, and 19-20 inherit these rejections.

Claims 7 and 14 are indefinite because they recite a loss function L(t) = max(0, (1 - s_{i,p}) - (1 - s_{i,q} + α), but this formula is insufficiently defined since none of the variables t, i, p, q, and α are defined in the claim.

Claim 8 is rendered indefinite by amendments made to the claim (due in part to the fact that the language used in claim 8 is not parallel to the language used in independent claims 1 and 15). For example, claim 8 recites “the calculating including identifying noun phrases and generating vector embeddings of the noun phrases”, but the identification of noun phrases and generation of vector embeddings as described in the originally filed disclosure (and in independent claims 1 and 15) appear to be part of the functionality of the feature classifier, not of the importance calculator as presently recited in claim 8. Similarly, claim 8 recites “the trained feature classifier identifies the technical features in the first set of technical features”, but the trained feature classifier as described by the originally filed disclosure (and in independent claims 1 and 15) appears to identify technical features in text (e.g., the specification sheet), not in a set of technical features as presently recited in claim 8. MPEP 2173.03(b) indicates that a claim is “indefinite when a conflict or inconsistency between the claimed subject matter and the specification disclosure renders the scope of the claim uncertain”, and the aforementioned points are precisely such points of conflict or inconsistency.

Moreover, claim 8 was also amended to recite “text data including a specification sheet including a plurality of technical features including a first set of technical features, and a plurality of descriptive sheets each including a plurality of technical features including a second set of technical features”, but this clause is rendered ungrammatical/unclear due to the recitation of both “pluralit[ies]” of technical features and “set[s]” of technical features.

Dependent claims 9-14 inherit the rejections. To avoid these issues (and further issues such as these), the examiner suggests amending claim 8 to be a system claim which otherwise mirrors the functional language used in independent claims 1 and 15.
For the purpose of compact prosecution, the indefinite elements of the claims are interpreted as encompassing at least the interpretation suggested by the corresponding elements of independent claims 1 and 15.

Claim Rejections - 35 USC 101

35 USC 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 USC 101 because the claimed inventions are directed to abstract ideas without significantly more.

Claim 1

Step 1. The claim and its dependents 2-7 fall under the statutory category of methods. An analysis of step 2 for each of these claims follows.

Step 2A Prong 1. The claim recites the following abstract ideas:

identify technical features by identifying noun phrases and generating vector embeddings of the noun phrases; (This recites a mental process that can be performed in the human mind or by a human using pen and paper. A human mind can identify features and/or noun phrases and generate vectors representing noun phrases. See MPEP 2106.04(a)(2)(III).)

calculate an importance value for each identified technical feature represented by the vector embeddings; (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

segmenting text from the specification sheet and segment text from the plurality of descriptive sheets into sentences; (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

identifying a first set of technical features in the specification sheet and a second set of technical features in the plurality of descriptive sheets (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

calculating an importance for each identified technical feature in the first set of technical features represented by the vector embeddings and each identified technical feature in the second set of technical features represented by the vector embeddings (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

and calculating a matching score between the specification sheet and each of the plurality of descriptive sheets based on the importance of each identified technical feature in the first set of technical features and each identified technical feature in the second set of technical features. (This recites a mathematical concept (cf. [specification, 0059]) and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

A method of detail matching, comprising: training a feature classifier to [identify…] training a neural network model for a trained importance calculator to [calculate…] (These are generically recited training steps. In other words, this recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

receiving a specification sheet including a plurality of technical features; receiving a plurality of descriptive sheets each including a plurality of technical features; (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

[identifying…] using the trained feature classifier; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

[calculating…] using the trained feature importance calculator; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

A method of detail matching, comprising: training a feature classifier to [identify…] training a neural network model for a trained importance calculator to [calculate…] (These are generically recited training steps. In other words, this recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

receiving a specification sheet including a plurality of technical features; receiving a plurality of descriptive sheets each including a plurality of technical features; (This insignificant extra-solution activity is well-understood, routine, conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), “Receiving or transmitting data over a network” and/or “Storing and retrieving information in memory”.)

[identifying…] using the trained feature classifier; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

[calculating…] using the trained feature importance calculator; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

Claim 2

Step 2A Prong 1. The claim recites the following abstract ideas: the abstract idea(s) in the parent claim(s).

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

The additional element(s) in the parent claim(s).

[The method of claim 1, wherein] the trained importance calculator is trained using triplets of the specification sheet and the plurality of descriptive sheets. (This merely indicates the training data that is used for a generically recited training step. In other words, this recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

The additional element(s) in the parent claim(s).

[The method of claim 1, wherein] the trained importance calculator is trained using triplets of the specification sheet and the plurality of descriptive sheets. (This merely indicates the training data that is used for a generically recited training step. In other words, this recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Claim 3

Step 2A Prong 1. The claim recites the following abstract ideas:

The abstract idea(s) in the parent claim(s).
[The method of claim 2, further comprising] generating the vector embeddings for each identified technical feature (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

The additional element(s) in the parent claim(s).

using a trained Bidirectional Encoder Representations from Transformers (BERT) model. (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

The additional element(s) in the parent claim(s).

using a trained Bidirectional Encoder Representations from Transformers (BERT) model. (BERT is well-understood, routine, conventional. For example, Kui Xue et al. (Fine-tuning BERT for Joint Entity and Relation Extraction in Chinese Medical Text, published 2019; hereafter “Xue”) discuss the “well-known BERT language model” [Xue, abstract]. Other references which support the conclusion that BERT is well-understood, routine, conventional can be found in the conclusion of a previous Office action.)

Claim 4

Step 2A Prong 1. The claim recites the following abstract ideas:

The abstract idea(s) in the parent claim(s).

[The method of claim 3, wherein] the matching scores, s_{q,c}, are calculated using

s_{q,c} = Σ_{e_q ∈ E_q} w_{e_q} · max_{e_c ∈ E_c} (v_{e_q} · v_{e_c}) / (‖v_{e_q}‖ ‖v_{e_c}‖),

wherein v_e denotes the vector embedding for each feature/entity e, and w_e is the importance for each identified technical feature/entity, e. (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application: the additional element(s) in the parent claim(s).

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea: the additional element(s) in the parent claim(s).

Claim 5

Step 2A Prong 1. The claim recites the following abstract ideas: the abstract idea(s) in the parent claim(s).

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

The additional element(s) in the parent claim(s).

[The method of claim 4, wherein] training the feature classifier utilizes a positive feature set, P, and an unlabeled feature set, U, where E = P ∪ U, where E is a whole feature set. (This merely indicates the training data that is used for a generically recited training step. In other words, this recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

The additional element(s) in the parent claim(s).

[The method of claim 4, wherein] training the feature classifier utilizes a positive feature set, P, and an unlabeled feature set, U, where E = P ∪ U, where E is a whole feature set. (This merely indicates the training data that is used for a generically recited training step. In other words, this recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Claim 6

Step 2A Prong 1. The claim recites the following abstract ideas: the abstract idea(s) in the parent claim(s).

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

The additional element(s) in the parent claim(s).

[The method of claim 4, wherein] matched documents are utilized to train an entity importance model H(v_e). (This amounts to an indication that the training data includes “matched documents”. In other words, this recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

The additional element(s) in the parent claim(s).

[The method of claim 4, wherein] matched documents are utilized to train an entity importance model H(v_e). (This amounts to an indication that the training data includes “matched documents”. In other words, this recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Claim 7

Step 2A Prong 1. The claim recites the following abstract ideas:

The abstract idea(s) in the parent claim(s).

based on a loss function, L(t) = max(0, (1 - s_{i,p}) - (1 - s_{i,q} + α). (This recites a mathematical concept. See MPEP 2106.04(a)(2)(I).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

The additional element(s) in the parent claim(s).

[The method of claim 6, wherein] parameters of entity importance model H(v_e) are tuned (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

The additional element(s) in the parent claim(s).

[The method of claim 6, wherein] parameters of entity importance model H(v_e) are tuned (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

Claim 8

Step 1. The claim and its dependents 9-14 fall under the statutory category of machines.

Step 2A Prong 1. The claim recites the following abstract ideas:

identify technical features; (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

calculating an importance value for each identified technical feature, the calculating includes identifying noun phrases and generating vector embeddings of the noun phrases; (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

segments text from the specification sheet and text from the plurality of descriptive sheets into sentences (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

identifies the technical features in the first set of technical features and the technical features in the second set of technical features; (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
calculate an importance for each identified technical feature in the first set of technical features represented by the vector embeddings and reach identified technical feature in the second set of technical features represented by the vector embeddings (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).) calculate a matching score between the specification sheet and each of the plurality of descriptive sheets based on the calculated importance of each identified technical feature in the first set of technical features and each identified technical feature in the second set of technical features, (This recites a mathematical concept (cf. [specification, 0059]) and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).) Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application: A computer system for detail matching, comprising: one or more processors; a computer memory in electronic communication with the one or more processors, and a display screen in electronic communication with the computer memory and the one or more processors; wherein the computer memory includes: (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).) a feature classifier trained to [identify technical features;] a neural network model configured as a trained importance calculator for [calculating an importance value for each identified technical feature;] (These are generically recited training steps. In other words, this recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).) 
text data including a specification sheet including a plurality of technical features including a first set of technical features, and a plurality of descriptive sheets each including a plurality of technical features including a second set of technical features, (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).) wherein a text segmentor [segments text…] and the trained feature classifier [identifies the technical features…] (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).) a feature importance calculator [to calculate an importance…] using the trained feature importance calculator; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).) and a feature matching system to [calculate a matching score…] (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).) wherein a closest matching product is presented to a user on the display screen. (This recites insignificant extra-solution activity. See MPEP 2106.05(g).) Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea: A computer system for detail matching, comprising: one or more processors; a computer memory in electronic communication with the one or more processors, and a display screen in electronic communication with the computer memory and the one or more processors; wherein the computer memory includes: (This recites generic computing components for performing an abstract idea. 
See MPEP 2106.05(f)(2).)

a feature classifier trained to [identify technical features;] a neural network model configured as a trained importance calculator for [calculating an importance value for each identified technical feature;] (These are generically recited training steps. In other words, this recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

text data including a specification sheet including a plurality of technical features including a first set of technical features, and a plurality of descriptive sheets each including a plurality of technical features including a second set of technical features, (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

wherein a text segmentor [segments text…] and the trained feature classifier [identifies the technical features…] (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

a feature importance calculator [to calculate an importance…] using the trained feature importance calculator; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

and a feature matching system to [calculate a matching score…] (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

wherein a closest matching product is presented to a user on the display screen. (The insignificant extra-solution activity is well-understood, routine, conventional as it is merely presenting output.
See MPEP 2106.05(d)(II), “Presenting offers”.)

Claims 9-14 inherit limitations from claim 8 and recite additional limitations which are substantially similar to those recited by claims 2-7, respectively, so they are rejected by the same rationale.

Claim 15

Step 1. The claim and its dependents 16-20 fall under the statutory category of machines.

Step 2A Prong 1. The claim recites the following abstract ideas:

identify technical features by identifying noun phrases and generating vector embeddings of the noun phrases; (This recites a mental process that can be performed in the human mind or by a human using pen and paper. A human mind can identify features and/or noun phrases and generate vectors representing noun phrases. See MPEP 2106.04(a)(2)(III).)

calculate an importance value for each identified technical feature represented by the vector embeddings; (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

segmenting text from the specification sheet and segment text from the plurality of descriptive sheets into sentences; (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

identifying a first set of technical features in the specification sheet and a second set of technical features in the plurality of descriptive sheets (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

calculating an importance for each identified technical feature in the first set of technical features represented by the vector embeddings and each identified technical feature in the second set of technical features represented by the vector embeddings (This recites a mathematical concept and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)
and calculating a matching score between the specification sheet and each of the plurality of descriptive sheets based on the importance of each identified technical feature in the first set of technical features and each identified technical feature in the second set of technical features. (This recites a mathematical concept (cf. [specification, 0059]) and a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

A non-transitory computer readable storage medium comprising a computer readable program for detail matching, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

training a feature classifier to [identify…] training a neural network model for a trained importance calculator to [calculate…] (These are generically recited training steps. In other words, this recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

receiving a specification sheet including a plurality of technical features; receiving a plurality of descriptive sheets each including a plurality of technical features; (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

[identifying…] using the trained feature classifier; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
[calculating…] using the trained feature importance calculator; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

A non-transitory computer readable storage medium comprising a computer readable program for detail matching, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

training a feature classifier to [identify…] training a neural network model for a trained importance calculator to [calculate…] (These are generically recited training steps. In other words, this recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

receiving a specification sheet including a plurality of technical features; receiving a plurality of descriptive sheets each including a plurality of technical features; (This insignificant extra-solution activity is well-understood, routine, conventional as it is mere data transfer. See MPEP 2106.05(d)(II), “Receiving or transmitting data over a network” and/or “Storing and retrieving information in memory”.)

[identifying…] using the trained feature classifier; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
[calculating…] using the trained feature importance calculator; (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)

Claims 16-20 inherit limitations from claim 15 and recite additional limitations which are substantially similar to those recited by claims 2-6, respectively, so they are rejected by the same rationale.

Claim Rejections - 35 USC 103

The following is a quotation of 35 USC 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 USC 102(b)(2)(C) for any potential 35 USC 102(a)(2) prior art against the later invention.
Claim(s) 1-3, 8-10, and 15-17 is/are rejected under 35 USC 103 as being unpatentable over Juan HUETLE-FIGUEROA (Measuring semantic similarity of documents with weighted cosine and fuzzy logic, published 2020; hereafter, “Huetle”) in view of Si SUN et al. (Joint Keyphrase Chunking and Salience Ranking with BERT, published 2020; hereafter, “Sun”) and Qun GUO et al. (US20200226367A1, published 2020-07-16; hereafter, “Guo”).

Claim 1

Huetle discloses:

A method of detail matching, comprising: ([Huetle, abstract and section 5]: Huetle discloses a method of “matching of documents” [Huetle, abstract]. More specifically, it describes matching “CVs (resumes) and job descriptions” [Huetle, abstract], and discusses this matching from the perspective of either an employer or a jobseeker [Huetle, section 5 last paragraph].)

a feature classifier to identify technical features by identifying noun phrases ([Huetle, figures 1-2 and pages 2266-2267]: The method of Huetle includes “component (iv)” which performs “keyword extraction” [Huetle, pages 2266-2267 paragraph beginning “Fig. 2”]. In other words, component (iv) is the “feature classifier” of the claim, and the keywords are the “technical features” of the claim. The keywords depicted using highlights in [Huetle, figure 1] are “noun phrases” as required by the claim. See also: “keywords extraction was used to create an individual list of keywords for each document; All the documents have their own keyword list” [Huetle, section 6.2 second paragraph].)

receiving a specification sheet including a plurality of technical features; receiving a plurality of descriptive sheets each including a plurality of technical features; ([Huetle, abstract, figure 1, section 1]: As noted above, Huetle discusses job descriptions and CVs [Huetle, abstract and/or figure 1 elements 3-4]. Moreover, there is a plurality of each (6917 job descriptions and 80 CVs) [Huetle, section 3 first two paragraphs].
From the perspective of an employer, a given job description maps to the “specification sheet” of the claim and the 80 CVs to the “plurality of descriptive sheets” of the claim. From the perspective of a jobseeker, a given CV maps to the “specification sheet” of the claim and the 6917 job descriptions to the “plurality of descriptive sheets” of the claim. While either of these perspectives falls under the broadest reasonable interpretation of the claim, the remainder of the mappings herein use the perspective of the jobseeker for concreteness.)

identifying a first set of technical features in the specification sheet and a second set of technical features in the plurality of descriptive sheets using the trained feature classifier; ([Huetle, section 6.2]: Huetle indicates that “keywords extraction was used to create an individual list of keywords for each document; All the documents have their own keyword list” [Huetle, section 6.2 second paragraph]. In other words, the keywords in a given jobseeker’s CV map to the “first set of technical features” of the claim, and the keywords in job descriptions map to the “second set of technical features” of the claim.)

calculating an importance for each identified technical feature in the first set of technical features [represented by the vector embeddings] and each identified technical feature in the second set of technical features ([Huetle, section 5]: Huetle discloses the use of tf-idf (term frequency-inverse document frequency) to assign weights to keywords in each document [Huetle, section 5]. As tf-idf is a measure of importance of a word to a particular document in a collection of documents, calculating tf-idf maps to “calculating an importance” as recited by the claim. The examiner notes that the combination with Sun below suggests the use of different measures of importance.)
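The tf-idf weighting that Huetle is cited for can be sketched in a few lines. This is an illustrative sketch only, not Huetle's implementation: the function name and the token-list input format are assumptions.

```python
import math
from collections import Counter

def tf_idf_weights(documents):
    """Assign each term in each document a tf-idf weight.

    `documents` is a list of token lists (e.g. one per CV or job
    description). Returns one {term: weight} dict per document.
    """
    n_docs = len(documents)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)  # raw term counts within this document
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights
```

A term appearing in every document gets idf = log(1) = 0, so ubiquitous terms carry no importance; terms concentrated in few documents are weighted up, which is why tf-idf maps naturally onto "calculating an importance" of a feature to a document.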
and calculating a matching score between the specification sheet and each of the plurality of descriptive sheets ([Huetle, figure 1 and sections 5-6]: For a given CV [Huetle, figure 1 element 3], Huetle calculates similarity scores with each of the job descriptions [Huetle, figure 1 element 1]. In other words, each similarity score maps to the “matching score” of the claim. For more information about how similarity scores are calculated, see [Huetle, sections 5-6]; the examiner notes that the mapping for dependent claim 4 below gives an alternative way of mapping the “matching score” that fits the further limitations recited therein.)

based on the importance of each identified technical feature in the first set of technical features and each identified technical feature in the second set of technical features. ([Huetle, sections 5-6]: The similarity scores of Huetle are based on tf-idf [Huetle, section 5 equation (3) and/or section 6.1], and are therefore “based on the importance of each identified technical feature” as mapped above.)

While Huetle discloses a keyword extractor as well as calculating importance values, it does not distinctly disclose training a keyword extractor or a neural network for calculating importance values, and it does not distinctly discuss vector embeddings.
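The similarity score Huetle computes is a weighted cosine over term weights. A minimal sketch, assuming {term: weight} dicts (e.g. tf-idf output) as input; Huetle's fuzzy-logic refinement is not reproduced here.

```python
import math

def matching_score(weights_a, weights_b):
    """Cosine similarity between two {term: weight} vectors.

    Terms absent from a document implicitly have weight 0, so only
    shared terms contribute to the dot product.
    """
    dot = sum(w * weights_b[t] for t, w in weights_a.items() if t in weights_b)
    norm_a = math.sqrt(sum(w * w for w in weights_a.values()))
    norm_b = math.sqrt(sum(w * w for w in weights_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Ranking each of the plurality of descriptive sheets by this score against the single specification sheet then yields the closest match, which is the structure both Huetle and the claim describe.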
In other words, Huetle might not distinctly disclose:

training [a feature classifier] and generating vector embeddings of the noun phrases;

training a neural network model for a trained importance calculator to calculate an importance value for each identified technical feature represented by the vector embeddings;

… [calculating an importance for each identified technical feature in the first set of technical features] represented by the vector embeddings [and each identified technical feature in the second set of technical features] represented by the vector embeddings using the trained feature importance calculator;

segmenting text from the specification sheet and segmenting text from the plurality of descriptive sheets into sentences;

Sun is in the field of natural language processing. Moreover, Huetle in view of Sun discloses:

training [a feature classifier] ([Sun, abstract and section 2]: Sun discloses “a multitask BERT-based model for keyphrase extraction” [Sun, abstract] and a method of training this model [Sun, abstract and/or section 2 paragraph beginning “Joint Training”]. In the combination, Sun’s model is used as the keyword extractor described above.)

and generating vector embeddings of the noun phrases; ([Sun, section 2]: Sun discloses “us[ing] BERT to encode D to a sequence of vectors H = {h_1, …, h_i, …, h_n}” [Sun, section 2 paragraph beginning “Token Embedding”]. In other words, the vectors h_i corresponding to keywords map to the “vector embeddings” of the claim.)
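For a sense of what "vector embeddings of the noun phrases" means concretely, here is a toy hashing-trick embedder. It is purely illustrative: Sun's model uses trained contextual BERT vectors h_i, whereas this stand-in is deterministic and untrained, and the dimension and hashing scheme are arbitrary assumptions.

```python
import hashlib

def embed_phrase(phrase, dim=8):
    """Map a noun phrase to a fixed-length vector.

    Each token contributes a pseudo-random vector derived from its
    SHA-256 digest; the phrase embedding is the sum over tokens.
    A deterministic stand-in for a learned contextual encoder.
    """
    vec = [0.0] * dim
    for token in phrase.lower().split():
        digest = hashlib.sha256(token.encode("utf-8")).digest()
        for i in range(dim):
            # Scale each digest byte into roughly [-0.5, 0.5].
            vec[i] += digest[i] / 255.0 - 0.5
    return vec
```

The point of either version is the same: once phrases live in a common vector space, downstream components (an importance calculator, a matcher) can operate on the vectors rather than on raw text.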
training a neural network model for a trained importance calculator to calculate an importance value for each identified technical feature represented by the vector embeddings; … [calculating an importance for each identified technical feature in the first set of technical features] represented by the vector embeddings [and each identified technical feature in the second set of technical features] represented by the vector embeddings using the trained feature importance calculator; ([Sun, section 2]: The keyphrase extraction model disclosed by Sun is a neural network containing a “ranking network” [Sun, section 2 paragraph beginning “This is achieved”], where the ranking network generates “salience scores” [Sun, section 2 paragraph beginning “Ranking Network”]. The salience scores are based on the vector embeddings h_i [Sun, section 2 equations (1-4)]. In other words, the ranking network of Sun (or, alternatively, the entire network which contains the ranking network) maps to the “neural network model” and the “trained importance calculator” of the claim. In the combination, a salience score as in Sun is used in place of tf-idf as described in Huetle and maps to the “importance” and “importance value” of the claim.)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the document matching method of Huetle with the keyphrase extractor of Sun because the latter “has advantages in predicting long keyphrases and extracting phrases that are not entities but also meaningful” [Sun, abstract], so the combination would be more effective overall.

Huetle in view of Sun might not distinctly disclose:

segmenting text from the specification sheet and segmenting text from the plurality of descriptive sheets into sentences;
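The ranking-network idea the examiner maps above (salience scores computed from phrase embeddings) reduces to scoring each vector with learned parameters. A minimal linear sketch, where `weights` and `bias` stand in for Sun's trained feed-forward parameters; Sun's actual network is deeper and trained jointly with the chunker.

```python
def salience_scores(embeddings, weights, bias=0.0):
    """Score each candidate-phrase embedding with a linear ranking head.

    `embeddings` is a list of equal-length vectors; `weights`/`bias`
    stand in for learned parameters. Higher score = more salient.
    """
    return [sum(w * x for w, x in zip(weights, h)) + bias for h in embeddings]

def rank_phrases(phrases, embeddings, weights):
    """Return phrases sorted by descending salience score."""
    scores = salience_scores(embeddings, weights)
    return [p for _, p in sorted(zip(scores, phrases), reverse=True)]
```

In the examiner's proposed combination, such a score simply replaces tf-idf as the "importance value" fed into the downstream matching-score calculation.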

Prosecution Timeline

Apr 14, 2022: Application Filed
Jun 02, 2025: Non-Final Rejection — §101, §103, §112
Aug 19, 2025: Interview Requested
Aug 27, 2025: Examiner Interview Summary
Sep 04, 2025: Response Filed
Oct 07, 2025: Final Rejection — §101, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview (+0.0%): 0%
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
