Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Office Action is in response to the Amendment filed on November 12, 2025, which paper has been placed of record in the file.
2. Claims 1-20 are pending in this application.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding independent claim 11, the claim is analyzed as follows:
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites an apparatus for analyzing consumer text responses. Thus, the claim is to a machine, which is one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
The claim recites an apparatus for analyzing consumer text responses. The claim recites the steps of: determining labels for one or more of the consumer text responses; determining one or more text segments of each of the consumer text responses; generating an embedding of each of the text segments; generating a graph comprising a first set of nodes… and a plurality of edges between the first set of nodes and the second set of nodes; initializing weights of edges between the first set of nodes and the second set of nodes…; and learning updated weights of the edges based on a predetermined objective. These steps, as drafted, constitute a process that, under its broadest reasonable interpretation when read in light of the Specification, covers performance of the limitations in the mind, and can practically be performed by a human mentally or with pen and paper, but for the recitation of generic computer components. That is, other than reciting “a computer/processor”, nothing in the claim elements precludes the steps from practically being performed in the mind. The mere nominal recitation of generic computing devices does not take the claim limitations out of the Mental Processes grouping of abstract ideas. Thus, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). See MPEP 2106.04(a)(2), subsection III.
Therefore, the claim recites an abstract idea. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
The claim recites the additional elements of “receiving a plurality of consumer text responses” and “using a graph neural network to learn updated weights of the edges.” The claim also recites that the steps of “receiving a plurality of consumer text responses, determining labels for one or more of the consumer text responses; determining one or more text segments of each of the consumer text responses; generating an embedding of each of the text segments; generating a graph comprising a first set of nodes… and a plurality of edges between the first set of nodes and the second set of nodes; initializing weights of edges between the first set of nodes and the second set of nodes…, and to learn updated weights of the edges based on a predetermined objective” are performed by one or more processors.
The additional element “receiving a plurality of consumer text responses” is mere data gathering and transmitting recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exception require such data gathering and transmitting, and, as such, this limitation does not impose any meaningful limits on the claim. It amounts to necessary data gathering and outputting. See MPEP 2106.05. Moreover, this additional element does not provide any improvement to the technology or to the functioning of the computer; it is merely used as a general means of gathering and transmitting data.
The additional element “using a graph neural network to learn updated weights of the edges” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The additional element “using a graph neural network to learn updated weights of the edges” is used to generally apply the abstract idea without placing any limits on how the graph neural network functions. Rather, this limitation only recites the outcome of “to learn updated weights of the edges” and does not include any details about how the solution is accomplished. See MPEP 2106.05(f).
The additional element “using a graph neural network to learn updated weights of the edges” also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “using a graph neural network to learn updated weights of the edges” limits the identified judicial exceptions “to learn updated weights of the edges”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (graph neural network) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Further, the steps of “receiving a plurality of consumer text responses, determining labels for one or more of the consumer text responses; determining one or more text segments of each of the consumer text responses; generating an embedding of each of the text segments; generating a graph…; initializing weights of edges between the first set of nodes and the second set of nodes, and to learn updated weights of the edges” are recited as being performed by the processor. The processor is recited at a high level of generality. In the limitation “receiving a plurality of consumer text responses”, the processor is used as a tool to perform the generic computer function of gathering and transmitting data. See MPEP 2106.05(f). In the limitations “determining labels for one or more of the consumer text responses; determining one or more text segments of each of the consumer text responses; generating an embedding of each of the text segments; generating a graph…; initializing weights of edges between the first set of nodes and the second set of nodes, and to learn updated weights of the edges”, the processor is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). The additional elements recite generic computer components (the processor, a memory, and software programming instructions) that are recited at a high level of generality and merely perform, conduct, carry out, implement, and/or narrow the abstract idea itself.
Accordingly, the additional elements, evaluated individually and in combination, do not integrate the abstract idea into a practical application because they comprise limitations that are not indicative of integration into a practical application, such as adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO). Accordingly, the claim is directed to the judicial exception (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As explained with respect to Step 2A, Prong Two, the additional element of “using a graph neural network to learn updated weights of the edges” is at best mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f).
The additional element “receiving a plurality of consumer text responses” was found to be insignificant extra-solution activity in Step 2A, Prong Two, because it was determined to be an insignificant limitation amounting to necessary data gathering and transmitting. However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well-understood, routine, and conventional in the field. See MPEP 2106.05(g).
As discussed in Step 2A, Prong Two above, the additional element of “receiving a plurality of consumer text responses” is recited at a high level of generality. This element amounts to gathering and transmitting data over a network and is well-understood, routine, and conventional activity. See MPEP 2106.05(d), subsection II. The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
As discussed in Step 2A, Prong Two above, the recitation of the processor to perform limitations “receiving a plurality of consumer text responses, determining labels for one or more of the consumer text responses; determining one or more text segments of each of the consumer text responses; generating an embedding of each of the text segments; generating a graph…; initializing weights of edges between the first set of nodes and the second set of nodes, and to learn updated weights of the edges”, amounts to no more than mere instructions to apply the exception using a generic computer component.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. Therefore, the claim is not patent eligible. (Step 2B: NO).
Regarding independent claim 1, Alice Corp. establishes that the same analysis applies to all categories of claims. Therefore, independent claim 1, directed to a method, is also rejected as ineligible subject matter under 35 U.S.C. 101 for substantially the same reasons as independent apparatus claim 11.
Regarding dependent claims 2-10 and 12-20, the dependent claims do not impart patent eligibility to the abstract idea of the independent claims. Rather, the dependent claims further narrow the abstract idea, and the narrower scope does not change the outcome of the two-part Mayo test. Narrowing the scope of the claims is not enough to impart eligibility, as each claim is still interpreted as an abstract idea, merely a narrower one.
Regarding dependent claims 2 and 12, the claims simply refine the abstract idea by further reciting wherein the graph further comprises a third set of nodes comprising product attributes associated with the consumer text responses, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 3 and 13, the claims simply refine the abstract idea by further reciting wherein the graph further comprises a third set of nodes comprising demographic information associated with consumers associated with the consumer text responses, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 4 and 14, the claims simply refine the abstract idea by further reciting wherein the one or more text segments comprise one or more of causes, effects, and needs associated with the consumer text responses, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 5 and 15, the claims simply refine the abstract idea by further reciting generating the embedding of each of the text segments by determining a vectorization of each of the text segments using natural language processing, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 6 and 16, the claims simply refine the abstract idea by further reciting determining the one or more text segments based on a linguistic structure of the consumer text responses, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 7 and 17, the claims simply refine the abstract idea by further reciting determining the labels based on numerical ratings associated with the consumer text responses, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 8 and 18, the claims simply refine the abstract idea by further reciting determining the labels based on problem categories determined by an expert, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 9 and 19, the claims simply refine the abstract idea by further reciting determining the latent components by performing cluster analysis on the embeddings of each of the text segments, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 10 and 20, the claims simply refine the abstract idea by further reciting outputting a predetermined number of items of information most relevant to the predetermined objective based on the updated weights, which falls within the Mental Processes grouping of abstract ideas as described above for the independent claims. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two), that results in the claims being directed to patent-eligible subject matter, or that includes an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Therefore, none of the dependent claims alone or as an ordered combination add limitations that qualify as significantly more than the abstract idea.
Accordingly, claims 1-20 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more, and are rejected under 35 USC § 101 as being directed to non-statutory subject matter.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Norton et al. (hereinafter Norton, US 2020/0279017) in view of Yoon et al. (hereinafter Yoon, US 2021/0058345), and further in view of Subraveti et al. (hereinafter Subraveti, US 2024/0012844).
Regarding claim 1, Norton discloses a method comprising:
receiving a plurality of consumer text responses (para [0025], the intelligent-text-insight system provides textual responses to a response-extraction-neural network to generate extracted sentences that correspond to selected sentences from the textual responses);
determining labels for one or more of the consumer text responses (para [0025], The intelligent-text-insight system further determines a sentiment indicator for each extracted sentence to sort the extracted sentences);
determining one or more text segments of each of the consumer text responses (para [0103], To illustrate sorting the extracted sentences 340a-340n, in some implementations, the intelligent-text-insight system 106 groups the extracted sentences 340a-340n into (i) a positive segment comprising extracted sentences corresponding to sentiment scores indicating positive sentiments and (ii) a negative segment comprising extracted sentences corresponding to sentiment scores indicating negative sentiments);
generating an embedding of each of the text segments (para [0187], generating the first cluster of extracted sentences corresponding to positive sentiment scores and to a first linguistic context embedding; and generating the second cluster of extracted sentences corresponding to negative sentiment scores and to a second linguistic context embedding).
Norton does not disclose the following limitations; however, Yoon discloses:
generating a graph comprising a first set of nodes comprising latent components based on the embedding of each of the text segments, a second set of nodes comprising the consumer text responses, and a plurality of edges between the first set of nodes and the second set of nodes (para [0019], the support identification system generates a graph topology having edge connections between a plurality of nodes corresponding to a plurality of text phrases and a query. In particular, each node in the graph topology can correspond to an individual text phrase or the query text. In one or more embodiments, the support identification system generates the graph topology by generating edge connections between particular sets of nodes);
initializing weights of edges between the first set of nodes and the second set of nodes based on the embedding of each of the text segments (para [0022], the support identification graph neural network can propagate node representations among the plurality of nodes based on the determined similarities and then update the node representation of each node. For instance, the support identification system can compare node representations while applying learned edge weights of the support identification graph neural network to propagate and modify node representations across edge connections); and
using a graph neural network to learn updated weights of the edges based on a predetermined objective (para [0023], the support identification graph neural network can propagate node representations iteratively, updating the node representation of a given node with each iteration. Indeed, in one or more embodiments, the support identification system applies learned update weights from the support identification graph neural network as part of a skip connection to determine an amount or degree to update the node representation between iterations. By applying attention weights and learned update weights, the support identification system can iteratively propagate and update node representations across edge connections. The support identification graph neural network can then identify supporting text phrases based on the updated node representation of each node).
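For purposes of illustration only, and not asserted to be the actual method of the claims or of Yoon, the type of graph construction and embedding-based edge-weight initialization described above can be sketched as follows (all node names, vectors, and values are hypothetical):

```python
import math

# Hypothetical embeddings: latent-component nodes (first set of nodes)
# and consumer-text-response nodes (second set of nodes).
components = {"c1": [1.0, 0.0], "c2": [0.0, 1.0]}
responses = {"r1": [0.9, 0.1], "r2": [0.2, 0.8]}

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Create an edge between every component node and every response node,
# with the initial weight set from embedding similarity -- one plausible
# reading of "initializing weights of edges ... based on the embedding".
edges = {(c, r): cosine(cv, rv)
         for c, cv in components.items()
         for r, rv in responses.items()}

# Edges between the most similar node pairs carry the largest weights.
assert edges[("c1", "r1")] > edges[("c1", "r2")]
```

In this sketch, a graph neural network would then iteratively propagate node representations across these edges and update the weights, as Yoon's paragraphs [0022]-[0023] describe.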
Norton and Yoon do not disclose the following limitation; however, Subraveti discloses:
wherein the predetermined objective is based on the labels (para [0104], Training of a machine learning model may include an iterative process that includes iterations of making determinations, monitoring the performance of the machine learning model using the objective function, and backpropagation to adjust the weights (e.g., weights, kernel values, coefficients) in various nodes 510. For example, a computing device may receive a training set that includes past survey responses or other past text files with known outcomes. Each training sample in the training set may be assigned with labels indicating the outcomes such as the sentiments. The computing device, in a forward propagation, may use the machine learning model to generate predicted sentiments. The computing device may compare the predicted sentiments with the labels of the training sample. The computing device may adjust, in a backpropagation, the weights of the machine learning model based on the comparison).
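As a purely illustrative sketch (not the actual training procedure of Subraveti or the claims; all names, values, and the learning rate are hypothetical), the label-driven forward-pass / compare / adjust-weights loop described in paragraph [0104] can be shown in miniature:

```python
# One weight per response edge, plus supervised labels (1 = positive
# sentiment, 0 = negative). The "objective" is squared error against
# the labels, loosely mirroring the backpropagation loop quoted above.
weights = {"r1": 0.5, "r2": 0.5}
labels = {"r1": 1.0, "r2": 0.0}
lr = 0.5  # learning rate (assumed value)

for _ in range(20):
    for r in weights:
        pred = weights[r]              # trivial "forward pass"
        grad = 2 * (pred - labels[r])  # gradient of squared error
        weights[r] -= lr * grad        # weight-adjustment step

# Weights converge toward the label-defined objective.
assert weights["r1"] > 0.99 and weights["r2"] < 0.01
```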
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Norton to incorporate the features taught by Yoon and Subraveti above, for the purpose of providing more flexible and accurate identification of text responses by applying a graph neural network (see Yoon, para [0003], “…utilize a graph neural network to flexibly and accurately identify supporting text phrases that can be utilized to generate accurate digital query responses”). Since Norton discloses “By using a response-extraction-neural network to extract sentences from textual responses, the disclosed systems can generate a response summary of textual responses based on sentiment indicators corresponding to the textual responses” (para [0005]), Yoon teaches using a graph neural network to learn updated weights of the edges, and Subraveti teaches using a graph neural network to generate an embedding for each survey response, wherein the predetermined objective is based on the labels (paras [0071-0073]), as described above, one of ordinary skill in the art would have recognized that the combination of Norton, Yoon, and Subraveti would have yielded predictable results in identifying text responses by applying a graph neural network.
Regarding claims 2-3, Norton does not disclose the method of claim 1, wherein the graph further comprises a third set of nodes comprising product attributes associated with the consumer text responses, or a third set of nodes comprising demographic information associated with consumers associated with the consumer text responses. However, Norton discloses product attributes associated with the consumer text responses and demographic information associated with consumers associated with the consumer text responses (para [0002], reviews describing unusually good or bad experiences with a product—including one-of-a-kind manufacturing defects or service experiences; para [0233], A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. Additionally, a user profile may include financial and billing information of users (e.g., users 116a and 116n, customers, etc.)). Yoon discloses a graph topology that includes edge connections between a plurality of nodes (para [0018]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Norton to add a third set of nodes comprising product attributes associated with the consumer text responses and demographic information associated with consumers associated with the consumer text responses to Yoon's graph neural network, for the purpose of providing more flexible and accurate identification of text responses by applying a graph neural network (see Yoon, para [0003], “…utilize a graph neural network to flexibly and accurately identify supporting text phrases that can be utilized to generate accurate digital query responses”). Since Norton discloses “By using a response-extraction-neural network to extract sentences from textual responses, the disclosed systems can generate a response summary of textual responses based on sentiment indicators corresponding to the textual responses” (para [0005]) and Yoon teaches using a graph neural network to learn updated weights of the edges, as described above, one of ordinary skill in the art would have recognized that the combination of Norton and Yoon would have yielded predictable results in identifying text responses by applying a graph neural network.
Regarding claim 4, Norton discloses the method of claim 1, wherein the one or more text segments comprise one or more of causes, effects, and needs associated with the consumer text responses (para [0002], By selecting for comments based on voting or views, conventional comment-review systems can also surface textual comments that capture isolated user experiences unrepresentative of the body of comments, such as reviews describing unusually good or bad experiences with a product—including one-of-a-kind manufacturing defects or service experiences).
Regarding claim 5, Norton discloses the method of claim 1, further comprising:
generating the embedding of each of the text segments by determining a vectorization of each of the text segments using natural language processing (para [0105], The intelligent-text-insight system 106 subsequently generates the clusters of extracted sentences 348a-348n by identifying extracted sentences corresponding to each of the linguistic-context embeddings. In some such cases, the intelligent-text-insight system 106 sets the number of vector dimensions to 128 for a smooth-inverse-frequency embedding. By applying a smooth-inverse-frequency algorithm, the intelligent-text-insight system 106 generates clusters of extracted sentences with a diverse set of linguistic vectors).
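For illustration only, a far simpler stand-in for the smooth-inverse-frequency embeddings that Norton's paragraph [0105] describes is a term-frequency vectorization, which shows what "determining a vectorization of each of the text segments" can mean at its most basic (segments and vocabulary below are hypothetical):

```python
from collections import Counter

# Hypothetical text segments drawn from consumer responses.
segments = ["battery died fast", "battery lasted long"]

# Build a fixed vocabulary across all segments.
vocab = sorted({w for s in segments for w in s.split()})

def vectorize(segment):
    # Map a segment to a fixed-length vector of word counts.
    counts = Counter(segment.split())
    return [counts[w] for w in vocab]

vectors = [vectorize(s) for s in segments]
# Each segment becomes a numeric vector over the shared vocabulary.
assert all(len(v) == len(vocab) for v in vectors)
```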
Regarding claim 6, Norton discloses the method of claim 1, further comprising:
determining the one or more text segments based on a linguistic structure of the consumer text responses (para [0104], Accordingly, the linguistic contexts 350a-350n may be word embeddings, phrase embeddings, or sentence embeddings from computational linguistics).
Regarding claim 7, Norton discloses the method of claim 1, further comprising:
determining the labels based on numerical ratings associated with the consumer text responses (para [0066], a response group of textual response includes textual responses corresponding to a particular range of sentiment scores or corresponding to a positive sentiment indicator, a negative sentiment indicator, a neutral sentiment indicator, or a mixed sentiment indicator).
Regarding claim 8, Norton discloses the method of claim 1, further comprising:
determining the labels based on problem categories determined by an expert (para [0007], Based on the sentiment indicator and the topic for each textual response, the disclosed systems generate a first response group of textual responses and a second response group of textual responses).
Regarding claim 9, Norton discloses the method of claim 1, further comprising:
determining the latent components by performing cluster analysis on the embeddings of each of the text segments (para [0028], Based on such sentiment scores, the intelligent-text-insight system optionally generates clusters of extracted sentences. For instance, in some cases, the intelligent-text-insight system generates clusters of extracted sentences that correspond to positive or negative sentiment scores and to a first or second linguistic context embedding).
Regarding claim 10, Norton discloses the method of claim 1, further comprising:
outputting a predetermined number of items of information most relevant to the predetermined objective based on the updated weights (para [0037], the intelligent-text-insight system selects representative-textual responses from response groups of textual responses for display within a graphical user interface. For instance, in some embodiments, the intelligent-text-insight system selects first and second representative-textual responses respectively from first and second response groups based on a textual quality score and relevancy parameter for each textual response within their respective response groups).
Regarding claims 11-20, Norton discloses an apparatus comprising one or more processors (see Figure 1, Processor 1002) configured to perform the methods described in claims 1-10 above; claims 11-20 are therefore rejected under the same rationale.
Response to Arguments/Amendment
7. Applicant's arguments with respect to claims 1-20 have been fully considered but are moot in view of the new grounds of rejection.
I. Claim Rejections - 35 USC § 101
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
In response to the Applicant’s argument that the claimed features cannot practically be performed in the human mind, in particular “generating an embedding of each of the text segments”, the Examiner respectfully disagrees and submits that the claims do not recite that the text segments are vectorized using natural language processing techniques; thus, “generating an embedding of each of the text segments” can be practically performed by a human in their mind or with pen and paper, but for the recitation of generic computer components. That is, other than reciting “a computer/processor”, nothing in the claim elements precludes the steps from practically being performed in the mind. The mere nominal recitation of generic computing devices does not take the claim limitations out of the Mental Processes grouping of abstract ideas. Thus, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, it falls within the “Mental Processes” grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). See MPEP 2106.04(a)(2), subsection III. Therefore, the claim recites an abstract idea.
The step of “using a graph neural network to learn updated weights of the edges based on a predetermined objective, wherein the predetermined objective is based on the labels” is an additional element and is analyzed under Step 2A, Prong Two. This additional element merely applies the abstract idea generally, without placing any limits on how the graph neural network functions. Rather, this limitation only recites the outcome of “to learn updated weights of the edges” and does not include any details about how that outcome is accomplished. See MPEP 2106.05(f). The additional element “using a graph neural network to learn updated weights of the edges” also merely indicates a field of use or technological environment in which the judicial exception is performed. Although this additional element limits the identified judicial exception to learning updated weights of the edges, such a limitation merely confines the use of the abstract idea to a particular technological environment (a graph neural network) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Moreover, this additional element does not provide any improvement to computer functionality or to the technology. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.04(a)(2), subsection III.
Accordingly, the 101 rejection is maintained.
II. Claim Rejections - 35 USC § 103
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The new ground of rejection under 35 U.S.C. 103 is set forth above.
Conclusion
8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
9. Claims 1-20 are rejected.
10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Cunningham et al. (US 2025/0094708) disclose methods for generating content-item-specific large language model responses from content items by segmenting a content item and selecting relevant sections of the content item to provide to a large language model to generate a corresponding output.
DeVos et al. (US 2024/0370517) disclose a system and method that generate answers to user queries by providing natural language responses containing direct citations to primary sources.
Gore et al. (US 2024/0320424) disclose a technology that uses context models to improve the performance of language models. The technology may introduce one or more context insights into the prompts for language models to personalize the outputs of the language models for particular users.
Muppalla et al. (US 2023/0297779) disclose a method of filtering an original set of user-provided text responses across a network.
Hou et al. (US 2023/0267322) disclose performing an edge union to obtain a merged graph, combining the embedding and the merged graph to obtain a relation graph, applying a relation graph neural network to the relation graph, and extracting a hidden representation of the aspect term from the updated relation graph neural network.
Chopra et al. (US 2023/0112369) disclose methods and apparatuses are described for automated analysis of customer interaction text to generate customer intent information and a hierarchy of customer issues.
Wang et al. (US 2022/0171963) disclose a method includes constructing a hierarchal graph associated with a document.
Creed et al. (US 2001/0081717) disclose methods and apparatus for generating a graph neural network (GNN) model based on an entity-entity graph.
BaderEddin et al. (US 2020/0334697) disclose generating responses for survey questions using user-generated text blocks (i.e., segments of text extracted from messages).
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner NGA B NGUYEN whose telephone number is (571) 272-6796. The examiner can normally be reached on Monday-Friday 7AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Beth Boswell can be reached on (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NGA B NGUYEN/Primary Examiner, Art Unit 3625 March 3, 2026