DETAILED ACTION
This Office action is in response to the original application filed on 06/04/2025.
Claims 1-20 are pending. Claims 1-20 are rejected.
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 06/04/2025, 06/10/2025, 08/29/2025, and 02/20/2026 were filed prior to this Office action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
The Examiner has attached a new copy of NPL reference #1 from IDS 06/10/2025.
The Examiner was provided 25 NPL listings from IDS 02/20/2026 but was unable to locate reference #20, either in the application or online. Reference #20 has been crossed out accordingly; the Examiner can consider a subsequently provided copy.
Examiner Notes/Objections
Claim 1 is objected to for listing “1.” twice.
Appropriate correction is required.
Statutory Review under 35 USC § 101
Claims 1-9 are directed toward a system and have been reviewed.
Claims 1-9 initially appear to be statutory, as the system includes hardware (a non-transitory memory).
However, claims 1-6 and 8-9 do not appear to be patent-eligible at this time as they perform an abstract idea without significantly more based on the current patent subject matter eligibility determination.
Claim 7 appears to integrate the abstract idea into a practical application based on Step 2A, Prong Two of the current patent subject matter eligibility determination and is patent-eligible.
Claims 10-15 are directed towards a method and have been reviewed.
Claims 10-15 do not appear to be patent-eligible at this time as they perform an abstract idea without significantly more based on the current patent subject matter eligibility determination.
Claims 16-20 are directed toward an article of manufacture and have been reviewed.
Claims 16-20 initially appear to be statutory, as the article of manufacture excludes transitory signals (the claims recite a non-transitory computer readable medium).
However, claims 16-20 do not appear to be patent-eligible at this time as they perform an abstract idea without significantly more based on the current patent subject matter eligibility determination.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 and 8-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites receiving an attribute extraction request, generating at least one prompt, extracting a value, generating a final attribute set, and implementing an attribute-based automated process, which is an abstract idea.
Looking to the instant specification (excluding the summary portion, ¶ 0005-0007, which reflects the language of the claims), “automated process” is mentioned once, in ¶ 0062, which states that “the attribute evaluation engine 378 implements an automated process configured to apply one or more evaluation tests and/or evaluation rules.”
Because the automated process merely applies one or more evaluation tests and/or evaluation rules, the automated process is an evaluation, which is a mental process (including an observation, evaluation, judgment, or opinion), thus falling under an abstract idea.
Regarding the receiving an attribute extraction request, generating at least one prompt, extracting a value, and generating a final attribute set, see MPEP 2106.04(a)(2), Section III, Subsection D, referring to an “application program interface for extracting and processing information from a diversity of types of hard copy documents” (Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356), and MPEP 2106.05(f), referring to “the abstract idea of ‘collecting, displaying, and manipulating data’” (Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017)).
Receiving a request and extracting a value fall under “collecting.”
Generating a prompt and generating a final attribute set fall under “manipulating.”
Step 2A, Prong Two
This judicial exception of receiving an attribute extraction request, generating at least one prompt, extracting a value, generating a final attribute set, and implementing an attribute-based automated process is not integrated into a practical application despite the generically recited computer elements shown below:
a processor
at least one generative prompt
at least one generative model
The generically recited computer elements amount to implementing the abstract idea on a computer, merely using a computer as a tool to perform an abstract idea, or generally linking the use of a judicial exception to a particular technological environment or field of use as seen below.
wherein the processor is configured to read a set of instructions to:
configure at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request;
This additional element merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.04(d)).
generate at least one generative prompt based on the attribute extraction request and the item element data;
The prompt is specified to be a generative prompt; generation of the prompt for use by the at least one generative model in the later “configure” step generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d)).
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, despite the additional elements shown below:
wherein the processor is configured to read a set of instructions to:
configure at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request;
This additional element merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
generate at least one generative prompt based on the attribute extraction request and the item element data;
The prompt is specified to be a generative prompt; generation of the prompt for use by the at least one generative model in the later “configure” step generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
a non-transitory memory;
These elements store and retrieve information in memory, which are well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Claim 2 recites determining a classifier, receiving an attribute extraction template, and generating the at least one generative prompt based on the template.
Determining a classifier, receiving an attribute extraction template, and generating the at least one prompt based on the template fall under an abstract idea; the prompt being generative generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d)).
Claim 3 recites receiving attribute model configuration data and determining the one or more attributes to be extracted, the data collection and the mental determination falling under an abstract idea.
Claim 4 specifies that there are one or more configurations for the at least one generative model and that the at least one generative model is configured based on the configurations; however, claim 4 does not add a meaningful limitation, as these are merely nominal or token extra-solution components of the claim and serve only as an attempt to generally link the judicial exception to a particular technological environment (see MPEP 2106.05(h)).
Claim 5 introduces determining at least a portion of the at least one generative prompt based on an associated type of attribute, which falls under a mental process and is thus an abstract idea.
Claim 6 appears to specify that the prompt comprises a plurality of attribute definitions and that the at least one generative model is further configured based on the plurality of attribute definitions; however, claim 6 does not add a meaningful limitation, as these are merely nominal or token extra-solution components of the claim and serve only as an attempt to generally link the judicial exception to a particular technological environment (see MPEP 2106.05(h)).
Claim 8 introduces extracting a confidence value and further specifies that the generated final attribute set must include the value associated with a highest corresponding confidence value; the extracting is an abstract idea, and the latter element does not add a meaningful limitation, as it is merely a nominal or token extra-solution component of the claim and serves only as an attempt to generally link the judicial exception to a particular technological environment (see MPEP 2106.05(h)).
Claim 9 describes displaying data in conjunction with interface elements, an additional element that is mere generic transmission and presentation of collected and analyzed data, which is considered to be insignificant extra-solution activity (see MPEP 2106.05(g)).
Claims 10-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 10 recites receiving an attribute extraction request, generating at least one prompt, extracting a value, generating a final attribute set, and implementing an attribute-based automated process, which is an abstract idea.
Looking to the instant specification (excluding the summary portion, ¶ 0005-0007, which reflects the language of the claims), “automated process” is mentioned once, in ¶ 0062, which states that “the attribute evaluation engine 378 implements an automated process configured to apply one or more evaluation tests and/or evaluation rules.”
Because the automated process merely applies one or more evaluation tests and/or evaluation rules, the automated process is an evaluation, which is a mental process (including an observation, evaluation, judgment, or opinion), thus falling under an abstract idea.
Regarding the receiving an attribute extraction request, generating at least one prompt, extracting a value, and generating a final attribute set, see MPEP 2106.04(a)(2), Section III, Subsection D, referring to an “application program interface for extracting and processing information from a diversity of types of hard copy documents” (Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356), and MPEP 2106.05(f), referring to “the abstract idea of ‘collecting, displaying, and manipulating data’” (Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017)).
Receiving a request and extracting a value fall under “collecting.”
Generating a prompt and generating a final attribute set fall under “manipulating.”
Step 2A, Prong Two
This judicial exception of receiving an attribute extraction request, generating at least one prompt, extracting a value, generating a final attribute set, and implementing an attribute-based automated process is not integrated into a practical application despite the generically recited computer elements shown below:
at least one generative prompt
at least one generative model
The generically recited computer elements amount to implementing the abstract idea on a computer, merely using a computer as a tool to perform an abstract idea, or generally linking the use of a judicial exception to a particular technological environment or field of use as seen below.
configuring at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request;
This additional element merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.04(d)).
generating at least one generative prompt based on the attribute extraction request and the item element data;
The prompt is specified to be a generative prompt; generation of the prompt for use by the at least one generative model in the later “configure” step generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d)).
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, despite the additional elements shown below:
configuring at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request;
This additional element merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
generating at least one generative prompt based on the attribute extraction request and the item element data;
The prompt is specified to be a generative prompt; generation of the prompt for use by the at least one generative model in the later “configure” step generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Claim 11 recites determining a classifier, receiving an attribute extraction template, and generating the at least one generative prompt based on the template.
Determining a classifier, receiving an attribute extraction template, and generating the at least one prompt based on the template fall under an abstract idea; the prompt being generative generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d)).
Claim 12 recites receiving attribute model configuration data and determining the one or more attributes to be extracted, the data collection and the mental determination falling under an abstract idea.
Claim 13 specifies that there are one or more configurations for the at least one generative model and that the at least one generative model is configured based on the configurations; however, claim 13 does not add a meaningful limitation, as these are merely nominal or token extra-solution components of the claim and serve only as an attempt to generally link the judicial exception to a particular technological environment (see MPEP 2106.05(h)).
Claim 14 introduces determining at least a portion of the at least one generative prompt based on an associated type of attribute, which falls under a mental process and is thus an abstract idea.
Claim 15 appears to specify that the prompt comprises a plurality of attribute definitions and that the at least one generative model is further configured based on the plurality of attribute definitions; however, claim 15 does not add a meaningful limitation, as these are merely nominal or token extra-solution components of the claim and serve only as an attempt to generally link the judicial exception to a particular technological environment (see MPEP 2106.05(h)).
Claims 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 16 recites receiving an attribute extraction request, generating at least one prompt, extracting a value, generating a final attribute set, and implementing an attribute-based automated process, which is an abstract idea.
Looking to the instant specification (excluding the summary portion, ¶ 0005-0007, which reflects the language of the claims), “automated process” is mentioned once, in ¶ 0062, which states that “the attribute evaluation engine 378 implements an automated process configured to apply one or more evaluation tests and/or evaluation rules.”
Because the automated process merely applies one or more evaluation tests and/or evaluation rules, the automated process is an evaluation, which is a mental process (including an observation, evaluation, judgment, or opinion), thus falling under an abstract idea.
Regarding the receiving an attribute extraction request, generating at least one prompt, extracting a value, and generating a final attribute set, see MPEP 2106.04(a)(2), Section III, Subsection D, referring to an “application program interface for extracting and processing information from a diversity of types of hard copy documents” (Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356), and MPEP 2106.05(f), referring to “the abstract idea of ‘collecting, displaying, and manipulating data’” (Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017)).
Receiving a request and extracting a value fall under “collecting.”
Generating a prompt and generating a final attribute set fall under “manipulating.”
Step 2A, Prong Two
This judicial exception of receiving an attribute extraction request, generating at least one prompt, extracting a value, generating a final attribute set, and implementing an attribute-based automated process is not integrated into a practical application despite the generically recited computer elements shown below:
at least one processor
at least one device
at least one generative prompt
at least one generative model
The generically recited computer elements amount to implementing the abstract idea on a computer, merely using a computer as a tool to perform an abstract idea, or generally linking the use of a judicial exception to a particular technological environment or field of use as seen below.
wherein the instructions, when executed by at least one processor, cause the at least one device to perform operations comprising:
configuring at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request;
This additional element merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.04(d)).
generating at least one generative prompt based on the attribute extraction request and the item element data;
The prompt is specified to be a generative prompt; generation of the prompt for use by the at least one generative model in the later “configure” step generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d)).
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, despite the additional elements shown below:
configuring at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request;
This additional element merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
generating at least one generative prompt based on the attribute extraction request and the item element data;
The prompt is specified to be a generative prompt; generation of the prompt for use by the at least one generative model in the later “configure” step generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
A non-transitory computer readable medium having instructions stored thereon,
These elements store and retrieve information in memory, which are well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Claim 17 recites determining a classifier, receiving an attribute extraction template, and generating the at least one generative prompt based on the template.
Determining a classifier, receiving an attribute extraction template, and generating the at least one prompt based on the template fall under an abstract idea; the prompt being generative generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d)).
Claim 18 recites receiving attribute model configuration data and determining the one or more attributes to be extracted, the data collection and the mental determination falling under an abstract idea.
Claim 19 specifies that there are one or more configurations for the at least one generative model and that the at least one generative model is configured based on the configurations; however, claim 19 does not add a meaningful limitation, as these are merely nominal or token extra-solution components of the claim and serve only as an attempt to generally link the judicial exception to a particular technological environment (see MPEP 2106.05(h)).
Claim 20 introduces determining at least a portion of the at least one generative prompt based on an associated type of attribute, which falls under a mental process and is thus an abstract idea.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 7, 9, 10, and 16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Manivannan et al., U.S. Patent Application Publication No. 2025/0378106 (filed June 10, 2024, prior to the July 30, 2024 effective filing date of the instant application; hereinafter Manivannan).
Regarding claim 1, Manivannan teaches:
A system, comprising: a non-transitory memory; a processor communicatively coupled to the non-transitory memory, wherein the processor is configured to read a set of instructions to: (Manivannan ¶ 0129-0130: The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof unless specifically described as being implemented in a specific manner … If implemented in software, the techniques may be realized at least in part by a non-transitory processor-readable storage medium, including instructions that, when executed by at least one processor, perform one or more of the methods described herein (including computer-implemented methods))
receive an attribute extraction request identifying item element data; (Manivannan FIG. 8, ¶ 0109: act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface; see also Manivannan FIG. 7, ¶ 0100-0101: the dataset query tool 712 includes a message thread that includes a user query 714 requesting that the data insights system 206 generate visual and corresponding plain-language insights for a target dataset)
generate at least one generative prompt based on the attribute extraction request and the item element data; (Manivannan FIG. 8, ¶ 0109: act 810 includes generating the database query prompt ... act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface; see again Manivannan FIG. 7, ¶ 0100-0101: the dataset query tool 712 includes a message thread that includes a user query 714 requesting that the data insights system 206 generate visual and corresponding plain-language insights for a target dataset)
configure at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request; (Manivannan FIG. 8, ¶ 0110-0111: act 820 involves providing the database query prompt to a generative AI model to generate a database query … the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
extract, by the at least one generative model, the value of the one or more attributes; (Manivannan FIG. 8, ¶ 0111-0112: the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object … the series of acts 800 includes act 840 of generating data attributes from the selected data. For instance, in example implementations, act 840 involves generating data attributes and their corresponding attribute causes based on analyzing the selected data; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
generate a final attribute set including at least a portion of the value of the one or more attributes identified in the attribute extraction request; and (Manivannan FIG. 8, ¶ 0113: act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes ... generating the plain-language insight summary using the generative AI model includes generating a plain-language insight summary prompt that instructs the generative AI model to convert the data attributes and the corresponding attribute causes into natural language text, and providing the plain-language insight summary prompt to the generative AI model; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
implement an attribute-based automated process based on at least one attribute value in the final attribute set. (Manivannan FIG. 8, ¶ 0114: the series of acts 800 includes act 860 of providing the visualization object and the insight summary in response to the user query; see this subsequent to Manivannan ¶ 0112-0113 including: act 840 of generating data attributes from the selected data ... act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes)
Regarding claim 10, Manivannan teaches:
A computer-implemented method, comprising: receiving an attribute extraction request identifying item element data; (Manivannan FIG. 8, ¶ 0109: act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface; see also Manivannan FIG. 7, ¶ 0100-0101: the dataset query tool 712 includes a message thread that includes a user query 714 requesting that the data insights system 206 generate visual and corresponding plain-language insights for a target dataset)
generating at least one generative prompt based on the attribute extraction request and the item element data; (Manivannan FIG. 8, ¶ 0109: act 810 includes generating the database query prompt ... act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface; see again Manivannan FIG. 7, ¶ 0100-0101: the dataset query tool 712 includes a message thread that includes a user query 714 requesting that the data insights system 206 generate visual and corresponding plain-language insights for a target dataset)
configuring at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request; (Manivannan FIG. 8, ¶ 0110-0111: act 820 involves providing the database query prompt to a generative AI model to generate a database query … the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
extracting, by the at least one generative model, the value of the one or more attributes; (Manivannan FIG. 8, ¶ 0111-0112: the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object … the series of acts 800 includes act 840 of generating data attributes from the selected data. For instance, in example implementations, act 840 involves generating data attributes and their corresponding attribute causes based on analyzing the selected data; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
generating a final attribute set including at least a portion of the value of the one or more attributes identified in the attribute extraction request; and (Manivannan FIG. 8, ¶ 0113: act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes ... generating the plain-language insight summary using the generative AI model includes generating a plain-language insight summary prompt that instructs the generative AI model to convert the data attributes and the corresponding attribute causes into natural language text, and providing the plain-language insight summary prompt to the generative AI model; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
implementing an attribute-based automated process based on at least one attribute value in the final attribute set. (Manivannan FIG. 8, ¶ 0114: the series of acts 800 includes act 860 of providing the visualization object and the insight summary in response to the user query; see this subsequent to Manivannan ¶ 0112-0113 including: act 840 of generating data attributes from the selected data ... act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes)
Regarding claim 16, Manivannan teaches:
A non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause the at least one device to perform operations comprising: (Manivannan ¶ 0129-0130: The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof unless specifically described as being implemented in a specific manner … If implemented in software, the techniques may be realized at least in part by a non-transitory processor-readable storage medium, including instructions that, when executed by at least one processor, perform one or more of the methods described herein (including computer-implemented methods))
receiving an attribute extraction request identifying item element data; (Manivannan FIG. 8, ¶ 0109: act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface; see also Manivannan FIG. 7, ¶ 0100-0101: the dataset query tool 712 includes a message thread that includes a user query 714 requesting that the data insights system 206 generate visual and corresponding plain-language insights for a target dataset)
generating at least one generative prompt based on the attribute extraction request and the item element data; (Manivannan FIG. 8, ¶ 0109: act 810 includes generating the database query prompt ... act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface; see again Manivannan FIG. 7, ¶ 0100-0101: the dataset query tool 712 includes a message thread that includes a user query 714 requesting that the data insights system 206 generate visual and corresponding plain-language insights for a target dataset)
configuring at least one generative model based on the at least one generative prompt to extract a value of one or more attributes identified in the attribute extraction request; (Manivannan FIG. 8, ¶ 0110-0111: act 820 involves providing the database query prompt to a generative AI model to generate a database query … the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
extracting, by the at least one generative model, the value of the one or more attributes; (Manivannan FIG. 8, ¶ 0111-0112: the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object … the series of acts 800 includes act 840 of generating data attributes from the selected data. For instance, in example implementations, act 840 involves generating data attributes and their corresponding attribute causes based on analyzing the selected data; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
generating a final attribute set including at least a portion of the value of the one or more attributes identified in the attribute extraction request; and (Manivannan FIG. 8, ¶ 0113: act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes ... generating the plain-language insight summary using the generative AI model includes generating a plain-language insight summary prompt that instructs the generative AI model to convert the data attributes and the corresponding attribute causes into natural language text, and providing the plain-language insight summary prompt to the generative AI model; see also relevant Manivannan ¶ 0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720)
implementing an attribute-based automated process based on at least one attribute value in the final attribute set. (Manivannan FIG. 8, ¶ 0114: the series of acts 800 includes act 860 of providing the visualization object and the insight summary in response to the user query; see this subsequent to Manivannan ¶ 0112-0113 including: act 840 of generating data attributes from the selected data ... act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes)
Regarding claim 7, Manivannan teaches:
wherein the at least one generative prompt comprises a first generative prompt and a second generative prompt, and the at least one generative model comprises a first generative model and a second generative model, (Manivannan ¶ 0113: In some implementations, the generative AI model represents multiple and/or different generative AI models. For example, the database query prompt is provided to a first generative AI model, and the insight summary prompt is provided to a second, different generative AI model)
wherein the processor is configured to read the instructions to: configure the first generative model based on the first generative prompt; (Manivannan FIG. 8, ¶ 0110-0111: the series of acts 800 includes act 820 of providing the database query prompt to a generative AI model)
configure the second generative model based on the second generative prompt; (Manivannan ¶ 0113: providing the plain-language insight summary prompt to the generative AI model. In some implementations, the generative AI model represents multiple and/or different generative AI models. For example, the database query prompt is provided to a first generative AI model, and the insight summary prompt is provided to a second, different generative AI model)
extract, by the first generative model, at least a first portion of the value of the one or more attributes; (Manivannan FIG. 8, ¶ 0111-0112: the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object … the series of acts 800 includes act 840 of generating data attributes from the selected data. For instance, in example implementations, act 840 involves generating data attributes and their corresponding attribute causes based on analyzing the selected data; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query)
extract, by the second generative model, at least a second portion of the value of the one or more attributes; and (Manivannan FIG. 8, ¶ 0111-0112: the series of acts 800 includes act 830 of executing the database query to obtain selected data from the target dataset and a visualization object … the series of acts 800 includes act 840 of generating data attributes from the selected data. For instance, in example implementations, act 840 involves generating data attributes and their corresponding attribute causes based on analyzing the selected data; see this in light of Manivannan showing a second model in ¶ 0113: the generative AI model represents multiple and/or different generative AI models; Manivannan shows values in ¶ 0067: Upon identifying data relevant to the user query based on the metrics, filters, columns, and descriptions, the database search tool 422 can generate, lookup, and/or obtain metadata information 424 corresponding to the search results ... upon fetching relevant metric and filter values based on the user query)
generate the final attribute set based on the first portion of the value of the one or more attributes and the second portion of the value of the one or more attributes. (Manivannan FIG. 8, ¶ 0113: the series of acts 800 includes act 850 of utilizing the generative AI model to generate an insight summary of the selected data. For instance, in example implementations, act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes; Manivannan FIG. 7, ¶ 0100-0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720. The message thread also includes the plain-language insight summary 730, which corresponds to the visualization object 720 and the user query 714)
Regarding claim 9, Manivannan teaches:
wherein the interface generation process causes a display of the at least one attribute value in the final attribute set in conjunction with interface elements associated with the item element data. (Manivannan FIG. 8, ¶ 0112-0115: act 850 involves utilizing the generative AI model to generate a plain-language insight summary of the data attributes and the corresponding attribute causes; Manivannan FIG. 7, ¶ 0100-0101: the message thread includes a response 718 that includes data attributes identified from the user query 714 and used to generate the visualization object 720. The message thread also includes the plain-language insight summary 730, which corresponds to the visualization object 720 and the user query 714)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Manivannan in view of Nguyen et al., U.S. Patent Application Publication No. 2020/0301916 (hereinafter Nguyen).
Regarding claims 2, 11, and 17, Manivannan teaches all the features with respect to claims 1, 10, and 16 above respectively including:
generate the at least one generative prompt… (Manivannan FIG. 8, ¶ 0109: act 810 includes generating the database query prompt ... act 810 includes providing an interactive interface within the data analytics system that enables the selection of the target dataset and receiving the user query within a text query field associated with the interactive interface)
Manivannan does not expressly disclose:
determine a classifier associated with the item element data;
receive an attribute extraction template based on the determination;
Manivannan further does not expressly disclose the prompt being based on the attribute extraction template.
However, Nguyen addresses this by teaching:
determine a classifier associated with the item element data; (Nguyen ¶ 0032: each set of users interacting with a system can customize the system to process natural language queries typically asked in a particular domain; Nguyen ¶ 0132: The ability to add query templates allows organizations/enterprises to build their own database of query templates that is capable of processing the typical queries that users perform in that domain ... The queries based on a query intent take as input the set of attributes associated with the query intent. For example, the query intent of comparing two attributes takes as input at least a first attribute and a second attribute. Each of these query templates specifies the same set of attributes, i.e., the set of attributes associated with the query intent)
receive an attribute extraction template based on the determination; and (Nguyen FIG. 7, ¶ 0108-0109: The suggestion module 370 matches 720 the input query string against templates of natural language queries stored in the query template store 390. The suggestion module 370 identifies terms of the input query string and matches the terms of query templates in the order in which the terms occur in the natural language query and the order in which the query template expects the terms ... the query template may specify that a term can be an attribute or a user defined metric)
generate the at least one … prompt based on the attribute extraction template. (Nguyen FIG. 7, ¶ 0109-0115: The suggestion module 370 matches 720 the input query string against templates of natural language queries stored in the query template store 390 … The user interaction module 240 determines the terms to be suggested in response to the received query string)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the prompt generation of Manivannan with the prompt generation of Nguyen.
In addition, both references (Manivannan and Nguyen) are analogous art and are directed to the same field of endeavor, namely prompt generation and presentation.
Motivation to do so would be to improve the query prompt generation of Manivannan with the query template comparisons taught in Nguyen, a similar reference that also generates query prompts.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to provide a suitable interface for users to analyze the large amount of information available in an enterprise, as seen in Nguyen ¶ 0004.
Claims 3, 5, 12, 14, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Manivannan in view of Guerra et al., U.S. Patent No. 12,346,314 (filed July 16, 2024, prior to the instant application date of July 30, 2024; hereinafter Guerra).
Regarding claims 3, 12, and 18, Manivannan teaches all the features with respect to claims 1, 10, and 16 above respectively including:
receive attribute model configuration data, and determine the one or more attributes to be extracted from the … data based on the attribute model configuration data. (Manivannan FIG. 5A, ¶ 0083-0089: the prompt generator 510 utilizes a data forecasting model, tool, or system to determine some of the data attributes 520 … the data forecasting tool determines the data attributes 520 based on a trend function … the prompt generator 510 determines the data attribute reasoning 530 by correlating the data attributes 520 with event and incident data ... the prompt generator 510 determines the data attribute reasoning 530 by re-running the data forecasting tool with a refined baseline that includes event data to capture the effects of the events)
Manivannan does not expressly disclose the one or more attributes to be extracted from the item element data.
However, Guerra addresses this by teaching one or more attributes extracted from the item element data. (Guerra FIG. 4, step 422, col. 11, line 56-col. 12, line 16: a table column may have a cryptic field name (e.g., “OSDSTATUS”) and/or may contain data values with specific notations (e.g., categories “A,” “B,” “C,” or space). If such data is sent directly to a generative AI model, it may not be able to interpret the data correctly. However, by replacing these cryptic field names and data values with descriptive metadata (e.g., replacing “OSDSTATUS” with “Sales Order Overall Processing Status,” and replacing categorical data value “A” with “Complete,” or the like), the data becomes more meaningful and interpretable for the generative AI model to generate more contextually relevant responses)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the generative query techniques of Manivannan with the generative query techniques of Guerra.
In addition, both references (Manivannan and Guerra) are analogous art and are directed to the same field of endeavor, namely generative query techniques.
Motivation to do so would be to improve the data attribute analysis of Manivannan with the enhanced data understandability taught in Guerra, a similar reference that also analyzes data attributes.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to provide more meaningful and interpretable data for a generative model to generate more contextually relevant responses as seen in Guerra col. 12, lines 9-16.
Regarding claims 5, 14, and 20, Manivannan teaches all the features with respect to claims 1, 10, and 16 above respectively but does not expressly disclose:
determine at least a portion of the at least one generative prompt based on an associated type of attribute.
However, Guerra addresses this by teaching:
determine at least a portion of the at least one generative prompt based on an associated type of attribute. (Guerra FIG. 4, steps 422-424, col. 11, line 56-col. 12, line 38: by replacing these cryptic field names and data values with descriptive metadata (e.g., replacing “OSDSTATUS” with “Sales Order Overall Processing Status,” and replacing categorical data value “A” with “Complete,” or the like), the data becomes more meaningful and interpretable for the generative AI model to generate more contextually relevant responses ... Based on the intent of the user query, the method can select the prompt template from a plurality of prompt templates, each corresponding to a specific operation mode ... For an operation mode to obtain processing status of a specific sales order object, the prompt template can include instructions for the generative AI model to generate a list of one or more business objects (including the sales order object) and their status information in a sequential order)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the generative query techniques of Manivannan with the generative query techniques of Guerra.
In addition, both references (Manivannan and Guerra) are analogous art and are directed to the same field of endeavor, namely generative query techniques.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to provide more meaningful and interpretable data for a generative model to generate more contextually relevant responses as seen in Guerra col. 12, lines 9-16.
Claims 4, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Manivannan in view of Bharadwaj et al., U.S. Patent Application Publication No. 2024/0273345 (filed February 13, 2023, prior to the instant application date of July 30, 2024; hereinafter Bharadwaj).
Regarding claims 4, 13, and 19, Manivannan teaches all the features with respect to claims 1, 10, and 16 above respectively including:
generate the at least one generative prompt to comprise one or more configurations for the at least one generative model, and configure the at least one generative model… (Manivannan FIG. 8, ¶ 0109: act 810 includes generating the database query prompt by validating the query parameters and automatically updating a query parameter that does not initially pass (e.g., fails) validation; see then Manivannan ¶ 0113: In some implementations, the generative AI model represents multiple and/or different generative AI models)
Manivannan does not expressly disclose to configure based on the one or more configurations.
However, Bharadwaj addresses this by teaching the following:
generate the at least one generative prompt to comprise one or more configurations for the at least one generative model, and (see prompts in Bharadwaj FIG. 1A, ¶ 0114-0116: The one or more characteristics of the input query may include, for instance, a user identifier and a user prompt ... after determining one or more characteristics of the input query at step 104, the method 100 may proceed to step 106a, wherein step 106a comprises selecting, from a plurality of response modules, one or more response modules based on the one or more characteristics of the input query and one or more user preference metrics associated with each of the one or more response modules; see configurations in Bharadwaj ¶ 0126-0129: each model forming a response module further comprises configurations/configuration files for optimizing one or more parameters, settings, preferences, etc. of each respective machine learning model of the plurality of machine learning models)
configure the at least one generative model based on the one or more configurations. (Bharadwaj FIG. 1A, ¶ 0114-0116: step 106a comprises selecting, from a plurality of response modules, one or more response modules based on the one or more characteristics of the input query and one or more user preference metrics associated with each of the one or more response modules; see then Bharadwaj ¶ 0126-0129: each model forming a response module further comprises configurations/configuration files for optimizing one or more parameters, settings, preferences, etc. of each respective machine learning model of the plurality of machine learning models)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the generative query techniques of Manivannan with the generative query techniques of Bharadwaj.
In addition, both references (Manivannan and Bharadwaj) are analogous art and are directed to the same field of endeavor, namely generative query techniques.
Motivation to do so would be to improve the received-query analysis of Manivannan with the variety of selectable response modules taught in Bharadwaj, a similar reference that also analyzes received queries.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve various prompts based on characteristics of a user that provided an input query as seen in Bharadwaj ¶ 0123.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Manivannan in view of Burton, U.S. Patent No. 12,210,839 (filed April 29, 2024, prior to the instant application date of July 30, 2024; hereinafter Burton).
Regarding claims 6 and 15, Manivannan teaches all the features with respect to claims 1 and 10 above respectively but does not expressly disclose:
generate the at least one generative prompt to comprise a plurality of attribute definitions for each of the one or more attributes, and configure the at least one generative model based on the plurality of attribute definitions.
However, Burton addresses this by teaching:
generate the at least one generative prompt to comprise a plurality of attribute definitions for each of the one or more attributes, and (Burton col. 57, lines 18-44: the neural networks performing framing analysis inference with event-related tags and types may be arranged in any economic configuration, including but not limited to: omnibus classifiers; one vs. rest ensembles with a consensus step; or few-shot methods which use agentic or standard completion LLMs to answer written prompts asking questions about framing attributes directly (e.g. “Considering the definitions of channels I have provided to you, please classify the following sentence with one of the provided channel names: <sentence>.”) ... the tag determination may be done by prompting a GPT-type model with information related to the channel (e.g. “Given this list of tags concerning Revenue themes: <list of tags>, which 0-3 do you consider to be present in the following text: <text>?”))
configure the at least one generative model based on the plurality of attribute definitions. (Burton col. 57, lines 18-44: The models may be trained by the use of the training module which performs a reverse operation to the inferencer, which consumes hierarchically organized annotation datum files and performing inference on the text extracted from and referenceable to any document markup (e.g. HTML) which may contain the text in the original textual material)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the generative models of Manivannan with the generative models of Burton.
In addition, both references (Manivannan and Burton) are analogous art and are directed to the same field of endeavor, namely generative model prompts.
Motivation to do so would be to fortify the generative model training of Manivannan with the teachings of Burton, a similar reference that also trains generative models.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to allow the quality of results experienced within the interactive portion of the system to improve over time, as seen in Burton col. 57, lines 55-62.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Manivannan in view of Cheah et al., U.S. Patent Application Publication No. 2023/0418853 (hereinafter Cheah).
Regarding claim 8, Manivannan teaches all the features with respect to claim 1 above but does not expressly disclose:
extract, by the at least one generative model, a confidence value associated with each of the one or more attributes, and
generate the final attribute set to include, for each of the one or more attributes, the value associated with a highest corresponding confidence value.
However, Cheah addresses this by teaching:
extract, by the at least one generative model, a confidence value associated with each of the one or more attributes, and (Cheah FIG. 3, ¶ 0068-0070: The attribute prediction model 236 may be a generative model, that is, one that approximates the probability of an output Y for a candidate token based on the preliminary labels of the candidate token without needing ground truth Y ... Based on the confidence levels for the attribute label(s) (e.g. ABN, BSB and ACC) for the tokens of the subset, the document labelling module 238 may select suitable attribute label values for the document. For example, the document labelling module 238 may be configured to determine a token having a highest confidence value for a specific attribute label as being the value for that specific attribute label for the document)
generate the final attribute set to include, for each of the one or more attributes, the value associated with a highest corresponding confidence value. (Cheah FIG. 3, ¶ 0075-0076: At 306, the system 202 determines a set of preliminary attribute labels for each of the one or more tokens ... ¶ 0091: At 312, the system 202 determines a set of refined labels for each document based on the confidence values associated with the tokens of the respective subset of tokens. The set of refined labels comprises a value for the one or more attribute types; see this in light of Cheah ¶ 0070: the document labelling module 238 may be configured to determine a token having a highest confidence value for a specific attribute label as being the value for that specific attribute label for the document)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the attribute analysis of Manivannan with the attribute analysis of Cheah.
In addition, both references (Manivannan and Cheah) are analogous art and are directed to the same field of endeavor, namely data extraction techniques.
Motivation to do so would be to improve the attribute identification of Manivannan with the confidence values taught in Cheah, a similar reference that also identifies attributes.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to provide efficient and accurate extraction of relevant data as seen in Cheah ¶ 0003.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
B. Yang, “How Walmart uses LLMs to manage its massive product catalogs,” Walmart Global Tech, May 20, 2025, 4 pages [a replacement copy of NPL reference #3 from IDS 06/10/2025].
McCarson, U.S. Patent No. 12,088,599, "Generative AI And Agentic AI Systems And Methods For Prevention, Detection, Mitigation And Remediation Of Cybersecurity Threats," filed May 1, 2024; see McCarson col. 4, lines 12-31, describing attribute data received from data sources and a set of individual machine learning models, each having a different type and purpose, relevant to at least dependent claim 7 describing multiple generative models and attributes.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEDIDIAH P FERRER whose telephone number is (571)270-7695. The examiner can normally be reached Monday-Friday 12:00pm-8:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley, can be reached at (571)272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.P.F/Examiner, Art Unit 2153 March 21, 2026
/KAVITA STANLEY/Supervisory Patent Examiner, Art Unit 2153