DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This office action is in reply to Applicant’s Response dated 11/17/2025. Claims 1-3, 5, 8, 10-13 and 18 are amended. Claims 1-19 remain pending in the application.
Response to Arguments
In response to the Applicant’s arguments (see page 7) with respect to the rejections under 35 U.S.C. 112(b), the rejections under 35 U.S.C. 112(b) have been withdrawn in view of the amendments made to the claims.
The Applicant argues (see pages 7-8) with respect to the rejection under 35 U.S.C. 101 that, as amended, the independent claims recite an entity extractor, an enumerated value reducer, an enum-to-language expansion model and a prompt constructor. These are specific, non-generic computational modules that are explicitly disclosed in the specification as originally filed. The independent claims require that each operation (e.g., separating, reducing, expanding, combining, submitting) is performed by specific components.
The Applicant further argues that the human mind cannot perform enumerated value reduction across sensitive data types using a reducer module as recited in the independent claims, cannot generate deterministic textual expansions using an enum-to-language expansion model as recited in the independent claims, cannot construct composite LLM prompts from enumerated outputs as recited in the independent claims, and cannot execute bounded leakage transformations based on prescribed enumerated value sets. Thus, the Applicant asserts, the independent claims recite machine-only operations utilizing an architecture with concrete modules that provide specified functionality, and this structure is consistent with the requirements of Enfish, McRO and CardioNet in that the claims are not mental processes. The independent claims recite a technical data transformation architecture that utilizes specialized and specified modules configured for preserving privacy when using the LLM (i.e., ensuring mathematically bounded leakage), which the Applicant asserts is a practical improvement in computer functionality.
In response to the Applicant’s arguments, the Examiner respectfully disagrees. First, the Applicant has not shown how the entity extractor, the enumerated value reducer, the enum-to-language expansion model, and the prompt constructor are non-generic. Claim 1 recites “executed by processing circuitry configured with at least an entity extractor, an enumerated value reducer, and an enum-to-language expansion model, and a prompt constructor stored in memory”. However, claim 1 recites no features that explain what these elements are. The Applicant states (see footnote on page 7 of the Applicant’s Remarks): “See, for example, FIG. 2 and FIG. 4: system 400, modules 211, 215 and 221.” Fig. 4 shows “Processing circuitry”, “Memory”, “Storage” and “Network Interface”, which are conventional and generic computer components. Thus, claim 1 merely uses generic and conventional computer components to carry out the steps in the claim.
Second, although the Applicant argues that the independent claims recite a technical data transformation architecture, the separating, reducing, expanding, and combining steps merely involve data transformation and are therefore mental processes. The Applicant has not shown how merely separating, reducing, expanding, and combining information would improve computer functionality. These data transformation steps do not improve computer functionality; instead, the steps merely transform information into a form usable by the LLM.
Accordingly, the claims are not directed to a specific asserted solution or improvement in computer functionality. The claims instead focus on a process that qualifies as a mental process and abstract idea, for which computers are used to perform generic functions or merely as a tool to “apply” the abstract idea.
The Applicant argues (see page 9) with respect to claim 1 that nothing in Gomez teaches or suggests selecting from a prescribed number of enumerated options, deterministically expanding enumerated outputs into textual forms, or ensuring mathematically bounded leakage based on enumerated spaces; that Gomez’s anonymization destroys meaning but does not reduce data to a fixed, enumerated domain; and that, because neither Gomez nor Gonzalez teaches or suggests selecting from a prescribed number of enumerated options, deterministically expanding enumerated outputs into textual forms, or ensuring mathematically bounded leakage based on enumerated spaces, no combination of the cited references can render the claims obvious.
In response to the Applicant’s argument, the Examiner respectfully disagrees. Claim 1 fails to recite limitations that explain how the selection from a prescribed number of options is carried out. Gonzalez teaches that a set of sensitive entity types may have been predefined; identifying as sensitive text entities each of the text entities in the query that is of any of the predefined sensitive entity types; and, for the identified sensitive text entities, determining a corresponding plurality of semantically similar replacement entities, wherein generating the modified query may include substituting each of the identified sensitive text entities in the query with the corresponding one of the semantically similar replacement entities (Gonzalez, see paragraph 0026).
Thus, Gonzalez teaches “reducing (identifying) each respective one of the at least one type of sensitive data (text entities) to one enumerated output (sensitive text entities) selected from a prescribed number of options (any of the predefined sensitive entity types) for that respective type of sensitive data”.
Gomez teaches the prompt anonymization can be implemented by replacing pieces of sensitive data contained in the original prompt with corresponding unique, non-sensitive tokens generated by a tokenizer (e.g., the tokenizer 134). The modified prompt can also include a context prompt (e.g., provided by the contextualizer 140) configured to provide relevant contextual information for the original prompt and the modified prompt can be submitted to a large language model (e.g., the LLM 200 or 360) (Gomez, see paragraph 0091-0092).
It is clear here that Gomez teaches expanding each enumerated output (sensitive data) to a respective expanded form (form with non-sensitive tokens) that is usable by the generative AI model (Gomez, see paragraphs 0091-0092). Thus, the combination of Gomez and Gonzalez teaches all of the features of claims 1, 10 and 11.
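For illustration only, the claim 1 flow as characterized in the arguments above (separating sensitive from non-sensitive data, reducing each sensitive value to one of a prescribed number of enumerated options, deterministically expanding the enumerated output to text, and combining the result into a prompt) could be sketched as follows. All names, detection patterns, and enumerated bands below are hypothetical examples, not disclosure from the application or the cited references:

```python
import re

# Hypothetical prescribed enumerated options: leakage is bounded because at
# most log2(3) bits of information about the original value can escape.
SALARY_BANDS = ["LOW", "MEDIUM", "HIGH"]

def separate(query: str):
    """Split the query into non-sensitive text and detected sensitive values."""
    sensitive = re.findall(r"\$\d+", query)        # toy detector: dollar amounts
    non_sensitive = re.sub(r"\$\d+", "{AMOUNT}", query)
    return non_sensitive, sensitive

def reduce_to_enum(value: str) -> str:
    """Map a sensitive value to exactly one enumerated option."""
    amount = int(value.lstrip("$"))
    if amount < 50_000:
        return "LOW"
    return "MEDIUM" if amount < 150_000 else "HIGH"

def expand(enum_output: str) -> str:
    """Deterministic enum-to-language expansion of the enumerated output."""
    return {"LOW": "a below-average salary",
            "MEDIUM": "a mid-range salary",
            "HIGH": "an above-average salary"}[enum_output]

def construct_prompt(non_sensitive: str, expansions: list) -> str:
    """Combine the expanded forms with the non-sensitive remainder."""
    for text in expansions:
        non_sensitive = non_sensitive.replace("{AMOUNT}", text, 1)
    return non_sensitive

query = "Is $60000 a fair offer for this role?"
non_sensitive, sensitive = separate(query)
prompt = construct_prompt(non_sensitive, [expand(reduce_to_enum(v)) for v in sensitive])
print(prompt)  # → "Is a mid-range salary a fair offer for this role?"
```

In this sketch the LLM never sees the raw value, only one of three fixed phrases, which is the "bounded leakage" property the Applicant attributes to the claimed enumerated spaces.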
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “entity extractor”, “enumerated value reducer” and “enum-to-language expansion model” in claim 11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: “processing circuitry 410 may be realized as one or more hardware logic components and circuits… general-purpose microprocessors” (see paragraph 0039 of the specification as filed).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 1, 10 and 11 recite “an entity extractor, an enumerated value reducer, and an enum-to-language expansion model, and a prompt constructor”. The terms “entity extractor”, “enumerated value reducer” and “enum-to-language expansion model” are absent from the specification and drawings. Therefore, these terms are not supported by the disclosure. As such, claims 1-19 fail to comply with the written description requirement.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 10 and 11 satisfy Step 1 because the claims are directed to a process, an article of manufacture, or a machine.
In Step 2A, prong 1, claim 1 recites “separating, using the entity extractor, non-sensitive data of the original query…reducing, using the enumerated value reducer, each respective one of the at least one type of sensitive data to one enumerated output…expanding, using the enum-to-language expansion model, each enumerated output to a respective expanded form that is usable by the generative AI model…combining, using the prompt constructor, the expanded forms with the non-sensitive data to form a prompt”, which, under the broadest reasonable interpretation, are steps that can be performed in the human mind. Claim 1 involves separating data, scrubbing data of sensitive information, and formatting data into a form usable by an AI model. Such steps of separating data, scrubbing data of certain information and formatting data can be performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Claims 10 and 11 recite similar limitations and therefore also recite the abstract idea.
In Step 2A, prong 2, the judicial exception is not integrated into a practical application because the processor and memory are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. The claims recite an entity extractor, an enumerated value reducer, an enum-to-language expansion model, and a prompt constructor. The prompt constructor is merely computer instructions stored in the generic memory. The claims fail to recite corresponding structures for the entity extractor, enumerated value reducer, and enum-to-language expansion model; therefore, these components are interpreted as general-purpose processors. Thus, the entity extractor, enumerated value reducer, enum-to-language expansion model, and prompt constructor do not add meaningful limitations to the abstract idea. The claims also recite the additional step of “submitting the prompt to the generative AI model”. However, this is insignificant extra-solution activity, and the claims fail to provide specificity as to how the prompt is submitted. Adding insignificant extra-solution activity to the judicial exception is not enough to qualify as “significantly more”. The additional elements or steps do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
In Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the memory and processing circuitry are general-purpose computer components that are well-understood, routine and conventional (see Decasper et al. (U.S. PGPub 2007/0192474), paragraph 0004, describing “conventional components such as a processor, a memory (e.g., RAM)… a network interface, such as a conventional modem”). Performing the steps recited in the claims with such components is not sufficient to transform a judicial exception into a patentable invention.
Regarding claims 2-9 and 12-19, claims 2-9 and 12-19 recite the “textual language form…”, “receiving an evaluation…taking an action…”, “…determined based on a training data set”, “iterating…iteration until an error is less than a prescribed threshold…”, “enriching the prompt…”, “one of context and know-how”, “enables the generative AI to better understand a meaning…” and “one of a set of queries that are being evaluated…” However, these features do not add meaningful limitations to the abstract idea because they merely describe the data used in the mental process recited in claims 1, 10 and 11 or output data of the mental process (e.g., the form usable by the generative AI model is a textual language form; the additional information is at least one of context and know-how; the additional information enables the generative AI to better understand; the original query is one of a set of queries). Claims 3 and 13 fail to specify how the evaluation is received and what action is being taken; collecting or receiving data and taking an action are insignificant extra-solution activities. Claims 5 and 15 merely require iteration or repetition of the mental process and extra-solution activity until a condition is met. Therefore, claims 5 and 15 recite the mental process and do not add meaningful limitations to the abstract idea. Thus, claims 2-9 and 12-19 are rejected.
The elements recited in claims 1-19, when considered individually or in an ordered combination, fail to amount to significantly more than the abstract idea. Accordingly, claims 1-19 are not eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-13 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gomez et al. (U.S. PGPub 2025/0165648) in view of Gonzalez Sanchez et al. (WO 2024/261525), hereafter "Gonzalez".
Regarding claims 1, 10 and 11, Gomez teaches A computer-implemented method for secure and privacy preserving operation of a large language model (LLM), the method executed by processing circuitry configured with at least an entity extractor, an enumerated value reducer, and an enum-to-language expansion model, and a prompt constructor stored in memory, the method comprising: separating, using the entity extractor, non-sensitive data of the original query and at least one type of the sensitive data; (Gomez, see figs. 1-2 and 4-6; see paragraphs 0039-0040 where identify sensitive data contained in the prompt 102 that violates a security protocol... anonymize the prompt 102 containing sensitive data... generates unique, non-sensitive tokens to represent each piece of sensitive data identified by the detector...; see paragraphs 0090-0091 where sensitive data in the original prompt that violates a security protocol can be detected...replacing pieces of sensitive data contained in the original prompt with corresponding unique, non-sensitive tokens...; see paragraph 0100 where detected sensitive data...)
expanding, using the enum-to-language expansion model, each enumerated output to a respective expanded form that is usable by the LLM; (Gomez, see figs. 4-6; see paragraph 0040 where anonymize the prompt 102 containing sensitive data...universally unique identifiers (UUID), which are 128-bit labels that can be generated using UUID generators (e.g., UUID-2, UUID-3, UUID-5, etc.), can be used as tokens representing pieces of the sensitive data. In yet a further example, the unique, non-sensitive tokens can be generated using a counter-based approach, where a counter is incremented for each token generation, and then the counter can be combined with other data (e.g., a predefined prefix to form unique tokens)...; see also paragraph 0100 where each piece of sensitive data is hashed by applying a SHA-256 algorithm to generate a corresponding hash value (i.e., anonymized information) in this example. Other hashing algorithms, or more generally, tokenization algorithms, can be applied to convert each piece of the sensitive data to a corresponding unique, non-sensitive token...; see paragraph 0091 where , the prompt anonymization can be implemented by replacing pieces of sensitive data contained in the original prompt with corresponding unique, non-sensitive tokens generated by a tokenizer...)
combining, using the prompt constructor, the expanded forms with the non-sensitive data to form a prompt; and (Gomez, see figs. 5-6 elements 530 and 630; see paragraph 0101 anonymous prompt 530 is modified from the original prompt 510 by replacing sensitive data with corresponding anonymized information (e.g., hash values). The anonymous prompt 530 can then be submitted to the LLM...; see paragraph 0107 where anonymous prompt 630 is modified from the original prompt 610 by replacing sensitive data with corresponding anonymized information (e.g., hash values). The anonymous prompt 630 can then be submitted to the LLM...)
submitting the prompt to the LLM without leaking the sensitive data beyond its corresponding enumerated output. (Gomez, see figs. 5-6 elements 530 and 630; see paragraph 0101 anonymous prompt 530 is modified from the original prompt 510 by replacing sensitive data with corresponding anonymized information (e.g., hash values). The anonymous prompt 530 can then be submitted to the LLM...; see paragraph 0107 where anonymous prompt 630 is modified from the original prompt 610 by replacing sensitive data with corresponding anonymized information (e.g., hash values). The anonymous prompt 630 can then be submitted to the LLM...)
However, Gomez does not explicitly teach reducing, using the enumerated value reducer, each respective one of the at least one type of sensitive data to one enumerated output selected from a prescribed number of options for that respective type of sensitive data;
Gonzalez teaches reducing, using the enumerated value reducer, each respective one of the at least one type of sensitive data to one enumerated output selected from a prescribed number of options for that respective type of sensitive data; (Gonzalez, see fig. 2; see paragraph 0026 where a set of sensitive entity types may have been predefined...identifying as sensitive text entities each of the text entities in the query that is of any of the predefined sensitive entity types; and for the identified sensitive text entities, determining a corresponding plurality of semantically similar replacement entities...the operation of generating the modified query may include substituting each of the identified sensitive text entities in the query with the corresponding one of the semantically similar replacement entities...; see paragraph 0028 where generating a modified response by substituting the semantically similar replacement entities with the corresponding one of the identified sensitive text entities...)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Gomez and Gonzalez to provide the technique of reducing each respective one of the at least one type of sensitive data to one enumerated output selected from a prescribed number of options for that respective type of sensitive data of Gonzalez in the system of Gomez in order to improve search performance without disclosing confidential information (Gonzalez, see paragraph 0070).
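As a point of reference, the tokenizer-based anonymization Gomez describes (replacing each detected piece of sensitive data with a corresponding unique, non-sensitive token, e.g., a SHA-256-derived value, per paragraphs 0091 and 0100) could be sketched as follows. The function and variable names are hypothetical, and the reference's actual implementation may differ:

```python
import hashlib

def tokenize(piece: str) -> str:
    """Derive a unique, non-sensitive token from a piece of sensitive data.

    A truncated SHA-256 digest stands in for the hash values described in
    Gomez paragraph 0100; the prefix and truncation length are illustrative.
    """
    return "TOK_" + hashlib.sha256(piece.encode("utf-8")).hexdigest()[:8]

def anonymize_prompt(prompt: str, sensitive_pieces: list):
    """Replace each sensitive piece with its token.

    The returned mapping lets the system later restore the original values
    in the LLM's reply before presenting it to the user.
    """
    mapping = {}
    for piece in sensitive_pieces:
        token = tokenize(piece)
        mapping[token] = piece
        prompt = prompt.replace(piece, token)
    return prompt, mapping

original = "Mr. John at BuyCorp ordered 10 units."
anonymous, mapping = anonymize_prompt(original, ["Mr. John", "BuyCorp"])
# The anonymous prompt contains only tokens; the mapping stays local.
```

Because hashing is deterministic, the same sensitive value always maps to the same token, which is what allows the modified reply to be de-anonymized for the user.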
Regarding claims 2 and 12, Gomez-Gonzalez teaches wherein the form usable by LLM comprises a textual language form. (Gomez, see figs. 5-6 elements 530 and 630; see paragraph 0101 anonymous prompt 530 is modified from the original prompt 510 by replacing sensitive data with corresponding anonymized information (e.g., hash values). The anonymous prompt 530 can then be submitted to the LLM...; see paragraph 0107 where anonymous prompt 630 is modified from the original prompt 610 by replacing sensitive data with corresponding anonymized information (e.g., hash values). The anonymous prompt 630 can then be submitted to the LLM...)
Regarding claims 3 and 13, Gomez-Gonzalez teaches further comprising: receiving an evaluation of the prompt from the LLM; and taking an action based on the received evaluation. (Gomez, see figs. 4-6; see paragraph 0105 where the original prompt 610 contains sensitive data including the following business-critical information: units of product (10), unit price ($50), and total price ($600). Note that the calculation in the original prompt has an error: the total price should be $500 (10×$50) instead of $600...; see paragraph 0109 where Finally, a modified reply 650 can be presented to the user in response to the user's original prompt 610...Note that the calculation error in the original prompt 610 is corrected in the modified reply 650, which shows the correct total price $500...)
Regarding claims 6 and 16, Gomez-Gonzalez teaches further comprising enriching the prompt with additional information that is not present in the original query and is not sensitive data. (Gomez, see figs. 4-6; see paragraphs 0044-0045 where modify the prompt 102 by adding a context prompt (which can also be referred to as a “context query”) provided by the contextualizer 140. For example, the anonymized prompt (in which the sensitive data is replaced with unique, non-sensitive tokens) can be combined with the context prompt to form the modified prompt 104, which can be submitted to the generative AI 160... provide relevant contextual information for the prompt; see paragraph 0067 where enabling the model to learn complex relationships in the data and generate more contextually accurate and expressive text.)
Regarding claims 7 and 17, Gomez-Gonzalez teaches wherein the additional information is at least one of context and know-how. (Gomez, see figs. 4-6; see paragraphs 0044-0045 where modify the prompt 102 by adding a context prompt (which can also be referred to as a “context query”) provided by the contextualizer 140. For example, the anonymized prompt (in which the sensitive data is replaced with unique, non-sensitive tokens) can be combined with the context prompt to form the modified prompt 104, which can be submitted to the generative AI 160... provide relevant contextual information for the prompt; see paragraph 0067 where enabling the model to learn complex relationships in the data and generate more contextually accurate and expressive text.)
Regarding claims 8 and 18, Gomez-Gonzalez teaches wherein the additional information enables the LLM to determine a meaning of at least one other piece of information in the prompt. (Gomez, see figs. 4-6; see paragraphs 0044-0045 where modify the prompt 102 by adding a context prompt (which can also be referred to as a “context query”) provided by the contextualizer 140. For example, the anonymized prompt (in which the sensitive data is replaced with unique, non-sensitive tokens) can be combined with the context prompt to form the modified prompt 104, which can be submitted to the generative AI 160... provide relevant contextual information for the prompt; see paragraph 0067 where enabling the model to learn complex relationships in the data and generate more contextually accurate and expressive text.)
Regarding claims 9 and 19, Gomez-Gonzalez teaches wherein the original query is one of a set of queries that are being evaluated automatically. (Gomez, see fig. 5 and 6, two (a set of) queries; see paragraphs 0099-0100 The original prompt 510, however, contains sensitive data that violates a security protocol. Specifically, the sensitive data detected in the original prompt 510 includes personal information (e.g., a person's name “Mr. John”) business-critical information (e.g., a company name “BuyCorp”)....; see paragraph 0105 a user enters an original prompt 610 which also requests an LLM to proofread and improve a sentence. Different from the example shown in FIG. 6 which contains plain text, the original prompt 610 includes calculation of some numerical values...)
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gomez-Gonzalez in view of Relan et al. (U.S. PGPub 2022/0374420).
Regarding claims 4 and 14, Gomez-Gonzalez teaches all of the features of claims 1 and 11. However, Gomez-Gonzalez does not explicitly teach wherein the options for at least one type of sensitive data are determined based on a training data set.
Relan teaches wherein the options for at least one type of sensitive data are determined based on a training data set. (Relan, see figs. 1-2; see paragraph 0016 where It is trained to detect very similar questions. The training data set for the QQP Model is based on the schema, from which a set of hypothetical database queries are produced. It is called the Bronze Data Set...; see paragraph 0075 where have been trained the system is ready for user submitted natural language questions. When a user submits a natural language question to be converted into a SQL query and its associated where values, the Deep Neural Network (DNN) components look for similarities between the submitted question and the data it has been trained with in order to generate a corresponding query....; see paragraph 0077 where a user might ask “How many buys greater than $100 did Leo Swanson make after Jan. 12, 2019.” The Conversational Agent component 204 removes any data that would be sensitive or private to the business and substitutes generic parameters to be passed over the DMZ 500...)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Gomez-Gonzalez and Relan to provide the technique of Relan, wherein the options for at least one type of sensitive data are determined based on a training data set, in the system of Gomez-Gonzalez in order to provide accurate output for natural language questions (Relan, see paragraph 0010).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Gomez-Gonzalez-Relan in view of Cappetta (U.S. PGPub 2025/0094132).
Regarding claims 5 and 15, Gomez-Gonzalez-Relan teaches all of the features of claims 4 and 14. However, Gomez-Gonzalez-Relan does not explicitly teach wherein the options for the at least one type of sensitive data are determined by configuring the system to iteratively separate, reduce, expand, combine, and submit using the training data set using different options for at least one type of sensitive data during each iteration until an error is less than a prescribed threshold.
Cappetta teaches wherein the options for the at least one type of sensitive data are determined by configuring the system to iteratively separate, reduce, expand, combine, and submit using the training data set using different options for at least one type of sensitive data during each iteration until an error is less than a prescribed threshold. (Cappetta, see fig. 2; see paragraph 0011 where after inputting the component-level prompt to the generative AI model to produce the component definition...in each of one or more iterations, until no errors remain in the component definition or a number of the one or more iterations has reached a threshold: determine whether or not any errors exist in the component definition; and when at least one error exists in the component definition, re-input the component-level prompt to the generative AI model to produce a new component definition...; see paragraph 0039 where the request-level prompt, output by subprocess 210, is input into a generative AI model to produce the requested set of objectives…; see paragraphs 0070-0071 where the next component is selected from the set of components, extracted in subprocess 220...a component-level prompt is generated for the selected component...; see paragraph 0074 where in subprocess 245, the component definition may be checked for one or more errors...detecting an error after the number of retries has reached or exceeded the predefined threshold (i.e., “Yes, Retries≥Threshold” in subprocess 245), process 200 may proceed to fallback processing 280. Otherwise, when detecting no errors (i.e., “No” in subprocess 245), process 200 may proceed to subprocess 250...)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Gomez-Gonzalez-Relan and Cappetta to provide the technique of Cappetta, wherein the options for the at least one type of sensitive data are determined by configuring the system to iteratively separate, reduce, expand, combine, and submit using the training data set, using different options for at least one type of sensitive data during each iteration, until an error is less than a prescribed threshold, in the system of Gomez-Gonzalez-Relan in order to provide error-free outputs (Cappetta, see paragraph 0011).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MENG VANG whose telephone number is (571)270-7023. The examiner can normally be reached M-F 8AM-2PM, 3PM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NICHOLAS TAYLOR can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MENG VANG/Primary Examiner, Art Unit 2443