Prosecution Insights
Last updated: April 18, 2026
Application No. 17/846,866

NEURAL NETWORK-BASED LANGUAGE RESTRICTION

Non-Final OA — §101, §103, §112
Filed: Jun 22, 2022
Examiner: SERROU, ABDELALI
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 5 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (437 granted / 587 resolved; +12.4% vs TC avg)
Interview Lift: +30.4% for resolved cases with an interview
Typical Timeline: 3y 3m average prosecution; 23 applications currently pending
Career History: 610 total applications across all art units
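As a sanity check, the headline figures above can be recomputed from the raw counts the dashboard reports. The implied Tech Center baseline is an inference from the stated +12.4% delta, not a figure the dashboard itself displays:

```python
# Recompute the dashboard's headline allow rate from the raw counts it reports.
granted, resolved = 437, 587

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # 74.4%, displayed as 74%

# The +12.4% delta vs the Tech Center average implies a TC baseline near 62%.
tc_average = allow_rate - 12.4
print(f"Implied TC average: {tc_average:.1f}%")  # ~62.0%
```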

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Deltas are against the Tech Center average estimate • Based on career data from 587 resolved cases

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/31/2026 has been entered.

Response to Amendment

3. In response to the Office action mailed on 12/31/2025, applicant filed an amendment on 03/31/2026, amending claims 1-30. The pending claims are 1-30.

Response to Arguments

4. Applicant's arguments filed 03/31/2026 have been fully considered but they are not persuasive.

With regard to the art, applicant argues that Werner does not teach providing one or more text prompts to one or more pre-trained neural network language models to generate text content including generated restricted text content. Applicant asserts that the data selected to train a natural language processor and/or a machine learning system in Werner is not generated based on one or more prompts provided to the same neural network language model that is being trained. The examiner notes that Werner teaches, at paragraph [0056], that the training input is generated by a user device and received at a server for processing. The training input may be generated and received by the same device.
The training input may be generated and/or received by any combination of devices. At paragraph [0057], Werner teaches that training input may include labeled documents, non-disclosure agreements (NDAs), emails with various levels of confidentiality, classified tokens, text strings labeled with various levels of confidentiality, historical communications, etc. At paragraphs [0058]-[0060], Werner teaches that the training input (text prompt) is analyzed by natural language models and parsed into confidentiality levels and/or classified tokens (generated restricted text content). Accordingly, Werner reads on the claim language.

With regard to the rejection of claims 1-30 under 35 U.S.C. 101 for being directed to non-statutory subject matter, applicant argues that the claims do not recite limitations that can practically be performed in the human mind. Applicant asserts that updating neural network parameters is not something that can be performed in a human mind, and that the amended claims offer a practical application. The examiner notes that, even with the amendment, claims 1, 7, 13, 19, and 25 merely claim the idea of using one or more pre-trained neural networks to generate restricted and non-restricted content and using the generated content to update parameters of the one or more neural networks. The claims do not show how the used neural networks are trained, how the restricted content is generated, or how one or more parameters of the one or more neural networks are updated. The added limitation "providing one or more text prompts to one or more pre-trained neural network language models to generate text content including generated restricted text content" is considered mere data gathering and output recited at a high level of generality, and thus insignificant extra-solution activity. See MPEP 2106.05(g). According to the specification, a parameter could be a weight parameter. A human can update a weight value with no need for a machine; it is purely a mental process.
Given the broadest reasonable interpretation of the limitation, using one or more pre-trained neural networks to generate restricted and non-restricted content, and using restricted content generated by one or more neural networks to update one or more parameters of the one or more neural networks that generated the restricted content so as to cause the one or more neural networks to generate new content without the restricted content, is simply a form of intended use of the restricted content. The claims do not provide any information on how the restricted content is generated or how the parameters are updated. The neural network is recited at a high level of generality and used as a tool to perform the generic computer function of receiving data. The mere nominal recitation of a generic network appliance does not take the claim limitations out of the mental processes grouping.

With regard to improvement to the technology and practical application: to show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) for more information about mere instructions to apply an exception. In our case, the additional elements of "neural network", "processor", "circuit", and "machine-readable medium" are recited at a high level of generality and used as a tool to perform the generic computer function of receiving data. See MPEP 2106.05(f).
According to MPEP 2106.05(f), the recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more, because this type of recitation is equivalent to the words "apply it". Claims 1, 7, 13, 19, and 25 fail to recite details of how a solution to a problem is accomplished. So, even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claims are directed to the judicial exception. The additional elements are at best mere instructions to apply the abstract idea, which cannot provide an inventive concept.

The dependent claims, as detailed below, further refer to and describe the generated language text, the used corpus of language, generating a language prompt, determining a probability of the output text, and training a language synthesis network. The claims do not provide any details about how the training is performed, how the probability is determined, etc. Accordingly, these additional elements do not take the claim limitations out of the mental processes grouping, and do not integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.

Claim Rejections - 35 USC § 112

5. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Independent claim 1 recites "the generated text language content". There is no antecedent basis for this limitation in the claim. Dependent claims 2-6 are rejected for being dependent on claim 1.

Claim Rejections - 35 USC § 101

6. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Is the claimed invention directed to a process, machine, manufacture, or composition of matter?
Independent claims 1, 7, 13, 19, and 25 are directed to a method (process), a system (machine), and a computer-readable medium (manufacture) to: provide one or more text prompts to one or more pre-trained neural network language models to generate text content including generated restricted text content; use the one or more pre-trained neural network language models to generate, based on the one or more text prompts, the text content including restricted text content, and include the output that includes restricted content as negative training samples in a training dataset; and update one or more network parameters of the one or more pre-trained neural network language models using the training dataset, including the generated text language content generated by the one or more pre-trained neural network language models, to reduce a likelihood that the one or more neural network language models generate new text content having the restricted text content.

Step 2A, Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?

Under the 35 U.S.C. 101 guidelines and the broadest reasonable interpretation of the claims, the claimed steps fall within the "Mental Processes" grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. The steps of providing..., using the one or more pre-trained neural network language models to generate..., and updating one or more network parameters of the one or more pre-trained neural network language models... encompass mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion.
The claims do not provide any details about how the neural network model operates to perform the claimed steps. See MPEP 2106.04(a)(2), subsection III. Therefore, the claimed steps fall within the mental processes grouping of abstract ideas.

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

The claims recite the additional elements of "a processor", "neural network", …, which are mere data gathering and manipulation recited at a high level of generality, and thus insignificant extra-solution activity. The processor is recited at a high level of generality, and it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). The recitation of "neural network" is likewise at a high level of generality, and the mere nominal recitation of a generic network appliance does not take the claim limitations out of the mental processes grouping. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claims are directed to the judicial exception.

Step 2B: Does the claim recite additional elements that amount to significantly more than the abstract idea?
As to whether the claims as a whole amount to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim (Step 2B): as explained above in Step 2A, Prong 2, the use of "neural network" and "processor" is at a high level of generality, and even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept. Accordingly, the claims are ineligible.

The dependent claims further refer to and describe the process of independent claims 1, 7, 13, 19, and 25, which encompasses a mental process that is practically performed in the human mind, as explained above.

Claims 2, 8, 14, 20, and 26 recite "…train/training a language synthesis network, …to generate language text…". This relates to a human teaching a class on confidential information, and the class writing the information. In particular, claims 2, 8, 14, 20, and 26 recite the additional element of "neural network", and claims 2, 8, 20, and 26 recite the additional element of "processor" as per the independent claims; claim 2 also recites "circuit" as per the independent claim; and claim 20 also recites "machine-readable medium". No additional limitations are present.

Claims 3, 9, 15, 21, and 27 recite "…perform/performing an initial training of the language synthesis network using a corpus of language having a probability of including at least a subset of the restricted content." This relates to a human teaching a class to use a dictionary for reference. In particular, claims 3, 9, 21, and 27 recite the additional element of "processor" as per the independent claims; claim 3 also recites "circuit" as per the independent claim; and claim 21 also recites "machine-readable medium". No additional limitations are present.
Claims 4, 10, 16, 22, and 28 recite "…generate/generating one or more language prompts and cause the language synthesis network to generate output text based…". This relates to a human writing a questionnaire or quiz on a chalkboard and the class writing the response. In particular, claims 4, 10, 22, and 28 recite the additional element of "processor" as per the independent claims; claim 4 also recites "circuit" as per the independent claim; and claim 22 also recites "machine-readable medium". No additional limitations are present.

Claims 5, 11, 17, 23, and 29 recite "…use/using the one or more neural networks to determine a probability of the output text including the restricted content, wherein output text having a probability above a determined value is determined to correspond to the restricted content." This relates to the human activity of a person or a class calculating the probability of the words being restricted. In particular, claims 5, 11, 17, 23, and 29 recite the additional element of "neural network", and claims 5, 11, 23, and 29 recite the additional element of "processor" as per the independent claims; claim 5 also recites "circuit" as per the independent claim; and claim 23 also recites "machine-readable medium". No additional limitations are present.

Claims 6, 12, 18, 24, and 30 recite "…use/using output text that at least includes the restricted content, or does not include the restricted content, to further train the language synthesis network." This relates to the human activity of teaching a class to determine, based on the text, whether or not the text is confidential. In particular, claims 6, 12, 24, and 30 recite the additional element of "processor" as per the independent claims; claim 6 also recites "circuit" as per the independent claim; and claim 24 also recites "machine-readable medium". No additional limitations are present.
Accordingly, the pending claims are directed to an abstract idea and are not patent eligible.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-9, 13-15, 19-21, and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Werner et al. (US 2020/0314068 A1) in view of Shih (US 2021/0064925).

As per claim 1, Werner teaches a processor, comprising: one or more circuits to: provide one or more text prompts to one or more pre-trained neural network language models to generate text content including generated restricted text content ([0056], wherein the training input is generated by a user device and received at a server for processing. The training input may be generated and received by the same device.
The training input may be generated and/or received by any combination of devices; [0057], wherein training input may include labeled documents, non-disclosure agreements (NDAs), emails with various levels of confidentiality, classified tokens, text strings labeled with various levels of confidentiality, historical communications, etc.; and [0058]-[0060], wherein the training input (text prompt) is analyzed by natural language models and parsed into confidentiality levels and/or classified tokens (generated restricted text content)); use the one or more pre-trained neural network language models to generate, based on the one or more text prompts, the text content including restricted text content (method 300 of Fig. 3 and [0058]-[0060], wherein the pre-trained neural network language model analyzes and parses the training input (the one or more text prompts) into confidentiality levels and/or classified tokens), and include the output that includes restricted content as negative training samples in a training dataset (Fig. 3, [0054]-[0077], wherein method 300 identifies restricted (negative) and non-restricted (positive) content (as in Fig. 6 and [0114]) and uses the result to update the token database to further train the machine learning model; see [0025], [0032], and [0052]-[0053]); update one or more network parameters of the one or more pre-trained neural network language models using the training dataset including the generated text language content generated by the one or more pre-trained neural network language models to reduce a likelihood that the one or more neural network language models generate new text content having the restricted text content (Fig. 6, [0025], [0058], [0116]-[0117], a machine learning system that is trained to extract classified tokens (e.g., potentially confidential terms) from exemplary communication 602 as identified by some of the operations of method 300, method 400, and/or method 500.
The classified tokens which meet the threshold probability are redacted from the exemplary communication 602, and exemplary communication 606 is generated without the classified tokens (restricted content). During training, the machine learning system (some of the natural language processing uses AI with a neural network, such as Google Natural Language) updates one or more parameters of the one or more neural network language models that generated the restricted content to cause the one or more neural network language models to generate new content without the restricted content (see [0053], [0063], [0077], [0087], wherein the confidential analysis engine may learn new classified tokens and increase a confidentiality level of classified tokens in the classified token database).

As to using one or more pre-trained neural networks to generate output that includes content having restricted content, and including the output that includes restricted content as negative training samples generated by one or more pre-trained neural network language models in a training dataset: Werner, in method 300 of Fig. 3, uses a machine learning model that is already trained to extract classified tokens (e.g., potentially confidential terms) from an exemplary communication (otherwise, it would not identify the classified tokens). Then, based on the extracted classified tokens, the method 300 may loop back to operation 302 and reiterate for each set of training input. Method 300 comprises creating and/or updating the classified token database for redacting classified tokens according to various operations at future times, as in method 400 and/or method 500 ([0077]). Updating the token database means introducing new terms for retraining the machine learning model.
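The generate-and-update concept mapped above (generate content with a pre-trained model, treat outputs containing restricted content as negative samples, and update parameters to reduce the likelihood of regenerating that content) can be illustrated with a toy sketch. This is illustrative only — the vocabulary, the `RESTRICTED` set, and the penalty factor are all invented, and neither the claims nor Werner's system is implemented this way:

```python
import random

# Toy stand-in for a "pre-trained neural network language model":
# a weighted single-token vocabulary. All names here are hypothetical.
RESTRICTED = {"project_codename", "salary_figure"}
weights = {"hello": 1.0, "report": 1.0,
           "project_codename": 1.0, "salary_figure": 1.0}

def generate(rng, n=50):
    """Sample n tokens from the current model (the generation step)."""
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] for t in tokens], k=n)

def update(samples, penalty=0.5):
    """Treat restricted outputs as negative samples: down-weight them."""
    for tok in samples:
        if tok in RESTRICTED:
            weights[tok] *= penalty

def p_restricted():
    """Probability mass the model currently puts on restricted tokens."""
    return sum(weights[t] for t in RESTRICTED) / sum(weights.values())

rng = random.Random(0)
before = p_restricted()
for _ in range(10):          # repeated generate-flag-update rounds
    update(generate(rng))
after = p_restricted()
print(f"P(restricted): {before:.2f} -> {after:.2f}")  # probability drops
```

After a few rounds the model's probability of emitting restricted tokens collapses toward zero, which is the effect the claimed parameter update is said to achieve.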
However, in order to expedite prosecution: Shih, in the same field of endeavor, teaches training a machine learning model using various other types of data as training data as well, which may include text data, audio data, video data, and so on ([0079]), wherein classified data outputted by a trained classification model is stored to a training data repository, which can be used for further training of the trained model ([0092]). Therefore, it would have been obvious at the time the application was filed to use the above feature of Shih with the system of Werner, in order to improve accuracy and performance by adapting to new data.

As to independent claim 7, the system with a processor of claim 7 is related to the processor of claim 1 and uses the same network. Accordingly, claim 7 is similarly rejected under the same rationale as applied above with respect to the processor claim. Furthermore, Werner teaches one or more processors (see [0055]: "the method 300 may be partially or entirely performed by computers, or some other device having one or more processors therein.").

As to independent claim 13, the method of claim 13 is related to the processor of claim 1 and uses the same network. Accordingly, claim 13 is similarly rejected under the same rationale as applied above with respect to the processor claim.

As to independent claim 19, the machine-readable medium and processor of claim 19 are related to the processor of claim 1 and use the same network. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the processor claim.
Furthermore, Werner teaches a machine-readable medium having stored thereon a set of instructions (see [0118]: "The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention."), and one or more processors (see [0055]: "the method 300 may be partially or entirely performed by computers, or some other device having one or more processors therein.").

As to independent claim 25, the content management system with processors of claim 25 is related to the processor of claim 1 and uses the same network. Accordingly, claim 25 is similarly rejected under the same rationale as applied above with respect to claim 1. Furthermore, Werner teaches one or more processors (see [0055]: "the method 300 may be partially or entirely performed by computers, or some other device having one or more processors therein."), and memory for storing network parameters for the one or more neural networks (see [0047]: "The workstation shown in FIG. 2 includes a Random-Access Memory (RAM) 214," and [0046]: "FIG. 2 shows a representative hardware environment associated with a user device 116 and/or server 114 of FIG. 1").

Regarding claim 2, Werner teaches wherein the one or more circuits are further to train a language synthesis network, of the one or more neural network language models (see [0025]: "An administrator module 118 may select data to train a natural language processor and/or a machine learning system. In some approaches, the administrator module 118 may comprise a trained natural language processor and/or machine learning system which may be downloaded from an administrator site."), to generate language text, wherein at least a portion of the generated language text has a probability of including the restricted content (see [0025]: "Data selected by the administrator module 118 may comprise training input.
Training input may include communications, labeled documents, non-disclosure agreements (NDAs), emails with various levels of confidentiality, classified tokens, text strings labeled with various levels of confidentiality, keyword inputs, etc. ... A classified token as referred to herein may be a tag and/or metadata associated with any text, string of text data, image data, audio data, etc. which has been marked as confidential and/or comprises a confidentiality level. In some approaches, the classified token is the particular text, string of text data, image data, audio data, etc. which is confidential." Here the generated language text is the classified token, and the restricted content can be viewed as the confidential content. Additionally, see [0056]: "...the training input is generated by a user device and received at a server for processing. The training input may be generated and received by the same device. The training input may be generated and/or received by any combination of devices," which is followed by [0057], which also recites different training input formats and a classified token associated with the generated language text. Furthermore, see [0064]: "The cognitive method may identify certain classified tokens which have a high probability of being marked as confidential. A high probability may be determined based at least in part on association with previously marked classified tokens and/or confidentiality levels").

As to dependent claims 8 and 14, the system with a processor of claim 8 and the method of claim 14 are related to the processor of claim 2 and use the same network. Accordingly, claims 8 and 14 are similarly rejected under the same rationale as applied above with respect to the processor claim.

As to dependent claim 20, the machine-readable medium and processor of claim 20 are related to the processor of claim 2 and use the same network.
Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the processor claim.

As to dependent claim 26, the content management system with processors of claim 26 is related to the processor of claim 2 and uses the same network. Accordingly, claim 26 is similarly rejected under the same rationale as applied above with respect to the processor claim.

Regarding claim 3, Werner teaches wherein the one or more circuits are further to train a language synthesis network, of the one or more neural network language models, to generate language text, wherein at least a portion of the generated language text has a probability of including the restricted content ([0025]; see the rejection of claim 2), wherein the one or more circuits are further to perform an initial training of the language synthesis network using a corpus of language having a probability of including at least a subset of the restricted content (see [0053]: "Natural language processing and/or machine learning may be used to create and/or update a classified token database in real-time in order to classify user communications based on learned confidential information. In various approaches, the classified tokens may be associated with certain projects, business units, teams, etc. The classified token database may be used to dynamically redact learned confidential information from a user communication in real-time. At least some of the operations of the methods as disclosed herein provide an additional level of protection and/or limit data leaks associated with employee negligence." Additionally see [0064]: "The cognitive method may identify certain classified tokens which have a high probability of being marked as confidential. A high probability may be determined based at least in part on association with previously marked classified tokens and/or confidentiality levels.
In some approaches, the method 300 may assign identified tokens a threshold probability of being marked as confidential. A probability may be measured on a scale from 0 to 1, where 0 indicates that the identified token is unlikely to be marked as confidential and 1 indicates that the identified token is substantially certain to be marked as confidential. In a preferred approach, the threshold probability for marking an identified token as confidential may be more than approximately 0.40. If no portion of the training input is marked as confidential and no known classified token exists within the training input, the method 300 reiterates for each new set of training input.").

As to dependent claims 9 and 15, the system with a processor of claim 9 and the method of claim 15 are related to the processor of claim 3 and use the same network. Accordingly, claims 9 and 15 are similarly rejected under the same rationale as applied above with respect to the processor claim.

As to dependent claim 21, the machine-readable medium and processor of claim 21 are related to the processor of claim 3 and use the same network. Accordingly, claim 21 is similarly rejected under the same rationale as applied above with respect to the processor claim.

As to dependent claim 27, the content management system with processors of claim 27 is related to the processor of claim 3 and uses the same network. Accordingly, claim 27 is similarly rejected under the same rationale as applied above with respect to the processor claim.

Claims 4-5, 10-11, 16-17, 22-23, and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Werner in view of Shih, as applied to claims 1-3, 7-9, 13-15, 19-21, and 25-27 above, and further in view of Jalaluddin et al. (US 2022/0171930 A1).
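The probability scheme quoted above from Werner's [0064] (confidentiality scores on a 0-to-1 scale, with tokens above a threshold of approximately 0.40 marked confidential) amounts to a simple thresholding step. A minimal sketch, with the token names and scores invented for illustration:

```python
# Threshold classified tokens by confidentiality probability, per the
# scheme quoted from Werner [0064] (scale 0-1, threshold ~0.40).
# The tokens and their scores below are hypothetical.
THRESHOLD = 0.40

token_probs = {
    "quarterly_report": 0.15,   # below threshold: not marked confidential
    "nda_party_name": 0.55,     # above threshold: marked confidential
    "merger_target": 0.92,      # above threshold: marked confidential
}

confidential = {t for t, p in token_probs.items() if p > THRESHOLD}
print(sorted(confidential))  # ['merger_target', 'nda_party_name']
```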
Regarding claim 4, Werner teaches wherein the one or more circuits are further to train a language synthesis network, of the one or more neural network language models, to generate language text, wherein at least a portion of the generated language text has a probability of including the restricted content, and wherein the one or more circuits are to perform an initial training of the language synthesis network using a corpus of language having a probability of including at least a subset of the restricted content (see the rejection of claim 3). Further, Werner teaches a language synthesis network (see [0025]: "An administrator module 118 may select data to train a natural language processor and/or a machine learning system. In some approaches, the administrator module 118 may comprise a trained natural language processor and/or machine learning system which may be downloaded from an administrator site.").

Werner in view of Shih does not specifically teach wherein the one or more circuits are further to generate one or more language prompts and cause the language synthesis network to generate output text based, at least in part, upon the one or more language prompts. However, Jalaluddin does teach the use of prompts and outputting text (see [0009]: "In some embodiments, the set of OOD examples is generated using a corpus, a lexical database, a text generating model, an adversarial attack model, or any combination thereof." Additionally see [0034]: "...the bot system may convert the content into a standardized form (e.g., a representational state transfer (REST) call against enterprise services with the proper parameters) and generate a natural language response.
The bot system may also prompt the end user for additional input parameters or request other additional information.” This is further supported by [0048] “Each skill associated with a digital assistant helps a user of the digital assistant complete a task through a conversation with the user, where the conversation can include a combination of text or audio inputs provided by the user and responses provided by the skill bots. These responses may be in the form of text or audio messages to the user”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the redacting technique as taught by Werner with the prompts and chatbots as taught by Jalaluddin so that it improves “…such that the ML model is more resilient towards irrelevant context and more accurately learns the pattern or boundary of an intent.” [0005].

As to dependent claims 10 and 16, the system with processor of claim 10 and the method of claim 16 are related to the processor of claim 4 and use the same network. Accordingly, claims 10 and 16 are similarly rejected under the same rationale as applied above with respect to the processor claim. As to dependent claim 22, the machine-readable medium and processor of claim 22 are related to the processor of claim 4 and use the same network. Accordingly, claim 22 is similarly rejected under the same rationale as applied above with respect to the processor claim. As to dependent claim 28, the content management system with processors of claim 28 is related to the processor of claim 4 and uses the same network. Accordingly, claim 28 is similarly rejected under the same rationale as applied above with respect to the processor claim.
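The claimed flow at issue in claim 4 (generate one or more language prompts, feed them to a language synthesis network, collect the generated output text) can be sketched as follows. This is an editorial illustration under stated assumptions, not code from Werner, Shih, or Jalaluddin; the model here is a stand-in stub where a real system would invoke a trained neural network.

```python
# Illustrative sketch of the claimed prompt-driven generation step:
# prompts are produced, passed to a synthesis model, and the resulting
# output text is collected for downstream restricted-content screening.
# `synthesis_model` is a hypothetical stub, not a real trained network.

def synthesis_model(prompt: str) -> str:
    # Stand-in for a trained language synthesis network.
    return f"Generated text for: {prompt}"

def generate_from_prompts(prompts):
    """Cause the synthesis model to generate output text for each prompt."""
    return [synthesis_model(p) for p in prompts]

outputs = generate_from_prompts(["Draft an internal memo", "Summarize Q3 results"])
for text in outputs:
    print(text)
```

In the claimed arrangement, the collected outputs would then be scored for the probability of containing restricted content, as recited in claim 5.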
Regarding Claim 5, Werner teaches wherein the one or more circuits are further to train a language synthesis network, of the one or more neural network language models, to generate language text, wherein at least a portion of the generated language text has a probability of including the restricted content, wherein the one or more circuits are to perform an initial training of the language synthesis network using a corpus of language having a probability of including at least a subset of the restricted content, wherein the one or more circuits are further to generate one or more language prompts and cause the language synthesis network to generate output text based, at least in part, upon the one or more language prompts (see rejection of claim 4), and wherein the one or more circuits are further to use the one or more neural networks to determine a probability of the output text including the restricted content (see FIG. 2 and [0058] “Operation 304 includes performing natural language processing on the training input. Any known technique may be used to perform natural language processing including Google Natural Language, Natural Language Toolkit, Apache Lucene and Solr, Apache OpenNLP, CoreNLP, SpaCy, etc. In a preferred embodiment, the natural language processing is performed using Watson Natural Language Understanding, Watson Tone Analyzer, and/or Watson Natural Language Classifier.” Where some of the natural language processing uses AI with a neural network, such as Google Natural Language.
This is used to determine restricted content by confidentiality [0061] “Decision block 306 includes an operation to determine whether a confidentiality level is present in and/or associated with some or all of the training input.” as well as [0064] “…the method may proceed to decision block 314 to determine whether a known classified token exists within the training input… method may identify certain classified tokens which have a high probability of being marked as confidential.”), wherein output text having a probability above a determined value is determined to correspond to the restricted content (see FIG. 6 and [0064] “the threshold probability for marking an identified token as confidential may be more than approximately 0.40.” Where the restricted content is confidential).

As to dependent claims 11 and 17, the system with processor of claim 11 and the method of claim 17 are related to the processor of claim 5 and use the same network. Accordingly, claims 11 and 17 are similarly rejected under the same rationale as applied above with respect to the processor claim. As to dependent claim 23, the machine-readable medium and processor of claim 23 are related to the processor of claim 5 and use the same network. Accordingly, claim 23 is similarly rejected under the same rationale as applied above with respect to the processor claim. As to dependent claim 29, the content management system with processors of claim 29 is related to the processor of claim 5 and uses the same network. Accordingly, claim 29 is similarly rejected under the same rationale as applied above with respect to the processor claim.

Claims 6, 12, 18, 24, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Werner in view of Jalaluddin and Shih, as applied to claim 4 above, and further in view of Mallikarjuniah et al. (US 11232784 B1).
Regarding Claim 6, Werner in view of Jalaluddin teaches wherein the one or more circuits are further to train a language synthesis network, of the one or more neural network language models, to generate language text, wherein at least a portion of the generated language text has a probability of including the restricted content, wherein the one or more circuits are to perform an initial training of the language synthesis network using a corpus of language having a probability of including at least a subset of the restricted content, wherein the one or more circuits are further to generate one or more language prompts and cause the language synthesis network to generate output text based, at least in part, upon the one or more language prompts, wherein the one or more circuits are further to use the one or more neural networks to determine a probability of the output text including the restricted content, wherein output text having a probability above a determined value is determined to correspond to the restricted content (see rejection of claim 5), and at least includes the restricted content, or does not include the restricted content (see Werner [0017] “dynamically redacting confidential information from communications includes performing natural language processing on training input and determining whether a confidentiality level is present in the training input.”).

Werner in view of Shih and Jalaluddin may not explicitly disclose wherein the one or more circuits are further to use output text…, to further train the language synthesis network. However, Mallikarjuniah does teach output text that trains a model (see column 13, lines 45-67, and column 14, lines 1-5: “One or more of the subcomponents of the scoring component 285 may implement one or more trained machine learning models. Various machine learning techniques may be used to train such model(s). Models may be trained and operated according to various machine learning techniques.
Such techniques may include, for example, inference engines, trained classifiers, etc. Examples of trained classifiers include conditional random fields (CRF) classifiers, Support Vector Machines (SVMs), neural networks (such as deep neural networks and/or recurrent neural networks), decision trees, AdaBoost (short for “Adaptive Boosting”) combined with decision trees, and random forests. Focusing on CRF as an example, CRF is a class of statistical models used for structured predictions. In particular, CRFs are a type of discriminative undirected probabilistic graphical models. A CRF can predict a class label for a sample while taking into account contextual information for the sample. CRFs may be used to encode known relationships between observations and construct consistent interpretations. A CRF model may thus be used to label or parse certain sequential data, like query text as described above. Classifiers may issue a “score” indicating which category the data most closely matches. The score may provide an indication of how closely the data matches the category.” Where restriction is determined by a trained classifier, such as a CRF, that uses class labels which can be viewed as the labels of restricted content or nonrestricted content; the output text is based on the CRF, which is used for training models).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the redacting technique to identify restricted content as taught by Werner, with the prompts and chatbots as taught by Jalaluddin so that it improves “techniques for keyword data augmentation for training…in natural language processing” (Jalaluddin [0002]), with the classifier as taught by Mallikarjuniah in order “to determine an intent of the user” (Mallikarjuniah, column 2, line 10).
As to dependent claims 12 and 18, the system with processor of claim 12 and the method of claim 18 are related to the processor of claim 6 and use the same network. Accordingly, claims 12 and 18 are similarly rejected under the same rationale as applied above with respect to the processor claim. As to dependent claim 24, the machine-readable medium and processor of claim 24 are related to the processor of claim 6 and use the same network. Accordingly, claim 24 is similarly rejected under the same rationale as applied above with respect to the processor claim. As to dependent claim 30, the content management system with processors of claim 30 is related to the processor of claim 6 and uses the same network. Accordingly, claim 30 is similarly rejected under the same rationale as applied above with respect to the processor claim.

Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELALI SERROU, whose telephone number is (571) 272-7638. The examiner can normally be reached M-F, 9 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDELALI SERROU/Primary Examiner, Art Unit 2659

Prosecution Timeline

Jun 22, 2022: Application Filed
Apr 30, 2024: Non-Final Rejection — §101, §103, §112
Jun 27, 2024: Examiner Interview Summary
Jun 27, 2024: Applicant Interview (Telephonic)
Aug 05, 2024: Response Filed
Dec 19, 2024: Final Rejection — §101, §103, §112
Jun 27, 2025: Request for Continued Examination
Jun 30, 2025: Response after Non-Final Action
Jul 02, 2025: Non-Final Rejection — §101, §103, §112
Oct 07, 2025: Response Filed
Dec 29, 2025: Final Rejection — §101, §103, §112
Mar 31, 2026: Request for Continued Examination
Apr 02, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602544: INFORMATION PROCESSING APPARATUS, OPERATION METHOD, AND RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596875: TECHNIQUES FOR ADAPTIVE LARGE LANGUAGE MODEL USAGE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597417: EXPORTING MODULAR ENCODER FEATURES FOR STREAMING AND DELIBERATION ASR
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596889: GENERATION OF NATURAL LANGUAGE (NL) BASED SUMMARIES USING A LARGE LANGUAGE MODEL (LLM) AND SUBSEQUENT MODIFICATION THEREOF FOR ATTRIBUTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591603: AUTOMATED KEY-VALUE EXTRACTION USING NATURAL LANGUAGE INTENTS
Granted Mar 31, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+30.4%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
