Prosecution Insights
Last updated: April 19, 2026
Application No. 18/885,787

SYSTEM AND METHOD USING INTELLIGENT PRIVACY ASSISTANT MODEL FOR LARGE LANGUAGE MODEL OPERATION

Non-Final OA §103
Filed: Sep 16, 2024
Examiner: MOHAMMADI, FAHIMEH M
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: The Hong Kong University of Science and Technology
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 224 granted / 294 resolved; +18.2% vs TC avg)
Interview Lift: +52.6% (resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 24 currently pending
Career History: 318 total applications across all art units
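As a sanity check, the career allow rate above can be recomputed from the raw counts (a minimal sketch; the rounding convention shown is an assumption):

```python
# Recompute the examiner's career allow rate from the raw counts
# reported above: 224 granted out of 294 resolved cases.
granted = 224
resolved = 294

allow_rate = 100 * granted / resolved  # percent
print(f"Career allow rate: {allow_rate:.1f}%")
```

The exact quotient is about 76.2%, consistent with the 76% figure shown.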

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)

Tech Center averages are estimates; based on career data from 294 resolved cases.
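The per-statute deltas are internally consistent: subtracting each delta from its rate recovers the same implied Tech Center baseline. A minimal sketch (the dict keys and rounding are illustrative assumptions):

```python
# Statute-specific rates and their deltas vs the Tech Center average,
# as reported above. rate - delta recovers the implied TC baseline.
rates = {"101": 16.0, "103": 58.1, "102": 8.0, "112": 9.3}
deltas = {"101": -24.0, "103": 18.1, "102": -32.0, "112": -30.7}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # each statute implies the same baseline
```

Every statute implies a 40.0% Tech Center average, i.e. a single estimated baseline underlies all four comparisons.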

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the application 18/885787 filed on 09/16/2024. Claims 1-20 have been examined and are pending in this application.

Priority
Applicant's claim of priority to U.S. Provisional Application No. 63/591766, filed on 10/20/2023, is acknowledged.

Information Disclosure Statement
The information disclosure statement (IDS), submitted on 11/27/2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections
Claims 1, 5, 6, 11, 15 and 16 are objected to because of the following informalities: claims 1, 5, 6, 11, 15 and 16 recite the acronyms "BERT" and "LLM" without spelling them out in full at their first occurrence. The examiner notes that the acronyms BERT and LLM should be spelled out at their first occurrence. Appropriate correction is required.

Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are:
- “checklist module configured to query/processing,” “a personalized learning module [] configured to process/interpret the input data,” and “a reasoning module [] configured to integrate multi-turn dialog contexts,” recited in claim 1;
- “a privacy interception module configured to [] monitor/intercept transmission of the input data,” recited in claim 2;
- “a privacy norm collector module configured to query the database” and “the rule-based privacy evaluator module is configured to process the input data,” recited in claim 3;
- “the privacy judgment module is configured to compile and format [] report,” recited in claim 4;
- “a contextual data processor module [] configured to process and interpret the input data,” recited in claim 5;
- “a knowledge distillation module configured to extract essential knowledge” and “a cloud-based foundation module [] configured to act as a base model,” recited in claim 6;
- “the reward module is configured to perform a learning process,” recited in claim 7;
- “a contextual integration module [] configured to integrate the multi-turn dialog context” and “a reasoning evaluator module [] configured to utilize/evaluate,” recited in claim 8; and
- “an in-context learning module [] configured to leverage few-shot in-context learning,” recited in claim 9.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 8, 10-15, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over GOMEZ (“Gomez,” US 2023/0029190) in view of Soliman et al. (“Soliman,” US 12499879) and SOUNDARARAJAN (“Soundararajan,” WO 2024/213993).

Regarding claim 1:
Gomez discloses a system using an intelligent privacy assistant model for large language models operations, comprising: a text receiver for parsing and preprocessing input data from at least one user so as to convert user-input text into a machine-readable format (Gomez: par. 0021 system 100 may retrieve documents from document database 110 to perform insight extraction [NOTE: insight extraction (often referred to as Information Extraction or Text Analytics) is extensively used to convert unstructured text into machine-readable and actionable structured formats. It acts as a bridge between human-readable language (documents, emails, social media) and computer-understandable data (JSON, CSV, SQL tables)]); a rule-based privacy checklist module configured to query a database for retrieving privacy norms and processing these norms to create annotated rules, wherein the rule-based privacy checklist module processes the input data from the text receiver and compare it against the annotated norms and rules, thereby compiling and formatting findings from the comparing into a readable structured report (Gomez: par. 0031 PET selector 130 may use a rule-based security classification 238 to classify the level of security for extracted insight 226.
Security classification 238 may include the data classification 236 for a particular extracted insight 226 to determine security classification 238; par. 0032 intended use classifier 145 may use intended use classification model 245 to determine formula 247 based on pre-determined key performance indicators (KPI) 242, terms and conditions 241, and/or regulation policy 243; par. 0037 once insight encryptor 160 processes and encrypts extracted insight 226, the resulting value may be protected insight 262); a personalized learning module receiving the input data via the text receiver and configured to process and interpret the input data to understand contextual information thereof (Gomez: par. 0032 metadata associated with KPI 242 may include contextual information and a formal language query (or queries) used to generate KPI 242 [] KPI 242 may be dynamically retrieved by machine learning techniques that extract mathematical expressions from textual sources such as document 210). Gomez does not explicitly disclose wherein the personalized learning module comprises a fine-tuned BERT module for classifying whether the contextual information constitutes a privacy violation, and wherein classification results by the fine-tuned BERT module is locally processed to generate a local privacy report and a reasoning module receiving the input data via the text receiver and configured to integrate multi-turn dialog contexts to assess privacy based on server-side reasoning. However, Soliman discloses wherein the personalized learning module comprises a fine-tuned BERT module for classifying whether the contextual information constitutes a privacy violation, and wherein classification results by the fine-tuned BERT module is locally processed to generate a local privacy report (Soliman: col. 
4 lines 43-49 a language model (e.g., a Bidirectional Encoder Representations from Transformers (BERT) model) may be coupled with multistage fine-tuning steps to generate functionality-based representations of user inputs. Using the functionality-based representations, user inputs may then be clustered into functionalities; col. 3 lines 51-55 unsupported functionalities may cause friction (e.g., the system outputting an undesired response to a user input, the system processing resulting in an error condition, etc.) and can degrade the user experience); and a reasoning module receiving the input data via the text receiver and configured to integrate multi-turn dialog contexts to assess privacy based on server-side reasoning (Soliman: col. 21 lines 22-34 dialog processing is a field of computer science that involves communication between a computing system and a human via text, audio, and/or other forms of communication [] more complicated dialog processing involves determining and optionally acting on one or more goals expressed by the user over multiple turns of dialog [] these multi-turn "goal-oriented" dialog systems typically need to recognize, retain, and use information collected during more than one input during a back-and-forth or "multi-turn" interaction with the user).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Soliman with the system/method of Gomez to include a fine-tuned BERT module for classifying whether the contextual information constitutes a privacy violation, and receiving the input data via the text receiver and configured to integrate multi-turn dialog contexts to assess privacy. One would have been motivated to involve converting a user's input into text data which may then be provided to various text-based software applications (Soliman: col. 1 lines 17-19).
Gomez in view of Soliman does not explicitly disclose wherein the reasoning module extracts integrated context and utilizes chain-of-thought (CoT) reasoning to evaluate whether entire context of integrated context has private information, and wherein the reasoning module delivers comprehensive privacy judgments for interactions occurring on cloud at a stage of evaluation for the private information and generates a cloud-based report.

However, Soundararajan discloses wherein the reasoning module extracts integrated context and utilizes chain-of-thought (CoT) reasoning to evaluate whether entire context of integrated context has private information, and wherein the reasoning module delivers comprehensive privacy judgments for interactions occurring on cloud at a stage of evaluation for the private information and generates a cloud-based report (Soundararajan: page 11 lines 20-25 the governance module (118) is also configured to control user prompt or chain of thought (CoT) prompts based on whitelisting or backlisting of domains or types of queries for different categories of users against identified type of prompt and the level of risks identified in those prompts [] the chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps; page 12 lines 5-8 the governance module (118) is configured to generate reports related to the risks involved in the input query and the response outputs. In one embodiment, the responsible AI report focuses on identifying, addressing, and managing risks associated with adversarial machine learning).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the teachings of Soundararajan with the system/method of Gomez and Soliman to include extracts integrated context and utilizes chain-of-thought (CoT) reasoning to evaluate whether entire context of integrated context has private information.
One would have been motivated to provide a system for multi modal aggregation in a governance platform in a responsible artificial intelligence (Soundararajan: page 2 lines 6-7).

Regarding claim 2:
Gomez in view of Soliman and Soundararajan discloses the system according to claim 1. Gomez further discloses a privacy interception module configured to actively monitor and intercept transmission of the input data when potential privacy violations in the input data are detected by the rule-based privacy checklist module, the personalized learning module, the reasoning module, or combinations thereof (Gomez: par. 0075 each privacy enhancing technique may be useful in some contexts while compromising the usability of protected data in other contexts. PET selector 130 may optimize compliance with data privacy regulations and the usability of protected data by automating selection of privacy enhancing techniques based on the intended usage of extracted insight 226. Using security classification 238, processing requirement 250, and data requirement 254 for extracted insight 226 from document 310, the PET selector 130 may then map these combined factors to the most appropriate privacy enhancing technique); and if analysis by the privacy interception module indicates that the input data involves sensitive or private information, the privacy interception module intervenes by halting data transmission process (Gomez: par. 0076 PET selector may use PET selection 256 to generate a PET insight list 258 mapping a selected privacy enhancing technique to extracted insight 226 [] when certain extracted insight 226 [] is designated with a public security classification 238 and corresponds to no processing requirement 250, PET selector 130 may not map the extracted insight 226 to a privacy enhancing technique).

Regarding claim 3:
Gomez in view of Soliman and Soundararajan discloses the system according to claim 1.
Gomez further discloses a rule-based privacy evaluator module, wherein the annotated rules are transmitted to the rule-based privacy evaluator module from the privacy norm collector module, and wherein the rule-based privacy evaluator module is configured to process the input data by comparing it against the annotated norms and rules using a rule-matching algorithm (Gomez: par. 0031 PET selector 130 may use a rule-based security classification 238 to classify the level of security for extracted insight 226. Security classification 238 may include the data classification 236 for a particular extracted insight 226 to determine security classification 238; par. 0032 intended use classifier 145 may use intended use classification model 245 to determine formula 247 based on pre-determined key performance indicators (KPI) 242, terms and conditions 241, and/or regulation policy 243; par. 0037 once insight encryptor 160 processes and encrypts extracted insight 226, the resulting value may be protected insight 262). Soliman further discloses a privacy norm collector module configured to query the database storing related privacy laws at regular intervals and to retrieve updated privacy norms for creating the annotated rules (Soliman: col. 8 lines 20-24 the supporting device(s) 120 may determine that a NU component, which can include NLU components [] is to be configured (e.g., updated, retrained, etc.) to be able to process with respect to an unsupported functionality). The motivation is the same as that of claim 1 above.

Regarding claim 4:
Gomez in view of Soliman and Soundararajan discloses the system according to claim 3.
Gomez further discloses wherein the rule-based privacy checklist module further comprises: a privacy judgment module, wherein comparing results made by the rule-based privacy evaluator module are transmitted to the privacy judgment module, and wherein the privacy judgment module is configured to compile and format the findings into the readable structured report (Gomez: par. 0036 based on the security classification 238 and processing requirement 250, PET selector 130 may select the most appropriate privacy enhancing technique for extracted insight 226 [] PET selector 130 may associate extracted insight 226 with the appropriate privacy enhancing technique. PET selector 130 may generate a PET insight list 258 representing a list of each extracted insight 226 with its mapping to the relevant privacy enhancing technique).

Regarding claim 5:
Gomez in view of Soliman and Soundararajan discloses the system according to claim 4. Gomez further discloses wherein the personalized learning module further comprises: a contextual data processor module receiving the input data and configured to process and interpret the input data to understand the contextual information (Gomez: par. 0032 metadata [] may include contextual information and a formal language query [] may be dynamically retrieved by machine learning techniques that extract mathematical expressions from textual sources such as document 210). Soliman further discloses wherein the contextual information processed by the contextual data processor module is transmitted to the fine-tuned BERT module, and wherein the classification results made by the fine-tuned BERT module are further forwarded to the privacy judgment module for updating its privacy assessment records (Soliman: col. 12 lines 19-24 the representation determination component 210 may include a Sentence BERT (SBERT) model that is pretrained for generating sentence embeddings.
The SBERT model may be further trained/fine-tuned so that the generated sentence embeddings map to a functionality-based vector space). The motivation is the same as that of claim 1 above.

Regarding claim 8:
Gomez in view of Soliman and Soundararajan discloses the system according to claim 1. Soliman further discloses a contextual integration module receiving the input data and configured to integrate the multi-turn dialog contexts, involving aggregating sequences of user interactions over time to form the comprehensive context to be extracted as the integrated context (Soliman: col. 22 lines 4-17 the NLG component 779 may generate dialog data based on one or more response templates [] the NLG component 779 may analyze the logical form of the template to produce one or more textual responses including markups and annotations to familiarize the response that is generated [] the selection may, therefore, be based on past responses, past questions, a level of formality, and/or any other feature, or any other combination thereof). The motivation is the same as that of claim 1 above.
Soundararajan further discloses a reasoning evaluator module receiving the integrated context from the contextual integration module and configured to utilize the CoT reasoning to evaluate whether the entire context includes the private information, wherein the evaluation by the reasoning evaluator module involves breaking down the integrated context into logical steps and assessing each step for privacy violations (Soundararajan: page 11 lines 20-27 the governance module (118) is also configured to control user prompt or chain of thought (CoT) prompts based on whitelisting or backlisting of domains or types of queries for different categories of users against identified type of prompt and the level of risks identified in those prompts [] the chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps [] the user prompt is controlled with context-based prompt control). The motivation is the same as that of claim 1 above.

Regarding claim 10:
Gomez in view of Soliman and Soundararajan discloses the system according to claim 1. Soliman further discloses wherein the system is operated in a smartphone, a tablet, or an edge device (Soliman: col. 54 lines 27-33 a smart phone 110b [] a tablet computer 110d [] may be connected to the network(s) 199). The motivation is the same as that of claim 1 above.

Regarding claim 11:
Gomez discloses a method using an intelligent privacy assistant model for large language models operations, comprising: parsing and preprocessing, by a text receiver, input data from at least one user so as to convert user-input text into a machine-readable format (Gomez: par. 0021 system 100 may retrieve documents from document database 110 to perform insight extraction [NOTE: insight extraction (often referred to as Information Extraction or Text Analytics) is extensively used to convert unstructured text into machine-readable and actionable structured formats.
It acts as a bridge between human-readable language (documents, emails, social media) and computer-understandable data (JSON, CSV, SQL tables)]); querying, by a rule-based privacy checklist module, a database for retrieving privacy norms and processing these norms to create annotated rules, wherein the rule-based privacy checklist module processes the input data from the text receiver and compare it against the annotated norms and rules, thereby compiling and formatting findings from the comparing into a readable structured report (Gomez: par. 0031 PET selector 130 may use a rule-based security classification 238 to classify the level of security for extracted insight 226. Security classification 238 may include the data classification 236 for a particular extracted insight 226 to determine security classification 238; par. 0032 intended use classifier 145 may use intended use classification model 245 to determine formula 247 based on pre-determined key performance indicators (KPI) 242, terms and conditions 241, and/or regulation policy 243; par. 0037 once insight encryptor 160 processes and encrypts extracted insight 226, the resulting value may be protected insight 262); and receiving, by a personalized learning module, the input data via the text receiver (Gomez: par. 0032 metadata associated with KPI 242 may include contextual information and a formal language query (or queries) used to generate KPI 242 [] KPI 242 may be dynamically retrieved by machine learning techniques that extract mathematical expressions from textual sources such as document 210). 
Gomez does not explicitly disclose processing and interpreting, by the personalized learning module, the input data to understand contextual information thereof, wherein the personalized learning module comprises a fine-tuned BERT module for classifying whether the contextual information constitutes a privacy violation, and wherein classification results by the fine-tuned BERT module is locally processed to generate a local privacy report and receiving, by a reasoning module, the input data via the text receiver. However, Soliman discloses processing and interpreting, by the personalized learning module, the input data to understand contextual information thereof, wherein the personalized learning module comprises a fine-tuned BERT module for classifying whether the contextual information constitutes a privacy violation, and wherein classification results by the fine-tuned BERT module is locally processed to generate a local privacy report (Soliman: col. 4 lines 43-49 a language model (e.g., a Bidirectional Encoder Representations from Transformers (BERT) model) may be coupled with multistage fine-tuning steps to generate functionality-based representations of user inputs. Using the functionality-based representations, user inputs may then be clustered into functionalities; col. 3 lines 51-55 unsupported functionalities may cause friction (e.g., the system outputting an undesired response to a user input, the system processing resulting in an error condition, etc.) and can degrade the user experience); and receiving, by a reasoning module, the input data via the text receiver (Soliman: col. 2 lines 62-64 Intent Classification, which involves classifying a user input into a defined set of intent labels as represented by intent data). 
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the teachings of Soliman with the system/method of Gomez to include the personalized learning module comprises a fine-tuned BERT module for classifying whether the contextual information constitutes a privacy violation. One would have been motivated to involve converting a user's input into text data which may then be provided to various text-based software applications (Soliman: col. 1 lines 17-19). Gomez in view of Soliman does not explicitly disclose integrating, by the reasoning module, multi-turn dialog contexts to assess privacy based on server-side reasoning, wherein the reasoning module extracts integrated context and utilizes chain-of-thought (CoT) reasoning to evaluate whether entire context of integrated context has private information, and wherein the reasoning module delivers comprehensive privacy judgments for interactions occurring on cloud at a stage of evaluation for the private information and generates a cloud-based report. 
However, Soundararajan discloses integrating, by the reasoning module, multi-turn dialog contexts to assess privacy based on server-side reasoning, wherein the reasoning module extracts integrated context and utilizes chain-of-thought (CoT) reasoning to evaluate whether entire context of integrated context has private information, and wherein the reasoning module delivers comprehensive privacy judgments for interactions occurring on cloud at a stage of evaluation for the private information and generates a cloud-based report (Soundararajan: page 11 lines 20-25 the governance module (118) is also configured to control user prompt or chain of thought (CoT) prompts based on whitelisting or backlisting of domains or types of queries for different categories of users against identified type of prompt and the level of risks identified in those prompts [] the chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps; page 12 lines 5-8 the governance module (118) is configured to generate reports related to the risks involved in the input query and the response outputs. In one embodiment, the responsible AI report focuses on identifying, addressing, and managing risks associated with adversarial machine learning).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the teachings of Soundararajan with the system/method of Gomez and Soliman to include extracts integrated context and utilizes chain-of-thought (CoT) reasoning to evaluate whether entire context of integrated context has private information. One would have been motivated to provide a system for multi modal aggregation in a governance platform in a responsible artificial intelligence (Soundararajan: page 2 lines 6-7).

Regarding claims 12-15:
Claims 12-15 are similar in scope to claims 2-5, respectively, and are therefore rejected under similar rationale.
Regarding claim 18: Claim 18 is similar in scope to claim 8, and is therefore rejected under similar rationale.

Regarding claim 20: Claim 20 is similar in scope to claim 10, and is therefore rejected under similar rationale.

Claims 6-7, 9, 16-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over GOMEZ ("Gomez," US 2023/0029190) in view of Soliman et al. ("Soliman," US 12499879), SOUNDARARAJAN ("Soundararajan," WO 2024/213993) and Karpman et al. ("Karpman," US 11995803).

Regarding claim 6: Gomez in view of Soliman and Soundararajan discloses the system according to claim 5. Gomez in view of Soliman and Soundararajan does not explicitly disclose a knowledge distillation module configured to extract essential knowledge from an LLM database to enhance a smaller BERT model within the fine-tuned BERT module, and a cloud-based foundation module communicating with the knowledge distillation module and configured to act as a base model for privacy violation learning and knowledge distillation.

However, Karpman discloses a knowledge distillation module configured to extract essential knowledge from an LLM database to enhance a smaller BERT model within the fine-tuned BERT module (Karpman: col. 4 lines 17-24 each pre-trained text encoder 118 defines a model, such as a (large) language model, that is configured to generate embeddings (e.g., a set of vectors) [] the set of pre-trained text encoders 118 can include text-only language models (e.g., T5-XXL, BERT) and multi-modal language models); and a cloud-based foundation module communicating with the knowledge distillation module and configured to act as a base model for privacy violation learning and knowledge distillation (Karpman: col. 7 lines 33-38 a set of (pre-trained) content moderation models is configured to classify and reject natural language requests to generate inappropriate content, such as sexually explicit, violent, or otherwise inappropriate content and/or classify and filter out any similarly inappropriate images generated by the text-to-image diffusion model 112).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Karpman with the system/method of Gomez, Soliman and Soundararajan to include extracting essential knowledge from an LLM database to enhance a smaller BERT model within the fine-tuned BERT module. One would have been motivated to do so to provide new and useful artificial intelligence systems and methods for training and deploying text-to-image generative models (Karpman: col. 1 lines 24-27).

Regarding claim 7: Gomez in view of Soliman, Soundararajan and Karpman discloses the system according to claim 6. Karpman further discloses wherein the personalized learning module further comprises: a reward module interacting with the cloud-based foundation module for receiving data and updating the cloud-based foundation module based on identified privacy violations, wherein the reward module is further configured to perform a learning process involving supervised fine-tuning or reinforcement learning from human feedback (RLHF), and wherein, once the cloud-based foundation module is updated by the reward module, the distilled knowledge is transmitted back to the knowledge distillation module (Karpman: col. 8 lines 25-34 a reward model 114 that is pre-trained on aggregated human assessments of quality and preferability for images created by the text-to-image diffusion model 112. During the training and/or fine-tuning stages, the system can modify and/or optimize parameters of the text-to-image diffusion model 112 using the reward model 114 to train the text-to-image diffusion model 112 to account for human preferences and/or feedback on aesthetic quality). The motivation is the same as that of claim 6 above.

Regarding claim 9: Gomez in view of Soliman and Soundararajan discloses the system according to claim 8. Gomez in view of Soliman and Soundararajan does not explicitly disclose an in-context learning module interacting with the reasoning evaluator module and configured to leverage few-shot in-context learning to refine privacy judgments, wherein the refining by the in-context learning module involves using a small number of examples or samples to train at least one model of the reasoning evaluator module.

However, Karpman discloses an in-context learning module interacting with the reasoning evaluator module and configured to leverage few-shot in-context learning to refine privacy judgments, wherein the refining by the in-context learning module involves using a small number of examples or samples to train at least one model of the reasoning evaluator module (Karpman: col. 12 lines 58-65 the system can first execute the captioner module on the training corpus and then execute the filter module on the resulting training set in order to remove image-text pairs that include either misaligned alternate text or misaligned (e.g., inaccurate) synthetic captions generated by the captioner module, thereby increasing overall caption fidelity of the training corpus but yielding a (slightly) smaller set of training examples).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Karpman with the system/method of Gomez, Soliman and Soundararajan to include refining by the in-context learning module using a small number of examples or samples to train at least one model of the reasoning evaluator module. One would have been motivated to do so to provide new and useful artificial intelligence systems and methods for training and deploying text-to-image generative models (Karpman: col. 1 lines 24-27).

Regarding claims 16-17: Claims 16-17 are similar in scope to claims 6-7, respectively, and are therefore rejected under similar rationale.

Regarding claim 19: Claim 19 is similar in scope to claim 9, and is therefore rejected under similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fahimeh Mohammadi, whose telephone number is (571) 270-7857. The examiner can normally be reached Monday-Friday, 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Luu Pham, can be reached at (571) 270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAHIMEH MOHAMMADI/
Examiner, Art Unit 2439

/LUU T PHAM/
Supervisory Patent Examiner, Art Unit 2439

Prosecution Timeline

Sep 16, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604186
Methods and Systems for Network Authentication Using a Unique Authentication Identifier
2y 5m to grant Granted Apr 14, 2026
Patent 12598078
NETWORK ACCESS USING HARDWARE-BASED SECURITY
2y 5m to grant Granted Apr 07, 2026
Patent 12598174
FLEET MANAGEMENT SYSTEM AND METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12568073
SECURE EXCHANGE OF CERTIFICATE AUTHORITY CERTIFICATE INLINE AS PART OF FILE TRANSFER PROTOCOL
2y 5m to grant Granted Mar 03, 2026
Patent 12562966
Transitioning Network Entities Associated With A Virtual Cloud Network Through A Series Of Phases Of A Certificate Bundle Distribution Process
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+52.6%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 294 resolved cases by this examiner. Grant probability derived from career allow rate.
