Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Office Action is in response to the amendment filed on 12/04/2025.
Claims 9-12 have been canceled.
Claims 1-8 and 13-24 are pending.
Information Disclosure Statement
2. The information disclosure statements (IDS) filed on 09/17/2025 and 12/10/2025 comply with the provisions of M.P.E.P. 609 and have been considered by the examiner.
Response to Arguments
3. Applicant's arguments with respect to claims 1-8 and 13-24 have been considered but are moot in view of the new ground(s) of rejection.
Applicant's argument is persuasive. The provisional applications of Maizels et al, US 20250173415 (Provisional Application Nos. 63/390,653, 63/438,061, 63/394,329, 63/441,183, 63/487,299, and 63/229,091), do not describe paragraphs 320, 525, 581, 985, and 1497-1498 of Maizels.
Examiner’s Note
4. “User segment” (According to paragraph 40 of the instant Specification): “As used herein, a “user segment” refers to a group of users corresponding to a group of user profiles and identified by a group of user identifiers. As used herein, a “user profile” refers to data corresponding to a user. Examples of data corresponding to a user include a name, contact information, demographic data, user device information, a purchase history, a correspondence history, and any other data relating to the user. As used herein, a “user identifier” refers to a unique identifier (such as a name, an email address, an identification number, etc.) for a user. In some cases, a user profile includes a user identifier. In some cases, the user segment includes one or more users corresponding to user profiles that include a common attribute or quality.”
“a label of the user segment” (According to paragraphs 90 and 117 of the instant Specification): “machine learning model 220 generates a label of the user segment. In some examples, machine learning model 220 identifies one or more attributes of the user segment, where the label is based on the one or more attributes”.
“summary statistics of the user segment” (According to paragraphs 27 and 50 of the instant Specification): “using summary statistics and described traits of the user segment along with projected performance of the user segment towards the content provider's objective”.
A Large Language Model (According to Google): “A Large Language Model (LLM) is a type of AI designed to understand, generate, and process human language by analyzing massive datasets using deep learning, specifically neural networks called transformers. They predict the next word in a sequence based on probability, allowing them to summarize, translate, and answer questions. Common examples include ChatGPT, Gemini, and Claude.”
Erlingsson et al, US 11,770,398, [Erlingsson: Column 101, lines 1-67 (“In some embodiments, the one or more other natural language inputs for the other prompt may be generated according to similar approaches as are described above, including machine learning approaches using a trained model. In such embodiments, the natural language input and/or a response to the received natural language input (e.g., including a response to the query corresponding to the received natural language input) may be provided as input to such a model.”, i.e., ‘generating a prompt for a machine learning model’)] [Erlingsson: Column 8, lines 31-43 (“a baseline of datacenter activity can be modeled, and deviations from that baseline can be identified as anomalous. Anomaly detection can be beneficial in a security context, a compliance context, an asset management context, a DevOps context, and/or any other data analytics context”, i.e., ‘contextual information’)] [Erlingsson: Column 86, lines 18-43 (“For example, historical information may be used to compare the current state of a particular cluster (as measured by one or more quantifiable characteristics associated with the cluster) with a previous state of the cluster such that trends and/or trajectories may be identified.”, i.e., ‘data trend’)] [Erlingsson: Column 101, lines 1-10 and column 101, lines 43-67 (“the corresponding query may be generated from the received natural language input using machine learning approaches”, i.e., ‘natural language text input’ and ‘machine learning model’)] [Erlingsson: Column 17, lines 31-44 (“automatically discover entities (which may implement compute assets 16) deployed in a given datacenter. Examples of entities include workloads, applications, processes, machines, virtual machines, containers, files, IP addresses, domain names, and users. 
The entities may be grouped together logically (into analysis groups) based on behaviors, and temporal behavior baselines can be established”, i.e., ‘user segment’)] [Erlingsson: Column 42, lines 58-67 through column 43, lines 1-5 (“At 363, the received network activity is used to identify user login activity. And, at 364, a logical graph that links the user login activity to at least one user and at least one process is generated”, i.e., ‘user segment’)] [Erlingsson: Column 49, lines 34-45 (“the effective user in the Tier 2 node may or may not match the original user (while the original user in the Tier 2 node will match the original user in the Tier 1 node)”, i.e., ‘user segment’)] [Erlingsson: Column 54, lines 32-47 (“As one example, user A can see that contact was made with examplebad.com a total of 17 times during the time period”, i.e., ‘user segment’)] [Erlingsson: Column 75, lines 30-43 (“customers (and their corresponding deployments) may be modeled into logical groups such that cross customer learning could be carried out only across customers in the same logical group, or other customers in the same logical group may be given a greater weighting for the purposes of cross customer learning”, i.e., ‘user segment’)] [Erlingsson: Column 5, lines 45-50 and column 25, lines 31-39 (“Such queries may be generated using any suitable query language” and “using a query language, such as SQL,”, i.e., ‘structured query’)].
Sadr et al, US 20240202796, [Sadr: Abstract and paragraphs 2, 7, 10 and 48 (“to generate a prompt input that can be processed by a machine-learned model to generate outputs that can be reviewed by a user and selected to be input into a search engine to receive search results associated with the selected output”)] [Sadr: Paragraphs 80 and 107 (“the images can provide a more detailed context of what a user is requesting during the search, which can allow for a more tailored search than text alone” and “learned information 402 (e.g., fashion knowledge, personalization (e.g., based on stored data associated with a user), and/or trends (e.g., purchase trends, social media trends, and/or search trends)) can be obtained and utilized to generate a prompt and/or to suggest prompt inputs for selection via selectable user interface elements”, i.e., ‘contextual information comprising a data trend’)].
Divakaran et al, US 20210297498, [Divakaran: Paragraphs 10 and 37 (“creating a respective first modality vector representation of the content of the multimodal content having a first modality using a machine learning model for each of a plurality of content of the multimodal content, at least a second embedding module creating at least a respective second modality vector representation of content of the multimodal content having at least a second modality using a machine learning model for each of a plurality of content of the multimodal content”)].
Tran, US 20230351102, [Tran: Abstract and paragraph 9 (“generate a document from one or more first and second text prompts, generating one or more context-sensitive text suggestions using a transformer with an encoder on the text prompts and a decoder that produces a text expansion to provide the context-sensitive text suggestions based on the one or more first and second text prompts by applying generative artificial intelligence”, i.e., ‘generating a prompt for a machine learning model’)] [Tran: Paragraph 198 (“determined by the learning machine trained for routing users to agents includes rating agents on performance or success of agent data and caller data, or both. The checking for optimal interaction includes combining agent work performance, agent demographic/psychographic data, and other work performance data (“agent data”), along with demographic, psychographic, and other business-relevant data about callers (“caller data”). Agent and caller demographic data can be: gender, race, age, education, accent, income, nationality, ethnicity, area code, zip code, marital status, job status, credit score, for example. Agent and caller psychographic data can cover introversion, sociability, work/employment status, film and television preferences, among others”, i.e., ‘user segment’)].
Claim Rejections - 35 USC § 102
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
7. Claims 1-8, 13 and 16-24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Erlingsson et al (US 11,770,398).
Claim 1:
Erlingsson suggests a method for data clustering, comprising: obtaining contextual information comprising a data trend and a natural language text input corresponding to the data trend [Erlingsson: Column 8, lines 31-43 (“a baseline of datacenter activity can be modeled, and deviations from that baseline can be identified as anomalous. Anomaly detection can be beneficial in a security context, a compliance context, an asset management context, a DevOps context, and/or any other data analytics context”, i.e., ‘contextual information’)] [Erlingsson: Column 86, lines 18-43 (“For example, historical information may be used to compare the current state of a particular cluster (as measured by one or more quantifiable characteristics associated with the cluster) with a previous state of the cluster such that trends and/or trajectories may be identified.”, i.e., ‘data trend’)]. Erlingsson suggests generating a prompt for a machine learning model based on the contextual information and the natural language text input, wherein the prompt includes the data trend and the natural language text input [Erlingsson: Column 101, lines 1-67 (“In some embodiments, the one or more other natural language inputs for the other prompt may be generated according to similar approaches as are described above, including machine learning approaches using a trained model. In such embodiments, the natural language input and/or a response to the received natural language input (e.g., including a response to the query corresponding to the received natural language input) may be provided as input to such a model.”, i.e., ‘generating a prompt for a machine learning model’)]. 
Erlingsson suggests generating, using the machine learning model, a structured query for a database of users based on the prompt [Erlingsson: Column 101, lines 1-67 (“In some embodiments, the one or more other natural language inputs for the other prompt may be generated according to similar approaches as are described above, including machine learning approaches using a trained model. In such embodiments, the natural language input and/or a response to the received natural language input (e.g., including a response to the query corresponding to the received natural language input) may be provided as input to such a model.”, i.e., ‘generating a prompt for a machine learning model’)] [Erlingsson: Column 5, lines 45-50 and column 25, lines 31-39 (“Such queries may be generated using any suitable query language” and “using a query language, such as SQL,”, i.e., ‘structured query’)]. Erlingsson suggests generating, using a user experience platform, a user segment based on the structured query, wherein the user segment comprises a set of the users [Erlingsson: Column 17, lines 31-44 (“automatically discover entities (which may implement compute assets 16) deployed in a given datacenter. Examples of entities include workloads, applications, processes, machines, virtual machines, containers, files, IP addresses, domain names, and users. The entities may be grouped together logically (into analysis groups) based on behaviors, and temporal behavior baselines can be established”, i.e., ‘user segment’)] [Erlingsson: Column 42, lines 58-67 through column 43, lines 1-5 (“At 363, the received network activity is used to identify user login activity. And, at 364, a logical graph that links the user login activity to at least one user and at least one process is generated”, i.e., ‘user segment’)] [Erlingsson: Column 49, lines 34-45 (“the effective user in the Tier 2 node may or may not match the original user (while the original user in the Tier 2 node will match the original user in the Tier 1 node)”, i.e., ‘user segment’)] [Erlingsson: Column 54, lines 32-47 (“As one example, user A can see that contact was made with examplebad.com a total of 17 times during the time period”, i.e., ‘user segment’)] [Erlingsson: Column 75, lines 30-43 (“customers (and their corresponding deployments) may be modeled into logical groups such that cross customer learning could be carried out only across customers in the same logical group, or other customers in the same logical group may be given a greater weighting for the purposes of cross customer learning”, i.e., ‘user segment’)] [Erlingsson: Column 5, lines 45-50 and column 25, lines 31-39 (“Such queries may be generated using any suitable query language” and “using a query language, such as SQL,”, i.e., ‘structured query’)].
Claim 2:
Erlingsson suggests monitoring, using the user experience platform, a set of data [Erlingsson: Column 39, lines 7-29 (“Such information can be used to track user behavior correctly, even where a malicious user attempts to hide his trail by changing user identities (e.g., through lateral movement). Extended user session tracking can also be useful in operational use cases without malicious intent”)]; and detecting, using the user experience platform, the data trend in the set of data [Erlingsson: Column 86, lines 18-43 (“For example, historical information may be used to compare the current state of a particular cluster (as measured by one or more quantifiable characteristics associated with the cluster) with a previous state of the cluster such that trends and/or trajectories may be identified.”, i.e., ‘data trend’)].
Claim 3:
Erlingsson suggests generating, using the machine learning model, a label of the user segment [Erlingsson: Column 17, lines 31-44 (“automatically discover entities (which may implement compute assets 16) deployed in a given datacenter. Examples of entities include workloads, applications, processes, machines, virtual machines, containers, files, IP addresses, domain names, and users. The entities may be grouped together logically (into analysis groups) based on behaviors, and temporal behavior baselines can be established”, i.e., ‘user segment’)] [Erlingsson: Column 42, lines 58-67 through column 43, lines 1-5 (“At 363, the received network activity is used to identify user login activity. And, at 364, a logical graph that links the user login activity to at least one user and at least one process is generated”, i.e., ‘user segment’)].
Claim 4:
Erlingsson suggests identifying, using the machine learning model, one or more attributes of the user segment, wherein the label is based on the one or more attributes [Erlingsson: Column 17, lines 31-44 (“automatically discover entities (which may implement compute assets 16) deployed in a given datacenter. Examples of entities include workloads, applications, processes, machines, virtual machines, containers, files, IP addresses, domain names, and users. The entities may be grouped together logically (into analysis groups) based on behaviors, and temporal behavior baselines can be established”, i.e., ‘user segment’)] [Erlingsson: Column 42, lines 58-67 through column 43, lines 1-5 (“At 363, the received network activity is used to identify user login activity. And, at 364, a logical graph that links the user login activity to at least one user and at least one process is generated”, i.e., ‘user segment’)].
Claim 5:
Erlingsson suggests generating, using the machine learning model, one or more summary statistics of the user segment [Erlingsson: Column 59, lines 49-67 (“For example, a sales OLTP application probably has no need to know about the weather at various sales locations, but sales predictions could take advantage of that data. By adding historical weather data to a data warehouse, it would be possible to factor it into models of historical sales data”)] [Erlingsson: Column 86, lines 18-44 (“such that trends and/or trajectories may be identified”)].
Claim 6:
Erlingsson suggests receiving, by the machine learning model, an additional prompt; and
modifying, using the user experience platform, the user segment based on the additional prompt [Erlingsson: Column 12, lines 58-67 (“Once the agent has determined which process is associated with the network connection (203), the agent can then collect additional information associated with the process”)] [Erlingsson: Column 53, lines 50-67 (“A interacts with an element in FIG. 4H (e.g., clicks on box 461, clicks on link 464-1, or clicks on tab 465), her actions are translated/formalized into filters on the data set and used to dynamically generate SQL queries. The SQL queries are generated transparently to user A (and also to a designer of the user interface shown in FIG. 4H)”)].
Claim 7:
Erlingsson suggests receiving, via a user interface, a query about the user segment; and
generating a response to the query using the machine learning model [Erlingsson: Column 17, lines 31-44 (“automatically discover entities (which may implement compute assets 16) deployed in a given datacenter. Examples of entities include workloads, applications, processes, machines, virtual machines, containers, files, IP addresses, domain names, and users. The entities may be grouped together logically (into analysis groups) based on behaviors, and temporal behavior baselines can be established”, i.e., ‘user segment’)] [Erlingsson: Column 42, lines 58-67 through column 43, lines 1-5 (“At 363, the received network activity is used to identify user login activity. And, at 364, a logical graph that links the user login activity to at least one user and at least one process is generated”, i.e., ‘user segment’)] [Erlingsson: Column 49, lines 34-45 (“the effective user in the Tier 2 node may or may not match the original user (while the original user in the Tier 2 node will match the original user in the Tier 1 node)”, i.e., ‘user segment’)] [Erlingsson: Column 54, lines 32-47 (“As one example, user A can see that contact was made with examplebad.com a total of 17 times during the time period”, i.e., ‘user segment’)] [Erlingsson: Column 75, lines 30-43 (“customers (and their corresponding deployments) may be modeled into logical groups such that cross customer learning could be carried out only across customers in the same logical group, or other customers in the same logical group may be given a greater weighting for the purposes of cross customer learning”, i.e., ‘user segment’)].
Claim 8:
Erlingsson suggests generating, using the machine learning model, a behavioral prediction for the user segment [Erlingsson: Column 59, lines 49-67 (“For example, a sales OLTP application probably has no need to know about the weather at various sales locations, but sales predictions could take advantage of that data. By adding historical weather data to a data warehouse, it would be possible to factor it into models of historical sales data”)] [Erlingsson: Column 86, lines 18-44 (“such that trends and/or trajectories may be identified”)].
Claim 13:
Claim 13 is essentially the same as claim 1 except that it sets forth the claimed invention as an apparatus rather than a method, and it is rejected for the same reasons as applied above.
Claim 16:
Claim 16 is essentially the same as claim 3 except that it sets forth the claimed invention as an apparatus rather than a method, and it is rejected for the same reasons as applied above.
Claim 17:
Claim 17 is essentially the same as claim 4 except that it sets forth the claimed invention as an apparatus rather than a method, and it is rejected for the same reasons as applied above.
Claim 18:
Claim 18 is essentially the same as claim 5 except that it sets forth the claimed invention as an apparatus rather than a method, and it is rejected for the same reasons as applied above.
Claim 19:
Erlingsson suggests the machine learning model is further trained to generate a response to a query about the user segment [Erlingsson: Column 99, lines 50-67 through column 100, lines 1-12 (“generating the one or more natural language inputs may include providing input to a trained model configured to output an indication a of a predefined security workflow from which the one or more natural language inputs are selected, to output a security workflow generated by the model itself, or to output a particular one or more natural language inputs independent for progressively dynamically generating a security workflow via subsequent user interactions. The trained model may be trained based on at least a portion of the gathered 1302 data described above. In some embodiments, some portion of the gathered 1302 data describing historical activity may be used to train the model.”)].
Claim 20:
Claim 20 is essentially the same as claim 8 except that it sets forth the claimed invention as an apparatus rather than a method, and it is rejected for the same reasons as applied above.
Claim 21:
Claim 21 is essentially the same as claim 1 except that it sets forth the claimed invention as a program product rather than a method, and it is rejected for the same reasons as applied above.
Claim 22:
Claim 22 is essentially the same as claim 2 except that it sets forth the claimed invention as a program product rather than a method, and it is rejected for the same reasons as applied above.
Claim 23:
Claim 23 is essentially the same as claim 3 except that it sets forth the claimed invention as a program product rather than a method, and it is rejected for the same reasons as applied above.
Claim 24:
Claim 24 is essentially the same as claim 4 except that it sets forth the claimed invention as a program product rather than a method, and it is rejected for the same reasons as applied above.
Claim Rejections - 35 USC § 103
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Erlingsson et al (US 11,770,398), in view of Sadr et al (US 20240202796).
Claim 14:
The combined teachings of Erlingsson and Sadr suggest wherein: the machine learning model comprises a large language model [Sadr: Paragraphs 77, 178 and 187 (“a transformer model” and “various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models.”)].
Both references (Erlingsson and Sadr) are analogous art and are directed to the same field of endeavor, namely data processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Erlingsson and Sadr before him/her, to modify the system of Erlingsson with the teaching of Sadr in order to implement a deep neural network in natural language processing [Sadr: Paragraphs 77, 178 and 187].
Claim 15:
The combined teachings of Erlingsson and Sadr suggest wherein: the machine learning model comprises a transformer [Sadr: Paragraphs 77, 178 and 187 (“a transformer model” and “various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models.”)].
Both references (Erlingsson and Sadr) are analogous art and are directed to the same field of endeavor, namely data processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Erlingsson and Sadr before him/her, to modify the system of Erlingsson with the teaching of Sadr in order to implement a deep neural network in natural language processing [Sadr: Paragraphs 77, 178 and 187].
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hung D. Le, whose telephone number is 571-270-1404. The examiner can normally be reached Monday through Friday, 9:00 A.M. to 5:00 P.M.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz, can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, contact 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Hung Le
02/10/2026
/HUNG D LE/Primary Examiner, Art Unit 2161