Prosecution Insights
Last updated: April 19, 2026
Application No. 18/929,739

Semantic Analysis Of Session Data

Non-Final OA: §102, §103
Filed: Oct 29, 2024
Examiner: MCFARLAND-BARNES, KELAH JANAE
Art Unit: 2431
Tech Center: 2400 — Computer Networks
Assignee: Dynatrace LLC
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (2 granted / 2 resolved); +42.0% vs TC avg (above average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution (typical timeline): 2y 6m
Currently Pending: 18
Total Applications (career, across all art units): 20

Statute-Specific Performance

§101: 12.8% (-27.2% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

TC averages are estimates. Based on career data from 2 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This Office Action is in response to the communication filed on 10/29/2024. Claims 1-19 are pending for consideration.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 3, and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kota et al. (U.S. 11,068,663) (hereinafter Kota).
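As an illustrative aside (not part of the Office Action, and not drawn from Kota or the application), the pipeline recited in claim 1, generating a textual description of a session, embedding it, and storing the embedding with a session reference, can be sketched as follows. Every name here (describe_session, embed_text, SessionStore) is a hypothetical stand-in, and the hashed bag-of-words vector merely stands in for a BERT-style encoder:

```python
# Hypothetical sketch of the claim 1 pipeline: describe a session,
# embed the description, store the embedding with a session reference.
from dataclasses import dataclass, field

@dataclass
class SessionStore:
    """Toy in-memory stand-in for the database recited in claim 1."""
    rows: list = field(default_factory=list)

    def add(self, session_id: str, description: str, embedding: list):
        # Store the embedding alongside a reference to the session data.
        self.rows.append({"session_id": session_id,
                          "description": description,
                          "embedding": embedding})

def describe_session(events: list) -> str:
    # "Textual description" step: join ordered session events into a
    # sentence-like string (compare Kota's activity-sequence strings).
    return " -> ".join(events)

def embed_text(text: str, dim: int = 8) -> list:
    # Embedding step: a deterministic hashed bag-of-words stands in
    # for a learned encoder; it maps variable-length text to a
    # fixed-length vector.
    vec = [0.0] * dim
    for token in text.split():
        vec[hash(token) % dim] += 1.0
    return vec

store = SessionStore()
events = ["login", "search", "view_item", "logout"]
desc = describe_session(events)
store.add("session-42", desc, embed_text(desc))
print(len(store.rows))  # prints 1
```

A production system would substitute a learned encoder and a real vector database; only the shape of the flow is relevant when reading the claim mapping.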
Regarding claim 1, Kota teaches a computer-implemented method for semantically analyzing session data captured in a distributed computing system, comprising: receiving, by a computer processor, session data from a session occurring in the distributed computing system (Kota: see Col 8 lines 20-23, "Sequence-modeling apparatus 204 and/or another component of the system generate activity sequences 222 based on data 202 in data repository 134 and/or events received over event streams 200"); generating, by the computer processor, a textual description for the session data (Kota: see Col 10 lines 65-67 - Col 11 line 1, "For example, sequence-modeling apparatus 204 generates and/or obtains a string representing an activity sequence that contains an ordered sequence of actions by the user"); generating, by the computer processor, a vector embedding from the textual description, where the vector embedding represents the semantics of the session data (Kota: see Col 9 lines 1-13, "A model-creation apparatus 210 creates a language model 208 from sentences 212 that represent a sample set of activity sequences 222 in data 202. For example, language model 208 includes a Bidirectional Encoder Representations from Transformers (BERT) model and/or another type of bidirectional transformer encoder. Input into the BERT model includes embedded and/or encoded representations of tokens in sentences 212, positions of the tokens in sentences 212, ordinal sequences of segments or sentences 212 in the same input, separators between tokens in different sentences 212, and/or a classification token that is used to generate an aggregate output representation of each input into the BERT model"); and storing, by the computer processor, the vector embedding, along with a reference to the session data, in a database (Kota: see Col 15 lines 34-42, "Operations 304-312 may be repeated for remaining users (operation 314). For example, a separate set of output embeddings, session embeddings, and/or session history embeddings is generated for each user for which activity sequences are collected. The embeddings are stored, inputted into a machine learning model, and/or otherwise used to characterize and/or predict the user's behavior, intent, and/or preferences with respect to jobs and/or other entities with which the user interacts").

Regarding claim 2, Kota teaches generating the textual description for the session data using a large language model (Kota: see Col 9 lines 6-13, "Input into the BERT model includes embedded and/or encoded representations of tokens in sentences 212, positions of the tokens in sentences 212, ordinal sequences of segments or sentences 212 in the same input, separators between tokens in different sentences 212, and/or a classification token that is used to generate an aggregate output representation of each input into the BERT model").

Regarding claim 3, Kota teaches generating the vector embedding using a text embedding model (Kota: see Col 3 lines 3-8, "By representing sequences of activity within and/or across user sessions as sentences and using a language model to generate embeddings from the sentences, the disclosed embodiments convert sequences of variable numbers or types of activities and/or tokens in the sentences into fixed-length vector representations").

Regarding claim 6, Kota teaches querying the database for session data (Kota: see Col 4 lines 49-59, "For example, each profile update, profile view, connection, follow, post, comment, like, share, search, click, message, interaction with a group, address book interaction, response to a recommendation, purchase, and/or other action performed by an entity in online network 118 is tracked and stored in a database, data warehouse, cloud storage, and/or other data-storage mechanism providing data repository 134. Data in data repository 134 is then used to generate recommendations and/or other insights related to listings of jobs or opportunities within online network 118").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Kota in view of Anand (U.S. 10,389,818) (hereinafter Anand).
Regarding claims 4 and 5, Kota teaches generating the textual description considering session data and backend data (Kota: see Col 9 lines 6-13, "Input into the BERT model includes embedded and/or encoded representations of tokens in sentences 212, positions of the tokens in sentences 212, ordinal sequences of segments or sentences 212 in the same input, separators between tokens in different sentences 212, and/or a classification token that is used to generate an aggregate output representation of each input into the BERT model"). However, Kota does not teach collecting backend data from host computers in the distributed computing system. Nevertheless, Anand, which is in the same field of endeavor, teaches collecting session data and backend data from computer hosts in the distributed computing system during the performance of the session (Anand: see Col 6 lines 1-3, "Data is collected for the session by agents at step 230. The agents may monitor the application, machine, and network, as well as user activity"; Col 6 lines 46-56, "A machine is monitored during the session at step 330. Machine monitoring may result in metrics for CPU usage, memory usage, and other machine performance. Machine data is then recorded with the same session ID as the application data at step 340. A network is monitored during the session at step 350. Monitoring the network may include performing packet capture at a socket to detect network data. The network data may be rolled up into metrics such as latency, packet loss, throughput, and other data. Network data is then recorded with the session ID at step 360"). Kota and Anand are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to utilize Anand's method of collecting backend and application data with Kota's text description generation. The suggestion/motivation for doing so would be to detect anomalous or malicious activity and report the details of the activity to a user in a human-readable way.

Claims 7, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kota in view of Schlerf et al. (U.S. 2024/0078269) (hereinafter Schlerf).

Regarding claim 7, Kota teaches the invention detailed above. However, Kota does not teach receiving, by a user interface, session data or a textual description for a target session; generating a target vector embedding from the session data or the textual description; and querying the database using the target vector embedding. Nevertheless, Schlerf, which is in the same field of endeavor, teaches receiving, by a user interface, session data or a textual description for a target session (Schlerf: see Page 8 paragraph 0062 lines 1-3, "In the process 300, the web page prediction model 320 receives input data 310, which includes session context 311, web page vectors 312, and a user profile 313"); generating a target vector embedding from the session data or the textual description (Schlerf: see Page 8 paragraph 0062 lines 3-6, "Although not depicted, some or all of the input data 310 may be input to the vector generator 243, which may then output a feature vector that is input to the web page prediction model 320"); and querying the database using the target vector embedding (Schlerf: see Page 9 paragraph 0067 lines 4-7, "The web page curator 246 may query the database 245 for a web page that the user is likely to visit while performing the identified action, and the database 245 may return the web page 330"). Kota and Schlerf are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to utilize Schlerf's targeted query of a database of vector embeddings with Kota's capturing of session data. The suggestion/motivation for doing so would be to query session data for training or anomaly detection mechanisms.

Regarding claim 8, Kota and Schlerf teach computing a similarity measure between the target vector embedding and the vector embeddings in the database (Schlerf: see Page 14 claim 12, "calculating a plurality of similarity metrics between a web address of the latest viewed web page of the web pages viewed with web addresses of a set of the web pages viewed before the latest viewed web page"); comparing the similarity measure to a threshold; and reporting vector embeddings having a similarity measure greater than the threshold (Schlerf: see Page 14 claim 12, "and identifying a subset of the generated vectors corresponding to a subset of the web page addresses having at least a threshold similarity metric with the web page address of the latest viewed web page, wherein the subset of the generated vectors is used to determine the combined vector"). The motivation to combine Kota and Schlerf in the instant claim is the same as that in claim 7.

Regarding claim 9, Kota and Schlerf teach that the similarity measure is further defined as a cosine similarity (Schlerf: see Page 3 paragraph 0028 lines 23-26, "For example, the model training engine 241 may automatically label a feature vector based on a similarity (e.g., cosine similarity) with a previously labeled feature vector"). The motivation to combine Kota and Schlerf in the instant claim is the same as that in claim 7.

Claims 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Schlerf in view of Dong (U.S. 11,308,497) (hereinafter Dong).
Regarding claim 10, Schlerf teaches receiving, by a computer processor, new session data from a session occurring in the distributed computing system (Schlerf: see Page 8 paragraph 0062 lines 1-3, "In the process 300, the web page prediction model 320 receives input data 310, which includes session context 311, web page vectors 312, and a user profile 313"); generating, by the computer processor, a new vector embedding from the new session data, where the new vector embedding represents the new session data (Schlerf: see Page 8 paragraph 0062 lines 3-7, "Although not depicted, some or all of the input data 310 may be input to the vector generator 243, which may then output a feature vector that is input to the web page prediction model 320"); receiving, by a computer processor, reference session data (Schlerf: see Page 3 paragraph 0028 lines 2-9, "In some embodiments, the model training engine 241 generates a training data set using at least one of web pages viewed by users and corresponding user characteristics"); and generating, by the computer processor, a reference vector embedding from the reference session data (Schlerf: see Page 3 paragraph 0028 lines 2-9, "In some embodiments, the model training engine 241 generates a training data set using at least one of web pages viewed by users and corresponding user characteristics. The model training engine 241 may use feature vectors (e.g., generated by the vector generator 243) that are quantitative representations of one or more of users' previously viewed web pages (e.g., pageview histories), characteristics, or session contexts").

However, Schlerf does not teach where the reference session data is indicative of a fraudulent session; where the reference vector embedding represents the fraudulent session; comparing, by the computer processor, the new vector embedding to the reference vector embedding; and reporting, by the computer processor, the new session data as being fraudulent in response to the new vector embedding being similar to the reference vector embedding. Nevertheless, Dong, which is in the same field of endeavor, teaches where the reference session data is indicative of a fraudulent session (Dong: see Col 13 lines 24-30, "At block 604, computer system 100 receives a request 134 to access the first electronic resource 120 via the first link 122. At block 606, before granting the request 134 to access the first electronic resource 120, computer system 100 evaluates the request 134 to access the first electronic resource 120 using a fraud detection model 104"); where the reference vector embedding represents the fraudulent session (Dong: see Col 3 lines 56-64, "In various embodiments, fraud detection model 104 is used by computer system 100 to evaluate requests 134 before granting such requests to access electronic resources 120. In various embodiments, fraud detection model 104 is generated by receiving a plurality of previous requests 134 and sequentially generating embedding values for the fraud detection model 104 that correspond to the sender account 102 and recipient account 130 associated with each respective request 134"); and comparing, by the computer processor, the new vector embedding to the reference vector embedding and reporting, by the computer processor, the new session data as being fraudulent in response to the new vector embedding being similar to the reference vector embedding (Dong: see Col 11 lines 41-53, "Equation 506 produces a final output value of fraud detection model 104 for an incoming request 134 that is used to determine whether the incoming request 134 is fraudulent or legitimate. In various embodiments, this final output value is a prediction score for the likelihood that a particular recipient account 130 is (or is an associate of) an attacker. This prediction score is used in determining whether to grant incoming request 134. If the recipient accounts 130 behaves like an attacker, (i.e. the output value of equation 506 is close to 1 or above a certain threshold), then this request 134 will be classified as fraudulent (fraudulent prediction 508), and guided through an additional authentication flow again, or in embodiments outright denied").

Schlerf and Dong are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to utilize Dong's comparison and threshold value to determine if a request is fraudulent with Schlerf's method for capturing session data. The suggestion/motivation for doing so would be to classify session data based on historical or reference data that has been stored previously.
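To make the comparison step concrete, here is a minimal, hypothetical sketch of the claims 10-12 logic: a new session embedding is flagged as fraudulent when its cosine similarity to a stored reference "fraud" embedding exceeds a threshold. The vectors and the 0.9 threshold are invented for illustration and do not come from Dong or Schlerf:

```python
# Hypothetical sketch: flag a session embedding as fraudulent when it
# is cosine-similar to a known fraudulent reference embedding.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot(a, b) / (|a| * |b|).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_fraudulent(new_vec, reference_vec, threshold=0.9):
    # Report fraud when similarity to the reference exceeds the threshold.
    return cosine_similarity(new_vec, reference_vec) > threshold

fraud_ref = [0.9, 0.1, 0.0]  # invented reference embedding
print(is_fraudulent([0.85, 0.15, 0.05], fraud_ref))  # True
print(is_fraudulent([0.0, 0.2, 0.9], fraud_ref))     # False
```

In practice the threshold would be tuned on labeled data, and the comparison would run against many stored reference embeddings rather than one.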
Regarding claim 11, Schlerf teaches generating at least one of the new vector embedding or the reference vector embedding using a text embedding model (Schlerf: see Page 8 paragraph 0062 lines 3-7).

Regarding claim 12, Schlerf and Dong teach computing a similarity measure between the new vector embedding and the reference vector embedding (Schlerf: see Page 3 paragraph 0028 lines 23-27) and reporting the new session data as being fraudulent in response to the similarity measure exceeding a threshold (Dong: see Col 11 lines 48-53, "If the recipient accounts 130 behaves like an attacker, (i.e. the output value of equation 506 is close to 1 or above a certain threshold), then this request 134 will be classified as fraudulent (fraudulent prediction 508), and guided through an additional authentication flow again, or in embodiments outright denied"). The motivation to combine Schlerf and Dong in the instant claim is the same as that in claim 10.

Regarding claim 13, Schlerf teaches querying a database using the reference vector embedding, where the database stores a plurality of vector embeddings and each of the plurality of vector embeddings represents a session in the distributed computer system (Schlerf: see Page 9 paragraph 0067 lines 4-7; Page 5 paragraph 0038 lines 13-15, "A training data set may include feature vectors representing the web pages viewed by users"; Page 7 paragraph 0053 lines 1-10, "The database 245 stores data for the central database system 240 to perform predictive web page navigation. Examples of data stored in the database 245 may include training data sets used by the model training engine 241... session context data, outputs of the one or more models 242 (e.g., predicted next web pages)...").

Regarding claim 14, Schlerf and Dong teach computing a similarity measure between the reference vector embedding and each of the plurality of the vector embeddings in the database and comparing the similarity measures to a threshold (Schlerf: see Page 3 paragraph 0028 lines 23-27, "For example, the model training engine 241 may automatically label a feature vector based on a similarity (e.g., cosine similarity) with a previously labeled feature vector. The model training engine 241 may retrieve labels from pageview histories stored in the database 220 or 245"; Page 14 claim 12, "and identifying a subset of the generated vectors corresponding to a subset of the web page addresses having at least a threshold similarity metric with the web page address of the latest viewed web page, wherein the subset of the generated vectors is used to determine the combined vector"); and tagging select vector embeddings in the database as being fraudulent, where the select vector embeddings have a similarity measure greater than the threshold (Dong: see Col 11 lines 48-53, "If the recipient accounts 130 behaves like an attacker, (i.e. the output value of equation 506 is close to 1 or above a certain threshold), then this request 134 will be classified as fraudulent (fraudulent prediction 508), and guided through an additional authentication flow again, or in embodiments outright denied"; Col 10 lines 38-44, "However, in various embodiments, fraud detection model 104 has the luxury of tagging information of related transactions obtained through user filed claims or automated tagging rules engines. While such tagging is not guaranteed to be 100% accurate, these transaction tags can be leveraged to provide supervised learning in various embodiments"). The motivation to combine Schlerf and Dong in the instant claim is the same as that in claim 10.

Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Schlerf in view of Wenig et al. (U.S. 8,533,532) (hereinafter Wenig).

Regarding claim 15, Schlerf teaches receiving, by a user interface, a textual description for a target session with the website (Schlerf: see Page 8 paragraph 0057 lines 1-5, "The web page curator 246 can modify an interface to display a web element to direct a user to the predicted next web page. The web element may be any suitable element for displaying the recommendation to direct the user to the predicted next web page"); generating, by a computer processor, a target vector embedding from the textual description, where the target vector embedding represents the target session (Schlerf: see Page 8 paragraph 0062 lines 3-6, "Although not depicted, some or all of the input data 310 may be input to the vector generator 243, which may then output a feature vector that is input to the web page prediction model 320"; Page 9 paragraph 0063 lines 1-7, "Using the input data 310, the vector generator 243 may generate a feature vector representing a web page of the web pages 312 and optionally, data from the session context 311 and the user profile 313. The vector generator 243 may generate multiple feature vectors corresponding to the multiple web pages that a user views during a web session"); retrieving, by the computer processor, a subset of session data from a database by querying the database using the target vector embedding (Schlerf: see Page 2 paragraph 0020 lines 1-8, "The database 220 stores data for providing predictive web navigation. The database 220 may include data for input into a model (e.g., a machine-learned model) for recommending a next web page that a user is likely to view. The data can include web pages viewed by the user, user characteristics, web session context, any suitable data describing a user's web navigation, or a combination thereof"; Page 2 paragraph 0021 lines 1-8, "User characteristics may be retrieved from a user profile (e.g., a profile of the user hosted by an entity's human resources department) that may also be stored at the database 220 or at a remote server that is communicatively coupled to the database 220 and/or the central database system 240 through the network 230"), where the database stores a plurality of vector embeddings representing sessions with the website (Schlerf: see Page 5 paragraph 0038 lines 13-15, "A training data set may include feature vectors representing the web pages viewed by users"; Page 7 paragraph 0053 lines 1-10, "The database 245 stores data for the central database system 240 to perform predictive web page navigation. Examples of data stored in the database 245 may include training data sets used by the model training engine 241... session context data, outputs of the one or more models 242 (e.g., predicted next web pages)...").

However, Schlerf does not teach creating, by the computer processor, synthetic test data for the website from the subset of sessions. Nevertheless, Wenig, which is in the same field of endeavor, teaches creating, by the computer processor, synthetic test data for the website from the subset of sessions (Wenig: see Col 4 lines 27-42, "… A replay system 106 is then used to analyze the captured network data 38 and the captured UI events 34 for the captured network session 50. The capture system 12 provides the unique combination of capturing both network data 38 exchanged between client 14 and web application 43 during the web session 50 and also capturing the local UI events 34 on the computing device 13."; Col 6 lines 26-32, "The replay system 106 uses the replay rules 110 generated by the test system 102 when replaying a previously captured web session to make inferences about events that likely happened but were not actually captured by the capture system 12. This allows the replay system 106 to more accurately replay captured web sessions that could have otherwise failed or moved into an undefined state due to the missed events"). Schlerf and Wenig are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to utilize Schlerf's machine learning method for predicting user activities with Wenig's method of generating replay systems to test events that happened during a web session. The suggestion/motivation for doing so would be to make troubleshooting more efficient by replicating the steps of a user or improve the user's experience through testing.

Regarding claim 16, Schlerf teaches generating the target vector embedding using a text embedding model (Schlerf: see Page 8 paragraph 0058 lines 14-18, "...provide the received data to the vector generator 243 to generate feature vectors, and provide a subset or all of the generated vectors to a machine-learned model of the models 242 to determine a predicted next web page to which the user intends to navigate").

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Schlerf and Wenig, as applied to claims 15-16 above, and in further view of Qin (U.S. 2024/0346256) (hereinafter Qin).
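For reference, the retrieval step recited in claim 15, querying a store of session embeddings with a target embedding and keeping the most similar sessions, can be sketched as below. The store contents, the cosine ranking, and all names are hypothetical, not taken from Schlerf or Wenig:

```python
# Hypothetical sketch: rank stored session embeddings by cosine
# similarity to a target embedding and return the top-k session ids.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_sessions(db, target, k=2):
    """Return the k session ids whose embeddings are most similar to target."""
    ranked = sorted(db.items(), key=lambda kv: cosine(kv[1], target),
                    reverse=True)
    return [sid for sid, _ in ranked[:k]]

db = {
    "s1": [1.0, 0.0, 0.0],
    "s2": [0.9, 0.1, 0.0],
    "s3": [0.0, 0.0, 1.0],
}
print(top_k_sessions(db, target=[1.0, 0.05, 0.0]))  # ['s1', 's2']
```

A real vector database would use an approximate-nearest-neighbor index rather than an exhaustive sort, but the retrieval contract is the same.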
Regarding claim 17, Schlerf and Wenig teach clustering, by a computer processor, sessions in a database such that each subset of sessions belonging to a cluster is similar to other sessions in the same cluster (Schlerf: see Page 6 paragraph 0046, "The one or more models 242 may include multiple cluster models configured to cluster feature vectors into clusters corresponding to identified actions based on different criteria (e.g., entity preferences, input types, etc.) and/or include multiple cluster models configured to output clusters corresponding to different actions").

However, Schlerf and Wenig do not teach adding session data to the subset of session data, where the added session data corresponds to vector embeddings having a similarity measure greater than the threshold. Nevertheless, Qin, which is in the same field of endeavor, teaches adding session data to the subset of session data, where the added session data corresponds to vector embeddings having a similarity measure greater than the threshold (Qin: see Page 4 paragraph 0051, "In step 306, the first feature vector is compared to a plurality of second feature vectors, each of which corresponding to a piece of augmentation information, to determine second feature vectors that satisfy a predetermined condition with respect to the first feature vector. For instance, comparator 208 may compare first feature vector 226 to second feature vectors 228. As discussed above, comparator 208 may calculate a cosine similarity between first feature vector 226 and each second feature vector 228 to determine second feature vectors that are most similar to first feature vector 226. As discussed above, the determined second feature vectors may include second feature vectors having a cosine similarity to the first feature vector that satisfies a first predetermined relationship with a first predetermined threshold, second feature vectors that correspond to a first predetermined number of second feature vectors having the highest cosine similarities to the first feature vector, and/or second feature vectors that correspond to a second predetermined number of second feature vectors having the highest cosine similarities to the first feature vector that satisfy a second predetermined relationship with a second predetermined threshold. In embodiments, comparator 208 may provide, to retriever 210, indication(s) 230 that correspond to the second feature vectors 228 that are most similar to first feature vector 226").

Schlerf, Wenig, and Qin are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to utilize Schlerf's method of training the machine learning model with Qin's method for finding similar vectors. The suggestion/motivation for doing so would be to improve the machine learning model.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Schlerf and Wenig, as applied to claims 15-16 above, and in further view of Turner et al. (U.S. 2025/0130918) (hereinafter Turner).

Regarding claim 18, Schlerf and Wenig teach the invention detailed above. However, Schlerf and Wenig do not teach creating synthetic test data using process mining. Nevertheless, Turner, which is in the same field of endeavor, teaches creating synthetic test data using process mining (Turner: see Page 2 paragraph 0019 lines 9-14, "The exemplary system implementation 100, in FIG. 1, illustrates a general overview of various components associated with a data mining process for obtaining real and/or original datasets for training data models"; Page 3 paragraph 0022, "For example, referring back to example 100, the original/real training dataset 116 may be fed into a synthetic data generating process/module 122 to generate a plurality of synthetic training datasets 124 (representing a large quantity of training data) which realistically represent user behaviors associated with large groups of hypothetical users"). Schlerf, Wenig, and Turner are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to utilize Schlerf and Wenig's method of identifying similar sessions with Turner's use of original datasets to generate synthetic data. The suggestion/motivation for doing so would be to further train the machine learning model with data that is realistic and improve the model's ability to classify session data.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Schlerf in view of Breternitz et al. (U.S. 8,887,056) (hereinafter Breternitz).

Regarding claim 19, Schlerf teaches clustering, by a computer processor, sessions in a database such that each subset of sessions belonging to a cluster is similar to other sessions in the same cluster (Schlerf: see Page 6 paragraph 0046, "The one or more models 242 may include multiple cluster models configured to cluster feature vectors into clusters corresponding to identified actions based on different criteria (e.g., entity preferences, input types, etc.) and/or include multiple cluster models configured to output clusters corresponding to different actions").
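The clustering step cited above can be illustrated with a deliberately simple one-pass nearest-centroid assignment; a real cluster model (for example, k-means over session embeddings) would iterate to convergence, and all vectors and centroids here are invented for illustration:

```python
# Hypothetical sketch: group session embeddings so each cluster holds
# mutually similar sessions, via nearest-centroid assignment.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_clusters(embeddings, centroids):
    # Assign each session id to the index of its nearest centroid.
    clusters = {i: [] for i in range(len(centroids))}
    for sid, vec in embeddings.items():
        nearest = min(range(len(centroids)),
                      key=lambda i: euclidean(vec, centroids[i]))
        clusters[nearest].append(sid)
    return clusters

sessions = {
    "s1": [0.1, 0.9],  # e.g., browsing-heavy sessions
    "s2": [0.2, 0.8],
    "s3": [0.9, 0.1],  # e.g., checkout-heavy sessions
}
print(assign_clusters(sessions, centroids=[[0.0, 1.0], [1.0, 0.0]]))
# {0: ['s1', 's2'], 1: ['s3']}
```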
However, Schlerf does not teach creating, by the computer processor, synthetic test data for the subset of session data in a cluster; generating, by the computer processor, a textual description for the synthetic test data in a cluster; selecting, by a user interface, one or more textual descriptions for synthetic test data; and combining, by the computer processor, synthetic test data corresponding to selected textual descriptions. Nevertheless, Breternitz, which is in the same field of endeavor, teaches creating, by the computer processor, synthetic test data for the subset of session data in a cluster (Breternitz: see Col 32 lines 23-29, "At block 612, code synthesizer 79 of workload configurator 78 generates a synthetic test workload for execution on cluster of nodes 14 based on a set of user-defined workload parameters provided via user interface 200. The set of user-defined workload parameters (e.g., provided with the trace file) identify execution characteristics of the synthetic test workload, as described herein"); generating, by the computer processor, a textual description for the synthetic test data in a cluster (Breternitz: see Col 32 lines 26-29, "The set of user-defined workload parameters (e.g., provided with the trace file) identify execution characteristics of the synthetic test workload, as described herein"); selecting, by a user interface, one or more textual descriptions for synthetic test data (Breternitz: see Col 32 lines 51-56, "In one embodiment, configurator 22 provides the user interface 200 comprising selectable synthetic test workload data, and workload configurator 78 selects the set of user-defined workload parameters for generation of the synthetic test workload based on user selection of the selectable synthetic test workload data"); and combining, by the computer processor, synthetic test data corresponding to selected textual descriptions (Breternitz: see Col 32 lines 23-26, "At block 612, code synthesizer 79 of workload
configurator 78 generates a synthetic test workload for execution on cluster of nodes 14 based on a set of user-defined workload parameters provided via user interface 200"). Schlerf and Breternitz are analogous art because they are from the same field of endeavor. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Schlerf’s method for clustering sessions based on specific criteria with Breternitz’s method for generating clusters of synthetic data. The suggestion/motivation for doing so would be to improve test or training data and identify potential weaknesses in the system.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KELAH JANAE MCFARLAND-BARNES, whose telephone number is (571) 272-5953. The examiner can normally be reached Monday through Friday, 8:00 am until 4:00 pm Central Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn D Feild, can be reached at 571-272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KELAH JANAE MCFARLAND-BARNES/
Examiner, Art Unit 2431

/SARAH SU/
Primary Examiner, Art Unit 2431
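The feature-vector retrieval mechanism the Office Action quotes from Qin (selecting stored "second feature vectors" whose cosine similarity to a query vector satisfies a predetermined threshold, and/or the top-k most similar vectors) can be sketched as follows. This is an illustrative sketch only, not code from any cited reference; the function name, parameters, and NumPy-based implementation are all assumptions made for clarity.

```python
import numpy as np

def select_similar(first_vec, second_vecs, threshold=0.8, top_k=3):
    """Select stored feature vectors most similar to a query vector.

    Mirrors the two selection criteria described in the quoted passage:
    (1) vectors whose cosine similarity to the query satisfies a
    predetermined threshold, and (2) the top-k vectors with the
    highest cosine similarities.
    """
    # Cosine similarity = dot product of L2-normalized vectors.
    q = first_vec / np.linalg.norm(first_vec)
    m = second_vecs / np.linalg.norm(second_vecs, axis=1, keepdims=True)
    sims = m @ q

    # Criterion 1: similarity meets or exceeds the threshold.
    above = np.flatnonzero(sims >= threshold)

    # Criterion 2: the top-k most similar vectors overall.
    top = np.argsort(sims)[::-1][:top_k]

    return above, top, sims
```

The returned index arrays would play the role of the "indication(s) 230" that the quoted comparator provides to the retriever; either criterion, or their combination, can drive which stored sessions are fetched.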

Prosecution Timeline

Oct 29, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579256
LARGE LANGUAGE MODEL (LLM) SUPPLY CHAIN SECURITY
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
