Prosecution Insights
Last updated: April 19, 2026
Application No. 17/675,704

MULTI-TENANCY MACHINE-LEARNING BASED ON COLLECTED DATA FROM MULTIPLE CLIENTS

Final Rejection §102
Filed: Feb 18, 2022
Examiner: ZECHER, CORDELIA P K
Art Unit: 2100
Tech Center: 2100 — Computer Architecture & Software
Assignee: Prosoc Inc.
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 253 granted / 509 resolved; -5.3% vs TC avg)
Interview Lift: +25.8% (strong; based on resolved cases with interview)
Avg Prosecution: 3y 8m (typical timeline; 287 currently pending)
Total Applications: 796 (career history, across all art units)
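The headline figures above can be reproduced with simple arithmetic. A minimal sketch, assuming the displayed percentages are rounded from the raw counts and that the with-interview grant rate is the career baseline plus the reported lift (the formulas are assumptions about how the dashboard derives its numbers, not documented behavior):

```python
# Reproducing the dashboard's examiner figures (counts taken from this page).
# Assumption: displayed values are rounded, and the with-interview grant rate
# equals the career allow rate plus the reported +25.8% lift.

granted, resolved = 253, 509
allow_rate = granted / resolved          # 0.4971... -> displayed as 50%
lift = 0.258                             # interview lift reported above
with_interview = allow_rate + lift       # 0.7551... -> displayed as 76%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

Under these assumptions the unrounded values (49.7% and 75.5%) round to the 50% and 76% shown on the page.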

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages shown for comparison • Based on career data from 509 resolved cases

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final action is responsive to the amendments filed on 11/4/25. Claims 1, 8-11, and 18-20 are pending.

Response to Arguments

With respect to the 101 rejection, the claims have been amended to recite limitations previously not rejected under 101, and the rejection is withdrawn. To be clear, the amended claims appear to be integrated for the reason at least that the claimed invention improves the function of a computer itself by focusing on computer operations (e.g., a specific data structure that improves anomaly detection).

Overall, the applicant has presented arguments with respect to the 102 rejection that individually and separately recite elements of the claimed invention and elements of the cited reference, but nowhere do the arguments provide substantive argumentation linking the two and, specifically, their alleged differences. However, in an effort to advance prosecution, the examiner will attempt to provide some clarity to the prior Office action.

The applicant alleges that Lozano (e.g., abstract, [0024], [0045]-[0047] and fig. 1) fails to teach "a machine learning job that retrieves data from a database of log events, partitions the data by each client, analyzes the data from the database of log events for each client, and, based on the analysis of the data from the database of log events, determines if an anomaly has occurred for a client, wherein, an anomaly occurs when a log event matches or exceeds, a predefined threshold" (pp. 9 and 10). Upon further consideration, the examiner respectfully disagrees. As a first matter, the applicant is silent on a key aspect of Lozano, including the machine learning model operation (e.g., [0007], [0008], [0023], [0044], [0046], [0059], [0061]-[0064], [0077], [0084], [0091], fig. 6) and the collecting of training data across users for input into the ML model (e.g., event logs and log identifiers generation [0004]). Lozano clearly discloses collecting client data as logged event data in a database (fig. 1), training a machine learning model using data from a corresponding database (fig. 6), and sending the results of the analysis back to the client database, such as updated training data [0099]. As such, the trained ML model can generate alerts or suggested solutions for detected anomalies or errors associated with a particular user based on, e.g., verified solutions for an identified anomaly ([0007] and [0023]), further based on a correlation [0024] and confidence score and threshold [0093]. Thus, the claims remain rejected and the 102 rejection is reiterated below.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 8-11, and 18-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lozano et al. (US 20220019496, herein "Lozano").

Regarding claim 1, Lozano teaches a method for multi-tenancy machine-learning based on collected data from multiple clients (collecting event data from multiple devices (fig. 1 – data 110 including event logs 108) for collected data from multiple users for, e.g., error detection (fig. 1) corresponding with, e.g., a received event log (fig. 4); machine learning [0007] for received, e.g., event logs ([0019] and [0022]), thus improving logging documentation technologies such as for tracking errors and especially assisting users in gathering logged data and analyzing the data, such as for generating alerts and searchable databases ([0008] and [0031])) comprising:

obtaining client data from multiple clients (obtaining data for an analysis including, e.g., event data, such as event data 108 and data 110 corresponding with the respective user devices (fig. 1), in addition to previously identified defects from a defects database which are compiled from the various user devices [0024]), wherein the client data obtained from the multiple clients is obtained through a log collector and log events from the client data obtained from the multiple clients through the log collector (retrieved data corresponding with defects, for populating a defect ticket with information gathered based on the event log and additional information including, e.g., a user identifier [0005]) are tagged with a client name and the client data obtained from the multiple clients (inferred and additional information such as time stamp, user identifier, application, etc. [0005]) is sent to a database and indexed by the client name in the database (database according to a ticket indexed according to, e.g., log identifier [0006]; that is, training data is created based on stored event logs and defects databases [0007] based on log data triggered by events received from client devices [0019]; the error documentation receives data including event logs and a tagging log identifier to identify the event log corresponding with metadata [0022] with respect to relevant identifiers [0023]; updated device database [0024]);

pulling data from the database by a machine learning job based on job parameters (pull data from one component of the database (i.e., the defects database) so that the input defect may be determined to correlate (based on correlation parameters) with previously identified defects from the defects database [0024]; see also [0046] to [0051]);

partitioning the data by each client for the machine learning job (partition the defects database not only into individually identified defects but also partition the input defect to be able to determine whether or not the input defect correlates with previously identified defects from the defects database [0024]);

analyzing the data from the multiple clients by the machine learning job (determine correlations to classify input data by correlating the input data with identified defects based on patterns/parameters [0008]; fig. 4 showing determining, by inputting the defect information into the model, whether the defect correlates to an identified defect in the defects database);

sending the results of the analysis of the data from the multiple clients by the machine learning job back to the database (retrieve the ticket for the identified defect and append the event log to the defect ticket by adding the results of the ML model processing [0047]; further, based on the defect analyzer's analysis by comparison of the input data with the previously identified defects from the defects database, append the event log to an existing defect ticket in the database by adding an entry citing the log identifier [0024]; further, store the examined defect ticket in the defects database [0081], such as by appending to an existing identified defect ticket [0055]; in other words, based on the analysis of the various client data by the ML model, add the results of the analysis to the database by appending to an existing ticket within the database); and

wherein the machine learning job retrieves data from the database of log events (ML models using training data from stored event logs and defects databases [0007]) and partitions the data by each client (partitioning according to each log identifier, each log identifier associated with a respective client, to identify the event log; the error documentation component may tag or otherwise associate the metadata of the event log with the log identifier [0022]) and analyzes the data from the database of log events for each client (analyze the event log data corresponding with identified defects from the defects database [0093]), and, based on the analysis of the data from the database of log events, determines if an anomaly has occurred for a client (detect a defect based on a confidence score [0093]), wherein, an anomaly occurs when a log event matches or exceeds, a predefined threshold (relative confidence score based on threshold [0093]; see also user notification [0029] and [0030]).

Regarding claim 8, Lozano teaches the limitations of claim 1, as above. Furthermore, Lozano teaches the method of claim 7, wherein, if an anomaly occurs, a new log event for the client is sent back to the database (in response to identifying a defect, the ticket/database is retrieved and the newly identified event log is appended to the defect ticket by adding an entry indicating the log identifier associated with the event log [0047]; see also the newly logged anomaly is added to and stored in the defects database for further analysis [0051]), including the client name and data about the original event (e.g., log identifier associated with the event log [0047] and metadata (fig. 1)).

Regarding claim 9, Lozano teaches the limitations of claims 1 and 8, as above. Furthermore, Lozano teaches the method of claim 8, wherein, after, the new log event for the client is sent back to the database (upon appending the event log to the defect ticket (i.e., database) by adding an entry indicating the log identifier associated with the event log [0047]), the new log event for the client is analyzed by an alert rule (analyze additional information, such as resolved vs. resolved [0047], threshold number of defects for determining priority level [0048], and/or performing a resolution for the identified defect (e.g., flagging for review) [0050]), and if the conditions of the alert rule are met, an alert is sent to an alerting platform to be sent to the client (e.g., notifying the developer about the possible solution in the defect ticket [0050], publishing notifications to any subscribers ([0006] and [0040] to [0050]), workflow notification [0054], determining whether a user should be notified based on the determined error type of the defect [0055], etc.).

Regarding claim 10, Lozano teaches the limitations of claim 1, as above. Furthermore, Lozano teaches the method of claim 7, wherein, the machine learning job operates as a singular machine learning job (ML model to, e.g., classify input data by correlating the input data of a particular job [0008]), wherein the singular machine learning job analyzes the data from the database of log events for each of the clients of the multiple clients (analyzing identified defects corresponding with defects from the database of log events for each of the clients [0008], in addition to analyzing data of current log events from each of the clients, shown in fig. 1), in a partitioned manner, such that each of the log events for each client are analyzed separately (e.g., examining individual event logs, such as event log 108(N) triggered by an error event on an application running on the respective device [0029]), and, based on the analysis of the data from the database of log events for each client, the machine learning job determines if an anomaly has occurred, for each client (determine if the event corresponds with, e.g., an existing identified defect corresponding with an analysis of data from the database of existing log events for each client, such that if the currently examined event corresponds with an existing identified defect, then the event can be appended to the identified defect ticket database [0029]).

Regarding claim 11, Lozano teaches a non-transitory computer-readable medium comprising code which, when executed by a processor, causes the processor to execute a method for multi-tenancy machine-learning based on collected data from multiple clients (computer network, figs. 1 and 2) comprising: the claim recites similar limitations as claim 1 – see above.

Regarding claim 18, the claim recites similar limitations as claim 8 – see above.

Regarding claim 19, the claim recites similar limitations as claim 9 – see above.

Regarding claim 20, the claim recites similar limitations as claim 10 – see above.

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON EDWARDS, whose telephone number is (571) 272-5334. The examiner can normally be reached Mon-Fri, 8am-5pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached on 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form

/JASON T EDWARDS/
Examiner, Art Unit 2145
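The disputed claim-1 limitation describes a straightforward partition-and-threshold flow: pull log events from a database, partition them by client, analyze each partition, and flag an anomaly when an event matches or exceeds a predefined threshold. A minimal illustrative sketch of that flow (an assumption for exposition only, not the applicant's or Lozano's actual implementation; the event fields, client names, and threshold are all hypothetical):

```python
# Illustrative sketch of the claim-1 flow discussed above. This is NOT the
# applicant's or Lozano's implementation; the "client"/"value" event fields
# and the threshold are hypothetical, chosen only to show the structure.
from collections import defaultdict

def detect_anomalies(log_events, threshold):
    """Partition log events by client, then flag per-client anomalies.

    log_events: iterable of dicts tagged with a client name,
                e.g. {"client": "acme", "value": 0.97}.
    Returns {client: [events whose value matches or exceeds threshold]}.
    """
    partitions = defaultdict(list)
    for event in log_events:                   # partition the data by client
        partitions[event["client"]].append(event)

    anomalies = {}
    for client, events in partitions.items():  # analyze each partition separately
        flagged = [e for e in events if e["value"] >= threshold]
        if flagged:                            # anomaly: event meets/exceeds threshold
            anomalies[client] = flagged
    return anomalies

events = [
    {"client": "acme", "value": 0.42},
    {"client": "acme", "value": 0.97},
    {"client": "globex", "value": 0.12},
]
print(detect_anomalies(events, threshold=0.9))
# -> {'acme': [{'client': 'acme', 'value': 0.97}]}
```

The single function runs once over all clients' data, which mirrors the "singular machine learning job ... in a partitioned manner" language of claim 10, though the real claims involve a trained model rather than a bare threshold comparison.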

Prosecution Timeline

Feb 18, 2022: Application Filed
May 31, 2025: Non-Final Rejection — §102
Nov 04, 2025: Response Filed
Nov 29, 2025: Final Rejection — §102
Feb 18, 2026: Examiner Interview Summary
Feb 18, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583466
VEHICLE CONTROL MODULES INCLUDING CONTAINERIZED ORCHESTRATION AND RESOURCE MANAGEMENT FOR MIXED CRITICALITY SYSTEMS
2y 5m to grant • Granted Mar 24, 2026
Patent 12578751
DATA PROCESSING CIRCUITRY AND METHOD, AND SEMICONDUCTOR MEMORY
2y 5m to grant • Granted Mar 17, 2026
Patent 12561162
AUTOMATED INFORMATION TECHNOLOGY INFRASTRUCTURE MANAGEMENT
2y 5m to grant • Granted Feb 24, 2026
Patent 12536291
PLATFORM BOOT PATH FAULT DETECTION ISOLATION AND REMEDIATION PROTOCOL
2y 5m to grant • Granted Jan 27, 2026
Patent 12393641
METHODS FOR UTILIZING SOLVER HARDWARE FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS
2y 5m to grant • Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 76% (+25.8%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate

Based on 509 resolved cases by this examiner. Grant probability derived from career allow rate.
