Prosecution Insights
Last updated: April 19, 2026
Application No. 18/087,873

METHOD FOR GENERATING UNIVERSAL LEARNED MODEL

Status: Non-Final OA (§103)
Filed: Dec 23, 2022
Examiner: NGUYEN, CINDY
Art Unit: 2156
Tech Center: 2100 — Computer Architecture & Software
Assignee: Aising Ltd.
OA Round: 5 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 78% — above average (542 granted / 692 resolved; +23.3% vs TC avg)
Interview Lift: +9.1% — moderate, measured across resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 13 applications currently pending
Career History: 705 total applications across all art units

Statute-Specific Performance

§101: 17.3% (-22.7% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 692 resolved cases.
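The "vs TC avg" deltas above appear to be simple differences between the examiner's per-statute rate and the Tech Center average estimate. A minimal sketch of that arithmetic follows; the flat 40.0% Tech Center average is inferred from the deltas shown on this page and is an assumption, not a published USPTO figure.

```python
# Per-statute rates shown on this page (percent).
examiner_rates = {"101": 17.3, "103": 45.0, "102": 21.8, "112": 5.9}

# Assumed Tech Center average estimate: each displayed delta equals
# the examiner rate minus 40.0, so a flat 40.0% line is implied.
TC_AVG_ESTIMATE = 40.0

# Delta vs Tech Center average, rounded to one decimal as displayed.
deltas = {s: round(rate - TC_AVG_ESTIMATE, 1) for s, rate in examiner_rates.items()}
```

Under this assumption, §103 is the only statute where the examiner runs above the Tech Center line (45.0% vs 40.0%), which matches the "+5.0%" shown.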

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/06/2025 has been entered.

Status of the Claims

Claims 1, 3, and 5-10 were pending; claims 1, 3, and 10 have been amended. Therefore, claims 1, 3, and 5-10 are currently pending for examination.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-7, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Miao et al. (US 20150242760, hereafter Miao) in view of Kida (US 20170178024).

Regarding claim 1, Miao discloses: A method comprising: performing machine learning, at respective information processing devices that are connected to a network, based on data generated by the respective information processing device, and generating a learned model at the respective information processing devices based on the data generated by the respective information processing device (Miao [0056-0058] discloses: generates a personalized machine learning model at client devices 502 based on the information collected locally by client devices); transmitting the learned model and accompanying information respectively corresponding to each of the learned models through the network from the respective information processing devices to an integration processing server (Miao [0046] discloses: The personalized machine learning models 306A-C of each of the respective client devices 302A-C can be transmitted from each of the client devices 302A-C to server 304).

Miao does not disclose, but Kida discloses: transmitting accompanying information respectively corresponding to each of the learned models through the network from the respective information processing devices to an integration processing server (Kida [0026] discloses: A device may be initially activated with an application model that is not customized to a user, device, or environment. Periodically, the device may autonomously upload representative sensor data to the cloud; [0038, 0048] discloses: receive, at a server, a series of measurements from a device. The series of measurements may be received from a wearable device that uses a sensor to take the series of measurements); integrating, at the integration processing server, the respective learned models to generate an integrated learned model (Kida [0026] discloses: A device may be initially activated with an application model that is not customized to a user, device, or environment. Periodically, the device may autonomously upload representative sensor data to the cloud. The data from the user combined with the data from the other users of similar characteristic may be used to develop a personalized model); transmitting the integrated learned model through the network from the integration processing server to the respective information processing device (Kida [0020, 0026] discloses: the new model is downloaded to the wearable device 104 to deliver a superior accuracy and inferences. The new model may be a personalized model for the wearable device 104 or the user 102); wherein the integrating respective learned models comprises: selectively integrating respective learned models that have common accompanying information (Kida [0029] discloses: The technique includes determining that there are features, also called characteristics, such as a first feature 406, that are common to the data from the target device and the stored data device. The similarities between the target device and the stored data device may be determined through techniques, such as machine learning techniques, to provide data to generate a model for the target device).

Miao and Kida are analogous art because they are in the same field of endeavor, personalizing machine learning. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Miao to include the teaching of Kida in order to create personalized models. The suggestion/motivation to combine is building increasingly accurate personalized models with received data.

Regarding claim 3, Miao as modified discloses: The method, according to claim 1, further comprising: reading parameter files of two or more of the respective learned models, the parameter files being generated when the respective learned models are generated, and determining whether content of the parameter files match or essentially match, and upon determining the contents of the parameter files match or essentially match, perform the integration of the two or more respective learned models (Kida [0034-0036] discloses: The recorded data may be modeled and saved in a database for later comparison with Mark's data. The models may be used to predetermine a feature. The feature may be used to compare Mark's data to the recorded data to determine a subset of the recorded data within a tolerance margin. The subset of the recorded data may be used to create a model for Mark. The data collected by Mark for the creation of a personalized model may be small, only sufficient for determining similarity to the recorded data stored in the system that was collected ahead of time. [0039] discloses: The predefined feature may be determined by comparing the plurality of models using machine learning).

Regarding claim 5, Miao as modified discloses: The method, according to claim 1, further comprising: providing, by at least one of the respective information processing devices, an interface configured to receive a selection indicating whether the integrated learned model is to be applied to the respective information processing device (Miao [0069] discloses: a user may desire to bias predictions by the machine learning model. In one example implementation, biasing can be performed explicitly by a user adjusting or inputting settings. In another example implementation, biasing can be performed implicitly based on user actions. Such biasing by the user can improve performance of the machine learning model).

Regarding claim 6, Miao as modified discloses: The method, according to claim 1, wherein the machine learning comprises subjecting additional learning to the existing learned model based on the data stored in the respective information processing devices (Miao [0052] discloses: each individual personalized machine learning model from each client device can be collected and aggregated together at the server. This collection and aggregation can be used to update a global machine learning model, which can subsequently be transmitted to each individual client device to update the personalized machine learning models. These updated personalized machine learning models can be further updated at each of the individual client devices (e.g., based on information collected at each client device)).

Regarding claim 7, Miao discloses: The method, according to claim 1, wherein the machine learning comprises subjecting additional learning to an initial learned model that is obtained by performing machine learning on a prescribed machine learning model based on prescribed initial data (Miao [0055] discloses: in addition to consensus machine learning model 508 being initially loaded onto client device 502, a consensus machine learning model 510 may be loaded onto server 504 as an initial global machine learning model that can subsequently be updated based, at least in part, on de-identified data collected on one or more of client devices 506. In another example, as described below, consensus machine learning model 508 may be loaded onto client device 502 as an initial machine learning model that can subsequently be personalized to a user of client device 502. Consensus machine learning model 508 can be based, at least in part, on training data that includes data from a population, such as a population of users operating client devices (e.g., other than client device 502) or applications executed by a processor of client devices. Data can include information resulting from actions of users or can include information regarding the users themselves. Data from the population of users can be used to train consensus machine learning model 508).

Regarding claim 10, Miao discloses: A system, comprising: a plurality of information processing devices and an integration processing server connected with the respective information processing devices through a network, wherein each information processing device comprises (Miao [0016-0017]): a memory coupled to a processor, the memory storing instructions that when executed by the processor, configure the processor to (Miao [0029]): perform machine learning based on data generated by the respective information processing devices and generate a learned model at the respective information processing devices based on the data generated by the respective information processing device (Miao [0056-0058] discloses: generates a personalized machine learning model at client devices 502 based on the information collected locally by client devices); and transmit the learned model through the network from the respective information processing devices to an integration processing server (Miao [0046] discloses: The personalized machine learning models 306A-C of each of the respective client devices 302A-C can be transmitted from each of the client devices 302A-C to server 304); wherein the integration processing server comprises: a memory coupled to a processor, the memory storing instructions that when executed by the processor, configure the processor to (Miao [0029]):

Miao does not disclose, but Kida discloses: transmitting accompanying information respectively corresponding to each of the learned models through the network from the respective information processing devices to an integration processing server (Kida [0026] discloses: A device may be initially activated with an application model that is not customized to a user, device, or environment. Periodically, the device may autonomously upload representative sensor data to the cloud; [0038, 0048] discloses: receive, at a server, a series of measurements from a device. The series of measurements may be received from a wearable device that uses a sensor to take the series of measurements); integrate the respective learned models to generate an integrated learned model (Kida [0026] discloses: A device may be initially activated with an application model that is not customized to a user, device, or environment. Periodically, the device may autonomously upload representative sensor data to the cloud. The data from the user combined with the data from the other users of similar characteristic may be used to develop a personalized model); transmit the integrated learned model through the network to the respective information processing devices (Kida [0020, 0026] discloses: the new model is downloaded to the wearable device 104 to deliver a superior accuracy and inferences. The new model may be a personalized model for the wearable device 104 or the user 102); wherein the integrating respective learned models comprises: selectively integrating respective learned models that have common accompanying information (Kida [0029] discloses: The technique includes determining that there are features, also called characteristics, such as a first feature 406, that are common to the data from the target device and the stored data device. The similarities between the target device and the stored data device may be determined through techniques, such as machine learning techniques, to provide data to generate a model for the target device).

Miao and Kida are analogous art because they are in the same field of endeavor, personalizing machine learning. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Miao to include the teaching of Kida in order to create personalized models. The suggestion/motivation to combine is building increasingly accurate personalized models with received data.

Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Miao et al. (US 20150242760, hereafter Miao) in view of Kida (US 20170178024) and further in view of Weber (US 20200057817).

Regarding claim 8, Miao as modified does not disclose, but Weber discloses: The method, according to claim 1, wherein the integrating the respective learned models comprises multi-stage integration comprising the integration between the integrated learned models (Weber [0068-0069] discloses: a collection of trees may be provided to a neural network to cause the machine learning model to be trained to predict a candidate tree subset for a given objective. As an example, the machine learning model may generate a prediction of a candidate tree subset such that a root node of the candidate tree subset indicates an objective supporting the given objective). Miao and Weber are analogous art because they are in the same field of endeavor, a machine learning model. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Miao to include the teaching of Weber in order to cause a prediction model to update one or more of its configurations. The suggestion to combine is to utilize such predictions to improve the efficiency and speed at which data may be derived from one or more sources and to update one or more of its configurations (Weber [0020]).

Regarding claim 9, Miao as modified discloses: The method, according to claim 1, wherein the learned model comprises a learned model having a tree structure (Weber [0068-0069] discloses: a collection of trees may be provided to a neural network to cause the machine learning model to be trained to predict a candidate tree subset for a given objective. As an example, the machine learning model may generate a prediction of a candidate tree subset such that a root node of the candidate tree subset indicates an objective supporting the given objective). Miao and Weber are analogous art because they are in the same field of endeavor, a machine learning model. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Miao to include the teaching of Weber in order to cause a prediction model to update one or more of its configurations. The suggestion to combine is to utilize such predictions to improve the efficiency and speed at which data may be derived from one or more sources and to update one or more of its configurations (Weber [0020]).

Response to Arguments

Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CINDY NGUYEN, whose telephone number is (571) 272-4025. The examiner can normally be reached M-F 8:00-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhatia Ajay, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CINDY NGUYEN/
Examiner, Art Unit 2161

Prosecution Timeline

Dec 23, 2022 — Application Filed
Feb 08, 2024 — Non-Final Rejection (§103)
May 14, 2024 — Response Filed
Jul 02, 2024 — Applicant Interview (Telephonic)
Jul 08, 2024 — Examiner Interview Summary
Jul 11, 2024 — Final Rejection (§103)
Oct 15, 2024 — Applicant Interview (Telephonic)
Oct 15, 2024 — Response after Non-Final Action
Oct 16, 2024 — Examiner Interview Summary
Nov 05, 2024 — Response after Non-Final Action
Nov 13, 2024 — Request for Continued Examination
Nov 19, 2024 — Response after Non-Final Action
Jan 16, 2025 — Non-Final Rejection (§103)
Apr 01, 2025 — Response Filed
May 28, 2025 — Final Rejection (§103)
Oct 06, 2025 — Request for Continued Examination
Oct 07, 2025 — Interview Requested
Oct 14, 2025 — Response after Non-Final Action
Oct 22, 2025 — Applicant Interview (Telephonic)
Oct 29, 2025 — Examiner Interview Summary
Nov 12, 2025 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596762 — METHOD FOR PROVIDING INFORMATION, METHOD FOR GENERATING DATABASE, AND PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12572537 — LEARNED RESOURCE CONSUMPTION MODEL FOR OPTIMIZING BIG DATA QUERIES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566795 — Method and system for synchronized search and retrieval of visual work instructions using artificial intelligence (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554598 — DATA RECOVERY METHOD, SYSTEM AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM AND PROGRAM PRODUCT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12541542 — HYBRID AI ARCHITECTURE FOR NATURAL LANGUAGE QUERY (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 78%
With Interview: 87% (+9.1%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 692 resolved cases by this examiner. Grant probability derived from career allow rate.
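Since the note above says the grant probability is derived from the career allow rate, a minimal sketch of that derivation follows, using the figures on this page (542 granted of 692 resolved, +9.1% interview lift). Treating the interview lift as a simple additive bump onto the base rate is an assumption about the tool's method, not a documented formula.

```python
# Career figures shown on this page.
granted, resolved = 542, 692

# Base grant probability: the career allow rate, as a percentage.
allow_rate = 100 * granted / resolved      # ≈ 78.3%
grant_probability = round(allow_rate)      # displayed as 78%

# With-interview projection, assuming the +9.1% lift is additive.
interview_lift = 9.1
with_interview = round(allow_rate + interview_lift)  # displayed as 87%
```

Under this assumption the displayed 78% and 87% both fall out of the same base rate, which is consistent with the page's own footnote.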
