DETAILED ACTION
This Office action is responsive to the request for reconsideration filed 12/31/2025. The application contains claims 1-18, all of which have been examined and are rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 2 recites that the third inference model “input[s] training sensing data into the first inference model”; however, claim 1 discloses that the third inference model consumes the output of the first inference model in order to produce a model selection (“inputting the first inference result into a third inference model to obtain, from the third inference model, model selection”). It is unclear how the model that accepts the output of the first inference model in order to select a model can be the same model that provides the input to the first inference model. Dependent claims 3-8 inherit the deficiency of claim 2.
Claims 10-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 recites that the “first inference model is a model which: inputs training sensing data into the first inference model”; however, claim 9 discloses that the first inference model consumes the sensing data to output a model selection. It is unclear how the model that consumes the sensing data to select a model can be the same model that inputs the sensing data into itself. Dependent claims 11-16 inherit the deficiency of claim 10.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Banis et al. [US 2020/0210867 A1, hereinafter Banis] in view of Matsumoto [US 2019/0147361 A1, hereinafter Moto].
With regard to Claim 1,
Banis teaches an information processing method that a processor executes using memory, the information processing method comprising:
inputting sensing data into a first inference model to obtain a first inference result of the sensing data from the first inference model (Fig. 9, ¶111, ¶134, “ 902, an inference request is received from a computer. The inference request includes a target defining a set of features related to a task to be processed by the MLAAS system”, ¶7, “ cognitive processes system receives an inference request including a model identifier and a target defining a set of features for use in processing the inference request from a computer”, “The cognitive processes system generates an inference outcome by processing the inference request using the target as input to the one or more machine learning models”, ¶135, “ Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models”, ¶51, “ inference requests 108 are domain-specific requests for inferences based on a target. A target defines a set of features of the software product 104. Thus, a target may refer to a collection of one or more data items related to a task for which an inference is requested”), the sensing data being an image (¶79, “There may be a set of enhancement data source identification cross-references that allows the enhancement system to, for each word, phrase, image and the like expressed in the request (e.g., “user”, “upgrade”, “license”, and the like) determine other related types of information that may be used to enhance the target”);
inputting the first inference result into a third inference model to obtain, from the third inference model, model selection information indicating at least one second inference model to be operated from among a plurality of second inference models held by a model library (¶135, “904, an inference outcome is determined by processing the target using one or more machine learning models. Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models identified by the inference request”, “selection strategy can then be used to select one of the candidate inference outcomes as the inference outcome. For example, the selection strategy can be a maximum likelihood estimation or a multi-arm bandit approach”, ¶¶112-114, the selection strategy being the third inference model, ¶69, “model data store 204 may be a database (e.g., a relational database, an object database, a navigational database, an active database, a spatial database, or the like) or another data storage aspect (e.g., a file library, network cluster, or the like)”, ¶70, “The model data store 204 may store the models 212 based on the one or more tasks with which the models 212 are associated”);
selecting, from the model library, the at least one second inference model indicated by the model selection information (Fig. 9, step 904, ¶70, “For example, there may be a number of different models used for the task of determining a likelihood that a contact will become a customer. Each of those models may be stored together or in relation to one another within the model data store 204”, ¶71, “When the cognitive processes system 202 receives the inference request 210, it can use the included model identifier to identify the corresponding model 212 within the model data store 204”), operating the at least one second inference model selected, and inputting the first inference result into each of the at least one second inference model being operated, to obtain, from each of the at least one second inference model being operated, a second inference result of the first inference result (Fig. 9, step 904, ¶135, “Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models identified by the inference request”, ¶52, “The inference outcomes 110 are results of inference performed based on the inference requests 108. An inference outcome 110 represents the output of a model 106 after a data set (e.g., a target) is fed into it.
The inference outcome 110 may thus refer to a record representing a prediction, classification, recommendation, and/or other machine learning-based determinations resulting from performing inference for an inference request 108 … one of the candidate inference outcomes 110 can be selected as the inference outcome 110 representing the final output of the inference process”, ¶61, “the MLAAS system 100 iteratively trains a model 106 based on the particular version of the model 106 that was used to process an inference request 108 and based on the accuracy of the inference outcome 110 generated in response to the inference request 108, as represented in the feedback 112 received from the client 102”); and
outputting the second inference result obtained from each of the at least one second inference model being operated (Fig. 9, step 906, ¶136, “At 906, a response indicating the inference outcome is transmitted to the computer from which the inference request was received”).
Banis teaches that the machine learning models include neural network models (¶49), but does not explicitly disclose the limitations addressed below.
Moto teaches an information processing method that a processor executes using memory, the information processing method comprising:
inputting sensing data into a first inference model to obtain a first inference result of the sensing data from the first inference model (¶56, “The test data is used as test data of the learned model saved in server device 2. In the present embodiment, the test data is a face image of the user, and the face image (sensing data) of the user imaged by a camera not illustrated connected to user side device 3 is used as the test data”);
the third inference model being a neural network model (¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, ¶41, “ The learned model referred to in the present specification is a model generated by machine learning (for example, deep learning using a multilayered neural network, support vector machine, boosting, reinforcement learning, or the like) based on learning data and correct answer data (teacher data)”);
a plurality of second inference models held by a model library (¶43, “Server device 2 is a general computer device, and saves a plurality of learned models in learned model database”, ¶46, “The plurality of learned models are saved in learned model database 27 in advance”),
selecting, from the model library, the at least one second inference model (¶5, “calculating a performance of each of a plurality of learned models by applying the test data to each of the plurality of learned models saved in a database in advance, and selecting a learned model to be provided from the plurality of learned models to the user side device based on the calculated performance”, ¶13, ¶59, “server device 2 selects and determines one or more models to be fine-tuned from the plurality of learned models saved in learned model database 27 based on the calculated correct rate”, ¶¶61-62), operating the at least one second inference model selected (¶58, “server device 2 applies the test data to the plurality of learned models saved in learned model database 27 respectively and calculates the performance of the learned models”, ¶60, “Next, server device 2 performs fine tuning on the selected model to be fine-tuned using the test data”, ¶61, “server device 2 applies the test data to the learned model subjected to the fine tuning and calculates the correct rate of the learned model subjected to the fine tuning”);
outputting the second inference result obtained from each of the at least one second inference model being operated (¶67, “Server device 2 transmits the learned model determined by user side device 3 to user side device 3”).
Banis and Moto are analogous art to the claimed invention because they are from a similar field of endeavor, namely selecting a learned model optimal for use by a user side device from a plurality of learned models saved in advance and providing the selected learned model. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Banis with the teachings of Moto, with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Banis as described above to select and provide the learned model optimal for use by the user side device from the plurality of learned models saved in the database in advance (Moto, ¶6), and to improve the selection process by providing models that meet the requirements of a task without using fewer or more resources than the task requires: larger models offer greater representational power for complex, data-rich tasks, while smaller models provide advantages in efficiency, deployment, and performance for resource-constrained or specialized applications, and the choice between them involves a careful evaluation of these trade-offs. This is a use of a known technique to improve similar devices (methods, or products) in the same way, and the application of a known technique to a known device (method, or product) ready for improvement to yield predictable results.
With regard to Claim 2,
Banis-Moto teaches the information processing method according to claim 1, wherein the third inference model is a neural network model which (Banis, ¶135, “selection strategy can then be used to select one of the candidate inference outcomes as the inference outcome. For example, the selection strategy can be a maximum likelihood estimation or a multi-arm bandit approach”, the selection strategy being the third inference model; Moto, ¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, ¶41):
inputs training sensing data into the first inference model to obtain, from the first inference model, a training first inference result of the training sensing data (¶7, “ cognitive processes system receives an inference request including a model identifier and a target defining a set of features for use in processing the inference request”, ¶134, “ 902, an inference request is received from a computer. The inference request includes a target defining a set of features related to a task …”, ¶135, “At 904, an inference outcome is determined by processing the target using one or more machine learning models …”); and
has been trained using machine learning based on (i) at least one training second inference result (¶136, “feedback is received … indicates an accuracy of the inference outcome with respect to the task for which the inference request was made”, ¶138, “the one or more machine learning models used to determine the inference outcome are trained using the training data set”, ¶159, “feedback in the MLAAS system described herein may be used as input to the inference selection strategy … retraining a model that produced the inference to improve further the inferences to be tested.”) and (ii) training model selection information (¶133, “feedback is received … feedback includes an outcome that indicates an accuracy of the inference outcome with respect to the task … feedback can include the request identifier and an outcome indicating the accuracy of the inference outcome with respect to the task. The feedback can be associated with the inference request using the request identifier”, ¶135, “A selection strategy can then be used to select one of the candidate inference outcomes as the inference outcome. For example, the selection strategy can be a maximum likelihood estimation or a multi-arm bandit approach”, ¶159, “feedback in the MLAAS system described herein may be used as input to the inference selection strategy … ”);
(i) the at least one training second inference result being obtained by inputting, for each of the at least one second inference model that is selected from among the plurality of second inference models according to a corresponding one of selection patterns, the training first inference result into the at least one second inference model selected (¶¶135-136, “Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models identified by the inference request”, ¶159, “a plurality of inferences … cognitive processes system 202 may produce a plurality of inferences at 1302 using, for example, different models with a given target or various features for a given model or a combination thereof. The set of inferences produced at 1302 may be processed at 1304 with a selection strategy to select a subset of inferences. At 1306, the subset of inference may be tested, such as by capturing feedback on use of each inference that may indicate with of the subset (e.g., which case in an A/B set of inferences is preferred) and optionally retraining a model that produced the inference to improve further the inferences to be tested”, ¶161, “ multi-arm bandit approach may explore several different options to determine possible scores for the user. The option (e.g., the candidate inference outcome) having the highest score based on the processing using the appropriate machine learning model can indicate the likelihood of that user making the subject purchase. 
Later, feedback indicating whether the user made the subject purchase can be received by the MLAAS system and used to train the applicable machine learning model”), (ii) the training model selection information being the model selection information obtained by inputting the training first inference result into the third inference model (¶135, “selection strategy can then be used to select one of the candidate inference outcomes as the inference outcome. For example, the selection strategy can be a maximum likelihood estimation or a multi-arm bandit approach”, ¶159, “feedback in the MLAAS system described herein may be used as input to the inference selection strategy …”, ¶161, “multi-arm bandit approach may explore several different options to determine possible scores for the user. The option (e.g., the candidate inference outcome) having the highest score based on the processing using the appropriate machine learning model can indicate the likelihood of that user making the subject purchase. Later, feedback indicating whether the user made the subject purchase can be received by the MLAAS system and used to train the applicable machine learning model”). The same motivation to combine set forth for claim 1 applies equally to the current claim.
With regard to Claim 3,
Banis-Moto teaches the information processing method according to claim 2,
wherein training of the third inference model includes training using machine learning (Banis, ¶159, “feedback in the MLAAS system described herein may be used as input to the inference selection strategy …”, ¶¶114-115, the bandit selection strategy being a machine learning model updated via reinforcement, ¶¶160-161) in which data generated according to a format of the model selection information based on the at least one training second inference result is reference data and the training model selection information is output data (Banis, ¶¶114-115, ¶135, “selection strategy can be a maximum likelihood estimation or a multi-arm bandit approach”, ¶136, “feedback is received from the computer … The feedback includes an outcome that indicates an accuracy of the inference outcome with respect to the task for which the inference request was made … The feedback can be associated with the inference request using the request identifier”, ¶137, “At 910, a training data set is generated based on the feedback. The training data set includes the outcome and the request identifier”, ¶138, “the one or more machine learning models used to determine the inference outcome are trained using the training data set … training the identified version of the machine learning model using the outcome”, ¶159, “feedback in the MLAAS system described herein may be used as input to the inference selection strategy; thereby, using learning from the feedback to accomplish this A/B testing objective”). The same motivation to combine set forth for claim 1 applies equally to the current claim.
With regard to Claim 4,
Banis-Moto teaches the information processing method according to claim 3,
wherein the model selection information includes first information corresponding to a task that a second inference model executes (Banis, ¶134, “inference request is received from a computer. The inference request includes a target defining a set of features related to a task to be processed by the MLAAS system.”, ¶135, “ inference outcome is determined by processing the target using one or more machine learning model”),
the at least one second inference model indicated by the model selection information executes the task corresponding to the first information (Banis, ¶135, “a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models identified by the inference request. A selection strategy can then be used to select one of the candidate inference outcomes as the inference outcome”), and
the information processing method further includes generating the reference data in such a manner that a second inference model that executes a task which has contributed to the at least one training second inference result is included in the at least one second inference model indicated by the training model selection information (Banis, ¶136, “feedback includes an outcome that indicates an accuracy of the inference outcome with respect to the task for which the inference request was made … The feedback can include the request identifier and an outcome indicating the accuracy of the inference outcome with respect to the task”, ¶137, “training data set is generated based on the feedback. The training data set includes the outcome and the request identifier as indicated in the feedback. The training data set may also include one or more features used to determine the outcome, such as features of particular relevance to the task underlying the inference request”, ¶159, “feedback in the MLAAS system described herein may be used as input to the inference selection strategy; thereby, using learning from the feedback to accomplish this A/B testing objective”). The same motivation to combine set forth for claim 1 applies equally to the current claim.
With regard to Claim 5,
Banis-Moto teaches the information processing method according to claim 4,
wherein the generating of the reference data includes
generating the reference data in such a manner that a second inference model that executes a task belonging to a task set having a score higher than a threshold value is included (Banis, Fig. 13, ¶112, “ inference system 402 can use the maximum likelihood estimation to select an inference outcome based on the inference outcome having a highest score amongst the candidate inference outcomes”, ¶161, “multi-arm bandit approach may explore several different options to determine possible scores for the user. The option (e.g., the candidate inference outcome) having the highest score based on the processing using the appropriate machine learning model can indicate the likelihood of that user making the subject purchase”, ¶129, “multi-arm bandit approach may indicate to exploit the highest peak of the distribution, so as to select a candidate inference outcome corresponding to that highest peak”) in the at least one second inference model indicated by the training model selection information (Banis, ¶159, “feedback of a selected inference over time indicates the inference produces desirable results, the inference may, in embodiments, be selected more often. Additionally, the inference may be refined by the feedback; thereby not only providing benefits of A/B testing but also further improving performance through learning methodologies”), the score being based on the at least one training second inference result (Banis, ¶160, “feedback can then be used to further train the search-related model”, ¶161, “Later, feedback indicating whether the user made the subject purchase can be received by the MLAAS system and used to train the applicable machine learning model”, ¶138, “the one or more machine learning models used to determine the inference outcome are trained using the training data set”, ¶¶145-146, “transmit feedback including an outcome that can indicate an accuracy of the inference outcome with respect to the task “). 
The same motivation to combine set forth for claim 1 applies equally to the current claim.
With regard to Claim 6,
Banis-Moto teaches the information processing method according to claim 3,
wherein the model selection information includes second information corresponding to a performance of the at least one second inference model (Banis, ¶112, “ inference system 402 can use the maximum likelihood estimation to select an inference outcome based on the inference outcome having a highest score amongst the candidate inference outcomes”, ¶129, “multi-arm bandit approach may indicate to exploit the highest peak of the distribution, so as to select a candidate inference outcome corresponding to that highest peak”),
the at least one second inference model indicated by the model selection information has a performance corresponding to the second information (Banis, ¶161, “multi-arm bandit approach may explore several different options to determine possible scores for the user. The option (e.g., the candidate inference outcome) having the highest score based on the processing using the appropriate machine learning model can indicate the likelihood of that user making the subject purchase”, ¶159, “feedback of a selected inference over time indicates the inference produces desirable results, the inference may, in embodiments, be selected more often”), and
the generating of the reference data includes generating the reference data in such a manner that a second inference model that satisfies a performance requirement in the at least one training second inference result is included in the at least one second inference model indicated by the training model selection information (Banis, ¶160, “feedback can then be used to further train the search-related model”, ¶161, “Later, feedback indicating whether the user made the subject purchase can be received by the MLAAS system and used to train the applicable machine learning model”, ¶138, “the one or more machine learning models used to determine the inference outcome are trained using the training data set. Training a machine learning model using the training data set can include identifying a version of the machine learning model used to generate the inference outcome based on the request identifier and training the identified version of the machine learning model using the outcome”, ¶¶145-146). The same motivation to combine set forth for claim 1 applies equally to the current claim.
With regard to Claim 7,
Banis-Moto teaches the information processing method according to claim 6,
wherein the second information includes a degree of difficulty of processing of the at least one second inference model (Banis, ¶129, “multi-arm bandit approach for selecting the inference outcome may explore the other two peaks, such as by iterating on those values using one or more machine learning models and evaluating the results of the iteration. After a number of further trials are completed, the multi-arm bandit approach may indicate to exploit the highest peak of the distribution, so as to select a candidate inference outcome corresponding to that highest peak”, ¶114, “multi-arm bandit approach further exploits the highest overall peak for selection as the output inference outcome. For example, the multi-arm bandit approach may use an Epsilon-greedy policy, a Thomson policy, or another policy for exploring and exploiting the values in a candidate inference outcome distribution”), and the generating of the reference data includes generating the reference data in such a manner that a second inference model that has a performance according to the degree of difficulty is included in the at least one second inference model indicated by the training model selection information (Banis, ¶129, ¶114, ¶112; the reference data is generated by recording the policy used (exploit or explore) and the corresponding selected model, such that models matching the difficulty are included). The same motivation to combine set forth for claim 1 applies equally to the current claim.
With regard to Claim 8,
Banis teach the information processing method according to claim 6, wherein the at least one second inference model includes a neural network model (Banis, ¶49, “machine learning models trained at least in part using a computer. Examples of the models 106 may include, but are not limited to, feedforward neural networks, deep neural networks, recurrent neural networks, modular neural networks, other neural networks, statistical models”),
the second information includes the performance of the at least one second inference model (Banis, ¶112, “use the maximum likelihood estimation to select an inference outcome based on the inference outcome having a highest score amongst the candidate inference outcomes”, ¶114, “multi-arm bandit approach further exploits the highest overall peak for selection as the output inference outcome. For example, the multi-arm bandit approach may use an Epsilon-greedy policy, a Thomson policy, or another policy for exploring and exploiting the values”, ¶136, “feedback includes an outcome that indicates an accuracy of the inference outcome with respect to the task”, ¶161, “candidate inference outcome) having the highest score based on the processing using the appropriate machine learning model can indicate the likelihood”), and
the generating of the reference data includes generating the reference data in such a manner that a second inference model that has the size information included in the second information is included in the at least one second inference model indicated by the training model selection information (Banis, ¶136, “feedback includes an outcome that indicates an accuracy of the inference outcome with respect to the task for which the inference request was made … The feedback can include the request identifier and an outcome indicating the accuracy of the inference outcome with respect to the task”, ¶137, “training data set is generated based on the feedback. The training data set includes the outcome and the request identifier as indicated in the feedback. The training data set may also include one or more features used to determine the outcome, such as features of particular relevance to the task underlying the inference request”, ¶138, “the one or more machine learning models used to determine the inference outcome are trained using the training data set. Training a machine learning model using the training data set can include identifying a version of the machine learning model used to generate the inference outcome based on the request identifier and training the identified version of the machine learning model using the outcome”).
Banis does not explicitly teach that the second information includes size information indicating a size of the neural network model.
Moto teaches the second information includes size information indicating a size of the neural network model, as the performance of the at least one second inference model (¶48, “The performance of the learned model is, for example, the correct rate (correctness degree), a relevance rate, a reappearance rate, a type or number of hierarchies of the neural network model”).
Banis and Moto are analogous art to the claimed invention because they are from a similar field of endeavor, namely selecting a learned model optimal for use by a user side device from a plurality of learned models saved in advance and providing the selected learned model. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Banis with the teachings of Moto, with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Banis as described above to improve the selection process by providing models that meet the requirements of a task without consuming fewer or more resources than the task requires: larger models offer greater representational power for complex, data-rich tasks, while smaller models provide advantages in efficiency, deployment, and performance for resource-constrained or specialized applications, and the choice between them involves a careful evaluation of these trade-offs. This is a use of a known technique to improve similar devices (methods, or products) in the same way and an application of a known technique to a known device (method, or product) ready for improvement to yield predictable results.
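For illustration only, the feedback-driven retraining flow quoted from Banis ¶¶136-138 can be sketched as follows; all function names and data structures below are hypothetical stand-ins and are not drawn from the reference:

```python
# Hypothetical sketch of feedback-driven retraining; names are illustrative only.

def build_training_set(feedback):
    # Per Banis ¶137: the training data set includes the outcome and the
    # request identifier from the feedback, plus features relevant to the task.
    return {
        "request_id": feedback["request_id"],
        "outcome": feedback["outcome"],
        "features": feedback.get("features", []),
    }

def retrain(models_by_request, feedback):
    # Per Banis ¶138: identify the model version used for this request via the
    # request identifier, then train that version using the reported outcome.
    training_set = build_training_set(feedback)
    model_version = models_by_request[training_set["request_id"]]
    model_version["history"].append(training_set["outcome"])  # stand-in "training"
    return model_version

# Toy usage: one tracked model version receives one item of feedback.
models = {"req-1": {"version": "v2", "history": []}}
retrain(models, {"request_id": "req-1", "outcome": "accurate"})
print(models["req-1"]["history"])  # -> ['accurate']
```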
With regard to Claim 9,
Claim 9 is similar to claim 1; therefore it is rejected under similar rationale. Examiner notes that the functions of generating the first inference result and providing the model selection information can both be performed within a single model, and nothing prevents the first inference model from internally including multiple sub-models. Thus, claim 9 is similar in scope to claim 1 where the first and third models are embodied together. This interpretation is further supported by Banis, which teaches a model providing system that calculates inference results and determines model selection information within the same processing flow. See at least ¶¶58-62. Accordingly, the difference in wording between claim 1 and claim 9 does not establish a distinction in scope. In addition, Moto teaches implementing the first inference model as a neural network (¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, ¶41, “The learned model referred to in the present specification is a model generated by machine learning (for example, deep learning using a multilayered neural network, support vector machine, boosting, reinforcement learning, or the like) based on learning data and correct answer data (teacher data)”).
With regard to Claim 10,
Claim 10 is similar to claim 2; therefore it is rejected under similar rationale.
With regard to Claim 11,
Claim 11 is similar to claim 3; therefore it is rejected under similar rationale.
With regard to Claim 12,
Claim 12 is similar to claim 4; therefore it is rejected under similar rationale.
With regard to Claim 13,
Claim 13 is similar to claim 5; therefore it is rejected under similar rationale.
With regard to Claim 14,
Claim 14 is similar to claim 6; therefore it is rejected under similar rationale.
With regard to Claim 15,
Claim 15 is similar to claim 7; therefore it is rejected under similar rationale.
With regard to Claim 16,
Claim 16 is similar to claim 8; therefore it is rejected under similar rationale.
With regard to Claim 17,
Claim 17 is similar to claim 1; therefore it is rejected under similar rationale. Banis further teaches an information processing system comprising: a memory configured to store a program; and a processor configured to execute the program and control the information processing system (¶¶148-151). In addition, Moto teaches implementing the first inference model as a neural network (¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, ¶41, “The learned model referred to in the present specification is a model generated by machine learning (for example, deep learning using a multilayered neural network, support vector machine, boosting, reinforcement learning, or the like) based on learning data and correct answer data (teacher data)”).
With regard to Claim 18,
Claim 18 is similar to claim 9; therefore it is rejected under similar rationale. Banis further teaches an information processing system comprising: a memory configured to store a program; and a processor configured to execute the program and control the information processing system (¶¶148-151). In addition, Moto teaches implementing the first inference model as a neural network (¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, ¶41, “The learned model referred to in the present specification is a model generated by machine learning (for example, deep learning using a multilayered neural network, support vector machine, boosting, reinforcement learning, or the like) based on learning data and correct answer data (teacher data)”).
Response to Amendment
Examiner respectfully withdraws the 35 U.S.C. 112(b) rejection of claims 4-8 and 12-16 based on the applicant’s amendments.
Examiner maintains the 35 U.S.C. 112(b) rejection of claims 2-8 and 10-16, as the provided amendments do not resolve the raised issues.
Applicant’s arguments (Remarks, pp. 19-29) regarding how the specification describes particular advantages of the claimed invention are persuasive; therefore, the rejection under 35 U.S.C. 101 is respectfully withdrawn.
Applicant argues that Banis does not teach the amended claims.
Examiner respectfully disagrees. Banis teaches the amended claims except for the requirement that the first inference model or the third inference model is a neural network (NN). Moto modifies Banis to use a NN. See at least ¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, and ¶41, “The learned model referred to in the present specification is a model generated by machine learning (for example, deep learning using a multilayered neural network, support vector machine, boosting, reinforcement learning, or the like) based on learning data and correct answer data (teacher data)”. In other words, Banis teaches inputting sensing data into a first inference model to obtain a first inference result of the sensing data from the first inference model (Fig. 9, ¶111, ¶134, “902, an inference request is received from a computer. The inference request includes a target defining a set of features related to a task to be processed by the MLAAS system”, ¶7, “cognitive processes system receives an inference request including a model identifier and a target defining a set of features for use in processing the inference request from a computer”, “The cognitive processes system generates an inference outcome by processing the inference request using the target as input to the one or more machine learning models”, ¶135, “Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models”, ¶51, “inference requests 108 are domain-specific requests for inferences based on a target. A target defines a set of features of the software product 104.
Thus, a target may refer to a collection of one or more data items related to a task for which an inference is requested”), the sensing data being an image (¶79, “There may be a set of enhancement data source identification cross-references that allows the enhancement system to, for each word, phrase, image and the like expressed in the request (e.g., “user”, “upgrade”, “license”, and the like) determine other related types of information that may be used to enhance the target”);
inputting the first inference result into a third inference model to obtain, from the third inference model, model selection information indicating at least one second inference model to be operated from among a plurality of second inference models held by a model library, (¶135, “904, an inference outcome is determined by processing the target using one or more machine learning models. Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models identified by the inference request”, “selection strategy can then be used to select one of the candidate inference outcomes as the inference outcome. For example, the selection strategy can be a maximum likelihood estimation or a multi-arm bandit approach”, ¶¶112-114, selection strategy is third inference model, ¶69, “model data store 204 may be a database (e.g., a relational database, an object database, a navigational database, an active database, a spatial database, or the like) or another data storage aspect (e.g., a file library, network cluster, or the like) “, ¶70, “The model data store 204 may store the models 212 based on the one or more tasks with which the models 212 are associated”);
selecting, from the model library, the at least one second inference model indicated by the model selection information (Fig. 9, step 904, ¶70, “For example, there may be a number of different models used for the task of determining a likelihood that a contact will become a customer. Each of those models may be stored together or in relation to one another within the model data store 204”, ¶71, “When the cognitive processes system 202 receives the inference request 210, it can use the included model identifier to identify the corresponding model 212 within the model data store 204”), operating the at least one second inference model selected, and inputting the first inference result into each of the at least one second inference model being operated, to obtain, from each of the at least one second inference model being operated, a second inference result of the first inference result (Fig. 9, step 904, ¶135, “Determining the inference outcome can include generating a plurality of candidate inference outcomes based on different processing of the target using one or more of the machine learning models identified by the inference request”, ¶52, “The inference outcomes 110 are results of inference performed based on the inference requests 108. An inference outcome 110 represents the output of a model 106 after a data set (e.g., a target) is fed into it.
The inference outcome 110 may thus refer to a record representing a prediction, classification, recommendation, and/or other machine learning-based determinations resulting from performing inference for an inference request 108 … one of the candidate inference outcomes 110 can be selected as the inference outcome 110 representing the final output of the inference process”, ¶61, “the MLAAS system 100 iteratively trains a model 106 based on the particular version of the model 106 that was used to process an inference request 108 and based on the accuracy of the inference outcome 110 generated in response to the inference request 108, as represented in the feedback 112 received from the client 102”); and
outputting the second inference result obtained from each of the at least one second inference model being operated (Fig. 9, step 906, ¶136, “At 906, a response indicating the inference outcome is transmitted to the computer from which the inference request was received”).
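For illustration only, the inference flow mapped in the paragraphs above (sensing data into a first model, a selection step choosing second models from a library, inference by each selected model, and output of the results) can be sketched as follows; every function and variable name is a hypothetical stand-in and appears in neither Banis nor the claims:

```python
# Hypothetical sketch of the claimed inference pipeline; names are illustrative only.

def run_pipeline(sensing_data, first_model, third_model, model_library):
    # Step 1: input sensing data into the first inference model.
    first_result = first_model(sensing_data)

    # Step 2: input the first inference result into the third inference model
    # to obtain model selection information (identifiers of second models).
    selected_ids = third_model(first_result)

    # Step 3: select the indicated second inference models from the model library.
    selected_models = [model_library[model_id] for model_id in selected_ids]

    # Step 4: input the first inference result into each selected second model.
    second_results = [model(first_result) for model in selected_models]

    # Step 5: output the second inference result from each operated model.
    return second_results


# Toy usage with stand-in "models" implemented as plain functions.
library = {"small": lambda x: x * 2, "large": lambda x: x * 10}
first = lambda data: sum(data)                                 # stand-in first model
third = lambda result: ["small"] if result < 5 else ["large"]  # stand-in selector

print(run_pipeline([1, 1, 1], first, third, library))  # -> [6]
```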
Banis teaches that the machine learning models include neural network models (¶49), but does not explicitly disclose that the first inference model and the third inference model are neural network models.
Moto teaches an information processing method that a processor executes using memory, the information processing method comprising:
inputting sensing data into a first inference model to obtain a first inference result of the sensing data from the first inference model (¶56, “The test data is used as test data of the learned model saved in server device 2. In the present embodiment, the test data is a face image of the user, and the face image (sensing data) of the user imaged by a camera not illustrated connected to user side device 3 is used as the test data”);
the third inference model being a neural network model (¶39, “deep learning, when learning image data or sound data are input to an input layer of a multilayered neural network”, ¶41, “ The learned model referred to in the present specification is a model generated by machine learning (for example, deep learning using a multilayered neural network, support vector machine, boosting, reinforcement learning, or the like) based on learning data and correct answer data (teacher data)”);
a plurality of second inference models held by a model library (¶43, “Server device 2 is a general computer device, and saves a plurality of learned models in learned model database”, ¶46, “The plurality of learned models are saved in learned model database 27 in advance”),
selecting, from the model library, the at least one second inference model (¶5, “calculating a performance of each of a plurality of learned models by applying the test data to each of the plurality of learned models saved in a database in advance, and selecting a learned model to be provided from the plurality of learned models to the user side device based on the calculated performance”, ¶13, ¶59, “server device 2 selects and determines one or more models to be fine-tuned from the plurality of learned models saved in learned model database 27 based on the calculated correct rate”, ¶¶61-62), operating the at least one second inference model selected (¶58, “server device 2 applies the test data to the plurality of learned models saved in learned model database 27 respectively and calculates the performance of the learned models”, ¶60, “Next, server device 2 performs fine tuning on the selected model to be fine-tuned using the test data”, ¶61, “server device 2 applies the test data to the learned model subjected to the fine tuning and calculates the correct rate of the learned model subjected to the fine tuning”);
outputting the second inference result obtained from each of the at least one second inference model being operated (¶67, “Server device 2 transmits the learned model determined by user side device 3 to user side device 3”).
Banis and Moto are analogous art to the claimed invention because they are from a similar field of endeavor: selecting a learned model optimal for use by a user side device from a plurality of learned models saved in advance and providing the selected learned model. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Banis with the teachings of Moto with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Banis as described above to select and provide the learned model optimal for use by the user side device from the plurality of learned models saved in the database in advance (Moto, ¶6), and to improve the selection process by providing models that meet the requirements of a task without consuming fewer or more resources than the task requires: larger models offer greater representational power for complex, data-rich tasks, while smaller models provide advantages in efficiency, deployment, and performance for resource-constrained or specialized applications, and the choice between them involves a careful evaluation of these trade-offs. This is a use of a known technique to improve similar devices (methods, or products) in the same way and an application of a known technique to a known device (method, or product) ready for improvement to yield predictable results.
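For illustration only, Moto's selection step cited above (applying test data to each saved model, calculating a correct rate, and selecting the best-performing model, ¶¶58-61) could be sketched as follows; the names and the toy scoring data are hypothetical and not drawn from the reference:

```python
# Hypothetical sketch of performance-based model selection per Moto ¶¶58-61.

def correct_rate(model, test_data):
    # Apply the test data to the model and compute the fraction of
    # correct predictions (Moto's "correct rate").
    correct = sum(1 for x, label in test_data if model(x) == label)
    return correct / len(test_data)

def select_model(saved_models, test_data):
    # Calculate the performance of each saved model and select the best one.
    rates = {name: correct_rate(m, test_data) for name, m in saved_models.items()}
    best = max(rates, key=rates.get)
    return best, rates[best]

# Toy usage: two stand-in classifiers scored on four labeled test samples.
saved = {"model_a": lambda x: x > 0, "model_b": lambda x: x > 5}
tests = [(1, True), (7, True), (-3, False), (4, True)]
print(select_model(saved, tests))  # -> ('model_a', 1.0)
```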
As to the remaining dependent claims, applicant argues that they are allowable due to their respective direct and indirect dependencies upon one of the aforementioned independent claims. The examiner respectfully disagrees; the independent claims were not found allowable, as stated in the paragraphs above in this “Response to Amendment” section of this office action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to the applicant’s disclosure.
US Patent Application Publication No. 2020/0219015 by Lee et al., which discloses the ability to use inference data to select a model. See at least the Abstract and ¶113.
Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested of the applicant, in preparing the response, to consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of those references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331-33, 216 USPQ 1038-39 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD whose telephone number is (303)297-4285. The examiner can normally be reached Monday-Thursday 9:00am-6:00pm MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148