DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/06/2026 has been entered.
Response to Amendment
The amendments filed 01/06/2026 have been entered.
Claims 1-4 and 7-15 remain pending in the application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites the limitation "the specific learning data is learning data obtained by extracting a specific learning history that meets a predetermined condition from a learning history of the first machine learning model based on a database that records data regarding input information to the second machine learning model". However, claim 1 recites "specific learning data out of second learning data used by a second machine-learning model that changes based on accumulation of the second learning data". It is unclear whether "the specific learning data" in claim 4 refers to the same specific learning data recited in claim 1, because claim 1 refers to specific learning data used by the second machine learning model, whereas claim 4 recites the specific learning data as learning data from a learning history of the first machine learning model. There is insufficient antecedent basis for this limitation in claim 4. For examination purposes, the examiner interprets "the specific learning data is learning data obtained by extracting a specific learning history that meets a predetermined condition from a learning history of the first machine learning model based on a database that records data regarding input information to the second machine learning model" to mean "the specific learning data is learning data obtained by extracting a specific learning history that meets a predetermined condition from a learning history of one machine learning model based on a database that records data regarding input information to another machine learning model".
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7, 8, 9, and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Manggala (Pub. No.: US 2020/0279315 A1), hereafter Manggala, in view of Ammad-ud-din et al. ("FEDERATED COLLABORATIVE FILTERING FOR PRIVACY-PRESERVING PERSONALIZED RECOMMENDATION SYSTEM"), hereafter Ammaduddin, in further view of Brundage et al. (Pub. No.: US 2018/0137390 A1), hereafter Brundage.
Regarding claim 1, Manggala discloses:
An information processing device comprising: processing circuitry configured to (Manggala, ¶[0020] and Fig. 1),
cause, for a first machine-learning model that changes based on accumulation of first learning data, re-learning of the first machine-learning model to be performed using the first learning data and specific learning data out of second learning data (Manggala, Fig. 3 and paragraph [0066] teaches a recommendation engine as a machine learning model that changes on a basis of accumulation of first user behavior data, i.e. first learning data, and second user behavior data associated with a user response as specific learning data out of second learning data),
wherein the first learning data is learning data generated by inputting input information of … a first user into the first machine-learning model (Manggala, paragraph [0066] teaches inputting data of a first user to train the first machine learning model),
receive input … data from the first user (Manggala, Fig. 1, Fig. 3 and ¶[0052-0053] teaches receiving input information from the first user),
generate at least one of audio or image information by inputting the … data into the re-learned first machine-learning model; and output the generated at least one of audio or image information (Manggala, Fig. 5, ¶[0051-0053], and ¶[0076] teaches generating, using the recommendation model as a re-learned algorithm, recommendations as image information on a user interface based on the input information and outputting said image information).
While Manggala discloses cause, for a first machine-learning model that changes based on accumulation of first learning data, re-learning of the first machine-learning model to be performed using the first learning data and specific learning data out of second learning data…, Manggala does not disclose: specific learning data out of second learning data used by a second machine learning model that changes based on accumulation of the second learning data.
Ammaduddin discloses:
specific learning data out of second learning data used by a second machine learning model that changes based on accumulation of the second learning data (Ammaduddin, Figure 1 teaches data used by the client 2 local model as specific learning data out of second learning data used by a second machine-learning model that changes based on accumulation of the second learning data, as the user-specific model is updated based on local data).
While Manggala discloses wherein the first learning data is learning data generated by inputting input information of … a first user into the first machine-learning model, Manggala does not disclose the first learning data to be generated by inputting input information of only a first user into the first machine-learning model.
Ammaduddin discloses:
first learning data is learning data generated by inputting input information of only a first user into the first machine-learning model (Ammaduddin, Figure 1 and Figure 1 caption “Each user-specific model X (user-factor matrix) remains on the local client, and is updated on the client using the local user data and Y from the server” teaches learning data to be generated by input information of only a single user into the model).
Manggala discloses second learning data, but does not teach wherein the second learning data is learning data generated by inputting input information of only a second user, different from the first user, into the second machine-learning model.
Ammaduddin teaches:
the second learning data is learning data generated by inputting input information of only a second user, different from the first user, into the second machine-learning model (Ammaduddin, Figure 1 caption “Each user-specific model X (user-factor matrix) remains on the local client, and is updated on the client using the local user data and Y from the server” teaches learning data to be generated by input information of only a single user into the model).
Manggala does not disclose:
wherein a part of the first learning data is removed and replaced with at least a part of the specific learning data of the second learning data,
the re-learning of the first machine-learning model is performed without the removed part of the first learning data.
Ammaduddin discloses:
wherein a part of the first learning data is removed and replaced with at least a part of the specific learning data of the second learning data (Ammaduddin, Figure 1 and page 4, point 1 “All the item factor vectors yi, i = 1, … ,M are updated on the server and then distributed to each client u.” teaches replacement of item factor vectors in learning data of first client with at least a part of a specific learning data of the second client’s learning data through server update),
the re-learning of the first machine-learning model is performed without the removed part of the first learning data (Ammaduddin, Figure 1 and page 4, point 2 and 3 teach updating the local model on client 1 as re-learning the first machine learning model without the outdated item factor vector as the removed part of the first learning data).
Manggala and Ammaduddin are analogous art because they are from the same field of endeavor, recommendation systems.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include specific learning data out of second learning data used by a second machine learning model that changes based on accumulation of the second learning data, first learning data is learning data generated by inputting input information of only a first user into the first machine-learning model, the second learning data is learning data generated by inputting input information of only a second user, different from the first user, into the second machine-learning model, wherein a part of the first learning data is removed and replaced with at least a part of the specific learning data of the second learning data, and the re-learning of the first machine-learning model is performed without the removed part of the first learning data, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Manggala teaches receive input … data from the first user, and generate at least one of audio or image information by inputting the … data into the re-learned first machine-learning model; and output the generated at least one of audio or image information, but does not explicitly teach the data input to be voice data from …user.
Brundage teaches:
voice data from …user (Brundage, Fig. 13 and ¶[0312] teaches voice data input by the user),
generate … information by inputting the voice data into … machine-learning model (Brundage, Fig. 13 and ¶[0311-0312] teaches training machine learning models to generate information by inputting voice data from user interface input devices).
Manggala, Ammaduddin, and Brundage are analogous art because they are from the same field of endeavor, recommendation systems.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala, in view of Ammaduddin, to include voice data input by …user and generate … information by inputting the voice data into … machine-learning model, based on the teachings of Brundage. The motivation for doing so would have been to improve predictions for recommendation systems (Brundage, ¶[0079] “improved evaluations, estimations, predictions, probabilities, class probability estimates, and/or classifications”).
Regarding claim 2, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Ammaduddin further discloses:
wherein the first learning data includes data regarding output information from the first machine learning model based on input information to the first machine learning model (Ammaduddin, Fig. 1 and Algorithm 1, function “ClientUpdate(Y)”).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include wherein the first learning data includes data regarding output information from the first machine learning model based on input information to the first machine learning model, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Regarding claim 3, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Ammaduddin further discloses:
wherein the first learning data is based on data accumulated under a first environment in which the first machine learning model is used (Ammaduddin, Figure 1 discloses local data accumulated under a first environment in which the first machine learning model is used).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include wherein the first learning data is based on data accumulated under a first environment in which the first machine learning model is used, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Regarding claim 4, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Ammaduddin further discloses:
wherein the specific learning data is learning data obtained by extracting a specific learning history that meets a predetermined condition from a learning history of the first machine learning model based on a database that records data regarding input information to the second machine learning model (Ammaduddin, Figure 1 and Figure 1 caption teaches extracting the master model for local updates on clients as extracting a specific learning history that meets a predetermined condition, given by algorithm 1 line 6 and equations 10 and 11, from a learning history of the one machine learning model based on a database, i.e. the server containing the model aggregator, that records data regarding input information to the other machine learning model).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include wherein the specific learning data is learning data obtained by extracting a specific learning history that meets a predetermined condition from a learning history of the first machine learning model based on a database that records data regarding input information to the second machine learning model, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Regarding claim 7, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Ammaduddin further discloses:
wherein the specific learning data is converted to a predetermined data format, and then the re-learning of the machine learning model is performed (Ammaduddin, page 6, final paragraph, lines 4-5 "We transform the explicit ratings to the implicit feedback scenario of our proposed approach." teaches converting the collected dataset to a predetermined data format before re-learning).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include wherein the specific learning data is converted to a predetermined data format, and then the re-learning of the machine learning model is performed, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Regarding claim 8, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Manggala further discloses:
wherein a degree of influence is adjusted between the first learning data and the specific learning data, and the re-learning of the first machine learning model is performed (Manggala, paragraph [0069], lines 6-11 "if a recommendation influenced by CLRBivj is not accepted, the system may update the function such that in the future, W_BLS .... will be lower than before ( e.g., signifying the fact that the system has lost a degree of confidence on these datasets)" teaches W_BLS as a degree of influence is adjusted between the first learning data and the specific learning data).
Regarding claim 9, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Manggala further discloses:
wherein the first learning data and the second learning data are learning data in which predetermined recommendation information is output information with respect to input information (Manggala, paragraph [0003], lines 8-10 "generating, by the recommendation engine, a recommendation to the user based on the classifying of the labeled user behavior" and paragraph [0073], lines 8-9 “the recommendation may use the labels coming from x as features for producing recommendations “ teaches predetermined labels for recommendation as output information with respect to input information).
Regarding claim 11, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Manggala further discloses:
wherein the specific learning data is data converted into a predetermined feature amount (Manggala, paragraph [0073], lines 3-7 “any number of (label, C) pairs … from different surfaces and contents can be used as features for a given recommendation” teaches converting data to a predetermined feature amount to be used as specific learning data).
Regarding claim 12, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Manggala further discloses:
wherein the specific learning data is data received from another information processing device (Manggala, paragraph [0008], lines 15-17 "a second user behavior data...display, on a second computer device," teaches data received from another information processing device).
Regarding claim 13, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Ammaduddin further discloses:
wherein the re-learning of the first machine learning model is caused to be performed by another information processing device (Ammaduddin, Figure 1 teaches re-learning to be performed by both server and client devices).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include wherein the re-learning of the first machine learning model is caused to be performed by another information processing device, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Regarding claim 14, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 13. Ammaduddin further discloses:
wherein the first learning data is transmitted to the another information processing device, and the re-learning is caused to be performed by the another information processing device (Ammaduddin, Figure 1 teaches transmitting local model data to servers and re-learning to be performed by both server and client devices).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala to include wherein the first learning data is transmitted to the another information processing device, and the re-learning is caused to be performed by the another information processing device, based on the teachings of Ammaduddin. The motivation for doing so would have been to enhance user privacy with no loss of accuracy (Ammaduddin, page 2, paragraph 3, last line, “no loss of accuracy while simultaneously enhancing user privacy.”).
Regarding claim 15, claim 15 is substantially similar to claim 1, and is thus rejected on the same basis as claim 1.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Manggala (Pub. No.: US 2020/0279315 A1), hereafter Manggala, in view of Ammad-ud-din et al. ("FEDERATED COLLABORATIVE FILTERING FOR PRIVACY-PRESERVING PERSONALIZED RECOMMENDATION SYSTEM"), hereafter Ammaduddin, in further view of Brundage et al. (Pub. No.: US 2018/0137390 A1), hereafter Brundage, in further view of Badsha et al. (Pub. No.: US 2017/0228810 A1), hereafter Badsha.
Regarding claim 10, Manggala, in view of Ammaduddin, in further view of Brundage, discloses the information processing device according to claim 1. Manggala, in view of Ammaduddin, in further view of Brundage, discloses specific learning data, but does not disclose the specific learning data to be encrypted data.
Badsha teaches:
learning data is encrypted data (Badsha, page 162, right column, paragraph 3, lines 11-12 “to perform the computations for recommendation on encrypted data” teaches learning data to be encrypted data).
Manggala, Ammaduddin, Brundage, and Badsha are analogous art because they are from the same field of endeavor, recommendation systems.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Manggala, in view of Ammaduddin, in further view of Brundage, to include the specific learning data to be encrypted data, based on the teachings of Badsha. The motivation for doing so would have been to preserve user privacy without compromising recommendation accuracy and efficiency (Badsha, page 162, right column, paragraph 3, lines 2-5 "allows the computations required for recommendations in a distributed manner and preserves user privacy without compromising recommendation accuracy and efficiency").
Response to Arguments
Applicant's arguments filed 01/06/2026 have been fully considered with regard to the 35 U.S.C. 101 rejection, and they are persuasive. The 101 rejections of claims 1-4 and 7-15 have been withdrawn.
Applicant's arguments filed 01/06/2026 have been fully considered with regard to the 35 U.S.C. 102/103 rejections but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MATTHEW ELL, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141