DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 13 is objected to because of the following informalities: the claim should be written as an independent claim that imports the corresponding limitations of claim 6. Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-7 and 9-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11 of U.S. Patent No. 12,105,846 (the “‘846 patent”) in view of Mathew et al. (US 2018/0025287). Claims 1-7 and 9-13 of the instant application are substantially similar to the claims of the ‘846 patent, except that the independent claims of the instant application recite “anonymized” data. This limitation is taught by Mathew and is obvious for the rationale provided below.
Regarding the combination of the ‘846 patent and Mathew, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the aggregation system of the ‘846 patent to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the aggregation system of the ‘846 patent, upon which the claimed invention can be seen as an “improvement” through the use of an anonymized data feature;
2) the prior art contained a “comparable” system, namely the data system of Mathew, that has been improved in the same way as the claimed invention through the anonymized data feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the anonymized data feature to the base aggregation system of the ‘846 patent, and the results would have been predictable to one of ordinary skill in the art.
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims overlap in scope as detailed below by the claim-by-claim comparison table. As illustrated below, the claim limitations of claims 1-11 of the ‘846 patent are the same or represent obvious variations of claims 1-7 and 9-13 of the instant application.
Instant Application 18/896,711
US Pat. No. 12,105,846
1. A memory storing instructions that, when executed by an apparatus, cause the apparatus to:
receive, from one or more user devices,
anonymized data
about at least one user of the one or more user devices;
analyze the anonymized data with one or more machine learning models to obtain aggregated analytic data, the aggregated analytic data useable as an addressable segment for customized messaging;
send the aggregated analytic data to the one or more third party message providers; and
receive, from at least one of the one or more third party message providers, one or more messages for display on one of the one or more user devices, the one or more messages targeted to the addressable segment.
1. A non-transitory computer readable medium (CRM) comprising instructions that, when executed by an apparatus, cause the apparatus to:
receive, from each of one or more user devices,
Mathew ¶ [0031], “In an embodiment, some of the user's private data can be anonymized or de-identified and shared with the model server 130.”
analytics about a user of each of the one or more user devices, the analytics comprising probabilities for one or more demographics associated with the user of each of the one or more user devices, the analytics predicted by each of the one or more user devices for their respective users and based at least in part on device context information;
aggregate, using one or more machine learning models selected with respect to a selected demographic analysis, the received analytics to obtain aggregated analytic data, the aggregated analytic data useable as an addressable segment for customized messaging;
send the aggregated analytic data to the one or more third party message providers; and
receive, from at least one of the one or more third party message providers, one or more messages for display on one of the one or more user devices, the one or more messages targeted to the addressable segment.
2. The memory of claim 1, wherein the instructions are to further cause the apparatus to transmit one or more machine learning models to each of the one or more user devices.
2. The CRM of claim 1, wherein the instructions are to further cause the apparatus to transmit the one or more machine learning models to each of the one or more user devices.
3. The memory of claim 1, wherein the instructions are to further cause the apparatus to transmit the one or more messages to the one or more user devices.
3. The CRM of claim 1, wherein the instructions are to further cause the apparatus to transmit the one or more messages to the one or more user devices.
4. The memory of claim 1, wherein the instructions are to further cause the apparatus to use machine learning model parameters received from the one or more user devices to tune the one or more machine learning models.
4. The CRM of claim 1, wherein the instructions are to further cause the apparatus to use machine learning model parameters received from the one or more user devices to tune the one or more machine learning models.
5. The memory of claim 1, wherein the apparatus is a central server.
5. The CRM of claim 1, wherein the apparatus is a central server.
6. A method, comprising:
collecting, by a user device, data about a user of the user device;
analyzing, by the user device, the data about the user to obtain one or more
anonymized
demographics including probabilities for one or more demographics about the user that cannot be used to identify the user and can be used as an addressable segment for customized messaging;
transmitting, by the user device, the anonymized demographics to a remote server; and
receiving, by the user device, one or more messages for display to the user, the one or more messages selected based upon the anonymized demographics.
6. A method, comprising:
collecting, by a user device, data about a user of the user device;
collecting, by the user device, contextual information about the user device;
analyzing, by the user device using a machine learning model,
Mathew ¶ [0031], “In an embodiment, some of the user's private data can be anonymized or de-identified and shared with the model server 130.”
the data about the user and the contextual information to obtain one or more analytics comprising probabilities for one or more demographics about the user that cannot be used to identify the user and can be used as an addressable segment for customized messaging, the machine learning model selected based on a target profile of the user device;
transmitting, by the user device, the analytics to a remote server;
receiving, by the user device, a plurality of messages, the plurality of messages selected based upon the analytics; and
selecting, by the user device, a message of the plurality of messages for display to the user.
7. The method of claim 6, wherein collecting data about the user of the user device comprises collecting data about the user's activities on the device, including location, app usage, and browsing history.
7. The method of claim 6, wherein collecting data about the user of the user device comprises collecting data about the user's activities on the user device, including location, device metadata, app usage, and browsing history.
9. The method of claim 8, further comprising receiving, by the user device, the machine learning model from a central server.
8. The method of claim 6, further comprising receiving, by the user device, the machine learning model from a central server.
10. The method of claim 9, further comprising receiving, by the user device, periodic updates to the machine learning model from the central server.
9. The method of claim 8, further comprising receiving, by the user device, periodic updates to the machine learning model from the central server.
11. The method of claim 8, further comprising using the one or more anonymized demographics to adjust the machine learning model.
10. The method of claim 6, further comprising using the data about the user and the contextual information to adjust the machine learning model.
12. The method of claim 8, further comprising analyzing the data using a plurality of machine learning models, each of the plurality of machine learning models configured to obtain a different type of anonymized demographic.
11. The method of claim 6, further comprising analyzing the data about the user and the contextual information using a plurality of machine learning models, each machine learning model of the plurality of machine learning models configured to obtain a different type of analytic.
13. The method of claim 6, wherein the method is implemented upon the user device using a non-transitory computer-readable medium comprising instructions that are executable by a processor of the user device.
See claim 6
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The following conventions apply to the mapping of the prior art to the claims:
Italicized text – claim language.
Parenthetical plain text – Examiner’s citation and explanation.
Citation without an explanation – an explanation has been previously provided for the respective limitation(s).
Quotation marks – language quoted from a prior art reference.
Underlining – language quoted from a claim.
Brackets – material altered from either a prior art reference or a claim, which includes the Examiner’s explanation that relates a claim limitation to the quoted material of a reference.
Braces – a limitation taught by another reference, but the limitation is presented with the mapping of the instant reference for context.
Numbered superscript – a first phrase to be moved upwards to the primary reference analysis.
Lettered superscript – a second phrase to be moved after the movement of the first phrase from which it was lifted, or more succinctly, move numbered material first, lettered material last.
A. Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Mathew et al. (US 2018/0025287, “Mathew”) in view of Wu et al. (US 2010/0138370, “Wu”).
Regarding Claim 1
Mathew discloses
A memory storing instructions that, when executed by an apparatus (Fig. 7, ¶¶ [0086]-[0089], “The computing system of FIG. 7 may be used to provide the computing device and/or the server device [apparatus].”; and “Computing system 700 further may include random access memory (RAM) or other dynamic storage device 720 (referred to as main memory), coupled to bus 705 and may store information and instructions that may be executed by processor(s) 710.”), cause the apparatus to:
receive, from one or more user devices, anonymized data about at least one user of the one or more user devices (¶ [0031], “Private data is retained on the client [user] device 110. In an embodiment, a user can agree to voluntarily share [transmit and then receive] some, or all, of the user's private data with a model server [apparatus] 130. In an embodiment, some of the user's private data can be anonymized or de-identified and shared with the model server 130.”);
analyze the anonymized data with one or more machine learning models to obtain aggregated analytic data (¶ [0049], “In operation 315, a user can interact with the client device 110 using an application 230. Data generated by the application 230 can be used as training data for the selected proxy model. Machine learning module 212 can train the selected proxy model using the user's private training data.”; and ¶ [0062], “In operation 405, model server [apparatus] 130, received data module 250 can receive anonymized, or private (paid) user data. … Classify received data module 255 can classify [analyze] the received and de-identified [anonymized] user data into one or more classifications [aggregated analytic data] and, optionally, sub-classifications. A classification can be, e.g., user data that provides a model … a prediction module that can predict media items that a user would like, et al. A sub-classification of e.g. a media prediction model would be a prediction model by media type (video v. music), genre (rock, jazz, country, R&B), by context (gym, work, relaxing, party, date), etc.”),
the aggregated analytic data…1 (¶ [0062]);
2 …; and
3 ….
Mathew does not disclose
1 … useable as an addressable segment for customized messaging;
2 send the aggregated analytic data to the one or more third party message providers;
3 receive, from at least one of the one or more third party message providers, one or more messages for display on one of the one or more user devices, the one or more messages targeted to the addressable segment.
Wu, however, discloses
1 … useable as an addressable segment for customized messaging (Fig. 3, ¶ [0026], “The rules are generated by the modelling system using modelling or machine learning techniques such as logistic regression using anonymized user click stream data in order to identify and quantify, as rules, any patterns that may be used to predict a user's behaviour and so assign them to a category, or assign a value to a category.”; and ¶ [0020], “The profile data stored on storage device 106 can be accessed [is usable] by external servers 190 to enable determination of appropriate content [addressable segment] or advertising [messaging] for a particular [customization] user associated with the profile data.”);
2 send the aggregated analytic data to the one or more third party message providers (Fig. 2, ¶ [0021], “The user profile [classifications/aggregated analytic data] can be provided to the external system [third party] 190 such as, for example, an advertising server [message provider] that uses the user profile in the targeting (i.e. selecting based on the user's preferences) of content (e.g. advertisements) to be presented to the user.”);
3 receive, from at least one of the one or more third party message providers, one or more messages for display on one of the one or more user devices, the one or more messages targeted to the addressable segment (Fig. 2, ¶ [0021], “The user profile can be provided to the external system [third party] 190 such as, for example, an advertising server [message provider] that uses the user profile [classifications/aggregated analytic data] in the targeting (i.e. selecting based on the user's preferences [and associated addressable segment]) of content (e.g. advertisements [messages]) to be presented [received] to the user [on the display of the user device].”).
Regarding the combination of Mathew and Wu, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data aggregation system of Mathew to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the data aggregation system of Mathew, upon which the claimed invention can be seen as an “improvement” through the use of a messaging feature;
2) the prior art contained a “comparable” system, namely the data aggregation of Wu, that has been improved in the same way as the claimed invention through the messaging feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the messaging feature to the base data aggregation system of Mathew, and the results would have been predictable to one of ordinary skill in the art.
Regarding Claim 2
Mathew in view of Wu (“Mathew-Wu”) discloses the memory of claim 1, and Mathew further discloses
wherein the instructions are to further cause the apparatus (Fig. 7, ¶¶ [0086]-[0089]) to transmit one or more machine learning models to each of the one or more user devices (Fig. 4, ¶¶ [0066]-[0068], “If, in operation 425, a request for updated proxy models 270 is received from a client device 110, then in operation 430, model server [apparatus] 130 can invoke update clients model 265 to transmit one or more updated proxy models [machine learning] 270 for the classification to requesting client [user] device 110.”; and ¶ [0049], “In operation 315, a user can interact with the client device 110 using an application 230. Data generated by the application 230 can be used as training data for the selected proxy model. Machine learning [models] module 212 can train the selected proxy model using the user's private training data.”).
Regarding Claim 3
Mathew-Wu discloses the memory of claim 1, and Mathew further discloses
wherein the instructions are to further cause the apparatus (Fig. 7, ¶¶ [0086]-[0089]) to…1
Wu further discloses
1 …transmit the one or more messages to the one or more user devices (¶ [0021], “The user profile can be provided to the external system 190 such as, for example, an advertising server that uses the user profile in the targeting (i.e. selecting based on the user's preferences) of content (e.g. advertisements [messages]) to be presented [transmitted] to the user [device].”).
Regarding the combination of Mathew and Wu, the rationale to combine is the same as provided for claim 1 due to the overlapping subject matter of claims 1 and 3.
Regarding Claim 4
Mathew-Wu discloses the memory of claim 1, and Mathew further discloses
wherein the instructions are to further cause the apparatus (Fig. 7, ¶¶ [0086]-[0089]) to use machine learning model parameters received from the one or more user devices to tune the one or more machine learning models (¶ [0059], “In operation 350, client device 110 can notify model server 130 that no proxy model sufficiently matches the proxy model selected in operation 310 and trained in operation 315 for the application 230. Model server 130 may automatically solicit the user of the client [user] device 110 to provide [receive] his private data [acting as machine learning model parameters] 205 for the application 230 to the model server 130 so that the model server 130 can generate and train [tune] a proxy [machine learning] model that fits this particular user, and other similar users.”).
Regarding Claim 5
Mathew-Wu discloses the memory of claim 1, and Mathew further discloses
wherein the apparatus is a central server (¶ [0030], “The local machine learning can be compared with one or more of the proxy models provided by model [central] server 130 to determine a closest matching proxy model to the actual user of the client device 110.”).
B. Claims 6-13 are rejected under 35 U.S.C. 103 as being unpatentable over Mathew in view of Wu, and further in view of Zhu et al. (US 2013/0158506, “Zhu”).
Regarding Claim 6
Mathew discloses
A method (abstract), comprising:
collecting, by a user device, data about a user of the user device (Fig. 3, ¶ [0047], “In operation 307, client [user] device 110 can collect [gather] private user data generated by a user interacting with an application 230 on the client [user] device 110.”; and ¶ [0041], “Receive module 250 can receive anonymized private data [about a user] (‘public data 275’) from a large plurality of client devices 110 (‘crowdsourced data’).”);
analyzing, by the user device, the data about the user…1 (¶ [0049], “In operation 315, a user can interact with the client [user] device 110 using an application 230. Data generated by the application 230 can be used as training data for the selected proxy model. Machine learning module 212 can train [via an analysis of the data about the user] the selected proxy model using the user's private training data.”);
transmitting, by the user device, the anonymized {demographics} to a remote server (¶ [0031], “Private data is retained on the client [user] device 110. In an embodiment, a user can agree to voluntarily share [transmit] some, or all, of the user's private data with a model [remote] server 130. In an embodiment, some of the user's private data can be anonymized or de-identified and shared with the model server 130.”); and
2 ….
Mathew does not disclose
1 …to obtain one or more anonymized demographics including probabilities for one or more demographics about the user that cannot be used to identify the user and can be used as an addressable segment for customized messaging;
2 receiving, by the user device, one or more messages for display to the user, the one or more messages selected based upon the anonymized demographics.
Wu, however, discloses
2 receiving, by the user device, one or more messages for display to the user, the one or more messages selected based upon the anonymized demographics (Fig. 2, ¶ [0021], “The user profile can be provided to the external system 190 such as, for example, an advertising server that uses the user profile in the targeting (i.e. selecting based on the user's preferences) of content (e.g. advertisements [messages]) to be presented [received] to the user [on the display of the user device].”).
a (see Zhu below) … can be used as an addressable segment for customized messaging (Fig. 3, ¶ [0026], “The rules are generated by the modelling system using modelling or machine learning techniques such as logistic regression using anonymized user click stream data in order to identify and quantify, as rules, any patterns that may be used to predict a user's behaviour and so assign them to a category, or assign a value to a category.”; and ¶ [0020], “The profile data stored on storage device 106 can be accessed [is usable] by external servers 190 to enable determination of appropriate content [addressable segment] or advertising [messaging] for a particular [customization] user associated with the profile data.”).
Regarding the combination of Mathew and Wu, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data aggregation system of Mathew to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the data aggregation system of Mathew, upon which the claimed invention can be seen as an “improvement” through the use of a messaging feature;
2) the prior art contained a “comparable” system, namely the data aggregation of Wu, that has been improved in the same way as the claimed invention through the messaging feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the messaging feature to the base data aggregation system of Mathew, and the results would have been predictable to one of ordinary skill in the art.
Zhu, however, discloses
1 …to obtain one or more {anonymized (Mathew ¶ [0031])} demographics including probabilities for one or more demographics about the user {that cannot be used to identify the user (Mathew ¶ [0031])} and…a (see Wu above) (¶ [0045], “Any form of machine learning may be used to model the user demographics of the webpage characteristics. According to various implementations, a logistic regression, linear regression, naïve Bayesian, or other approach may be used to model user demographics as they relate to webpage characteristics. In some implementations, an artificial neural network can be trained using the demographics data and the webpage characteristics. For example, the probability that a webpage characteristic corresponds to a particular demographic can be determined. In some cases, different webpage characteristics can be combined in the model to determine an overall probability of a user belonging to a demographic. For example, a word cluster related to baseball may have an associated probability of 0.55 that a reader of a word in the cluster is male. Another word cluster related to boxing may have an associated probability of 0.85 that a reader of a word in the cluster is male.”);
Regarding the combination of Mathew-Wu and Zhu, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data aggregation system of Mathew-Wu to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the data aggregation system of Mathew-Wu, upon which the claimed invention can be seen as an “improvement” through the use of a demographics feature;
2) the prior art contained a “comparable” system, namely the advertising system of Zhu, that has been improved in the same way as the claimed invention through the demographics feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the demographics feature to the base data aggregation system of Mathew-Wu, and the results would have been predictable to one of ordinary skill in the art.
Regarding Claim 7
Mathew in view of Wu, and further in view of Zhu (“Mathew-Wu-Zhu”) discloses the method of claim 6, and Mathew further discloses
wherein collecting data about the user of the user device comprises collecting data about the user's activities on the device, including location, app usage, and…1 (¶ [0038], “Machine learning module 212 can train [for data analysis] on the features in the media data selected or played. Features of a model can further be associated with a user activity [and associated app usage], such as working, exercising [tracked via an app usage], relaxing, driving [and associated GPS for location], etc. such that the machine learning module 212 learns the music that a particular user likes to listen to while exercising, ...”).
Zhu further discloses
1 … browsing history (¶ [0045], “Any form of machine learning may be used to model the user demographics of the webpage characteristics [and browsing history thereof]. According to various implementations, a logistic regression, linear regression, naïve Bayesian, or other approach may be used to model user demographics as they relate to webpage characteristics.”).
Regarding the combination of Mathew-Wu and Zhu, the rationale to combine is the same as provided for claim 6 due to the overlapping subject matter of claims 6 and 7.
Regarding Claim 8
Mathew-Wu-Zhu discloses the method of claim 6, and Mathew further discloses
wherein analyzing the data about the user…1 (¶ [0049])
Zhu further discloses
1 …to obtain one or more {anonymized (Mathew ¶ [0031])} demographics about the user (¶ [0045])} comprises analyzing, by the user device, the data using a machine learning model (¶ [0045], “Any form of machine learning may be used to model the user demographics of the webpage characteristics.”).
Regarding the combination of Mathew-Wu and Zhu, the rationale to combine is the same as provided for claim 6 due to the overlapping subject matter of claims 6 and 8.
Regarding Claim 9
Mathew-Wu-Zhu discloses the method of claim 8, and Mathew further discloses
further comprising receiving, by the user device, the machine learning model from a central server (Fig. 3, ¶ [0046], “In operation 305, client [user] device 110 can optionally receive a plurality of proxy [machine learning] models from model [central] server 130.”).
Regarding Claim 10
Mathew-Wu-Zhu discloses the method of claim 9, and Mathew further discloses
further comprising receiving, by the user device, periodic updates to the machine learning model from the central server (¶¶ [0065]-[0066], “In operation 420, generate proxy models job 260 can optionally invoke update clients module 265 [of the user device] to deploy a plurality of updated proxy [machine learning] models 270 for the classification.”).
Regarding Claim 11
Mathew-Wu-Zhu discloses the method of claim 8, and Mathew further discloses
further comprising using the one or more anonymized (¶ [0031]) {demographics (Zhu ¶ [0045])} to adjust the machine learning model (¶ [0059], “In operation 350, client device 110 can notify model server 130 that no proxy model sufficiently matches the proxy model selected in operation 310 and trained in operation 315 for the application 230. Model server 130 may automatically solicit the user of the client device 110 to provide his private data 205 for the application 230 to the model server 130 so that the model server 130 can generate and train [adjust] a proxy [machine learning] model that fits this particular user, and other similar users.”).
Regarding the combination of Mathew-Wu and Zhu, the rationale to combine is the same as provided for claim 6 due to the overlapping subject matter of claims 6 and 11.
Regarding Claim 12
Mathew-Wu-Zhu discloses the method of claim 8, and Mathew further discloses
further comprising analyzing the data (¶ [0049]) using a plurality of machine learning models, each of the plurality of machine learning models configured to obtain a different type of anonymized {demographic (Zhu ¶ [0045])} (¶ [0049], “In operation 315, a user can interact with the client device 110 using an application 230. Data generated by the application 230 can be used as training data for the selected proxy model. Machine learning module 212 can train the selected proxy model using the user's [different] private training [anonymized] data.”; and ¶ [0038], “Machine learning module 212 can apply one or more [plurality] machine learning algorithms [models] to private user data 205 collected by one or more applications 230. Machine learning algorithms can include Bayes, Naive Bayes, linear regression, and other forms of machine learning. Features can be identified within the private user data 205 by the machine learning algorithm(s), or by pre-identified features that may exist within metadata or data structures that collect the private user data 205 on the client device.”).
Regarding the combination of Mathew-Wu and Zhu, the rationale to combine is the same as provided for claim 6 due to the overlapping subject matter of claims 6 and 12.
Regarding Claim 13
Mathew-Wu-Zhu discloses the method of claim 8, and Mathew further discloses
wherein the method is implemented upon the user device using a non-transitory computer-readable medium comprising instructions that are executable by a processor of the user device (¶ [0012], “In an embodiment a non-transitory computer readable medium can store executable instructions, that when executed by a processing system [processor], can perform any of the functionality described above.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to D'ARCY WINSTON STRAUB whose telephone number is (303)297-4405. The examiner can normally be reached Monday-Friday 9:00-5:00 Mountain Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WILLIAM KORZUCH can be reached at (571)272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D'Arcy Winston Straub/Primary Examiner, Art Unit 2491