DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/25/2025 has been entered.
Response to Amendment
Claims 21-40 were previously pending and subject to final action filed on 07/30/2025. In the response filed 09/25/2025, claims 21, 28 and 35 were amended. Therefore, claims 21-40 are currently pending and subject to non-final action below.
Response to Arguments
Applicant's arguments, see pages 7-8, filed 09/26/2025, with respect to claims 21-40 under 35 U.S.C. 101 have been fully considered but they are not persuasive.
Applicant’s Argument: The Office Action rejects claims 21-40 alleging that the claims are directed to an abstract idea without significantly more. Without agreeing with the rejection and for the purpose of expediting prosecution, independent claims 21, 28, and 35 have been amended herein.
Applicant respectfully asserts that to the extent any alleged abstract idea is recited in the claims, the alleged abstract idea is integrated into a practical application because the claims improve computer capabilities. In particular, the claims provide for intercepting a query prior to the query reaching a deployed machine learning model and in response to determining that the entity is an adversary, passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response. As stated in paragraph [0023] of Applicant's specification, the claims provide a more efficient and effective way of managing machine learning model adversaries than conventional systems which require the machine learning model developer to change the machine learning model, which may be unfeasible in many machine learning models. As would be apparent to one of ordinary skill in the art, the claims provide a way to protect the deployed machine learning model from being stolen and/or retrained, as well as preventing the adversary from knowing that they have been identified as an adversary. Further benefits are described, for example, in paragraphs [0032] and [0034]. For at least these reasons, Applicant respectfully requests that the rejections be withdrawn.
Examiner Response: After careful consideration and review of Applicant's arguments, the examiner respectfully disagrees.
Step 1 Analysis: Claims 21, 28, and 35 are directed to a method, an apparatus, and a computer program product, respectively, each of which falls within at least one of the statutory categories.
Step 2A prong 1: Does the claim recite a judicial exception? Yes. Claims 21, 28, and 35 recite the similar limitation of “determining, that the entity is an adversary; and in response to determining that the entity is an adversary: and providing, the altered response to the entity,” which describes an evaluation a person can perform mentally and therefore recites an abstract idea (a mental process).
Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, individually and in combination, integrate the judicial exception into a practical application? The claim recites the additional element of “receiving a query from an entity, at a cloud service provider, the query directed to a deployed machine learning model of the cloud service provider.” This step amounts to gathering data for use in the claimed process, which is considered insignificant extra-solution activity (MPEP 2106.05(g)).
The claims also recite additional elements of “intercepting, by an adversary detector within the cloud service provider, the query prior to the query reaching the deployed machine learning model; determining, by the adversary detector, that the entity is an adversary; and in response to determining that the entity is an adversary: passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response, and providing, the altered response to the query from the shadow machine learning model to the entity instead of a response from the deployed machine learning model.” These limitations recite using a generically recited “adversary detector within the cloud service provider” to compute an altered response and to pass the query to a “shadow machine learning model,” thus falling under the “apply it” consideration (MPEP 2106.05(f)). The use of an “adversary detector” within a cloud service provider is similar to the use of an “off-the-shelf” component.
The additional elements, alone and in combination, fail to integrate the abstract idea into a practical application. Thus, the claims are directed to an abstract idea.
Step 2B: Do the additional elements, considered individually and in combination, amount to significantly more than the judicial exception? No. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “determining, by the adversary detector, that the entity is an adversary; and in response to determining that the entity is an adversary: passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response, and providing, the altered response to the query from the shadow machine learning model to the entity instead of a response from the deployed machine learning model” only amount to “applying” the abstract idea with generic computer components (MPEP 2106.05(f)). The receiving of a query from an entity at the cloud service provider and the intercepting (obtaining) by an adversary detector amount to insignificant extra-solution activity, as well as well-understood, routine, and conventional (WURC) activity similar to “receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)” (see MPEP 2106.05(d)).
These limitations, taken alone or in combination, fail to provide an inventive concept. Thus, the claim is not patent eligible.
The additional limitations of the dependent claims (claims 22-27, 29-34, and 36-40) also constitute concepts that merely “apply” the abstract idea and fall within the “mental processes” grouping of abstract ideas.
This judicial exception is not integrated into a practical application, and the additional elements amount to no more than insignificant extra-solution activity related to data gathering, data input, or data transmittal. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. As discussed above, the dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
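For context, the claimed flow characterized above (intercept the query, determine whether the entity is an adversary, and route adversaries to a shadow model) can be sketched as follows. This is a hypothetical illustration only; the class and function names (AdversaryDetector, deployed_model, shadow_model) are assumptions for illustration and are not drawn from the claims or the specification.

```python
class AdversaryDetector:
    """Illustrative sketch: intercepts queries before they reach the deployed model."""

    def __init__(self, deployed_model, shadow_model, known_adversaries):
        self.deployed_model = deployed_model
        self.shadow_model = shadow_model
        self.known_adversaries = known_adversaries  # e.g., profiles of known adversaries

    def is_adversary(self, entity):
        # One possible determination: compare the entity against known adversaries.
        return entity in self.known_adversaries

    def handle_query(self, entity, query):
        if self.is_adversary(entity):
            # Route to the shadow model; the entity receives an altered response
            # and is not informed that it has been flagged.
            return self.shadow_model(query)
        # Non-adversaries receive the deployed model's normal response.
        return self.deployed_model(query)
```

Under this sketch, an ordinary entity receives the deployed model's output, while a flagged entity transparently receives the shadow model's output instead.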
Applicant's arguments, see pages 8-10, filed 11/26/2025, with respect to claims 21-40 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
Applicant’s Argument: Without agreeing with the rejections and for the purpose of expediting prosecution, independent claims 21, 28, and 35 have been amended herein. Applicant respectfully asserts that the cited references fail to teach or suggest at least "intercepting, by an adversary detector within the cloud service, the query prior to the query reaching the deployed machine learning model" and "in response to determining that the entity is an adversary: passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response," as recited in independent claims 21, 28, and 35 as amended herein. Examiner agreed in the above-mentioned interview that the amendment appeared to overcome the current rejections. For at least this reason, Applicant respectfully requests that the rejections of claims 21, 28, and 35, and their dependent claims, be withdrawn.
Examiner Response: During the interview, the examiner informed Applicant's attorney that their arguments were understood, but that further consideration was needed regarding the amendments discussed during the interview. The examiner discussed further clarifying amendments, such as clarifying the situation/reason under which a query would be intercepted, in addition to what type of analysis is performed to determine that the entity submitting a query is an adversary. The examiner apologizes if there was any misunderstanding between Applicant's attorney and the examiner during the interview discussion.
After careful consideration and review of Applicant's arguments and the prior art, the examiner respectfully disagrees.
Mote teaches: A method, comprising: receiving a query input, from an entity, (Mote − [Col. 6 ll. 5-25] Fig. 3, Fraudulent submission detector 306 may include a submission analyzer 324. [Col. 13 ll. 10-21] Fig. 4, step 404, a fraudulent submission detector may receive a plurality of user submissions (e.g., ratings, comments, reviews, abuse flags, shares, likes, +1's or other social networking interactions, etc.). The user submissions are inputs sent to the fraudulent submission detector that includes a submission analyzer 324. Submission analyzer 324 may use various routines, algorithms and the like that identify comments that are likely to be spam. For example, the routines, algorithms and the like may utilize trained (e.g., machine-learning) models to detect different types of spam messages. Examiner notes: the machine learning model receives user submissions that can be fraudulent submissions.)
at a cloud service provider, (Mote − [Col. 12. ll. 53-67] It should be understood that other services may implement similar techniques. For example, a social networking service or a cloud video service may analyze received comments, likes, +1's, etc. to detect fraudulent submissions, and may penalize users/accounts responsible for the submissions and/or pages, accounts, etc. that are the target of the submissions. Various other services (e.g., on-line product shopping, on-line review of services rendered, on-line review of establishments, etc.) that accept user submissions (e.g., ratings, comments, etc.) may implement similar techniques. Examiner Notes: Cloud video service and social network service are types of cloud service providers.)
the query directed to a deployed machine learning model of the cloud service provider; (Mote − [Col. 5 ll. 29-35] Fraudulent submission detector 206 may receive submissions (e.g., ratings, comments, reviews, abuse flags, shares, likes, +1's or other social networking interactions, etc.) from one or more client devices (e.g., client device 202) and/or from an application store 208. [Col. 8 ll. 1-10] Submission analyzer 324 may use various routines, algorithms and the like that identify comments that are likely to be spam. For example, the routines, algorithms and the like may utilize trained (e.g., machine-learning) models to detect different types of spam messages. [Col. 12. ll. 53-67])
intercepting, by an adversary detector within the cloud service provider, the query; (Mote − [Col. 6 ll. 5-25] Fig. 3, Fraudulent submission detector 306 may include a submission analyzer 324. [Col. 13 ll. 10-21] Fig. 4, step 404, a fraudulent submission detector may receive a plurality of user submissions (e.g., ratings, comments, reviews, abuse flags, shares, likes, +1's or other social networking interactions, etc.). The user submissions are inputs sent to the fraudulent submission detector that includes a submission analyzer 324. Submission analyzer 324 may use various routines, algorithms and the like that identify comments that are likely to be spam. [Col. 12 ll. 53-67])
determining, by the adversary detector, that the entity is an adversary; (Mote − [Col. 13 ll. 10-21] Fig. 4, At step 406, the fraudulent submission detector may analyze one or more of the submissions to detect a bad or undesirable submission (e.g., a fraudulent, spam-infested or otherwise not legitimate submission) related to a target application (e.g., one of the applications hosted by the application service). At step 408, the fraudulent submission detector may generate a detection conclusion related to the target applications, as explained in detail herein. At step 410, various services, modules, routines or the like may use the detection conclusion to alter their behavior.)
Goldberg teaches: receiving a query input, from an entity, (Goldberg − [0053] Fig.4 An electronic communication transmitting system 402 is capable of exchanging electronic communication(s) 404 with an electronic communication receiver 406. For example, an electronic communication may be an e-mail message (query input). [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number.)
at a cloud service provider, (Goldberg − [0045] Referring now to FIG. 2, illustrative cloud computing environment 50 is depicted.)
intercepting, by an adversary detector within the cloud service provider, (Goldberg − Fig. 2, 5 [0073] Returning now to FIG. 5, a query is made as to whether or not the suspicion level of the initial electronic communications exceeds a predetermined level (query block 506). If so, then a communication switching device reroutes the initial electronic communication from the addressed entity to a cognitive honeypot (block 508).) the query prior to the query reaching the deployed machine learning model; (Goldberg − Fig. 2, 5 [0073] Returning now to FIG. 5, a query is made as to whether or not the suspicion level of the initial electronic communications exceeds a predetermined level (query block 506). If so, then a communication switching device reroutes the initial electronic communication from the addressed entity to a cognitive honeypot (block 508). Examiner notes: the reroute to the cognitive honeypot corresponds to the shadow machine learning model.) The deployed model corresponds to addressed entity 412.
and in response to determining that the entity is an adversary: passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response, (Goldberg − [0056-0057] away from the addressed entity 412 and towards a natural language processing (NLP) based deep question/answer honeypot 414. Fig. 5, [0073] Returning now to FIG. 5, a query is made as to whether or not the suspicion level of the initial electronic communications exceeds a predetermined level (query block 506). If so, then a communication switching device reroutes the initial electronic communication from the addressed entity to a cognitive honeypot (block 508). Reroutes to cognitive honeypot is the shadow machine learning model.)
and providing, the altered response to the query from the shadow machine learning model to the entity instead of a response from the deployed machine learning model. (Goldberg − [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number. [0076-0077] the NLP-based deep Q/A honeypot 414 will ask inappropriate (inane) questions of the sender of the initial electronic communication, such as “Who is your favorite movie star?” After several such non sequitur responses, the nefarious sender of the initial electronic communication will give up and move on to another target.)
Goldberg teaches that the query is determined to exceed a suspicion level. The query is then passed to the shadow machine learning model (the honeypot) and not to the deployed model (addressed entity 412). Therefore, the rejection is maintained for independent claims 21, 28, and 35.
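The rerouting step mapped above (Goldberg, Fig. 5, [0073]) can be sketched as follows. This is an illustrative sketch only; the threshold value and the function/variable names are assumptions for illustration, not values disclosed in Goldberg.

```python
# "Predetermined level" from Goldberg's query block 506 (value assumed here).
SUSPICION_THRESHOLD = 0.8

def route_communication(suspicion_level, addressed_entity, honeypot):
    """Sketch of blocks 506/508: reroute a suspicious communication
    away from the addressed entity toward the cognitive honeypot."""
    if suspicion_level > SUSPICION_THRESHOLD:
        return honeypot          # block 508: reroute to the cognitive honeypot
    return addressed_entity      # deliver normally to the addressed entity
```

In this sketch, communications below the predetermined level reach the addressed entity (mapped to the deployed model), while suspicious communications are transparently diverted to the honeypot (mapped to the shadow model).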
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 21, 23, 26, 28, 30, 33, 35, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Mote (US PAT: US 9479516 B2, Feb. 11, 2013) in view of Goldberg (US PGPUB: 20160119377 A1, Pub Date: Apr. 28, 2016).
Regarding independent claim 21, Mote teaches: A method, comprising: receiving a query input, from an entity, (Mote − [Col. 6 ll. 5-25] Fig. 3, Fraudulent submission detector 306 may include a submission analyzer 324. [Col. 13 ll. 10-21] Fig. 4, step 404, a fraudulent submission detector may receive a plurality of user submissions (e.g., ratings, comments, reviews, abuse flags, shares, likes, +1's or other social networking interactions, etc.). The user submissions are inputs sent to the fraudulent submission detector that includes a submission analyzer 324. Submission analyzer 324 may use various routines, algorithms and the like that identify comments that are likely to be spam. For example, the routines, algorithms and the like may utilize trained (e.g., machine-learning) models to detect different types of spam messages. Examiner notes: the machine learning model receives user submissions that can be fraudulent submissions.)
at a cloud service provider, (Mote − [Col. 12. ll. 53-67] It should be understood that other services may implement similar techniques. For example, a social networking service or a cloud video service may analyze received comments, likes, +1's, etc. to detect fraudulent submissions, and may penalize users/accounts responsible for the submissions and/or pages, accounts, etc. that are the target of the submissions. Various other services (e.g., on-line product shopping, on-line review of services rendered, on-line review of establishments, etc.) that accept user submissions (e.g., ratings, comments, etc.) may implement similar techniques. Examiner Notes: Cloud video service and social network service are types of cloud service providers.)
the query directed to a deployed machine learning model of the cloud service provider; (Mote − [Col. 5 ll. 29-35] Fraudulent submission detector 206 may receive submissions (e.g., ratings, comments, reviews, abuse flags, shares, likes, +1's or other social networking interactions, etc.) from one or more client devices (e.g., client device 202) and/or from an application store 208. [Col. 8 ll. 1-10] Submission analyzer 324 may use various routines, algorithms and the like that identify comments that are likely to be spam. For example, the routines, algorithms and the like may utilize trained (e.g., machine-learning) models to detect different types of spam messages. [Col. 12. ll. 53-67])
intercepting, by an adversary detector within the cloud service provider, the query; (Mote − [Col. 6 ll. 5-25] Fig. 3, Fraudulent submission detector 306 may include a submission analyzer 324. [Col. 13 ll. 10-21] Fig. 4, step 404, a fraudulent submission detector may receive a plurality of user submissions (e.g., ratings, comments, reviews, abuse flags, shares, likes, +1's or other social networking interactions, etc.). The user submissions are inputs sent to the fraudulent submission detector that includes a submission analyzer 324. Submission analyzer 324 may use various routines, algorithms and the like that identify comments that are likely to be spam. [Col. 12 ll. 53-67])
determining, by the adversary detector, that the entity is an adversary; (Mote − [Col. 13 ll. 10-21] Fig. 4, At step 406, the fraudulent submission detector may analyze one or more of the submissions to detect a bad or undesirable submission (e.g., a fraudulent, spam-infested or otherwise not legitimate submission) related to a target application (e.g., one of the applications hosted by the application service). At step 408, the fraudulent submission detector may generate a detection conclusion related to the target applications, as explained in detail herein. At step 410, various services, modules, routines or the like may use the detection conclusion to alter their behavior.)
Mote teaches determining that the entity is an adversary (Col. 12 ll. 5-35) but does not explicitly teach: intercepting the query prior to the query reaching the deployed machine learning model; and in response to determining that the entity is an adversary: passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response.
However, Goldberg teaches: receiving a query input, from an entity, (Goldberg − [0053] Fig.4 An electronic communication transmitting system 402 is capable of exchanging electronic communication(s) 404 with an electronic communication receiver 406. For example, an electronic communication may be an e-mail message (query input). [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number.)
at a cloud service provider, (Goldberg − [0045] Referring now to FIG. 2, illustrative cloud computing environment 50 is depicted.)
intercepting, by an adversary detector within the cloud service provider, (Goldberg − Fig. 2, 5 [0073] Returning now to FIG. 5, a query is made as to whether or not the suspicion level of the initial electronic communications exceeds a predetermined level (query block 506). If so, then a communication switching device reroutes the initial electronic communication from the addressed entity to a cognitive honeypot (block 508).) the query prior to the query reaching the deployed machine learning model; (Goldberg − Fig. 2, 5 [0073] Returning now to FIG. 5, a query is made as to whether or not the suspicion level of the initial electronic communications exceeds a predetermined level (query block 506). If so, then a communication switching device reroutes the initial electronic communication from the addressed entity to a cognitive honeypot (block 508). Examiner notes: the reroute to the cognitive honeypot corresponds to the shadow machine learning model.) The deployed model corresponds to addressed entity 412.
and in response to determining that the entity is an adversary: passing the query to a shadow machine learning model, instead of the deployed machine learning model, to generate an altered response, (Goldberg − [0056-0057] away from the addressed entity 412 and towards a natural language processing (NLP) based deep question/answer honeypot 414. Fig. 5, [0073] Returning now to FIG. 5, a query is made as to whether or not the suspicion level of the initial electronic communications exceeds a predetermined level (query block 506). If so, then a communication switching device reroutes the initial electronic communication from the addressed entity to a cognitive honeypot (block 508). Reroutes to cognitive honeypot is the shadow machine learning model.)
and providing, the altered response to the query from the shadow machine learning model to the entity instead of a response from the deployed machine learning model. (Goldberg − [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number. [0076-0077] the NLP-based deep Q/A honeypot 414 will ask inappropriate (inane) questions of the sender of the initial electronic communication, such as “Who is your favorite movie star?” After several such non sequitur responses, the nefarious sender of the initial electronic communication will give up and move on to another target.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote and Goldberg, as each invention relates to preventing attacks on service providers. Adding the teaching of Goldberg provides Mote with a honeypot algorithm for providing different types of responses to an attack on a service provider's infrastructure. One of ordinary skill in the art would have been motivated to make such a modification in order to detect nefarious types of communications, such as those attempting to steal personal information.
Regarding dependent claim 23, depends on claim 21, Mote teaches: wherein the determining comprises comparing a profile of the entity to profiles of known adversaries. (Mote − [Col. 8 ll. 25-41] Submission analyzer 324 may compare a current submission to one or more previously received submissions (e.g., submissions stored/logged in submission information log 322). Similarity between submissions may be determined by looking at various pieces of information/data associated with the submissions, for example,…IP address, account information associated with the user that entered the submission (e.g., age of account), and various other pieces of information.)
Regarding dependent claim 26, depends on claim 21, Mote does not explicitly teach: wherein the shadow machine learning model has different performance characteristics than the deployed machine learning model.
However, Goldberg teaches: wherein the shadow machine learning model has different performance characteristics than the deployed machine learning model. (Goldberg − [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number. [0076-0077] the NLP-based deep Q/A honeypot 414 will ask inappropriate (inane) questions of the sender of the initial electronic communication, such as “Who is your favorite movie star?” After several such non sequitur responses, the nefarious sender of the initial electronic communication will give up and move on to another target. Examiner notes: the honeypot provides a different response than what the entity was requesting from the service provider.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote and Goldberg, as each invention relates to preventing attacks on service providers. Adding the teaching of Goldberg provides Mote with a honeypot algorithm for providing different types of responses to an attack on a service provider's infrastructure. One of ordinary skill in the art would have been motivated to make such a modification in order to detect nefarious types of communications, such as those attempting to steal personal information.
Regarding independent claim 28: claim 28 is directed to an apparatus and has similar/the same technical features/limitations as claim 21. Claim 28 is therefore rejected under the same rationale.
Regarding dependent claim 30, depends on claim 28, Mote teaches: wherein the determining comprises comparing a profile of the entity to profiles of known adversaries. (Mote − [Col. 8 ll. 25-41] Submission analyzer 324 may compare a current submission to one or more previously received submissions (e.g., submissions stored/logged in submission information log 322). Similarity between submissions may be determined by looking at various pieces of information/data associated with the submissions, for example,…IP address, account information associated with the user that entered the submission (e.g., age of account), and various other pieces of information.)
Regarding dependent claim 33, depends on claim 28, Mote does not explicitly teach: wherein the shadow machine learning model has different performance characteristics than the deployed machine learning model.
However, Goldberg teaches: wherein the shadow machine learning model has different performance characteristics than the deployed machine learning model. (Goldberg − [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number. [0076-0077] the NLP-based deep Q/A honeypot 414 will ask inappropriate (inane) questions of the sender of the initial electronic communication, such as “Who is your favorite movie star?” After several such non sequitur responses, the nefarious sender of the initial electronic communication will give up and move on to another target. Examiner notes: the honeypot provides a different response than what the entity was requesting from the service provider.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote and Goldberg, as each invention relates to preventing attacks on service providers. Adding the teaching of Goldberg provides Mote with a honeypot algorithm for providing different types of responses to an attack on a service provider's infrastructure. One of ordinary skill in the art would have been motivated to make such a modification in order to detect nefarious types of communications, such as those attempting to steal personal information.
Regarding independent claim 35: claim 35 is directed to a computer program product and has similar/the same technical features/limitations as claim 21. Claim 35 is therefore rejected under the same rationale.
Regarding dependent claim 39, depends on claim 35, Mote does not explicitly teach: wherein the shadow machine learning model has different performance characteristics than the deployed machine learning model.
However, Goldberg teaches: wherein the shadow machine learning model has different performance characteristics than the deployed machine learning model. (Goldberg − [0076] For example, assume that the electronic communication is addressed to the addressed entity 412 shown in FIG. 4, and asks the addressed entity 412 for his/her bank account number. [0076-0077] the NLP-based deep Q/A honeypot 414 will ask inappropriate (inane) questions of the sender of the initial electronic communication, such as “Who is your favorite movie star?” After several such non sequitur responses, the nefarious sender of the initial electronic communication will give up and move on to another target. Examiner notes: the honeypot provides a different response than what the entity was requesting from the service provider.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote and Goldberg, as each invention relates to preventing attacks on service providers. Adding the teaching of Goldberg provides Mote with a honeypot algorithm for providing different types of responses to an attack on a service provider's infrastructure. One of ordinary skill in the art would have been motivated to make such a modification in order to detect nefarious types of communications, such as those attempting to steal personal information.
Claim(s) 27, 34 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Mote and Goldberg as applied to claims 21, 28 and 35 above, and further in view of MICHIELS (US PGPUB: 20200104673 A1, Filed Date: Sep. 28, 2018).
Regarding dependent claim 27, which depends on claim 21, Mote does not explicitly teach: wherein the shadow machine learning model is one of a plurality of shadow machine learning models and wherein each shadow machine learning model is used to provide responses to a different adversary.
However, MICHIELS teaches: wherein the shadow machine learning model is one of a plurality of shadow machine learning models and wherein each shadow machine learning model is used to provide responses to a different adversary. (MICHIELS − [0009] Each of a plurality of machine learning models in the ensemble are implemented differently so that they may produce a different output in response to receiving the same input. Using a piecewise function conceals from which regions of an input space the different machine learning algorithms provide a different output. This makes the machine learning ensemble more difficult to copy.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and MICHIELS, as each invention relates to preventing attacks on machine learning models. Adding the teaching of MICHIELS provides Mote with a plurality of different machine learning models for providing different responses to an adversary attack. One of ordinary skill in the art would have been motivated to make such a modification in order to make the machine learning models more difficult to copy.
Regarding dependent claim 34, which depends on claim 28, Mote does not explicitly teach: wherein the shadow machine learning model is one of a plurality of shadow machine learning models and wherein each shadow machine learning model is used to provide responses to a different adversary.
However, MICHIELS teaches: wherein the shadow machine learning model is one of a plurality of shadow machine learning models and wherein each shadow machine learning model is used to provide responses to a different adversary. (MICHIELS − [0009] Each of a plurality of machine learning models in the ensemble are implemented differently so that they may produce a different output in response to receiving the same input. Using a piecewise function conceals from which regions of an input space the different machine learning algorithms provide a different output. This makes the machine learning ensemble more difficult to copy.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and MICHIELS, as each invention relates to preventing attacks on machine learning models. Adding the teaching of MICHIELS provides Mote with a plurality of different machine learning models for providing different responses to an adversary attack. One of ordinary skill in the art would have been motivated to make such a modification in order to make the machine learning models more difficult to copy.
Regarding dependent claim 40, which depends on claim 35, Mote does not explicitly teach: wherein the shadow machine learning model is one of a plurality of shadow machine learning models and wherein each shadow machine learning model is used to provide responses to a different adversary.
However, MICHIELS teaches: wherein the shadow machine learning model is one of a plurality of shadow machine learning models and wherein each shadow machine learning model is used to provide responses to a different adversary. (MICHIELS − [0009] Each of a plurality of machine learning models in the ensemble are implemented differently so that they may produce a different output in response to receiving the same input. Using a piecewise function conceals from which regions of an input space the different machine learning algorithms provide a different output. This makes the machine learning ensemble more difficult to copy.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and MICHIELS, as each invention relates to preventing attacks on machine learning models. Adding the teaching of MICHIELS provides Mote with a plurality of different machine learning models for providing different responses to an adversary attack. One of ordinary skill in the art would have been motivated to make such a modification in order to make the machine learning models more difficult to copy.
Claim(s) 22, 29 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Mote and Goldberg as applied to claims 21, 28 and 35 above, and further in view of Peppe (US PGPUB: 20170346839 A1, Filed Date: Feb. 11, 2013).
Regarding dependent claim 22, which depends on claim 21, Mote does not explicitly teach: wherein the determining comprises determining that the entity has provided a predetermined number of inputs to the deployed machine learning model within a predetermined length of time.
However, Peppe teaches: wherein the determining comprises determining that the entity has provided a predetermined number of inputs to the deployed machine learning model within a predetermined length of time. (Peppe − [0031] For example, an attack vector that performed 100 data communication transactions with another attack vector or asset in a predetermined time period may be assigned a weight value of 1.0, while another attack vector that performed 80 data communication transactions in the predetermined time period may be assigned a weight value of 0.8.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and Peppe, as each invention relates to preventing attacks. Adding the teaching of Peppe provides Mote with a digraph that maps threats to the system, thereby providing the benefit of preventing attacks on deployed machine learning models.
Regarding dependent claim 29, which depends on claim 28, Mote does not explicitly teach: wherein the determining comprises determining that the entity has provided a predetermined number of inputs to the deployed machine learning model within a predetermined length of time.
However, Peppe teaches: wherein the determining comprises determining that the entity has provided a predetermined number of inputs to the deployed machine learning model within a predetermined length of time. (Peppe − [0031] For example, an attack vector that performed 100 data communication transactions with another attack vector or asset in a predetermined time period may be assigned a weight value of 1.0, while another attack vector that performed 80 data communication transactions in the predetermined time period may be assigned a weight value of 0.8.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and Peppe, as each invention relates to preventing attacks. Adding the teaching of Peppe provides Mote with a digraph that maps threats to the system, thereby providing the benefit of preventing attacks on deployed machine learning models.
Regarding dependent claim 36, which depends on claim 35, Mote does not explicitly teach: wherein the determining comprises determining that the entity has provided a predetermined number of inputs to the deployed machine learning model within a predetermined length of time.
However, Peppe teaches: wherein the determining comprises determining that the entity has provided a predetermined number of inputs to the deployed machine learning model within a predetermined length of time. (Peppe − [0031] For example, an attack vector that performed 100 data communication transactions with another attack vector or asset in a predetermined time period may be assigned a weight value of 1.0, while another attack vector that performed 80 data communication transactions in the predetermined time period may be assigned a weight value of 0.8.)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and Peppe, as each invention relates to preventing attacks. Adding the teaching of Peppe provides Mote with a digraph that maps threats to the system, thereby providing the benefit of preventing attacks on deployed machine learning models.
Claim(s) 24-25, 31-32, and 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Mote and Goldberg as applied to claims 21, 28 and 35 above, and further in view of Crabtree (US PGPUB: 20180183766 A1, Filed Date: Dec. 11, 2017).
Regarding dependent claim 24, which depends on claim 21, Mote teaches: that identifies an attack threshold corresponding to an input pattern that is indicative of the deployed machine learning model being attacked. (Mote − [Col. 7 ll. 10-11] submission analyzer 324 may detect trends, patterns, etc. [Col. 7 ll. 59-67] Submission analyzer 324 may use various rules and/or thresholds based on past experience with and/or expert knowledge regarding user comments.)
Mote does not explicitly teach: a resiliency score for the deployed machine learning model.
However, Crabtree teaches: wherein the determining comprises computing a resiliency score for the deployed machine learning model that identifies an attack threshold corresponding to an input pattern that is indicative of the deployed machine learning model being attacked. (Crabtree − [0098] FIG. 14 is a flow diagram of an exemplary method 1400 for continuous network resilience scoring,)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and Crabtree, as each invention relates to preventing attacks. Adding the teaching of Crabtree provides Mote with statistical data for determining a system vulnerability risk, thereby providing the benefit of preventing attacks on deployed machine learning models.
Regarding dependent claim 25, which depends on claim 24, Mote teaches: wherein the determining comprises: observing a pattern of input provided by the entity; and identifying the entity as an adversary when the pattern of input reaches the attack threshold. (Mote − [Col. 7 ll. 10-11] submission analyzer 324 may detect trends, patterns, etc. [Col. 7 ll. 59-67] Submission analyzer 324 may use various rules and/or thresholds based on past experience with and/or expert knowledge regarding user comments.)
Regarding dependent claim 31, which depends on claim 28, Mote teaches: that identifies an attack threshold corresponding to an input pattern that is indicative of the deployed machine learning model being attacked. (Mote − [Col. 7 ll. 10-11] submission analyzer 324 may detect trends, patterns, etc. [Col. 7 ll. 59-67] Submission analyzer 324 may use various rules and/or thresholds based on past experience with and/or expert knowledge regarding user comments.)
Mote does not explicitly teach: a resiliency score for the deployed machine learning model.
However, Crabtree teaches: wherein the determining comprises computing a resiliency score for the deployed machine learning model that identifies an attack threshold corresponding to an input pattern that is indicative of the deployed machine learning model being attacked. (Crabtree − [0098] FIG. 14 is a flow diagram of an exemplary method 1400 for continuous network resilience scoring,)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and Crabtree, as each invention relates to preventing attacks. Adding the teaching of Crabtree provides Mote with statistical data for determining a system vulnerability risk, thereby providing the benefit of preventing attacks on deployed machine learning models.
Regarding dependent claim 32, which depends on claim 31, Mote teaches: wherein the determining comprises: observing a pattern of input provided by the entity; and identifying the entity as an adversary when the pattern of input reaches the attack threshold. (Mote − [Col. 7 ll. 10-11] submission analyzer 324 may detect trends, patterns, etc. [Col. 7 ll. 59-67] Submission analyzer 324 may use various rules and/or thresholds based on past experience with and/or expert knowledge regarding user comments.)
Regarding dependent claim 37, which depends on claim 35, Mote teaches: that identifies an attack threshold corresponding to an input pattern that is indicative of the deployed machine learning model being attacked. (Mote − [Col. 7 ll. 10-11] submission analyzer 324 may detect trends, patterns, etc. [Col. 7 ll. 59-67] Submission analyzer 324 may use various rules and/or thresholds based on past experience with and/or expert knowledge regarding user comments.)
Mote does not explicitly teach: a resiliency score for the deployed machine learning model.
However, Crabtree teaches: wherein the determining comprises computing a resiliency score for the deployed machine learning model that identifies an attack threshold corresponding to an input pattern that is indicative of the deployed machine learning model being attacked. (Crabtree − [0098] FIG. 14 is a flow diagram of an exemplary method 1400 for continuous network resilience scoring,)
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Mote, Goldberg, and Crabtree, as each invention relates to preventing attacks. Adding the teaching of Crabtree provides Mote with statistical data for determining a system vulnerability risk, thereby providing the benefit of preventing attacks on deployed machine learning models.
Regarding dependent claim 38, which depends on claim 37, Mote teaches: wherein the determining comprises: observing a pattern of input provided by the entity; and identifying the entity as an adversary when the pattern of input reaches the attack threshold. (Mote − [Col. 7 ll. 10-11] submission analyzer 324 may detect trends, patterns, etc. [Col. 7 ll. 59-67] Submission analyzer 324 may use various rules and/or thresholds based on past experience with and/or expert knowledge regarding user comments.)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR whose telephone number is (571)270-3395. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CARL E BARNES JR/Examiner, Art Unit 2178
/STEPHEN S HONG/Supervisory Patent Examiner, Art Unit 2178