DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1, 9, 17 are currently amended.
Claims 2, 6, 10, 14, 18 and 21 are cancelled.
Claims 1, 3-9, 11-13, 15-17, 19-20, 22-27 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/06/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 3-9, 11-17, 19-26 rejected under 35 U.S.C. 103 have been considered but are moot in view of the new ground(s) of rejection.
Upon review of the applicant’s remarks received on 02/06/2026 (hereinafter “remarks”) regarding the rejection of claims 1, 3-9, 11-17, and 19-23 under 35 U.S.C. 112(a) for enablement, the examiner is persuaded that the specification narrowly meets the enablement requirement. However, the examiner is not persuaded by the applicant’s arguments that the specification provides an adequate written description for the full scope of the claimed invention, for the reasons set forth below.
Applicant argues on pg. 9-10 of the remarks:
“The application describes, in explicit structural and functional detail, the exact architecture and flow recited in the independent claims … The Summary and Detailed Description set out these precise components and interactions, including the customer environment 110 (transformation module 115, ML model 125, detection system sensor/proxy 120), the system environment 130 (processing engine 145, alert engine 150, response engine 155, dashboards), and the end-to-end methods of intercepting inputs and outputs, coupling, transmitting to the processing engine, computing an attack score, thresholding, and substituting the response at the proxy in lieu of the model's output. This disclosure aligns one-to-one with the independent claim elements and demonstrates possession of the claimed invention's genus and representative species … “The Detailed Description provides further, claim-mirroring particulars. It explains that the proxy/sensor 120 sits between the transformation module and the first model, collects vectorized inputs and the model's output, couples those data, and transmits the coupled pair to the processing engine 145; it further explains that the proxy can forward the model's output when no attack is detected or, when the system detects a malicious act, "generate a different response based on data received from the response engine 155," thereby substituting a response "in place of' the model output. This is express possession of the claimed interception, coupling, transmission, scoring, and output substitution sequence, with the proxy's control-point role illustrated and described in operational detail … The specification also describes the processing engine's function and the nature of the "attack score." It teaches that "processing the received coupled data may include applying one or more machine learning modeling techniques" to determine whether a malicious act is occurring, and that "the processing engine generates an attack score and provides that score to alert engine 150," which in turn passes the score to the response engine for response selection. The disclosure frames the score as "an indicator as to the likelihood or a predictor of whether the machine learning model ... is currently under attack or will be under attack in the near future," and then teaches using thresholds at multiple levels to drive alerts and responses. Those passages document possession of the claimed "attack score" construct, its meaning, and its role in the threshold-gated enforcement loop”
Examiner respectfully disagrees. Applicant argues that the specification provides explicit structural and functional detail that maps one to one to the claimed architecture and flow, and therefore demonstrates possession of the claimed genus and representative species. The identified components and dataflows do appear in the Summary and Detailed Description. However, the claims recite a broad computer-implemented function of processing the vectorization data and the output to generate an attack score using a second machine learning model, and the specification does not disclose any particular structure or algorithm that performs the transformation from the claimed vectorization data and output to the claimed attack score.
This creates two issues that render the claims deficient under 35 U.S.C. 112(a):
I. The inventor fails to demonstrate full possession of the genus “machine learning model.”
As discussed in MPEP § 2161.01, “[p]roblems satisfying the written description requirement for original claims often occur when claim language is generic or functional, or both. Ariad, 593 F.3d at 1349, 94 USPQ2d at 1171,” and the inquiry is “whether a person skilled in the art would understand the inventor to have invented, and been in possession of, the invention as broadly claimed.” The MPEP cites LizardTech, Inc. v. Earth Resource Mapping, Inc., 424 F.3d 1336, 1346, 76 USPQ2d 1724, 1733 (Fed. Cir. 2005), in which claims to “a generic method of making a seamless discrete wavelet transformation (DWT)” were held invalid “because the specification taught only one particular method for making a seamless DWT and there was no evidence that the specification contemplated a more generic method.”
As in LizardTech, the applicant broadly claims any type of machine learning model, whether based on statistical models (e.g., linear regression, naive Bayes, hidden Markov models), neural network models (e.g., feed-forward neural networks, convolutional neural networks, transformers), tree-based models (e.g., decision trees, random forests), clustering/unsupervised models (e.g., k-means clustering, Gaussian mixture models, autoencoders), reinforcement learning models (e.g., Q-learning, policy gradient methods), or something else entirely.
While the specification at ¶38 appears to disclose that “[p]erforming machine learning to analyze the data may include performing unsupervised learning or clustering on the received data, timeseries modeling, classification modeling, or some other machine learning based analysis and/or modeling on the coupled data,” it does not describe the many other species of machine learning models known in the art. To overcome these concerns, the applicant may elect to amend the claims to positively recite, for example, “…processing, by the processing engine executing a clustering model…”. This would alleviate the concern of claiming a broad genus that is not fully described in the specification.
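For illustration only, the following is a minimal, hypothetical sketch of the species-level algorithmic detail that such a narrowed (clustering-based) recitation could be supported by in the written description; the function names, parameter values, and threshold behavior are the examiner's own illustrative choices and do not appear in the applicant's specification.

```python
# Hypothetical illustration only; not drawn from the applicant's specification.
# A clustering-species "attack score": fit k-means on benign coupled
# (vectorization-data, first-model-output) pairs, then score new traffic by
# its distance to the nearest learned cluster centroid.
import numpy as np

def fit_kmeans(benign_pairs: np.ndarray, k: int = 8, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Learn k centroids from benign coupled data of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centroids = benign_pairs[rng.choice(len(benign_pairs), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(benign_pairs[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # nearest-centroid assignment
        for j in range(k):
            members = benign_pairs[labels == j]
            if len(members):                   # keep old centroid if a cluster is empty
                centroids[j] = members.mean(axis=0)
    return centroids

def attack_score(coupled: np.ndarray, centroids: np.ndarray, scale: float = 1.0) -> float:
    """Map distance-to-nearest-centroid into a 0-1 likelihood-style score."""
    d = np.linalg.norm(centroids - coupled, axis=1).min()
    return float(1.0 - np.exp(-d / scale))     # larger distance -> score closer to 1

# Example: a benign history of coupled vectors, then one out-of-distribution probe.
benign = np.random.default_rng(1).normal(size=(500, 12))
cents = fit_kmeans(benign)
suspect = np.concatenate([np.full(10, 9.0), [0.2, 0.8]])
print(attack_score(suspect, cents))            # near 1.0, i.e. above a pre-defined threshold
```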
II. The inventor fails to disclose, in the form of an algorithm or steps/procedure described in sufficient detail, how the claimed second machine learning model transforms the input “vectorization data” and “output” into an “attack score … indicating the likelihood of a malicious action towards the first machine learning model via the vectorization data,” such that one of ordinary skill in the art would understand how the inventor intended the function to be performed.
As discussed further in MPEP § 2161.01: “[s]imilarly, original claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV.”
The claims recite generating an attack score indicating a likelihood of a malicious action toward the first machine learning model. However, the specification does not adequately describe how the second machine learning model derives this attack score from the inputs. The disclosure largely states the result without describing the steps by which the result is achieved. For example, the specification states that “[t]he vectorization data and machine learning model output data are processed to determine whether the machine learning model is being subject to a malicious act” (Spec, ¶[0003]) and that “[t]he output of the processing may indicate an attack score, for example in the form of a prediction whether the machine learning model is subject to malicious act” (Spec, ¶[0018]). These passages merely state that the data is processed and that the result of the processing is an attack score, but they do not describe how the second machine learning model evaluates the inputs to reach the result.
In a related matter, the specification provides only minimal information regarding the inputs themselves. The disclosure explains that the “vectorization data” is generated from raw input data such as a stream of time-series data and that the vectorized data “may include an array of float numbers” (Spec, ¶¶[0016], [0020], [0033]). These passages indicate only that the input to the first machine learning model may be represented as numerical vectors but do not describe what the vectors represent, what features (if any) they contain, or what characteristics of those vectors would indicate malicious behavior directed toward the first machine learning model.
The specification also provides only high-level descriptions of the processing performed by the second machine learning model. For instance, the disclosure states that the processing “may include feeding the vectorization data and output into one or more of several machine learning models” (Spec, ¶[0017]) and that the processing engine may apply techniques such as “unsupervised learning or clustering, timeseries modeling, classification modeling, and other modeling techniques” (Spec, ¶[0029]). These statements merely list broad categories of machine learning techniques but do not explain how those techniques would use the specific inputs recited in the claims, namely the vectorization data and the first model’s output, to generate the claimed attack score indicating malicious activity toward the first machine learning model.
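By way of contrast, and purely as a hypothetical sketch, an algorithmic disclosure of the claimed scoring function would ordinarily identify the specific features extracted from the coupled data and the model that maps those features to a score. The feature definitions, weights, and names below are the examiner's illustrative inventions and are not found in the specification:

```python
# Hypothetical example only; these features and weights do not appear in the specification.
# Illustrates the kind of concrete algorithm (explicit feature extraction plus a
# learned scorer) that a disclosure of the claimed "attack score" could recite.
import numpy as np

def extract_features(vec: np.ndarray, model_output: np.ndarray, history: list) -> np.ndarray:
    """Features derived from the coupled (vectorization data, first-model output) pair."""
    prior = np.mean(history, axis=0) if history else np.zeros_like(vec)
    return np.array([
        np.linalg.norm(vec),                                          # magnitude of the query vector
        np.linalg.norm(vec - prior),                                  # drift from this requestor's history
        float(model_output.max()),                                    # peak confidence of the first model
        float(-(model_output * np.log(model_output + 1e-9)).sum()),   # entropy of the first model's output
    ])

def attack_score(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Second-model scorer: logistic map of weighted features to a 0-1 likelihood."""
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))

# Example use, with hand-set weights standing in for trained second-model parameters.
w, b = np.array([0.05, 0.9, -0.3, 0.6]), -2.0
f = extract_features(np.ones(12) * 4.0, np.array([0.55, 0.30, 0.15]), history=[np.ones(12)])
print(attack_score(f, w, b))   # high score, to be compared against the pre-defined threshold
```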
As explained in MPEP § 2161.01, functional claim language must be supported by a disclosure that describes the structure or steps for achieving the claimed function and cautions that merely reciting a result without describing how the result is accomplished does not demonstrate possession of the claimed invention. Here, the specification repeatedly states that the coupled vectorization data and model output are analyzed to generate an attack score that indicates a likelihood of a malicious action towards the first machine learning model, but it does not disclose what features are extracted from the inputs, how the inputs are evaluated together, or how the second machine learning model determines that the first machine learning model is under attack. The claims therefore effectively cover any technique that could analyze the input and output of the first machine learning model and produce a score indicating malicious activity, which is a classic example of result-oriented functional claiming without corresponding disclosure of the mechanism that performs the function.
An analogy illustrates the deficiency. The disclosure is similar to describing a system that receives a patient’s symptoms and laboratory results and then “processes the data to generate a disease risk score,” without describing what indicators in the symptoms or test results correspond to the disease or how those indicators are evaluated to generate the score. Simply stating that the data is processed to produce a risk score does not explain the diagnostic mechanism. Likewise, the present specification states that vectorization data and model output are processed to produce an attack score but does not disclose how the processing actually determines that a malicious action towards the first machine learning model is occurring.
Accordingly, while the architecture and control plane are described, that mapping does not supply the missing algorithmic disclosure for generating the attack score, nor does it provide representative species or adaptation guidance for the very broad genus of machine learning models. For these reasons, the written description rejection remains appropriate.
Applicant argues on pg. 10 of the remarks:
“Finally, the specification expressly describes the "vector traffic instance" and its mirroring pipeline as part of the same architecture. It explains that "a vector traffic instance may be implemented," that traffic mirroring collects traffic from the vector traffic instance and forwards it via mirror targets and a load balancer "through a series of traffic mirror worker applications" to the processing engine, and that the response engine then provides response data back through the mirror workers to transmit "the response to the vector traffic instance." This directly supports the claimed vector-traffic-instance workflow, demonstrating possession of that ML-specific telemetry and enforcement path. In combination, the Summary, figures, and detailed methods provide the structural relationships and functional interplay among the proxy/sensor, first model, processing engine/second model, alert engine, and response engine, as well as the meaning and operational use of the "attack score," the multi-threshold alert logic, the substituted response at the proxy, and the vector traffic instance pipeline. Under Ariad's possession standard, this narrative and architectural disclosure reasonably convey to a skilled artisan that the inventors had possession of the claimed invention at filing.”
Examiner respectfully disagrees. Applicant argues that the specification’s description of the vector traffic instance and its mirroring pipeline demonstrates possession of the claimed workflow and, when combined with the Summary and figures, conveys possession of the overall invention under Ariad. However, the cited passages address transport, telemetry collection, and routing of responses, not the core claimed function of computing an attack score from coupled inputs. The mirror targets, load balancer, and traffic mirror workers show how data reaches the processing engine and how a response is routed back, but they do not disclose the algorithm or steps that transform the vectorization data and the first model’s output into an attack score. As MPEP 2161 and 2161.01 explain, written description is distinct from enablement, and for computer-implemented functional limitations the specification must disclose the computer and the algorithm or steps that perform the claimed function in sufficient detail. See Ariad Pharm., Inc. v. Eli Lilly & Co., Finisar Corp. v. DirecTV Grp., Inc., and Vasudevan Software, Inc. v. MicroStrategy, Inc. Describing the pipeline that carries a score does not show possession of how the score is computed.
Moreover, under a broadest reasonable interpretation, the term first machine learning model is a broad genus that reads on neural networks, support vector machines, decision tree ensembles, clustering models, sequence models, and large language models among others. The specification does not provide representative species of first models or explain how the scoring approach is adapted across these different model types. The vector traffic instance pipeline is orthogonal to that missing substance. It demonstrates where telemetry flows, not how the second model operates to generate the claimed attack score across the breadth of first model types. Under Ariad and LizardTech, generic and functional claim language requires disclosure that shows possession of the genus, which typically includes species or other concrete detail commensurate with the claim scope.
Accordingly, while the vector traffic instance and mirroring pipeline support data transport and enforcement routing, they do not cure the absence of an algorithm for computing the attack score or the lack of representative species for the very broad first machine learning model genus. For these reasons, the written description rejection is maintained.
Applicant’s arguments on pages 15-19 of the remarks filed on 02/06/2026, with respect to the rejection of claims 1, 3-9, 11-17, 19-20, 22-27 under 35 U.S.C. § 101, have been fully considered but they are not persuasive.
Applicant argues on pg. 15-16 as follows:
“Each independent claim is directed to a specific, computer-implemented architecture that uses a proxy in a customer environment to interpose on an ML service's data and output path, forwards coupled inputs to a remote processing engine that executes a second ML model to compute an attack score, and then conditionally substitutes the first model's output with a different response when a threshold is met. These claim-required steps reconfigure the control plane and output behavior of a running ML inference service and integrate any data analysis into a practical application that improves the security and reliability of ML systems. The examiner's reliance on Recentive is misplaced because, unlike the claims there, each independent claim recites a threshold-gated enforcement mechanism that changes the live system's output path via a proxy, tied to specific components and dataflows disclosed in the specification”.
Examiner respectfully disagrees. As set forth in the rejection, the mental process characterization is based on the nature of the recited steps under their broadest reasonable interpretation, not on the presence or absence of an explicit “human” actor. Claim 1 recites, in substance, “monitoring…for malicious act…intercepting…vectorization data…receiving an output generated…processing…the vectorization data and the output…generate an attack score, the attack score indicating a likelihood of a malicious action… transmitting…the attack score; determining…the attack score…; applying a response…the response applied…” Each of these is expressed at a high level of generality as information intake, mental evaluation, and information output. Nothing in the claim language itself imposes any limitation on how these operations are carried out beyond the recitation of monitoring a machine learning-based system for malicious acts. Under the 2019 Revised Patent Subject Matter Eligibility Guidance, a claim recites a mental process where the steps are, under their broadest reasonable interpretation, practically performable in the human mind even if the claim nominally recites that a computer performs them.
Applicant’s reliance on Enfish (self-referential tables improving computer functionality), Ancora (improving computer security by storing a license in a specific memory location), Finjan (behavior-based malware detection with a concrete security effect), and McRO (rule-based automation that constrained how the result is achieved) does not compel a different result. Those cases require that the broadest reasonable interpretation be consistent with how the inventor describes the invention in the specification; they do not forbid recognizing that claimed steps such as monitoring, intercepting, receiving an output, processing the data and the output, generating a score, transmitting the score, determining whether the score exceeds a threshold, and applying a response are of a kind that could be carried out mentally or with pen and paper. Here, the specification describes computer-implemented embodiments of monitoring a machine learning-based system for malicious acts, but it does not redefine “monitoring,” “intercepting,” “receiving,” “processing,” “generating,” “transmitting,” or “applying” in any way that would exclude their performance as abstract information processing steps. The mere recitation that the method is implemented “by a system,” by “one or more servers,” or by “the processors” is treated under the guidance as a generic computer implementation of an otherwise abstract mental process. It does not transform those operations into something that cannot, in principle, be performed mentally, nor does it prevent the Office from recognizing them as mental processes under Step 2A Prong One. Accordingly, the Office’s interpretation is consistent with both the claim language and the specification and remains the broadest reasonable interpretation, and the Office Action is responsive to applicant’s earlier arguments because it squarely explains that, even when framed as computer-implemented, the recited steps remain mental or mathematical in character and thus fall within the “mental processes” grouping of abstract ideas.
Applicant further argues on pages 16-17:
“Step 2A, Prong Two: The claim integrates any alleged abstraction into a practical application that improves a specific computer-technology process. Even if the "attack-score" evaluation were deemed an abstract analysis, each independent claim integrates that analysis into a concrete enforcement workflow that changes the operation of a machine learning service in real-time. The proxy performs in-line interception of both inputs and the first model's output-and, critically, intercepts the output before it is transmitted to the requestor-thereby establishing a defined control point in the ML inference pipeline. Each independent claim further mandates cross-environment processing in which the coupled input and output are sent to a remote processing engine that executes a second model to compute an attack score, which is returned to the proxy. When the score exceeds a threshold, the proxy applies a response in place of and different from the first model's output, reconfiguring the system's output behavior in a threshold-gated manner. This control-plane enforcement is a practical application that mitigates adversarial interactions targeting the first model and improves the resilience and security of the ML inference service. The specification's architecture and methods confirm that these are not generic add-ons but core operational flows: sensor/proxy collects and couples the vectorization and output, the processing engine computes the attack score, the alert engine thresholds the score, and the response engine instructs the proxy to substitute the output accordingly. Such threshold-gated substitution improves the functioning of the ML service by hardening the output path against adversarial inputs, which aligns with decisions recognizing that claims improving computer security or network operation are directed to patent-eligible improvements to computer technology, including Ancora and SRI”.
Examiner respectfully disagrees. The specification may describe deficiencies in prior approaches and may characterize the disclosed detection-and-response architecture as improving computer security or network operation, but eligibility must be assessed based on what the claims themselves recite. Claim 1, as drafted, does not recite any particular network protocol, memory structure, hardware configuration, or other concrete computer implementation that improves the functioning of a computer or another technology. Instead, the claim generically recites “monitoring…for malicious act…intercepting…vectorization data…receiving an output generated…processing…the vectorization data and the output…generate an attack score, the attack score indicating a likelihood of a malicious action… transmitting…the attack score; determining…the attack score…; applying a response…the response applied…” to detect malicious behavior. These are high-level functional descriptions of what the mental-process manipulation of the data should achieve, and they do not by themselves amount to a specific technological solution of the type found eligible in BASCOM or in Example 35.
Applicant argues on pages 17-19:
“Step 2B: The claim recites an inventive concept in the ordered combination. The examiner asserts that, under Recentive, applying generic ML is insufficient. But Recentive focused on claims that do no more than apply machine learning to data without showing a concrete technological improvement or non-conventional application. Here, the claimed ordered combination is materially different. The independent claims couple the vectorization data and the first model's output and require using that coupled pair in a second model to generate an attack score, and then use that score to gate a proxy-enforced substitution of the first model's output. This is a particular integration of detection and enforcement at the application boundary of an ML service, not merely analyzing data. The proxy's location and role are claim-specific: it executes in the customer environment, intercepts the first model's output before it reaches the requester, and conditionally applies a different response in place of the first model's output based on the returned score-a non-conventional use of a proxy in the ML inference path to alter output behavior at run time. The specification teaches the multi-engine architecture (processing, alert, response) and explicit thresholding flows that inform these proxy-level substitutions, reinforcing that this is not routine or result-oriented recitation but a concrete enforcement sequence. Taken together, these elements go beyond the application of ML to data. They define a specific technical arrangement that detects adversarial use of an ML model and then reconfigures the system's output path through a proxy when a threshold is met. That ordered combination supplies the requisite inventive concept under cases like BASCOM, which held that a non-conventional and non-generic arrangement of known components can provide an inventive concept, and DDR Holdings, which upheld claims that solved a problem specifically arising in computer networks with a particular technical solution. The examiner's contrary reading of Recentive omits these proxy-enforced, threshold-gated output-substitution steps and treats the claim as data analysis only, which it is not. Recentive rejected claims that merely applied generic ML to a data environment without reciting how the technology was improved or how the system's operation was changed. The examiner imports that rationale here, but each independent claim is materially different. The claims recite concrete control-plane enforcement: the proxy intercepts and then substitutes a different response in place of and different from the first model's output when the threshold condition is met, changing the ML service's output path in operation. They also recite a particularized system architecture in which a customer-environment proxy communicates with a remote processing engine according to defined dataflows and division of labor, as reinforced by the specification's processing, alert, and response engines and thresholding flows. And they apply ML-based detection to actively defend the ML service itself by gating and substituting outputs, which is a technical security improvement for ML inference systems rather than a mere improvement to a business or information result. Thus, even accepting the examiner's framing of the attack-score evaluation as an abstract analysis, each independent claim integrates that analysis into a practical application and, in the alternative, recites an inventive concept in its ordered combination, distinguishing Recentive and satisfying § 101”.
Examiner respectfully disagrees. In BASCOM, the claims recited a particular non-conventional filtering architecture installed at a remote server in a specific way that yielded a concrete improvement in network-level content filtering. Here, by contrast, the architecture is limited to an abstract “…a proxy application…a first machine learning model executing on a server in a customer computing environment…a processing engine executing in a system computing environment…a second machine learning model…” with no claimed details regarding how those entities are implemented in hardware or software, how they are arranged in a network, or how they change any underlying computer behavior beyond executing the claimed score calculation. Likewise, while the summary mentions collecting data, transmitting the score, and applying a response, claim 1 expresses these as result-oriented outcomes and does not require any particular algorithmic or structural implementation beyond generic processing of monitoring data. Under the 2019 Revised Patent Subject Matter Eligibility Guidance and cases such as Alice and Electric Power Group, simply improving the efficiency or privacy of an abstract data manipulation scheme does not by itself integrate the abstract idea into a practical application when the claim does not recite a specific, non-generic way in which the computer is configured or operates differently. The additional detail in the dependent claims, such as “collecting the vectorization data,” “collecting output,” “coupling the vectorization data and output,” and “transmitting the coupled vectorization data and output,” further defines the mental-process (or pen-and-paper) content of the scheme but remains part of the abstract idea itself and does not add a technological implementation akin to the server-side filtering architecture in BASCOM. Accordingly, while the disclosure may describe an intended improvement at a high level, claim 1 and the other independent claims remain directed to mental processes implemented on generic computer components, and the cited features do not provide the type of specific, discrete technological implementation that would amount to significantly more than the abstract idea under 35 U.S.C. 101.
To summarize, there is no actual improvement to the machine learning model disclosed in the claim. In Recentive, claims that “do no more than claim the application of generic machine learning to a new data environment, without disclosing improvements to the learning model” were held ineligible. Here, the specification does nothing more than apply a generic machine learning model to monitor for malicious acts by a user. There is no disclosure of a novel network architecture, no unusual training regimen, no non-routine feature extraction, and no explanation of why the claimed use of the machine learning model is anything more than the abstract idea of analyzing data.
A proxy application acts as an intermediary for applying a response. Because none of these recitations describes a fundamental improvement to computer technology itself (e.g., a new network protocol or machine learning algorithm), the claim is “directed to” an abstract idea.
Accordingly, the examiner maintains the rejection under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 101
Abstract Idea:
Monitoring…system for malicious act…intercepting…vectorization data provided by a requestor as part of a request…receiving an output generated…in response to receiving vectorization data…processing…the vectorization data and the output…to generate an attack score, the attack score indicating a likelihood of a malicious action…applying a response to a request associated with the requestor, the response based at least in part on the attack score, the response applied in place of the output
Additional elements that are not abstract:
…a proxy application…
…a first machine learning model executing on a server in a customer computing environment…
…a processing engine executing in a system computing environment…
…a second machine learning model…
Claims 1, 3-9, 11-17, 19-20, 22-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims, when analyzed under the 2019 Revised Patent Subject Matter Eligibility Guidance, are directed to an abstract idea.
Claim 1, for example, recites a method and, therefore, falls within the statutory category of a process.
The claim recites the limitation of “Monitoring…system for malicious act… intercepting…vectorization data provided by a requestor as part of a request…receiving an output generated…in response to receiving vectorization data…processing…the vectorization data and the output…to generate an attack score, the attack score indicating a likelihood of a malicious action… transmitting…the attack score; determining…the attack score is above a pre-defined threshold; applying a response to a request associated with the requestor, the response based at least in part on the attack score, the response applied in place of the output”.
These limitations, under their broadest reasonable interpretation, fall under the “using a computer as a tool to perform a mental process” grouping.
Additional elements that are not abstract:
“…a proxy application…a first machine learning model executing on a server in a customer computing environment…a processing engine executing in a system computing environment…a second machine learning model…”
Thus, the claim recites a mental process when analyzed under step 2A prong 1.
Claim 1 is further analyzed under step 2A prong 2 to evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by identifying whether there are any additional elements recited in the claim beyond the judicial exception, and evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. However, each of the remaining limitations, “…a proxy application…a first machine learning model executing on a server in a customer computing environment…a processing engine executing in a system computing environment…a second machine learning model…”, amounts to a generic computer function that does not constitute a meaningful limitation amounting to significantly more than the abstract idea. The combination of these additional elements is no more than generic computer functions. Thus, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea.
Claim 1 is additionally analyzed under Step 2B to evaluate whether the claim as a whole amounts to significantly more than the recited exception, that is, whether any additional element, or combination of additional elements, adds an inventive concept to the claim. When evaluated under Step 2B, the claim amounts to no more than what is well-understood, routine, and conventional activity in the field. The specification does not provide any indication that anything other than generic computer components is used. The mere “monitoring…system for malicious act…intercepting…vectorization data provided by a requestor as part of a request…receiving an output generated…in response to receiving vectorization data…processing…the vectorization data and the output…to generate an attack score, the attack score indicating a likelihood of a malicious action… transmitting…the attack score; determining…the attack score is above a pre-defined threshold; applying a response to a request associated with the requestor, the response based at least in part on the attack score, the response applied in place of the output” is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here.
The claimed invention is directed to the abstract idea of generating an output based on vectorization data and generating an attack score indicating a likelihood of a malicious action using the vectorization data and the output, using generic machine learning techniques. As established in Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025), applying established machine learning methods to a new data environment, without disclosing specific improvements to the machine learning models themselves, does not render the claims patent-eligible under 35 U.S.C. § 101. The court held that “patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Furthermore, the court emphasized that features such as iterative training and dynamic adjustments are inherent to the nature of machine learning and do not constitute an inventive concept. Therefore, the claims fail both prongs of the Alice/Mayo test: they are directed to an abstract idea and lack an inventive concept that transforms the abstract idea into a patent-eligible application.
While the claimed method identifies a useful purpose―detecting malicious behavior targeting a machine learning model―the claim achieves this by applying known ML techniques in a conventional manner. There is no technical detail regarding the architecture or operation of the ML models, nor is there an improvement to the functioning of the machine learning models or computer systems. The claim, therefore, is directed to an abstract idea of analyzing data and lacks an inventive concept that transforms it into a patent-eligible application.
Independent claims 9 and 17 include limitations similar to the limitations of claim 1 and are rejected under 35 U.S.C. 101 as being directed to an abstract idea for the same reasons discussed above with respect to claim 1.
Independent claim 27 recites additional abstract ideas:
Step 1 Statutory category:
Claim 27 is directed to a “method” and therefore recites a process. Thus, the claim falls within one of the four statutory categories of invention.
Step 2A Prong I: Judicial exception:
Under the 2019 Revised Patent Subject Matter Eligibility Guidance, each independent claim is evaluated to determine whether it recites a judicial exception, such as a mental process or a method of organizing human activity, which have been recognized as abstract ideas.
For this analysis, generic references to “the vector traffic instance”, “a traffic mirror target”, “a network load balancer”, “processing engine”, and “a series of traffic mirror worker applications” are disregarded, and the focus is on the remaining substantive language.
For claim 27, once the generic computer implementation language is removed, the method recites that it:
“collecting…traffic…from…”;
“providing…the collected traffic…to…”
“providing…the collected mirror traffic…to…”;
“forwarding…the collected traffic to...”;
“transmitting…the response to…”.
Accordingly, under Step 2A Prong I of the 2019 Guidance, claim 27 recites an abstract idea in the form of mental processes, even when generic references to electronic or computer implementation are disregarded.
Step 2A Prong II Integration into a practical application:
Under Step 2A Prong II, the claims are evaluated to determine whether any additional elements, viewed individually and in combination, integrate the identified abstract idea into a practical application.
Claim 27 is further analyzed under Step 2A Prong II to evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by identifying whether there are any additional elements recited in the claim beyond the judicial exception, and evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. However, each of the remaining limitations, “the vector traffic instance”, “a traffic mirror target”, “a network load balancer”, “processing engine”, and “a series of traffic mirror worker applications”, amounts to a generic computer function that does not constitute a meaningful limitation amounting to significantly more than the abstract idea.
The combination of these additional elements is no more than generic computer functions. Thus, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea.
Independent claim 27 therefore does not integrate the abstract idea into a practical application under Step 2A Prong II.
Step 2B Inventive concept:
Claim 27 is additionally analyzed under Step 2B to evaluate whether the claim as a whole amounts to significantly more than the recited exception, that is, whether any additional element, or combination of additional elements, adds an inventive concept to the claim. When evaluated under Step 2B, the claim amounts to no more than what is well-understood, routine, and conventional activity in the field. The specification does not provide any indication that anything other than generic computer components is used. The mere “collecting…traffic…from…”; “providing…the collected traffic…to…”; “providing…the collected mirror traffic…to…”; “forwarding…the collected traffic to...”; “transmitting…the response to…” is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here.
Regarding dependent claims 2-8, 11-16, 19-23, and 24-27, these claims fail to cure the deficiencies of their parent claim(s) and therefore inherit the rejection.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 1 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
I. Failure to disclose the algorithm for the claimed scoring function:
Claim 1 recites a computer implemented function that “processes” the coupled vectorization data and the output of a first machine learning model to “generate an attack score.” The claim is not limited to any particular structure or algorithm for performing this function and is not presented under 35 U.S.C. 112(f). For computer implemented functional limitations, the specification must disclose the computer and the algorithm or steps that perform the claimed function in sufficient detail to show possession. See MPEP 2161 and 2161.01. Finisar Corp. v. DirecTV Grp., Inc., 523 F.3d 1323, 1340, 86 USPQ2d 1609, 1623, confirms that an algorithm may be expressed in prose, a flow chart, or a formula, but it must be disclosed. Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681 to 683, 114 USPQ2d 1349, 1356 to 1357, explains that restating the function is not sufficient.
The cited disclosure describes that a processing engine produces an “attack score,” and it describes thresholding and alert tiers that consume that score. It also names high level technique families such as clustering, time series modeling, and classification. It does not disclose the algorithmic steps that transform the specific coupled inputs into the attack score. The specification does not set out feature definitions or extraction procedures from the vectorization data and first model outputs. It does not identify model architectures for the second model, training or calibration procedures, label construction, or validation methods. Merely stating that machine learning may be applied and that a score is produced defines a result and the downstream use of that result. LizardTech, Inc. v. Earth Res. Mapping, Inc., 424 F.3d 1336, 1346, 76 USPQ2d 1724, 1733, teaches that disclosure of a desired outcome does not establish possession of any and all means for achieving it. Absent an algorithm or concrete steps, the record does not show possession of the claimed scoring function. See also MPEP 2163.02 and 2181.
II. Failure to provide written description for the broad genus of “first machine learning model”
Under a broadest reasonable interpretation, the genus term “first machine learning model” in claim 1 is very broad. It reads on diverse model classes and modalities such as neural networks, support vector machines, decision trees and ensembles, clustering models, sequence models, and large language models. Different first model types produce different kinds of outputs and expose different behaviors and attack surfaces. For example, logits or token probabilities for neural networks, support vectors and margins for SVMs, tree vote distributions for ensembles, or embeddings and generative outputs for large language models. A scoring approach that uses the coupled vectorization data and model output would require adaptations that are specific to each such class, including different feature constructions, normalizations, context handling, and detection strategies.
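As a purely illustrative sketch of that point, the feature construction fed to any second-model scorer would differ by first-model class; the branches, assumed output formats, and feature choices below are hypothetical and are not drawn from the specification:

```python
# Hypothetical illustration of class-specific feature construction; none of these
# procedures appear in the specification. Each first-model type exposes different
# outputs, so the coupled-data features supplied to the second model differ by class.
import numpy as np

def couple_features(model_type: str, vec: np.ndarray, output) -> np.ndarray:
    if model_type == "neural_classifier":
        # Output assumed to be logits; use softmax confidence and entropy.
        p = np.exp(output - output.max()); p /= p.sum()
        feats = [float(p.max()), float(-(p * np.log(p + 1e-9)).sum())]
    elif model_type == "svm":
        # Output assumed to be a signed margin; a small |margin| suggests boundary probing.
        feats = [abs(float(output)), float(output < 0)]
    elif model_type == "tree_ensemble":
        # Output assumed to be per-tree votes; disagreement across trees suggests probing.
        votes = np.asarray(output, dtype=float)
        feats = [float(votes.mean()), float(votes.std())]
    elif model_type == "llm":
        # Output assumed to be an embedding of the generated text.
        emb = np.asarray(output, dtype=float)
        feats = [float(np.linalg.norm(emb)), float(emb.mean())]
    else:
        raise ValueError(f"no adaptation defined for {model_type!r}")
    return np.concatenate([[float(np.linalg.norm(vec))], feats])

print(couple_features("svm", np.ones(8), -0.07))                                  # near-boundary probe
print(couple_features("neural_classifier", np.ones(8), np.array([2.0, 0.1, -1.0])))
```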
The specification does not identify representative species of first machine learning models, nor does it provide adaptation procedures that show how the claimed scoring function operates across these different model types. Ariad holds that the written description requirement is distinct from enablement and requires a showing of possession commensurate with the claim scope. LizardTech further explains that disclosure of one embodiment does not entitle an applicant to claim all ways of achieving the objective. Here, architectural descriptions of a proxy, processing engine, alert engine, response engine, and a vector traffic instance pipeline show transport and control flow, but they do not supply representative species or concrete adaptations for the broad genus “first machine learning model.” The absence of representative species or other detail that ties the scoring approach to the breadth of first model types means the specification does not demonstrate possession of the genus as claimed.
For these reasons, claim 1 is rejected under 35 U.S.C. 112(a) for lack of written description because the specification fails to disclose an algorithm for the scoring function and fails to provide representative support for the broad genus of “first machine learning model.” The rejection will be reconsidered upon identification of specific passages that disclose algorithmic steps for computing the attack score from the coupled inputs and passages that provide representative species or adaptations that are commensurate with the full claim scope.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-9, 11-17, 19-26 are rejected under 35 U.S.C. 103 as being unpatentable over KRAUS et al. (U.S. PGPub. No. 2020/0285737 A1) (hereinafter “Kraus”) in view of Wang et al. (U.S. PGPub. No. 2024/0007469 A1) (hereinafter “Wang”), GODFREY et al. (U.S. PGPub. No. 2020/0279192 A1) (hereinafter “Godfrey”), and SANKARANARAYANAN et al. (U.S. PGPub. No. 2023/0281281 A1) (hereinafter “Sankaranarayanan”).
Regarding Claim 1, Kraus teaches:
Intercepting, by a proxy application (Kraus: [0077] “Service” means a consumable program offering in a cloud computing environment (=examiner interpreting that the proxy application is a part of cloud computing, which provides a cloud-based proxy acting as an intermediary between a client and the internet) or other network or computing system environment. [0254], the coarse detector 420 could trigger 1216 the sequence anomalies detector 408 by passing anchor events 520 to the sequence anomalies detector 408, e.g., an event 520 identifying the packet whose source IP address is on a list of suspect or low-reputation IP addresses, or an event 520 reciting the number of packets that reached the firewall or a proxy), vectorization data (Kraus: [Abstract], Event sequences extracted from logs or other event lists are vectorized and embedded in a vector space. [0007], vectorizing it by embedding it in a vector space as a candidate vector), provided by a requestor as a part of a request (Kraus: [0007], perform operations that include acquiring a candidate event sequence to be tested for anomalousness, vectorizing it by embedding it in a vector space as a candidate vector) and intended for a first machine learning model executing on a server in a customer computing environment (Kraus: [0077] “Service” means a consumable program offering in a cloud computing environment (=system computing environment) or other network or computing system environment),
Kraus does not explicitly teach:
the proxy application being executed in the customer computing environment, the vectorization data being ingested by the first machine learning model to generate an output;
However, in an analogous art, Wang teaches:
the proxy application being executed in the customer computing environment (Wang: [0053] Referring to the embodiment of FIG. 4, a proxy 420 operates between user device 411 and service provider 412. Proxy 420 communicates with user device 411 and service provider 412 through communication connections. In some other embodiments, proxy 420 runs on a device as a program module (=proxy application), such as, on user device 411 (=customer computing environment), or another device communicable with user device 411… In some embodiments, some components or functions of proxy 420 may be distributed among and between multiple computers connected in data communication), the vectorization data being ingested by the first machine learning model to generate an output (Wang: [0064], proxy 420 transforms data record 511 to obtain a transformed record, and then applies the transformed record (=vectorization data) as an input (=ingested) to first model 421. [0058] In some embodiment, first model 421 may be a machine learning model created by proxy 420 using one or more machine learning algorithms).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Kraus’s method of vectorizing data and generating an output from the machine learning model by applying Wang’s method of providing a proxy application that runs on a user device as a program module, in order to communicate through communication connections.
Kraus in view of Wang does not explicitly disclose the below claim limitations; however, in an analogous art, Godfrey teaches:
receiving, by the proxy application, an output generated by the first machine learning model in response to receiving the vectorization data (Godfrey: [0015] Machine learning models may be deployed in a manner in which a first machine learning model provides an output that is then subsequently passed to a second machine learning model and is used by the second machine learning model as an input for performing a machine learning operation),
the proxy application intercepting the output from the first machine learning model before the output is transmitted to the requestor (Godfrey: [0031] The source device model 220 can provide an output of data corresponding to an assessment (e.g., prediction) that is then sent to a downstream model (e.g., the destination device model 260) that uses the assessment as input….the destination device model 260 can utilize server-side signals 270 (or utilize a rule-based mechanism) in conjunction with the assessment (=output) received (=intercepting) from the source device model 220 in order to make a decision or initiate an action to be performed by the server 120).
transmitting, by the proxy application, the vectorization data and the output to a processing engine executing in a system computing environment (Godfrey: [0026], The electronic device 110, for example, may communicate with the server 120 to provide an output from its deployed machine learning model, which is then provided as input to the machine learning model deployed on the server 120),
the system computing environment being separate and distinct from the customer computing environment (Godfrey: [0027], FIG. 2 illustrates an example computing architecture for a system providing semantics preservation of machine learning models, in accordance with one or more implementations. For explanatory purposes, the computing architecture is described as being provided by the electronic device 110 (=Customer computing system), and the server 120 (=computing environment system) of FIG. 1 and 2, such as by a processor and/or memory of the electronic device 110 and/or the server 120; however, the computing architecture may be implemented by any other electronic devices. Not all of the depicted components may be used in all implementations, however, and one or more implementations may include additional or different components than those shown in the figure. Variations in the arrangement and type of the components may be made without departing from the spirit or scope of the claims as set forth herein. Additional components, different components, or fewer components may be provided)
Processing, by the processing engine executing a second machine learning model, the vectorization data and the output to cause the second machine learning model to generate an attack score (Godfrey: [0015] Machine learning models may be deployed in a manner in which a first machine learning model provides an output that is then subsequently passed to a second machine learning model and is used by the second machine learning model as an input for performing a machine learning operation)
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang by applying the well-known technique, as disclosed by Godfrey, of receiving output from the first machine learning model deployed on a client electronic device and transmitting it to the second machine learning model as input. The motivation is to determine whether incoming data has a distribution that significantly deviates from the distribution of the training data in order to determine whether retraining the model would be beneficial (Godfrey: [0016]).
Kraus in view of Wang and Godfrey does not explicitly disclose the below claim limitations; however, in an analogous art, Sankaranarayanan teaches:
the attack score indicating a likelihood of a malicious action towards the first machine learning model via the vectorization data (Sankaranarayanan: [0050] At 310, based on the risk score (=attack score) (i.e., the amount of similarity of the responses of the ML model 125 and the shadow model), it is determined whether the requests from a particular user may likely form a reverse engineering attack based on a threshold. [0073] In one embodiment, the risk score is determined as follows: Reverse engineering risk score=(((User profile activity)+(Feature importance activity)+(Feature correlation activity)+(Data type activity)+(Algorithm identification activity))/50)*100)
transmitting, by the processing engine to the proxy application, the attack score (Sankaranarayanan: [0075] In embodiments, the risk score is logged periodically from the pipeline 306 to monitor for the customer tenancy per model. The logging can be initiated based on combining outputs of the shadow model and clustering and applying the above risk score formula to determine the overall risk score)
determining, by the processing engine, that the attack score is above a pre-defined threshold (Sankaranarayanan: [0072] Embodiments determine reverse engineering risk score based on the above scoring, with the risk score ranging from 0-100%, with zero means no/less risk and 100% is high risk.)
and applying, by the proxy application, a response to the request by the requestor, the response based at least in part on the attack score, the response applied in place of and different from the output of the first machine learning model (Sankaranarayanan: [0054] At 316, it is determined if a real alert is present and a threshold is breached, an attack on ML model 125 is indicated….If an attack is determined, at 320, prediction responses (=applying response to the request) from further requests from the user are blocked and that attack is reported. [0055] Model guard 314 acts on the alert by preventing high-risk clients from reversing the model. This can be done by a combination of one or more of the following protective measures (=responses): [0056] Removing all class probabilities for classification problems; [0057] Only returning the predicted class label; [0058] Adding noise to the prediction probabilities; [0059] Throttling the requests; [0060] Blocking the requests; or [0061] Replacing the model).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang and Godfrey by applying the well-known technique, as disclosed by Sankaranarayanan, of preventing high-risk clients from reversing the model by applying one or more protective measures. The motivation is to determine whether the first user is attempting the reverse engineering attack on the ML model (Sankaranarayanan: [Abstract]).
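As a further illustrative aside, the end-to-end flow mapped onto claim 1 above (the proxy intercepts the first model's output, couples it with the vectorization data, transmits the coupled pair to a processing engine, receives an attack score, and substitutes a response when a pre-defined threshold is exceeded) is sketched below. All names (VectorProxy, ProcessingEngine, ATTACK_THRESHOLD) and the placeholder scoring logic are hypothetical and appear in neither the claims nor the cited references.

from dataclasses import dataclass
from typing import Any, Callable

ATTACK_THRESHOLD = 0.8  # pre-defined threshold (placeholder value)

@dataclass
class ProcessingEngine:
    # Stands in for the second machine learning model that scores coupled pairs.
    score_pair: Callable[[list, Any], float]

    def attack_score(self, vectorization_data, output):
        return self.score_pair(vectorization_data, output)

@dataclass
class VectorProxy:
    # Sits between the transformation module and the first machine learning model.
    first_model: Callable[[list], Any]
    engine: ProcessingEngine

    def handle_request(self, vectorization_data):
        output = self.first_model(vectorization_data)       # intercept the output
        coupled = (vectorization_data, output)               # couple input and output
        score = self.engine.attack_score(*coupled)           # transmit the coupled pair and score it
        if score > ATTACK_THRESHOLD:                         # threshold gate
            return {"response": "request blocked", "score": score}  # substituted response
        return output                                        # otherwise forward the model output

proxy = VectorProxy(
    first_model=lambda v: sum(v),
    engine=ProcessingEngine(score_pair=lambda v, o: 0.9 if len(v) > 3 else 0.1),
)
print(proxy.handle_request([1.0, 2.0]))            # benign: the model output is forwarded
print(proxy.handle_request([1.0, 2.0, 3.0, 4.0]))  # flagged: a substituted response is returned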
Regarding Claim 3, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
collecting the vectorization data (Kraus: [Abstract], Event sequences extracted from logs or other event lists are vectorized and embedded in a vector space), by the proxy application (Kraus: [0077] “Service” means a consumable program offering in a cloud computing environment (=the examiner interprets the proxy application as part of a cloud computing environment, which provides a cloud-based proxy acting as an intermediary between a client and the internet) or other network or computing system environment. [0254], the coarse detector 420 could trigger 1216 the sequence anomalies detector 408 by passing anchor events 520 to the sequence anomalies detector 408, e.g., an event 520 identifying the packet whose source IP address is on a list of suspect or low-reputation IP addresses, or an event 520 reciting the number of packets that reached the firewall or a proxy)
Regarding Claim 4, Kraus in view of Wang and Godfrey teaches:
The method of claim 1 (see rejection of claim 1 above),
wherein the proxy application is created in a computing environment that proxies the first machine learning model (Kraus: [0077] “Service” means a consumable program offering in a cloud computing environment (=the examiner interprets the proxy application as part of a cloud computing environment, which provides a cloud-based proxy acting as an intermediary between a client and the internet) or other network or computing system environment. [0242] It is expected that in many environments of interest, storage items 210 will be located in a cloud 424 as cloud-based storage items which are allocated by a cloud service 426 using infrastructure 428. In a given situation, the infrastructure 428 may provide redundancy through parity, replication, or other mechanisms, and may provide access efficiencies through load balancing or decisions about which devices actually hold the stored data, for example. [0254]…alert when the number of packets reaching a firewall or a proxy is more than one standard deviation away from a moving average)
Regarding Claim 5, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
collecting the output generated by the first machine learning model by the proxy application (Kraus: [0231], Components of an embodiment may be grouped into interacting functional modules based on their inputs, outputs, and/or their technical effects, for example. [0296], In this context, a sequence-anomaly algorithm for cloud storage as taught herein may complement the basic, univariate, anomaly detection. A sequence anomaly detection algorithm may be used in a layered defense as an additional standalone detector that triggers alerts upon identifying anomalous event sequences, or may be combined with other detectors. In the latter case, one may use the algorithm for pinpointing anomalous event sequences when an alert (of another detector) is raised. This capability assists security experts in alert investigation. The implementation was combined with a data exfiltration detector 122 which detects abnormal reads of large data volumes, to demonstrate that the implementation assisted in investigating data exfiltration alerts);
coupling the vectorization data and output by the proxy application (Kraus: [0020] FIG. 10 is a data flow diagram illustrating some aspects of machine learning model training, testing, tuning, and usage for cybersecurity, showing dataflow in an example architecture that includes both creating a trained model and utilizing the trained model for risk management. [0187] 1006 vectors or underlying event sequences used in tuning a machine learning model; [0304] This embodiment constructs an account's model by feeding 1224 its event sequence documents into the doc2vec algorithm 1226. Doc2vec embeds 1106 the documents into a lower dimensional vector space 810 and learns 1228 a similarity metric 432 between the sequences which considers events' context. The final model 402 contains or consists of sequence vectors 814 and supports an efficient and fast similarity search. This model construction dataflow may be represented as: [account's events]->[one document per event sequence]->{doc2vec embedding}->[account's model]);
and transmitting the coupled vectorization data and output to the processing engine by the proxy application (Kraus: [0190] 1012 tuning a machine learning model, that is, performing operations to improve one or more performance characteristics of the model, such as memory usage efficiency, execution speed, fitting accuracy, perceived clarity of relationships between candidate event sequence and anomaly score, and so on. [0304] This embodiment constructs an account's model by feeding 1224 its event sequence documents into the doc2vec algorithm 1226. Doc2vec embeds 1106 the documents into a lower dimensional vector space 810 and learns 1228 a similarity metric 432 between the sequences which considers events' context. The final model 402 contains or consists of sequence vectors 814 and supports an efficient and fast similarity search. This model construction dataflow may be represented as:
[account's events]->[one document per event sequence]->{doc2vec embedding}->[account's model]).
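As an illustrative aside, the Kraus dataflow quoted above ([account's events] -> [one document per event sequence] -> {doc2vec embedding} -> [account's model]) could be realized, for example, with the gensim implementation of doc2vec. The sketch below is one possible rendering under assumptions not found in the reference (gensim as the library, toy event sequences, a small vector size); it is not the reference's own code.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# [account's events] -> [one document per event sequence]
event_sequences = [
    ["login", "list_blobs", "read_blob", "logout"],
    ["login", "read_blob", "read_blob", "read_blob"],
    ["login", "write_blob", "delete_blob", "logout"],
]
documents = [TaggedDocument(words=seq, tags=[i]) for i, seq in enumerate(event_sequences)]

# {doc2vec embedding} -> [account's model]
model = Doc2Vec(documents, vector_size=16, min_count=1, epochs=40)

# The resulting model supports a similarity search over embedded sequences,
# which is how coupled input/output data could be compared against known behavior.
candidate = ["login", "read_blob", "read_blob", "read_blob"]
vector = model.infer_vector(candidate)
print(model.dv.most_similar([vector], topn=1))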
Regarding Claim 7, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
generating an alert based on the attack score (Sankaranarayanan: [0052] In other embodiments, a combination of both approaches can be used to determine similarity between the models. [0053] If yes at 310, at 312 alerts are generated and sent to a model guard service 314. For example, a determination that the responses from the shadow model are identical or nearly identical to the actual responses of ML model 125. [0062], training shadow models to detect a reverse engineering attack, the requests and responses at 302 can be used to directly profile and classify the client behavior to determine a risk threshold and an alert).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang and Godfrey by applying the well-known technique, as disclosed by Sankaranarayanan, of generating alerts based on a risk threshold. The motivation is to determine whether the first user is attempting the reverse engineering attack on the ML model (Sankaranarayanan: [Abstract]).
Regarding Claim 8, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
reporting attack data to a user through a graphical interface, the attack data based at least in part on the attack score (Kraus: [0177] 914 generating or sending an alert about a state or event detected in a computing system, thereby alerting a human or a software process or both, e.g., by text, email, visible alert (=using graphical interface), signal, or other alert transmission [0178] 916 flagging (=reporting attack) a data structure, storage item, or other artifact in a computing system to denote a risk or indicate further investigation is prudent).
Regarding Claim 9, this claim contains identical limitations found within that of claim 1 above albeit directed to a different statutory category (non-transitory computer readable medium). For this reason, the same grounds of rejection are applied to claim 9.
Regarding Claim 11, this claim contains identical limitations found within that of claim 3 above albeit directed to a different statutory category (non-transitory computer readable medium). For this reason, the same grounds of rejection are applied to claim 11.
Regarding Claim 12, this claim contains identical limitations found within that of claim 4 above albeit directed to a different statutory category (non-transitory computer readable medium). For this reason, the same grounds of rejection are applied to claim 12.
Regarding Claim 13, this claim contains identical limitations found within that of claim 5 above albeit directed to a different statutory category (non-transitory computer readable medium). For this reason, the same grounds of rejection are applied to claim 13.
Regarding Claim 15, this claim contains identical limitations found within that of claim 7 above albeit directed to a different statutory category (non-transitory computer readable medium). For this reason, the same grounds of rejection are applied to claim 15.
Regarding Claim 16, this claim contains identical limitations found within that of claim 8 above albeit directed to a different statutory category (non-transitory computer readable medium). For this reason, the same grounds of rejection are applied to claim 16.
Regarding Claim 17, Kraus teaches:
one or more servers (Kraus: [0071] As used herein, a “computer system” (a.k.a. “computing system”) may include, for example, one or more servers, motherboards, processing nodes, laptops, tablets, personal computers (portable or not), personal digital assistants, smartphones, smartwatches, smartbands, cell or mobile phones, other mobile devices having at least a processor and a memory, video game systems, augmented reality systems, holographic projection systems, televisions, wearable computing systems, and/or other device(s) providing one or more processors controlled at least in part by instructions) including a memory (Kraus: [0061] RAM: random access memory [0062] ROM: read only memory [0071], a processor and a memory, video game systems, augmented reality systems, holographic projection systems, televisions, wearable computing systems, and/or other device(s) providing one or more processors controlled at least in part by instructions. The instructions may be in the form of firmware or other software in memory and/or specialized circuitry) and a processor (Kraus: [0073] A “processor” is a thread-processing unit, such as a core in a simultaneous multithreading implementation. A processor includes hardware. A given chip may hold one or more processors. Processors may be general purpose, or they may be tailored for specific uses such as vector processing, graphics processing, signal processing, floating-point arithmetic processing, encryption, I/O processing, machine learning, and so on. [0229] Each computer system 102 includes at least one processor 110);
and one or more modules stored in the memory (Kraus: an embodiment may include hardware logic components 110, 128 such as Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip components (SOCs), Complex Programmable Logic Devices (CPLDs), and similar components. Components of an embodiment may be grouped into interacting functional modules based on their inputs, outputs, and/or their technical effects) and when executed by the processor result in operations comprising (Kraus: [0231], software instructions executed by one or more processors in a computing device (e.g., general purpose computer, server, or cluster)):
This claim contains identical limitations found within that of claim 1 above albeit directed to a different statutory category (system). For this reason, the same grounds of rejection are applied to claim 17.
Regarding Claim 19, this claim contains identical limitations found within that of claim 3 above albeit directed to a different statutory category (system). For this reason, the same grounds of rejection are applied to claim 19.
Regarding Claim 20, this claim contains identical limitations found within that of claim 4 above albeit directed to a different statutory category (system). For this reason, the same grounds of rejection are applied to claim 20.
Regarding Claim 22, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The system of claim 21 (see rejection of claim 21 above),
wherein the operations further comprise generating an alert based on the attack score (Sankaranarayanan: [0052] In other embodiments, a combination of both approaches can be used to determine similarity between the models. [0053] If yes at 310, at 312 alerts are generated and sent to a model guard service 314. For example, a determination that the responses from the shadow model are identical or nearly identical to the actual responses of ML model 125. [0062], training shadow models to detect a reverse engineering attack, the requests and responses at 302 can be used to directly profile and classify the client behavior to determine a risk threshold and an alert).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang and Godfrey by applying the well-known technique, as disclosed by Sankaranarayanan, of generating alerts based on a risk threshold. The motivation is to determine whether the first user is attempting the reverse engineering attack on the ML model (Sankaranarayanan: [Abstract]).
Regarding Claim 23, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The system of claim 21 (see rejection of claim 21 above),
wherein the operations further comprise: reporting attack data to a user through a graphical interface, the attack data based at least in part on the attack score (Kraus: [0177] 914 generating or sending an alert about a state or event detected in a computing system, thereby alerting a human or a software process or both, e.g., by text, email, visible alert (=using graphical interface), signal, or other alert transmission [0178] 916 flagging (=reporting attack) a data structure, storage item, or other artifact in a computing system to denote a risk or indicate further investigation is prudent).
Claim(s) 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over KRAUS et al (U. S. PGPub. No. 2020/0285737 A1) (hereinafter “Kraus”) in view of Wang et al. (U. S. PGPub. No. 2024/0007469 A1) (hereinafter “Wang”), GODFREY et al (U. S. PGPub. No. 2020/0279192 A1) (hereinafter “Godfrey”), and SANKARANARAYANAN et al. (U. S. PGPub. No. 2023/0281281 A1) (hereinafter “Sankaranarayanan”), and further in view of McClintock et al. (U. S. Pat. No. 9,350,748 B1) (hereinafter “McClintock”).
Regarding Claim 24, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
Kraus in view of Wang, Godfrey, and Sankaranarayanan does not explicitly disclose the below claim limitation; however, in an analogous art, McClintock teaches:
wherein applying the response comprises: returning a series of false values to the requestor (McClintock: [Col 7, lines 45-48], (28) An email attack may respond to emails from an identified attacker by taking one or more similar obfuscating actions that give an attacker an overwhelming number of false positive responses. [Col 12, lines 39-45], the fake response may serve to delay and/or foil the attack, may serve to provide false information to the attacker such as, for example, the implied existence of one or more non-existent services or users, may serve to solicit information from the attacker such as information related to the identity or location of the attacker and/or may serve other such attack management purposes).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang, Godfrey, and Sankaranarayanan by applying the well-known technique, as disclosed by McClintock, of providing false positive responses to the attacker. The motivation is to detect attacks and prevent them by presenting the attacker with an altered representation of the computer system, thereby delaying or frustrating the attack and the attacker (McClintock: [Abstract]).
Regarding Claim 25, Kraus in view of Wang, Godfrey, Sankaranarayanan, and McClintock teaches:
The method of claim 1 (see rejection of claim 1 above),
Kraus in view of Wang, Godfrey, and Sankaranarayanan does not explicitly disclose the below claim limitation; however, in an analogous art, McClintock teaches:
wherein applying the response comprises: returning a randomized output to the requestor (McClintock: [Abstract], The behavior of the computer system is modified so that responses to communications requests to ports on the computer system are altered, presenting the attacker with an altered representation of the computer system and thereby delaying or frustrating the attack and the attacker. [Col 6, line 7-20], The host computer may provide these false positive connections by altering the behavior of the host operating system and, rather than not responding to requests on unused ports, may instead respond to requests on unused ports. In some embodiments, the behavior of the host system may be altered by changing one or more operating system behaviors. The behavior of the host system may be altered by instantiating one or more services configured to at least provide responses to the attacking system and connecting those one or more services to one or more of the unused ports. The behavior of the host system may also be altered by instantiating one or more services on one or more other computer systems and redirecting communications originated by the identified attacker to the one or more other services).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang, Godfrey, and Sankaranarayanan by applying the well-known technique, as disclosed by McClintock, of altering the behavior of the host system by changing one or more operating system behaviors and presenting the attacker with an altered representation of the computer system. The motivation is to detect attacks and prevent them by presenting the attacker with an altered representation of the computer system, thereby delaying or frustrating the attack and the attacker (McClintock: [Abstract]).
Regarding Claim 26, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
Kraus in view of Wang, Godfrey, and Sankaranarayanan does not explicitly disclose the below claim limitation; however, in an analogous art, McClintock teaches:
wherein applying the response comprises: implementing a honeypot response (McClintock: [Col 8, lines 10-18], (29) In addition to providing an attacker with an overwhelming number of false positive responses, the host computer system and/or one or more other services may engage in a number of other behaviors to respond to the attack. For example, a host computer system may accept connections long enough to identify the attacker and then, for each subsequent communication attempt from the attacker, the host computer system may immediately terminate (=honeypot response) any connection from the attacker on any port).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang, Godfrey, and Sankaranarayanan by applying the well-known technique, as disclosed by McClintock, of identifying the attacker and terminating any connection from the attacker on any port. The motivation is to detect attacks and prevent them by terminating any connection from the attacker (McClintock: [Abstract]).
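As an illustrative aside, the three response styles mapped onto claims 24-26 above (a series of false values, a randomized output, and a honeypot-style response that identifies the requestor and then terminates the connection) are sketched together below. The selector and all names are hypothetical placeholders for illustration only, not the McClintock reference's implementation.

import random

def false_values(n=5):
    # Claim 24 style: return a series of plausible-looking but false values.
    return [0.0 for _ in range(n)]

def randomized_output(n=5):
    # Claim 25 style: return a randomized output in place of the real prediction.
    return [random.random() for _ in range(n)]

def honeypot_response(requestor_id):
    # Claim 26 style: accept long enough to identify the requestor, then terminate.
    return {"requestor": requestor_id, "connection": "terminated", "logged": True}

def apply_response(style, requestor_id):
    if style == "false_values":
        return false_values()
    if style == "randomized":
        return randomized_output()
    return honeypot_response(requestor_id)

print(apply_response("false_values", "client-42"))
print(apply_response("randomized", "client-42"))
print(apply_response("honeypot", "client-42"))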
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over KRAUS et al (U. S. PGPub. No. 2020/0285737 A1) (hereinafter “Kraus”) in view of Wang et al. (U. S. PGPub. No. 2024/0007469 A1) (hereinafter “Wang”), GODFREY et al (U. S. PGPub. No. 2020/0279192 A1) (hereinafter “Godfrey”), and SANKARANARAYANAN et al. (U. S. PGPub. No. 2023/0281281 A1) (hereinafter “Sankaranarayanan”), and further in view of Dawani et al. (U. S. PGPub. No. 2020/0403826 A1) (hereinafter “Dawani”) and Ma et al. (U. S. PGPub. No. 2021/0218673 A1) (hereinafter “Ma”).
Regarding Claim 27, Kraus in view of Wang, Godfrey, and Sankaranarayanan teaches:
The method of claim 1 (see rejection of claim 1 above),
Kraus in view of Wang, Godfrey, and Sankaranarayanan does not explicitly disclose:
wherein the proxy application comprises a vector traffic instance, and the method further comprises
However, in an analogous art, Dawani teaches:
wherein the proxy application comprises a vector traffic instance, and the method further comprises (Dawani: [0037] As discussed herein, traffic mirroring is directed at allowing customers of a service provider to mirror any amount of their VPC traffic, without requiring the customer to install and utilize agents on instances. In some examples, traffic mirroring allows customers to monitor traffic at any Elastic Network Interface (ENI) in their VPC, including elastic network interfaces (ENIs) on Network Address Translation (NAT) Gateways, Load Balancers, IGs, VPC endpoints, interfaces for Elastic Container Service (ECS), interfaces for an Elastic Kubernetes Service (EKS), interfaces for a compute engine that runs containers (e.g., AWS® Fargate), interfaces for a service that runs code without provisioning or managing servers (e.g., AWS® Lambda), and more. Customers can also apply a wide variety of filters to only copy desired network traffic rather than all of the data. Rather than being restricted to proprietary solutions, traffic mirroring assists customers in being able to provide mirrored data to a variety of different endpoints):
collecting, by a traffic mirror source, traffic originating from the vector traffic instance (Dawani: [0034], the traffic mirror source receives data from VMs (=vector traffic instance) 132 and mirrors the data received by the traffic mirror source 120B to the traffic mirror target 130C…. a traffic mirror source may receive data packets from many different sources).
providing, by the traffic mirror source, the collected traffic to a traffic mirror target (Dawani: [0014], A “traffic mirror target”, is a destination for mirrored traffic. [0015] Using techniques described herein, customers of a service provider can configure one or more traffic mirroring sessions to mirror traffic from a traffic mirroring source within a service provider network to one or more traffic mirroring targets);
providing, by the traffic mirror target, the collected traffic to a network load balancer (Dawani: [0034], mirrors the data received by the traffic mirror source 120B to the traffic mirror target 130C [0107] At 520, one or more traffic mirror targets is identified. As discussed above, a traffic mirror target serves as a destination for mirrored traffic. The traffic mirror target can be associated with the same customer, or a different customer. For example, a traffic mirror target may be a network load balancer in a VPC associated with another customer of the service provider);
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang, Godfrey, and Sankaranarayanan by applying the well-known technique, as disclosed by Dawani, of mirroring data traffic from a traffic mirror source to a traffic mirror target in order to monitor network traffic using traffic mirroring. The motivation is to improve upon network traffic monitoring processes, which can be costly, difficult to scale, pose a security risk, affect performance, and the like (Dawani: [0002]).
The above cited combination of Kraus in view of Wang, Godfrey, Sankaranarayanan and Dawani does not explicitly disclose:
forwarding, by the network load balancer through a series of traffic mirror worker applications, the collected traffic to the processing engine to generate the response; providing, by a response engine to the traffic mirror workers, the response;
and transmitting, by the traffic mirror workers, the response to the vector traffic instance.
However, in an analogous art, Ma teaches:
forwarding, by the network load balancer through a series of traffic mirror worker applications, the collected traffic to the processing engine to generate the response (Ma: [0054], the forwarders 406-1 to 406-4 (=series of traffic mirror worker applications) may enable mirrored data traffic to be forwarded to various networks 402-1 to 402-4 in the hybrid network environment 400. [0061], The destination forwarder (=traffic mirror worker) may forward the mirrored data packet to a destination node within the destination network… [0041] the mirrored data traffic 110, including the mirrored data packets from the video game service, can be transmitted to the analyzer service at the destination 112.),
providing, by a response engine to the traffic mirror workers, the response (Ma: [0037], the destination 112 may analyze the mirrored traffic 110 and/or generate one or more analysis reports (=response) that can be transmitted to and/or displayed on the computing device 108. [0041], Upon receiving the mirrored data traffic 110, the analyzer engine (=response engine) may analyze the mirrored data traffic 110. For example, the analyzer engine may identify that an abnormal amount of data traffic is being received by the source 104 from a suspicious IP address. The analyzer engine may indicate the suspicious IP address to the organization (e.g., in a report (=response) displayed at the computing device 108)),
and transmitting, by the traffic mirror workers, the response to the vector traffic instance (Ma: [0022], a source 104 may be a node in the source network 102. The source 104 may, in various implementations, include a software instance and/or VM hosted by computing resources in the source network 102. The source 104 may include one or more interfaces (e.g., virtual interface(s)) (=vector traffic instance) by which the source 104 receives and/or transmits data traffic 106. [0041], The analyzer engine may indicate the suspicious IP address to the organization (e.g., in a report displayed at the computing device 108))
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Kraus in view of Wang, Godfrey, Sankaranarayanan, and Dawani by applying the well-known technique, as disclosed by Ma, of receiving mirrored traffic, analyzing the mirrored traffic, and generating a report based on that analysis. The motivation is to provide particular improvements to the field of computer networking and improve interoperability of different networks in a hybrid network environment (Ma: [0018]).
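As an illustrative aside, the claim 27 dataflow mapped above (traffic mirror source to traffic mirror target, through a network load balancer and traffic mirror workers, to the processing engine, with the response returned toward the vector traffic instance) is sketched as a schematic simulation below. Every class and function name is hypothetical; the sketch models only the hand-offs, not the Dawani or Ma implementations.

import itertools

class MirrorSource:
    def collect(self, packets):              # collect traffic originating from the vector traffic instance
        return list(packets)

class MirrorTarget:
    def receive(self, packets):              # destination for the mirrored traffic
        return packets

class LoadBalancer:
    def distribute(self, packets, workers):  # spread packets across the traffic mirror workers
        cycle = itertools.cycle(workers)
        for packet in packets:
            next(cycle).enqueue(packet)

class MirrorWorker:
    def __init__(self):
        self.queue = []

    def enqueue(self, packet):
        self.queue.append(packet)

    def forward(self, engine):               # forward collected traffic to the processing engine
        return [engine.analyze(p) for p in self.queue]

class ProcessingEngine:
    def analyze(self, packet):               # placeholder analysis producing a response
        return {"packet": packet, "verdict": "suspicious" if packet.get("size", 0) > 100 else "ok"}

source, target, balancer = MirrorSource(), MirrorTarget(), LoadBalancer()
workers, engine = [MirrorWorker(), MirrorWorker()], ProcessingEngine()
mirrored = target.receive(source.collect([{"size": 40}, {"size": 400}]))
balancer.distribute(mirrored, workers)
responses = [r for w in workers for r in w.forward(engine)]  # responses returned toward the instance
print(responses)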
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
Chandrasekaran et al. (U. S. PGPub. No. 2023/0111744 A1): Methods and systems for implementing traffic mirroring for network telemetry are disclosed. An embodiment of a method for implementing traffic mirroring for network telemetry involves identifying network traffic at a network appliance that is to be subjected to traffic mirroring for network telemetry, and selecting from available options of transmitting enhanced mirrored network traffic from the network appliance to a collector, wherein the enhanced mirrored network traffic is generated at the network appliance by at least one of compressing and encrypting the network traffic, and transmitting mirrored network traffic from the network appliance to the collector without compressing or encrypting the network traffic.
SRINIVASAN et al. (U. S. PGPub. No. 2020/0092299 A1): The disclosed system implements techniques to enable a tenant of a cloud-based platform to effectively and efficiently apply a policy that copies data packets communicated to or from a virtual machine in the tenant's own virtual network. When applied, the policy mirrors data traffic associated with a workload executing on a virtual machine in the tenant's virtual network. To mirror the data traffic, a copy of a data packet is streamed to another virtual machine so that network analytics can be performed (e.g., performance analytics, security analytics, etc.). In various examples, the policy can be a role-based mirroring policy that defines a plurality of roles in association with a role-based access model that scales operations and that provides improved security for a tenant's virtual network.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUPALI DHAKAD whose telephone number is (571)270-3743. The examiner can normally be reached M-F 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor, can be reached at 571-270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.D./Examiner, Art Unit 2437
/ALEXANDER LAGOR/Supervisory Patent Examiner, Art Unit 2437