DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
In response to communications filed on 11 March 2025, claims 1-20 are presently pending in the application, of which claims 1, 9, and 17 are presented in independent form.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11 March 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings, filed 11 March 2025, have been reviewed and accepted by the Examiner.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 101
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 2A, claims 1-8 recite a judicial exception (an abstract idea) that is not integrated into a practical application, and under Step 2B they do not provide significantly more.
Under Step 2A (prong 1), and taking claim 1 as representative, claim 1 recites:
monitoring, by at least one processor, execution of a first computer model configured to analyze aggregated transaction data by dynamically generating and evaluating data slices based on entropy and information gain values;
collecting, by the at least one processor, data associated with operation of the first computer model, the data comprising at least one of:
attributes used to generate data slices, entropy values calculated for the data slices, information gain values calculated for the data slices, or traversal paths taken by the first computer model within a hierarchy of data slices;
training, by the at least one processor, a second computer model using the collected data, wherein the second computer model is configured to learn patterns in data slicing and anomaly detection from the first computer model; and
executing, by the at least one processor, the second computer model on a new set of aggregated transaction data to predict an anomalous data slice.
These limitations recite mental processes, i.e., concepts performed in the human mind (see: 2019 PEG, p. 52). This is because the limitations above recite a series of steps by which an evaluation is made of abstract data. Such an evaluation represents a judgment or decision, which is a concept that can be performed in the human mind. Accordingly, under Step 2A (prong 1), the claim recites an abstract idea because it recites limitations that fall within the “mental processes” grouping of abstract ideas (see again: 2019 PEG, p. 52).
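By way of a purely hypothetical numeric illustration (not drawn from the application or from any cited reference), the entropy and information-gain evaluation recited in claim 1 reduces to elementary arithmetic of the kind noted above; the toy labels and slice split below are invented solely for illustration:

```python
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in probs)

def information_gain(parent, slices):
    """Reduction in entropy from splitting `parent` into `slices`."""
    n = len(parent)
    weighted = sum(len(s) / n * entropy(s) for s in slices)
    return entropy(parent) - weighted

# Toy transaction labels: 'ok' vs. 'anomalous'
parent = ['ok'] * 6 + ['anomalous'] * 2
# A candidate slicing attribute splits the data into two slices
slice_a = ['ok'] * 5
slice_b = ['ok'] + ['anomalous'] * 2

print(round(entropy(parent), 3))                                # 0.811
print(round(information_gain(parent, [slice_a, slice_b]), 3))   # 0.467
```

The slice concentrating the anomalous labels yields the higher information gain, which is the kind of comparison the claimed traversal of a hierarchy of data slices performs.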
Under Step 2A (prong 2), the abstract idea is not integrated into a practical application. The Examiner acknowledges that representative claim 1 does recite additional elements, including hardware processing circuitry.
Although these additional elements are recited, taken alone or in combination they are not sufficient to integrate the abstract idea into a practical application. This is because the additional elements of claim 1 are recited at a high level of generality (i.e., as generic computing hardware) such that they amount to nothing more than mere instructions to implement or apply the abstract idea on generic computing hardware (i.e., merely using a computer as a tool to perform an abstract idea). Further, the additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use (such as the Internet or computing networks).
Secondly, the additional elements are insufficient to integrate the abstract idea into a practical application because the claim fails to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or another technical field, (ii) implement the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply or use the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment.
In view of the above, under Step 2A (prong 2), claim 1 does not integrate the recited exception into a practical application (see again: 2019 Revised Patent Subject Matter Eligibility Guidance).
Under Step 2B, the additional elements are evaluated individually and in combination to determine whether they provide an inventive concept (i.e., whether they amount to significantly more than the exception itself). In this case, the claims do not include additional elements sufficient to amount to significantly more than the judicial exception.
Returning to representative claim 1, taken individually or as a whole the additional elements of claim 1 do not provide an inventive concept (i.e. they do not amount to “significantly more” than the exception itself). As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the claimed process amount to no more than the mere instructions to apply the exception using a generic computer and/or no more than a general link to a technological environment.
Furthermore, the additional elements fail to provide significantly more because the claim simply appends well-understood, routine, conventional activities, previously known to the industry and specified at a high level of generality, to the judicial exception. For example, the additional elements of claim 1 utilize operations the courts have held to be well-understood, routine, and conventional (see: MPEP 2106.05(d)(II)), including at least:
• receiving or transmitting data over a network;
• storing and retrieving information in memory; and
• performing repetitive calculations.
Even considered as an ordered combination (as a whole), the additional elements of claim 1 do not add anything further than when they are considered individually.
In view of the above, representative claim 1 does not provide an inventive concept (“significantly more”) under Step 2B, and is therefore ineligible for patenting.
Dependent claims 2-8 also do not integrate the abstract idea into a practical application. Notably, claims 2-8 recite further complexities descriptive of the abstract idea itself. Such complexities do not themselves provide additional elements beyond the abstract idea. Further, claims 2-8 rely upon at least similar additional elements (e.g., a user interface) that are specified at a high level of generality. Considered both individually and as a whole, claims 2-8 do not integrate the recited exception into a practical application for at least similar reasons as discussed above.
Considered individually or as a whole, claims 2-8 also fail to result in “significantly more” than the abstract idea under step 2B. This is again because the claims merely apply the exception on generic computing hardware, generally link the exception to a technological environment, and append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see discussion above).
Even when viewed as an ordered combination (as a whole), the additional elements of the dependent claims do not add anything further than when they are considered individually.
In view of the above, claims 2-8 do not provide an inventive concept (“significantly more”) under Step 2B, and are therefore ineligible for patenting.
Claims 9-16 recite subject matter similar to that of claims 1-8 discussed above. More specifically, independent claim 9 additionally recites a non-transitory machine-readable storage medium, which is recited at a high level of generality and as performing generic computer functions routinely used in computer applications. Generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system. All of the comments made with respect to the rejection of claims 1-8 apply equally, and claims 9-16 therefore stand rejected.
Claims 17-20 recite subject matter similar to that of claims 1-8 discussed above. More specifically, independent claim 17 additionally recites a system, which is recited at a high level of generality and as performing generic computer functions routinely used in computer applications. Generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system. All of the comments made with respect to the rejection of claims 1-8 apply equally, and claims 17-20 therefore stand rejected.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Muddu et al. (U.S. Patent Application Publication No. 2018/0302423, hereinafter “Muddu”).
As per claim 1, Muddu teaches a method comprising:
monitoring, by at least one processor, execution of a first computer model configured (e.g. Muddu, see paragraph [0274], which discloses a machine learning CEP engine that monitors for computer security issues) to analyze aggregated transaction data (e.g. Muddu, see paragraph [0158], which discloses that the output may be analyzed by various applications such as a threat detection application.) by dynamically generating and evaluating data slices (e.g. Muddu, see paragraphs [0274-0275], which disclose generating and analyzing machine-learning-based CEP time slices) based on entropy and information gain values (e.g. Muddu, see paragraph [0623], which discloses that the machine-generated nature of a character-based identifier (e.g. value) is indicated by a high degree of entropy or randomness in the sequencing of characters.);
collecting, by the at least one processor, data associated with operation of the first computer model (e.g. Muddu, see paragraphs [0161-0162], which disclose that event data is collected over batch processing for detecting anomalies, threat indicators, and threats, where the event data contains a collection of events that have arrived over a batch period.), the data comprising at least one of:
attributes used to generate data slices, entropy values calculated for the data slices, information gain values calculated for the data slices, or traversal paths taken by the first computer model within a hierarchy of data slices (e.g. Muddu, see paragraphs [0278-0279], which disclose that machine learning models enable many types of analysis of event data from various sources in various contextual settings, where the machine learning model can perform entity-specific behavior analysis, such as entropy/randomness/n-gram analysis, based on a time slice of the event data.);
training, by the at least one processor, a second computer model using the collected data (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state of the machine learning model.), wherein the second computer model is configured to learn patterns in data slicing and anomaly detection from the first computer model (e.g. Muddu, see paragraphs [0205-0206], which disclose performing pattern matching for all known formats to determine the most likely format of particular event data, where the format detector can employ a number of heuristics that perform pattern matching on event data in a hierarchical way.); and
executing, by the at least one processor, the second computer model on a new set of aggregated transaction data to predict an anomalous data slice (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state, continuously receives new incoming event feature sets, and reacts to each new incoming feature set by processing it through at least one machine learning model, based on a time slice of the unbounded stream, prior to when a subsequent time slice from the unbounded stream becomes available.).
As per claim 9, Muddu teaches a non-transitory machine-readable storage medium having computer-executable instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
monitoring, by at least one processor, execution of a first computer model configured (e.g. Muddu, see paragraph [0274], which discloses a machine learning CEP engine that monitors for computer security issues) to analyze aggregated transaction data (e.g. Muddu, see paragraph [0158], which discloses that the output may be analyzed by various applications such as a threat detection application.) by dynamically generating and evaluating data slices (e.g. Muddu, see paragraphs [0274-0275], which disclose generating and analyzing machine-learning-based CEP time slices) based on entropy and information gain values (e.g. Muddu, see paragraph [0623], which discloses that the machine-generated nature of a character-based identifier (e.g. value) is indicated by a high degree of entropy or randomness in the sequencing of characters.);
collecting, by the at least one processor, data associated with operation of the first computer model (e.g. Muddu, see paragraphs [0161-0162], which disclose that event data is collected over batch processing for detecting anomalies, threat indicators, and threats, where the event data contains a collection of events that have arrived over a batch period.), the data comprising at least one of:
attributes used to generate data slices, entropy values calculated for the data slices, information gain values calculated for the data slices, or traversal paths taken by the first computer model within a hierarchy of data slices (e.g. Muddu, see paragraphs [0278-0279], which disclose that machine learning models enable many types of analysis of event data from various sources in various contextual settings, where the machine learning model can perform entity-specific behavior analysis, such as entropy/randomness/n-gram analysis, based on a time slice of the event data.);
training, by the at least one processor, a second computer model using the collected data (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state of the machine learning model.), wherein the second computer model is configured to learn patterns in data slicing and anomaly detection from the first computer model (e.g. Muddu, see paragraphs [0205-0206], which disclose performing pattern matching for all known formats to determine the most likely format of particular event data, where the format detector can employ a number of heuristics that perform pattern matching on event data in a hierarchical way.); and
executing, by the at least one processor, the second computer model on a new set of aggregated transaction data to predict an anomalous data slice (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state, continuously receives new incoming event feature sets, and reacts to each new incoming feature set by processing it through at least one machine learning model, based on a time slice of the unbounded stream, prior to when a subsequent time slice from the unbounded stream becomes available.).
As per claim 17, Muddu teaches a system comprising at least one processor configured to:
monitoring, by at least one processor, execution of a first computer model configured (e.g. Muddu, see paragraph [0274], which discloses a machine learning CEP engine that monitors for computer security issues) to analyze aggregated transaction data (e.g. Muddu, see paragraph [0158], which discloses that the output may be analyzed by various applications such as a threat detection application.) by dynamically generating and evaluating data slices (e.g. Muddu, see paragraphs [0274-0275], which disclose generating and analyzing machine-learning-based CEP time slices) based on entropy and information gain values (e.g. Muddu, see paragraph [0623], which discloses that the machine-generated nature of a character-based identifier (e.g. value) is indicated by a high degree of entropy or randomness in the sequencing of characters.);
collecting, by the at least one processor, data associated with operation of the first computer model (e.g. Muddu, see paragraphs [0161-0162], which disclose that event data is collected over batch processing for detecting anomalies, threat indicators, and threats, where the event data contains a collection of events that have arrived over a batch period.), the data comprising at least one of:
attributes used to generate data slices, entropy values calculated for the data slices, information gain values calculated for the data slices, or traversal paths taken by the first computer model within a hierarchy of data slices (e.g. Muddu, see paragraphs [0278-0279], which disclose that machine learning models enable many types of analysis of event data from various sources in various contextual settings, where the machine learning model can perform entity-specific behavior analysis, such as entropy/randomness/n-gram analysis, based on a time slice of the event data.);
training, by the at least one processor, a second computer model using the collected data (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state of the machine learning model.), wherein the second computer model is configured to learn patterns in data slicing and anomaly detection from the first computer model (e.g. Muddu, see paragraphs [0205-0206], which disclose performing pattern matching for all known formats to determine the most likely format of particular event data, where the format detector can employ a number of heuristics that perform pattern matching on event data in a hierarchical way.); and
executing, by the at least one processor, the second computer model on a new set of aggregated transaction data to predict an anomalous data slice (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state, continuously receives new incoming event feature sets, and reacts to each new incoming feature set by processing it through at least one machine learning model, based on a time slice of the unbounded stream, prior to when a subsequent time slice from the unbounded stream becomes available.).
As per claims 2, 10, and 18, Muddu teaches the method of claim 1, the non-transitory machine-readable storage medium of claim 9, and the system of claim 17, respectively, further comprising:
presenting, by the at least one processor on a user interface, a visual representation of the predicted anomalous data slice (e.g. Muddu, see paragraphs [0438-0439], which disclose that visualization features are generated to illustrate trends, recent activity, and relationships between different data.).
As per claims 3, 11, and 19, Muddu teaches the method of claim 2, the non-transitory machine-readable storage medium of claim 10, and the system of claim 18, respectively, wherein the visual representation is a graph indicating a traverse path associated with a set of data slices within the new set of aggregated transaction data and the predicted anomalous data slice (e.g. Muddu, see paragraphs [0438-0439], which disclose that visualization features are generated to illustrate trends, recent activity, and relationships between different data.).
As per claims 4 and 12, Muddu teaches the method of claim 1 and the non-transitory machine-readable storage medium of claim 9, respectively, further comprising:
receiving, by the at least one processor, an indication of a false positive anomaly (e.g. Muddu, see paragraph [0477], which discloses allowing a user to tag a threat as a false positive.); and
recalibrating, by the at least one processor, the second computer model to revise at least one variable used by the first computer model in accordance with an attribute of the false positive anomaly (e.g. Muddu, see paragraph [0151], which discloses that anomalies and threats detected by the real-time processing path may be employed to automatically trigger an action, and that false positives can be provided as feedback data in order to update and improve the model.).
As per claims 5, 13, and 20, Muddu teaches the method of claim 4, the non-transitory machine-readable storage medium of claim 12, and the system of claim 17, respectively, wherein the attribute of the false positive anomaly is an authorization rate (e.g. Muddu, see paragraph [0151], which discloses that anomalies and threats detected by the real-time processing path may be employed to automatically trigger an action, and that false positives can be provided as feedback data in order to update and improve the model.).
As per claims 6 and 14, Muddu teaches the method of claim 1 and the non-transitory machine-readable storage medium of claim 9, respectively, wherein the second computer model uses a boosted tree algorithm optimized using the information gain values of at least one data slice (e.g. Muddu, see paragraph [0277], which discloses that the machine learning-based CEP engine can train a decision tree based on historical events.).
As per claims 7 and 15, Muddu teaches the method of claim 1 and the non-transitory machine-readable storage medium of claim 9, respectively, further comprising:
periodically executing, by the at least one processor, the second computer model on the new set of aggregated transaction data (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model continuously receives new incoming event feature sets and reacts to each new incoming feature set by processing it through at least one machine learning model, based on a time slice of the unbounded stream, prior to when a subsequent time slice from the unbounded stream becomes available.); and
transmitting, by the at least one processor, an alert when the predicted anomalous data slice is identified (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model continuously receives new incoming event feature sets and reacts to each new incoming feature set by processing it through at least one machine learning model, based on a time slice of the unbounded stream, prior to when a subsequent time slice from the unbounded stream becomes available.).
As per claims 8 and 16, Muddu teaches the method of claim 1 and the non-transitory machine-readable storage medium of claim 9, respectively, further comprising:
generating, by the at least one processor, a predicted remedial action corresponding to the predicted anomalous data slice (e.g. Muddu, see paragraphs [0273-0274], which disclose a machine-learning-based engine that utilizes distributed training and deliberation of one or more machine learning models, where the machine learning model involves processing data through a model state of the machine learning model.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See the attached PTO-892, which includes additional prior art of record describing the general state of the art to which the invention is directed.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARHAN M SYED whose telephone number is (571)272-7191. The examiner can normally be reached M-F 8:30AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARHAN M SYED/Primary Examiner, Art Unit 2161 January 20, 2026