Notice of Pre-AIA or AIA Status
Claims 1-15 are currently presented for examination.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of priority Application No. EP21306797.8, filed on 12/16/2021, has been filed.
Claim Objections
The claims have numerous issues with antecedent basis. The Examiner suggests amending the claims such that the first recitation of each distinct element uses an article such as “a”/“an” and later recitations referring back to the same distinct element use an article such as “the”/“said”; using disambiguating modifiers (e.g., first, second, etc.) when there are multiple distinct elements with the same base term; and keeping the use of modifiers for each distinct element consistent. Below is a non-exhaustive list of examples of these issues:
Claims 2-3 and 9-10 recite the limitation “the at least on possible action”. There is insufficient antecedent basis for this limitation in the claims. This appears to be a typographical error and should be changed to “the at least one possible action”. Appropriate correction is required.
Claims 1, 8 and 15 recite the limitation “current operating indictor”. This appears to be a typographical error and should be changed to “current operating indicator”. Appropriate correction is required.
Claim Interpretation
4. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are:
a data processing module (claim 1, claim 6),
an operator indicator module (claim 1)
an alerting module (claim 1, 4)
an action triggering module (claim 1-3, 5)
an action reporting module (claim 1)
a prediction engine (claim 1, 7)
a simulation engine (claim 1)
a recommendation engine (claim 1, 3)
The above modules and engines do not have corresponding structure described in the specification.
Each of the above generic placeholders is specifically excluded from being interpreted as software per se. See MPEP § 2181(II)(B), third-to-last paragraph.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For purposes of examination, the Examiner will interpret each of the above modules and engines as a processing unit in view of instant Fig. 12.
Claim Rejections - 35 USC § 112, First Paragraph
5. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
6. Claims 1-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The claims recite the limitations:
a. a data processing module (claim 1, claim 6),
b. an operator indicator module (claim 1)
c. an alerting module (claim 1, 4)
d. an action triggering module (claim 1-3, 5)
e. an action reporting module (claim 1)
f. a prediction engine (claim 1, 7)
g. a simulation engine (claim 1)
h. a recommendation engine (claim 1, 3)
These limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use generic placeholders without reciting sufficient structure to achieve the function or to modify the generic placeholder. The above modules and engines do not have corresponding structure described in the specification. According to MPEP § 2181(II)(B), “the structure corresponding to a 35 U.S.C. 112(f) claim limitation for a computer-implemented function must include the algorithm needed to transform the general purpose computer or microprocessor disclosed in the specification.” The specification does not specifically link any algorithm to the above modules and engines for performing the claimed functions. Thus, the written description fails to disclose the corresponding structure, material, or acts for the claimed functions and does not include the structural elements to carry out these specifically claimed functions.
Claims 2-7 depend from claim 1, do not overcome the deficiencies of claim 1, and are thus rejected as well.
Claim Rejections - 35 USC § 112, Second Paragraph
7. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
8. Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. The claims recite the limitations:
a. a data processing module (claim 1, claim 6),
b. an operator indicator module (claim 1)
c. an alerting module (claim 1, 4)
d. an action triggering module (claim 1-3, 5)
e. an action reporting module (claim 1)
f. a prediction engine (claim 1, 7)
g. a simulation engine (claim 1)
h. a recommendation engine (claim 1, 3)
These limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use generic placeholders without reciting sufficient structure to achieve the function or to modify the generic placeholder. The above modules and engines do not have corresponding structure described in the specification. In particular, note that “[f]or a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b).” See MPEP § 2181(II)(B). The specification does not specifically link any algorithm to the different modules and engines for performing the claimed functions. Thus, the written description fails to disclose the corresponding structure, material, or acts for the claimed functions, and the claims are indefinite.
Claims 2-7 depend from claim 1, do not overcome the deficiencies of claim 1, and are thus rejected as well.
Applicant may:
(a) Amend the claims so that the claim limitations will no longer be interpreted as limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; or
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the claimed function, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
9. Claims 1-15 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the applicant regards as the invention. Claims 1, 8 and 15 recite “without interaction”, which is vague and subjective. The specification (para. [0042]) attempts to define this as “prediction engine 214 forecasts... based on historical data... stored in... database 210”, but leaves open whether the “awareness of dependencies” (mentioned in the specification’s description) is a required limitation. Thus, the specification does not provide a clear definition or technical standard for what constitutes an “interaction”, nor does it define the boundary between “interaction” and “no interaction”. It is unclear, for example, whether this implies a completely autonomous system, or merely one that requires no user input during the forecasting calculation itself. As written, the metes and bounds of the claims are unclear to a person of ordinary skill in the art.
Claims 2-7 depend from claim 1, do not overcome the deficiencies of claim 1, and are thus rejected as well.
Claims 9-14 depend from claim 8, do not overcome the deficiencies of claim 8, and are thus rejected as well.
Claim Rejections - 35 USC §101
10. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. These claims are directed to an abstract idea without significantly more.
(Step 1) Are the claims directed to a process, machine, manufacture, or composition of matter?
Claims 1-7 are directed to an apparatus or machine, which falls within a statutory category.
Claims 8-14 are directed to a method or process, which falls within a statutory category.
Claim 15 is directed to a non-transitory computer-readable storage medium, which falls within the statutory category of a manufacture.
Step 2A, Prong 1: Does the claim recite a judicial exception (abstract idea)?
Claims 1, 8 and 15 recite:
calculate a current operating indicator from the live data stored in the environment data database (This limitation is directed to the abstract idea of analyzing data, a mental process, because the steps of calculating and evaluating can be performed mentally, or with pen and paper, and are recited at a high level of generality. Under the broadest reasonable interpretation, these limitations are process steps that cover mental processes, including an evaluation or judgment that could be performed in the human mind or with the aid of pencil and paper. If a claim, under its broadest reasonable interpretation, covers a mental process, then it falls under the “Mental Processes” grouping of abstract ideas. Also, the calculation itself is a mathematical algorithm, so it also falls under the “Mathematical Concepts” grouping of abstract ideas.)
forecast a development of the operating indicator without interaction based on historical data stored in the environment data database; (This is a mental process: for example, a person can look at historical data and predict a future trend. Such steps are cognitive, observational, and evaluative, which are classic human mental processes. See MPEP 2106.04(a)(2)(III).)
create real-time alerts in response to determining a deviation of the current operating indicator and/or the forecasted operating indicator caused by a disruption in the environment; (The steps—receiving data (input), comparing against a threshold (analysis), determining a deviation (judgment), and triggering an alert (action)—map directly to human cognitive processes. Under the broadest reasonable interpretation, these limitations are process steps that cover mental processes including an evaluation or judgment that could be performed in the human mind or with the aid of pencil and paper. If a claim, under its broadest reasonable interpretation, covers a mental process, then it falls under the “Mental Process” of abstract idea. It simply uses a computer as a tool to implement the mental process.)
simulate a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database; (Simulating the development of an operating indicator for an action by analyzing historical databases is a mental process, as it involves evaluating past environment conditions and action impacts, which can be done with pen and paper. This also constitutes an abstract idea, a method of organizing information to make a decision, which can be performed mentally or on paper.)
determine at least one possible action based on the simulated operating indicator, the forecasted operating indicator and/or the current operating indictor; (A person reviews a simulated, forecasted, or current indicator (e.g., “expected 20% spike in demand next week” or “current stock at 10%”) and decides to place an order for more inventory. This decision-making process is fundamentally cognitive: a person can review data, calculate the necessary increase, and make a decision to act without a computer, relying on memory, experience, and simple manual arithmetic.)
trigger an action selected from the at least one possible action; and to generate additional action data in response to the triggered action. (The steps of "triggering an action" (evaluating a condition) and "generating additional data in response" are mental activities that can be performed in the human mind. The entire process (selecting an action based on criteria and updating a record) can be performed using pen and paper, such as a person reviewing a report, deciding to initiate a task, and writing down the result in a ledger.)
Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
In accordance with Step 2A, Prong 2, the judicial exception is not integrated into a practical application. In particular, claims 1, 8 and 15 recite the additional elements of “receive live data from an environment, to process the live data, and to store the processed live data in the environment data database”. This is simply “a computer receives, processes, and stores data” without adding meaningful limitations (e.g., a specific improvement to the functioning of the computer, a specific algorithm that is not conventional, or a specific, non-generic data source); it merely uses a computer as a tool to perform the abstract process faster. See MPEP 2106.05(f)(2), whether the claim invokes computers or other machinery merely as a tool to perform an existing process: use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. The additional elements of a system comprising an environment data database, an action data database, a data processing module, an operator indicator module, an alerting module, an action triggering module, an action reporting module, a prediction engine, a simulation engine and a recommendation engine are mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The additional elements of “store the action data in the action data database” and “store the processed live data in the environment data database” are insignificant extra-solution activity, as the databases are merely repositories for the data generated by the mental process.
The additional elements of “a non-transitory computer-readable storage medium comprising computer-readable instructions that, upon execution by a processor of a computing device” in claim 15 are mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The claim is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
In view of Step 2B, the claim as a whole does not amount to significantly more than the recited exception; i.e., no additional element, or combination of additional elements, adds an inventive concept to the claim. In particular, claims 1, 8 and 15 recite the additional elements of “receive live data from an environment, to process the live data, and to store the processed live data in the environment data database”. This is simply “a computer receives, processes, and stores data” without adding meaningful limitations (e.g., a specific improvement to the functioning of the computer, a specific algorithm that is not conventional, or a specific, non-generic data source); it merely uses a computer as a tool to perform the abstract process faster. See MPEP 2106.05(f)(2), whether the claim invokes computers or other machinery merely as a tool to perform an existing process: use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more.
The additional elements of a system comprising an environment data database, an action data database, a data processing module, an operator indicator module, an alerting module, an action triggering module, an action reporting module, a prediction engine, a simulation engine and a recommendation engine are mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The additional elements of “store the action data in the action data database” and “store the processed live data in the environment data database” are insignificant extra-solution activity, as the databases are merely repositories for the data generated by the mental process, and storing and retrieving information in memory are basic computer functions that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II)(iv); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. The additional elements of “a non-transitory computer-readable storage medium comprising computer-readable instructions that, upon execution by a processor of a computing device” in claim 15 are mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The claims are directed to an abstract idea. Thus, claims 1, 8 and 15 are not patent eligible.
Claims 2 and 9 further recite a user interface configured to visualize at least one of the current operating indicator, the forecasted development of the operating indicator, the simulated development of the operating indicator, and the real-time alerts, and to display the at least on possible action and at least one selectable button for the at least one possible action, wherein the action triggering module is further configured to receive the selected action from the user interface in response to a selection of the respective selectable button. This claim element describes a user interface for visualizing indicators (current, forecasted, simulated, alerts) and displaying actions via selectable buttons. This is a method of organizing and presenting information, which falls into a judicial exception as an abstract idea (specifically, a method of organizing human activity or a form of information presentation). The additional elements of a user interface “configured to visualize” the indicators and to “display the at least one possible action and at least one selectable button” describe a generic computer component performing an insignificant activity: simply using a user interface or a button to present information and receive input is a well-understood, routine, and conventional computer function. The claims, therefore, when taken as a whole, still do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception, and recite patent-ineligible subject matter for the same reasoning and analysis as set forth for claim 1.
Claims 3 and 10 further recite wherein the recommendation engine is further configured to select an action from the at least on possible action based on the simulated operating indicator, and the action triggering module is further configured to receive the selected action from the recommendation engine. “Selecting an action” based on simulated data is a cognitive process that can be performed mentally or with basic tools; it falls under “Mental Processes” (MPEP 2106.04(a)(2)(III)). Simply having a module “receive” a selected action is insignificant extra-solution activity or mere instructions to apply an exception; it does not transform the abstract idea into a specific, inventive, practical, technology-based solution. The claims, therefore, when taken as a whole, still do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception, and recite patent-ineligible subject matter for the same reasoning and analysis as set forth for claim 1.
Claims 4 and 11 further recite a rule database configured to store deviation scenario indicators, wherein the alerting module is configured to determine the deviation by comparing the current operating indicator and/or the forecasted operating indicator with at least one of the deviation scenario indicators. “Comparing” data points to identify a discrepancy is a form of mathematical analysis or mental process performable in the human mind or by a generic computer (MPEP 2106.04(a)(2)(III)). The additional element of a rule database configured to store deviation scenario indicators is insignificant extra-solution activity, as it is merely a repository for the data generated by the mental process, and storing and retrieving information in memory is a basic computer function that is well-understood, routine, and conventional. See MPEP 2106.05(d)(II)(iv); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. The claims, therefore, when taken as a whole, still do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception, and recite patent-ineligible subject matter for the same reasoning and analysis as set forth for claim 1.
Claims 5 and 12 further recite wherein the action triggering module is further configured to trigger the selected action on an external computing platform through a dedicated interface. The recitation “trigger the selected action on an external computing platform” describes a functional result, an outcome, rather than the specific technological means for achieving that outcome. It is a concept similar to “sending data” or “executing a command remotely” without describing how the action is triggered. This limitation constitutes mere instructions to apply an exception (see MPEP 2106.05(f)) because it merely tells the computer to use a tool (the dedicated interface) to accomplish a result (triggering an action) without specifying how the tool is used to solve a technical problem. The claims, therefore, when taken as a whole, still do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception, and recite patent-ineligible subject matter for the same reasoning and analysis as set forth for claim 1.
Claims 6 and 13 further recite wherein the data processing module is further configured to differentiate normal operation trends of the historical data from trends of the historical data resulting from actions and to create a link of the historical data with the respective action data. This is a data analysis step directed to comparing data sets to detect anomalies or patterns. The process of separating “normal” data from “action-based” data is a form of intellectual, analytical, or mental comparison: humans can, and do, look at logs or graphs and mentally distinguish between routine behavior and a change caused by a specific intervention. Creating a link between historical data and action data is a method of organizing, associating, or classifying information. The claims, therefore, when taken as a whole, still do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception, and recite patent-ineligible subject matter for the same reasoning and analysis as set forth for claim 1.
Claims 7 and 14 further recite wherein the prediction engine is configured to determine a dependency between a first operating indicator and a second operating indicator of a plurality of operating indicators for forecasting and/or simulating the development of the first operating indicator and/or the second operating indicator. Identifying a relationship or “dependency” between two operating indicators can be performed in the human mind by an analyst observing data trends, and forecasting, the act of predicting future events based on past data and relationships, is a mental process. The claims, therefore, when taken as a whole, still do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception, and recite patent-ineligible subject matter for the same reasoning and analysis as set forth for claim 1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
11. Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Andrabi et al. (Pub. No. US 2023/0007023 A1, hereinafter "Andrabi") in view of Zhang et al. (Pub. No. US 2019/0384257 A1, hereinafter "Zhang").
Regarding claim 1
Andrabi teaches a system comprising (see fig 12):
an environment data database; (see para 54-Additionally, as shown in FIG. 1, the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104. In particular, the multiple sources of data can include data such as, but not limited to, user information for users of the content management system 104, stored digital content, digital action event information for action events that occur within the content management system 104. In one or more embodiments, the anomalous-event-detection system 106 utilizes the data from the multiple sources of the databases 110 to generate a knowledge graph that connects the data components utilized by the anomaly-detection model to detect anomalous events.)
an action data database; (see para 54-Additionally, as shown in FIG. 1 , the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104. In particular, the multiple sources of data can include data such as, but not limited to, user information for users of the content management system 104, stored digital content, digital action event information for action events that occur within the content management system 104. In one or more embodiments, the anomalous-event-detection system 106 utilizes the data from the multiple sources of the databases 110 to generate a knowledge graph that connects the data components utilized by the anomaly-detection model to detect anomalous events. See para 66-More specifically, the anomalous-event-detection system 106 can include digital actions, such as digital content deletions, digital content modifications, digital content creations)
a data processing module configured to receive live data from an environment, to process the live data, and to store the processed live data in the environment data database; (see para 48- In particular, the anomalous-event-detection system 106 can receive digital actions (in association with digital content) from the client devices 112 a-112 n via the network 108. See para 54-61- Additionally, as shown in FIG. 1 , the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104. In particular, the multiple sources of data can include data such as, but not limited to, user information for users of the content management system 104, stored digital content, digital action event information for action events that occur within the content management system 104. As also shown in the act 202 of FIG. 2 , the anomalous-event-detection system 106 further identifies parameters for the digital action, such as an action type, number of files, user location, time, and user role for the identified digital action (e.g., as described in greater detail below in FIG. 3 ). As previously mentioned, in some embodiments, the anomalous-event-detection system 106 can monitor digital actions executed across a digital-content-synchronization platform in real (or near-real) time to identify digital actions and other data corresponding to the digital actions. 
See also para 66- For example, the anomalous-event-detection system 106 can utilize a machine-learning model to extract features of digital actions taken by users that represent latent features of the action sequence in which users take digital actions and/or other features of the digital action.)
an operating indicator module configured to calculate a current operating indicator from the live data stored in the environment data database; (see para 58-As further shown in act 204 of FIG. 2 , the anomalous-event-detection system 106 detects an anomalous action using an anomaly-detection model. In particular, based on parameters of the digital action, the anomalous-event-detection system 106 can utilize the anomaly detection model to generate an anomaly indicator. As shown in FIG. 2 , the anomaly indicator includes a confidence score that indicates a likelihood of the digital action being an anomalous action. In describing FIGS. 4 and 5 below, this disclosure describes the anomalous-event-detection system 106 utilizing an anomaly-detection model to generate an anomaly indicator based on parameters of a digital action. See para 39- Additionally, as used herein, the term “anomaly indicator” refers to a data object that includes metrics or text to identify or indicate a probability for whether a digital action is anomalous.)
an alerting module configured to create real-time alerts in response to determining a deviation of the current operating indicator and/or the forecasted operating indicator caused by a disruption in the environment; (see para 146-In particular, the anomalous-event-detection system 106 can utilize one or both of a sensitivity level 720 and a severity level 722 to determine whether the alert threshold 718 is satisfied for the anomaly indicator 708 and anomalous action type. If the alert threshold 718 is satisfied by the anomaly indicator 708 and anomalous action type, the anomalous-event-detection system 106 can provide the electronic communication including the anomalous action alert to the administrator device 710 and/or perform a remedial action 724. See para 40- As used herein, the term “anomalous action” refers to a digital action that is inconsistent with (or an outlier with respect to) a normal dataset (e.g., normal behavioral data) for a set of digital actions.)
a recommendation engine configured to determine at least one possible action based on the simulated operating indicator, the forecasted operating indicator and/or the current operating indictor; (see para 165 and fig 9- In addition to severity or sensitivity options, in one or more embodiments, the anomalous-event-detection system 106 can provide, for display within the graphical user interface 904 of the administrator device 902, selectable options 910 to toggle remedial actions that can be automatically performed upon detecting an anomalous action. For example, upon receiving a selection of the “recover files” option from the selectable options 910, the anomalous-event-detection system 106 can automatically perform a file recovery upon detecting an anomalous file deletion and/or file transfer. see para 175-178 and fig 11- As further shown in FIG. 11, the series of acts 1100 include an act 1130 of utilizing an anomaly-detection model to generate an anomaly indicator. Additionally, the act 1140 can include performing a remedial action within a content management system in response to an anomalous action. Moreover, the act 1140 can include performing a remedial action by automatically recovering one or more deleted digital content items, restricting a user account corresponding to a digital action from performing additional digital actions, or modifying a user permission of a user account.)
an action triggering module configured to trigger an action selected from the at least one possible action; (see para 26-27-Upon detecting an anomalous action and in response to the anomalous action, in some embodiments, the anomalous-event-detection system automatically performs (or provides selectable options to an administrator device for performing) a remedial action to neutralize or contain an anomalous action. As also part of the electronic communication, the anomalous-event-detection system can provide selectable options to respond to the anomalous action (e.g., select a remedial action to perform). See also fig 9 and para 165)
and an action reporting module configured to generate additional action data in response to the triggered action and to store the action data in the action data database. (see para 29- In addition to alerts or remedial actions, in some embodiments, the anomalous-event-detection system utilizes the data received from an administrator device to modify the anomaly-detection model. More specifically, in one or more embodiments, the anomalous-event-detection system receives indications of which selectable options (as the data) were selected from the administrator device. Based on the received selections to respond or ignore the detected anomalous action, the anomalous-event-detection system can modify the anomaly-detection model (e.g., adjust settings of the model, adjust machine learning parameters of the model). In particular, in one or more embodiments, the anomalous-event-detection system utilizes data (e.g., interactions) received from administrator devices responding to detected anomalous actions as training data (e.g., labels for the training data) for the anomaly-detection model. When the administrator device disregards the anomalous action alert or cancels a remedial action taken for the detected anomalous action…when the administrator device indicates a selection to recover the particular file deletion or confirms an automatic remedial action. See para 54- Additionally, as shown in FIG. 1 , the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12 ). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104.)
Andrabi does not teach a prediction engine configured to forecast a development of the operating indicator without interaction based on historical data stored in the environment data database and a simulation engine configured to simulate a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database.
In the related field of invention, Zhang teaches a prediction engine configured to forecast a development of the operating indicator without interaction based on historical data stored in the environment data database; (see para 44- RL is used to automatically learn health indicators based on the historical operation and sensory data, with or without failure/fault labels. see para 88-90- In an example implementation to perform prediction tasks, the learned health indicators can be fed into regression models to build a prediction model of h(t). As a result, given { . . . , h(t−2), h(t−1), h(t)}, the method can predict the future health indicators {h(t+1), h(t+2), . . . , h(t+w)} where w is the failure prediction window. The FP problem can be converted to the anomaly detection problem and can be solved using the AD approach as described herein. Further, when only initial state st and a set of policies {πi(a|s=st)} are given, h(t) can be predicted using the regression model. Operation recommendations with the objective of maintaining higher health indicators can be achieved.)
a simulation engine configured to simulate a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database; (see para 48-In an example implementation, MDP is used for the state machine with actions (e.g., a1, a2, a3) and rewards r. The example MDP model can discover evolution of health indicators automatically by learning values for N discrete sensory states as discussed in reference to FIGS. 5-7. The MDP model uses operation data (e.g., actions a1, a2, a3) separately from sensory data (e.g., states S1, S2, S3, S4). see para 52-In an example application phase, the learned health indicators can be combined with regression models to predict failures based on an initial state and a given policy. A health indicator sequence representing the evolution of equipment performance is used with a learned failure threshold and a confidence level to identify failures in the sequence. For example, when the initial state is matched to a health indicator sequence using the MDP modeling, FP and RUL tasks can be performed. When multiple policies or actions are given, the predicted health indicators are used to compare and find the health indicators in the sequence that lead to undesired health degradation and alert operators to avoid them in a timely manner.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting anomalous digital actions utilizing an anomaly-detection model, as disclosed by Andrabi, to include a prediction engine configured to forecast a development of the operating indicator without interaction based on historical data stored in the environment data database and a simulation engine configured to simulate a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database, as taught by Zhang, in the system of Andrabi for predictive maintenance with health indicators using reinforcement learning. The maintenance process is conducted by performing the necessary actions on the equipment to achieve one or more of these objectives. Equipment maintenance can be conducted under one of the following strategies: (a) Corrective Maintenance: taking corrective actions after the equipment or one of its components fails to retain its working status; (b) Preventive Maintenance (also known as time-based maintenance): performing maintenance actions on a regular basis regardless of the condition of the equipment; (c) Predictive Maintenance (also known as condition-based maintenance): continually monitoring the condition of the equipment to determine maintenance actions that need to be taken at certain times. Predictive maintenance can reduce the chance of unexpected failures, increase the equipment availability, and accordingly decrease the overall cost of the maintenance process (see para [0001-0003], Zhang).
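For purposes of illustration only (not relied upon in the rejection), the prediction/simulation arrangement articulated in the combination can be sketched as a toy forecaster plus a toy simulator that overlays a modeled action effect. The drift model, the per-step action effect, and all names below are assumptions made for illustration; they are not the RL/regression-based health-indicator models disclosed by Zhang or the anomaly-detection model of Andrabi.

```python
# Illustrative sketch only: a toy "prediction engine" that forecasts an
# operating indicator from historical data alone (no action), and a toy
# "simulation engine" that replays the forecast with the modeled effect of
# a chosen action. All values and model choices are hypothetical.

def forecast(history, steps):
    """Extrapolate the indicator by its average historical step (no action)."""
    drift = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + drift * (i + 1) for i in range(steps)]

def simulate(history, steps, action_effect):
    """Forecast, then apply a per-step offset modeling the chosen action."""
    return [v + action_effect * (i + 1)
            for i, v in enumerate(forecast(history, steps))]

history = [100.0, 98.0, 96.0, 94.0]      # hypothetical degrading indicator
no_action = forecast(history, 3)         # -> [92.0, 90.0, 88.0]
with_fix = simulate(history, 3, +3.0)    # -> [95.0, 96.0, 97.0]
```

Comparing the two series is the kind of output a recommendation engine could use to rank candidate actions, consistent with the roles the combination assigns to the prediction and simulation engines.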
Regarding claims 8 and 15
Regarding claim 8: Andrabi teaches a method comprising (see fig 12):
Regarding claim 15: Andrabi teaches a non-transitory computer-readable storage medium comprising computer-readable instructions that, upon execution by a processor of a computing device, cause the computing device to (see fig 12):
to receive live data from an environment, to process the live data, and to store the processed live data in the environment data database; (see para 48- In particular, the anomalous-event-detection system 106 can receive digital actions (in association with digital content) from the client devices 112 a-112 n via the network 108. See para 54-61- Additionally, as shown in FIG. 1 , the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12 ). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104. In particular, the multiple sources of data can include data such as, but not limited to, user information for users of the content management system 104, stored digital content, digital action event information for action events that occur within the content management system 104. As also shown in the act 202 of FIG. 2 , the anomalous-event-detection system 106 further identifies parameters for the digital action, such as an action type, number of files, user location, time, and user role for the identified digital action (e.g., as described in greater detail below in FIG. 3 ). As previously mentioned, in some embodiments, the anomalous-event-detection system 106 can monitor digital actions executed across a digital-content-synchronization platform in real (or near-real) time to identify digital actions and other data corresponding to the digital actions. 
See also para 66- For example, the anomalous-event-detection system 106 can utilize a machine-learning model to extract features of digital actions taken by users that represent latent features of the action sequence in which users take digital actions and/or other features of the digital action.)
calculate a current operating indicator from the live data stored in the environment data database; (see para 58-As further shown in act 204 of FIG. 2 , the anomalous-event-detection system 106 detects an anomalous action using an anomaly-detection model. In particular, based on parameters of the digital action, the anomalous-event-detection system 106 can utilize the anomaly detection model to generate an anomaly indicator. As shown in FIG. 2 , the anomaly indicator includes a confidence score that indicates a likelihood of the digital action being an anomalous action. In describing FIGS. 4 and 5 below, this disclosure describes the anomalous-event-detection system 106 utilizing an anomaly-detection model to generate an anomaly indicator based on parameters of a digital action. See para 39- Additionally, as used herein, the term “anomaly indicator” refers to a data object that includes metrics or text to identify or indicate a probability for whether a digital action is anomalous.)
create real-time alerts in response to determining a deviation of the current operating indicator and/or the forecasted operating indicator caused by a disruption in the environment; (see para 146-In particular, the anomalous-event-detection system 106 can utilize one or both of a sensitivity level 720 and a severity level 722 to determine whether the alert threshold 718 is satisfied for the anomaly indicator 708 and anomalous action type. If the alert threshold 718 is satisfied by the anomaly indicator 708 and anomalous action type, the anomalous-event-detection system 106 can provide the electronic communication including the anomalous action alert to the administrator device 710 and/or perform a remedial action 724. See para 40- As used herein, the term “anomalous action” refers to a digital action that is inconsistent with (or an outlier with respect to) a normal dataset (e.g., normal behavioral data) for a set of digital actions. )
determine at least one possible action based on the simulated operating indicator, the forecasted operating indicator and/or the current operating indictor; (see para 165 and fig 9- In addition to severity or sensitivity options, in one or more embodiments, the anomalous-event-detection system 106 can provide, for display within the graphical user interface 904 of the administrator device 902, selectable options 910 to toggle remedial actions that can be automatically performed upon detecting an anomalous action. For example, upon receiving a selection of the “recover files” option from the selectable options 910, the anomalous-event-detection system 106 can automatically perform a file recovery upon detecting an anomalous file deletion and/or file transfer. see para 175-178 and fig 11- As further shown in FIG. 11 , the series of acts 1100 include an act 1130 of utilizing an anomaly-detection model to generate an anomaly indicator. Additionally, the act 1140 can include performing a remedial action within a content management system in response to an anomalous action. Moreover, the act 1140 can include performing a remedial action by automatically recovering one or more deleted digital content items, restricting a user account corresponding to a digital action from performing additional digital actions, or modifying a user permission of a user account.)
trigger an action selected from the at least one possible action; (see para 26-27-Upon detecting an anomalous action and in response to the anomalous action, in some embodiments, the anomalous-event-detection system automatically performs (or provides selectable options to an administrator device for performing) a remedial action to neutralize or contain an anomalous action. As also part of the electronic communication, the anomalous-event-detection system can provide selectable options to respond to the anomalous action (e.g., select a remedial action to perform). See also fig 9 and para 165)
generate additional action data in response to the triggered action and to store the action data in the action data database. (see para 29- In addition to alerts or remedial actions, in some embodiments, the anomalous-event-detection system utilizes the data received from an administrator device to modify the anomaly-detection model. More specifically, in one or more embodiments, the anomalous-event-detection system receives indications of which selectable options (as the data) were selected from the administrator device. Based on the received selections to respond or ignore the detected anomalous action, the anomalous-event-detection system can modify the anomaly-detection model (e.g., adjust settings of the model, adjust machine learning parameters of the model). In particular, in one or more embodiments, the anomalous-event-detection system utilizes data (e.g., interactions) received from administrator devices responding to detected anomalous actions as training data (e.g., labels for the training data) for the anomaly-detection model. When the administrator device disregards the anomalous action alert or cancels a remedial action taken for the detected anomalous action…when the administrator device indicates a selection to recover the particular file deletion or confirms an automatic remedial action. See para 54- Additionally, as shown in FIG. 1 , the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12 ). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104.)
Andrabi does not teach forecast a development of the operating indicator without interaction based on historical data stored in the environment data database and simulate a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database.
In the related field of invention, Zhang teaches forecast a development of the operating indicator without interaction based on historical data stored in the environment data database; (see para 44- RL is used to automatically learn health indicators based on the historical operation and sensory data, with or without failure/fault labels. see para 88-90- In an example implementation to perform prediction tasks, the learned health indicators can be fed into regression models to build a prediction model of h(t). As a result, given { . . . , h(t−2), h(t−1), h(t)}, the method can predict the future health indicators {h(t+1), h(t+2), . . . , h(t+w)} where w is the failure prediction window. The FP problem can be converted to the anomaly detection problem and can be solved using the AD approach as described herein. Further, when only initial state st and a set of policies {πi(a|s=st)} are given, h(t) can be predicted using the regression model. Operation recommendations with the objective of maintaining higher health indicators can be achieved.)
simulate a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database; (see para 48-In an example implementation, MDP is used for the state machine with actions (e.g., a1, a2, a3) and rewards r. The example MDP model can discover evolution of health indicators automatically by learning values for N discrete sensory states as discussed in reference to FIGS. 5-7. The MDP model uses operation data (e.g., actions a1, a2, a3) separately from sensory data (e.g., states S1, S2, S3, S4). see para 52-In an example application phase, the learned health indicators can be combined with regression models to predict failures based on an initial state and a given policy. A health indicator sequence representing the evolution of equipment performance is used with a learned failure threshold and a confidence level to identify failures in the sequence. For example, when the initial state is matched to a health indicator sequence using the MDP modeling, FP and RUL tasks can be performed. When multiple policies or actions are given, the predicted health indicators are used to compare and find the health indicators in the sequence that lead to undesired health degradation and alert operators to avoid them in a timely manner.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting anomalous digital actions utilizing an anomaly-detection model, as disclosed by Andrabi, to include forecasting a development of the operating indicator without interaction based on historical data stored in the environment data database and simulating a development of the operating indicator for at least one action based on the historical data stored in the environment data database and action data stored in the action data database, as taught by Zhang, in the system of Andrabi for predictive maintenance with health indicators using reinforcement learning. The maintenance process is conducted by performing the necessary actions on the equipment to achieve one or more of these objectives. Equipment maintenance can be conducted under one of the following strategies: (a) Corrective Maintenance: taking corrective actions after the equipment or one of its components fails to retain its working status; (b) Preventive Maintenance (also known as time-based maintenance): performing maintenance actions on a regular basis regardless of the condition of the equipment; (c) Predictive Maintenance (also known as condition-based maintenance): continually monitoring the condition of the equipment to determine maintenance actions that need to be taken at certain times. Predictive maintenance can reduce the chance of unexpected failures, increase the equipment availability, and accordingly decrease the overall cost of the maintenance process (see para [0001-0003], Zhang).
Regarding claims 2 and 9
Andrabi further teaches a user interface configured to visualize at least one of the current operating indicators, the forecasted development of the operating indicator, the simulated development of the operating indicator, and the real-time alerts, (see para 23- Based on the anomaly indicator, the anomalous-event-detection system can display (e.g., on an administrator device) an electronic communication that indicates the digital action as anomalous. See para 156-Based on the anomaly indicator, the anomalous-event-detection system can display (e.g., on an administrator device) an electronic communication that indicates the digital action as anomalous. see para 159-In addition to such alert options, the anomalous-event-detection system 106 can also provide, for display within a graphical user interface of an administrator device, a severity label for an anomalous alert. For example, the anomalous-event-detection system 106 can utilize a severity score (or level) determined for an anomalous alert to label the anomalous alert with a severity label. To illustrate, the severity label can indicate whether the anomalous alert is a high severity alert and/or a low severity alert.) and
to display the at least on possible action and at least one selectable button for the at least one possible action, (see para 161-165 and fig 9-As further suggested above, in one or more embodiments, the anomalous-event-detection system 106 receives setting configurations from an administrator device. For example, FIG. 9 illustrates the anomalous-event-detection system 106 providing, for display within a graphical user interface 904 of an administrator device 902, selectable options to configure one or more settings of the anomalous-event-detection system 106. The anomalous-event-detection system 106 can utilize selections indicated on the graphical user interface 904 to configure how remedial actions are performed and/or how alerts are taken in response to detected anomalous actions. In addition to severity or sensitivity options, in one or more embodiments, the anomalous-event-detection system 106 can provide, for display within the graphical user interface 904 of the administrator device 902, selectable options 910 to toggle remedial actions that can be automatically performed upon detecting an anomalous action.)
wherein the action triggering module is further configured to receive the selected action from the user interface in response to a selection of the respective selectable button. (see para 29- More specifically, in one or more embodiments, the anomalous-event-detection system receives indications of which selectable options (as the data) were selected from the administrator device. See para 120- In addition to remedial actions, in one or more embodiments, the anomalous-event-detection system 106 provides, for display on a graphical user interface of an administrator device, an electronic communication indicating ransomware based on a sequence of server-side digital actions. See also para 179- In certain instances, the act 1140 can include providing, for display on a graphical user interface of an administrator device, a selectable option for a remedial action in response to the digital action)
Regarding claims 3 and 10
Andrabi further teaches wherein the recommendation engine is further configured to select an action from the at least one possible action based on the simulated operating indicator, and the action triggering module is further configured to receive the selected action from the recommendation engine. (see para 26-27- Upon detecting an anomalous action and in response to the anomalous action, in some embodiments, the anomalous-event-detection system automatically performs (or provides selectable options to an administrator device for performing) a remedial action to neutralize or contain an anomalous action. As also part of the electronic communication, the anomalous-event-detection system can provide selectable options to respond to the anomalous action (e.g., select a remedial action to perform). see para 165 and fig 9- In addition to severity or sensitivity options, in one or more embodiments, the anomalous-event-detection system 106 can provide, for display within the graphical user interface 904 of the administrator device 902, selectable options 910 to toggle remedial actions that can be automatically performed upon detecting an anomalous action. For example, upon receiving a selection of the “recover files” option from the selectable options 910, the anomalous-event-detection system 106 can automatically perform a file recovery upon detecting an anomalous file deletion and/or file transfer. see para 175-178 and fig 11- As further shown in FIG. 11, the series of acts 1100 include an act 1130 of utilizing an anomaly-detection model to generate an anomaly indicator. Additionally, the act 1140 can include performing a remedial action within a content management system in response to an anomalous action.
Moreover, the act 1140 can include performing a remedial action by automatically recovering one or more deleted digital content items, restricting a user account corresponding to a digital action from performing additional digital actions, or modifying a user permission of a user account.)
Regarding claims 4 and 11
Andrabi further teaches a rule database configured to store deviation scenario indicators, (see para 28-To determine whether to transmit an alert about a detected anomalous action, in some embodiments, the anomalous-event-detection system determines and compares a severity level and a sensitivity level corresponding to an anomaly indicator to an alert threshold. See para 164-In particular, the anomalous-event-detection system 106 can utilize a machine-learning model that is trained to determine a sensitivity threshold based on characteristics of the digital action, a user account, historical reactions to anomalous actions, and/or an organization corresponding to the user account. See para 54- Additionally, as shown in FIG. 1 , the system 100 includes the databases 110. The databases 110 can include, but are not limited to, server devices, cloud service computing devices, or any other types of computing devices (including those explained below with reference to FIG. 12 ). In one or more embodiments, the databases 110 can include various stored data of the content management system 104. For example, the databases 110 can include multiple sources of data that manage various aspects (or components) of the content management system 104.)
wherein the alerting module is configured to determine the deviation by comparing the current operating indicator and/or the forecasted operating indicator with at least one of the deviation scenario indicators. (See para 28- To determine whether to transmit an alert about a detected anomalous action, in some embodiments, the anomalous-event-detection system determines and compares a severity level and a sensitivity level corresponding to an anomaly indicator to an alert threshold. For example, the anomalous-event-detection system can determine a severity level indicating the importance (e.g., the impact or harmfulness) of an anomalous action based on historical data with alerts for similarly detected anomalous actions. Moreover, the anomalous-event-detection system can determine a sensitivity level indicating if the anomaly-detection model detected an anomalous action with a threshold confidence prior to transmitting an anomalous action alert or performing a remedial action. See also para 39-40-Additionally, as used herein, the term “anomaly indicator” refers to a data object that includes metrics or text to identify or indicate a probability for whether a digital action is anomalous. As used herein, the term “anomalous action” refers to a digital action that is inconsistent with (or an outlier with respect to) a normal dataset (e.g., normal behavioral data) for a set of digital actions.)
Regarding claims 5 and 12
Andrabi further teaches wherein the action triggering module is further configured to trigger the selected action on an external computing platform through a dedicated interface. (see para 60- the anomalous-event-detection system 106 can monitor digital actions executed across a digital-content-synchronization platform in real (or near-real) time to identify digital actions and other data corresponding to the digital actions. See para 176-Furthermore, as shown in FIG. 11, the series of acts 1100 include an act 1140 of performing an action based on the anomaly indicator. See para 194-Communication interface 1210 can include hardware, software, or both. In any event, communication interface 1210 can provide one or more interfaces for communication (such as, for example, packet-based communication) between computing device 1200 and one or more other computing devices or networks.)
Regarding claims 6 and 13
Andrabi further teaches wherein the data processing module is further configured to differentiate normal operation trends of the historical data from trends of the historical data resulting from actions and create a link of the historical data with the respective action data. (see para 40-41- As used herein, the term “anomalous action” refers to a digital action that is inconsistent with (or an outlier with respect to) a normal dataset (e.g., normal behavioral data) for a set of digital actions. As also used herein, the term “context for identifying a digital action as anomalous” refers to information explaining or providing a reason for classifying or identifying a digital action as anomalous or explaining the circumstances of the digital action identified as anomalous. To illustrate, context can include information of a user account corresponding to the anomalous action, a time of the anomalous action, a reason for identifying the digital action as anomalous, or information describing or identifying historical behavior of the user account (e.g., historically normal behavior of a user). See para 125- In particular, the anomalous-event-detection system 106 can compare parameters corresponding to a digital action (e.g., a number of digital content items affected by a digital action and/or a size of the files affected by the digital action) to a statistical model of historical digital actions to determine whether the digital action is anomalous (e.g., an outlier action).)
Regarding claims 7 and 14
Andrabi does not teach wherein the prediction engine is configured to determine a dependency between a first operating indicator and a second operating indicator of a plurality of operating indicators for forecasting and/or simulating the development of the first operating indicator and/or the second operating indicator.
However, Zhang further teaches wherein the prediction engine is configured to determine a dependency between a first operating indicator and a second operating indicator of a plurality of operating indicators for forecasting and/or simulating the development of the first operating indicator and/or the second operating indicator. (See para 43- The health indicator sequences 160 are derived for each state of the equipment as described in reference to FIGS. 2A and 2B. During the application phase, the learned value function 150 can be applied to observed equipment states to construct a sequence of health indicators that can be used to perform AD, FP, RUL, and OR as described in reference to FIG. 4. see para 48-In an example implementation, MDP is used for the state machine with actions (e.g., a1, a2, a3) and rewards r. The example MDP model can discover evolution of health indicators automatically by learning values for N discrete sensory states as discussed in reference to FIGS. 5-7. The MDP model uses operation data (e.g., actions a1, a2, a3) separately from sensory data (e.g., states S1, S2, S3, S4). see para 52-In an example application phase, the learned health indicators can be combined with regression models to predict failures based on an initial state and a given policy. A health indicator sequence representing the evolution of equipment performance is used with a learned failure threshold and a confidence level to identify failures in the sequence. For example, when the initial state is matched to a health indicator sequence using the MDP modeling, FP and RUL tasks can be performed. When multiple policies or actions are given, the predicted health indicators are used to compare and find the health indicators in the sequence that lead to undesired health degradation and alert operators to avoid them in a timely manner. 
see para 88-90- In an example implementation to perform prediction tasks, the learned health indicators can be fed into regression models to build a prediction model of h(t). As a result, given { . . . , h(t−2), h(t−1), h(t)}, the method can predict the future health indicators {h(t+1), h(t+2), . . . , h(t+w)} where w is the failure prediction window. The FP problem can be converted to the anomaly detection problem and can be solved using the AD approach as described herein.)
Examiner note: The health indicator (first operating indicator) is dependent on the specific operational state (second operating indicator) of the equipment. By using an MDP to derive sequences for each state, the system determines how a specific state (input) causes a specific progression in the health indicator (output/dependency).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting anomalous digital actions utilizing an anomaly-detection model as disclosed by Andrabi to include wherein the prediction engine is configured to determine a dependency between a first operating indicator and a second operating indicator of a plurality of operating indicators for forecasting and/or simulating the development of the first operating indicator and/or the second operating indicator, as taught by Zhang, in order to provide predictive maintenance with health indicators using reinforcement learning. Predictive maintenance reduces the chance of unexpected failures, increases equipment availability, and accordingly decreases the overall cost of the maintenance process (see para [0001]-[0003], Zhang).
Conclusion
12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bulusu et al. US 10853867 B1
Discussing a method for providing action recommendations to a user that are likely to result in performance of a high-value action. The user is compared to one or more other users in order to identify high-value actions for that user. Once at least one high-value action has been identified, a sequence of actions may be generated to include that high-value action using prediction model data that includes probability information. The sequence of actions is then assessed to determine a gateway action within the sequence of actions that is likely to be performed by the user and has a high likelihood of resulting in subsequent performance of the high-value action.
13. Claims 1-15 are rejected.
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PURSOTTAM GIRI whose telephone number is (469) 295-9101. The examiner can normally be reached 7:30 AM-5:30 PM, Monday to Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, RENEE CHAVEZ, can be reached at 571-270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PURSOTTAM GIRI/
Examiner, Art Unit 2186
/RENEE D CHAVEZ/Supervisory Patent Examiner, Art Unit 2186