DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in response to the amendment filed on 11/17/2025 for application 18/368,384. Claims 1, 3 – 5, and 9 – 17 are pending and have been examined.
Claims 1, 9, 10, and 12 – 17 are amended.
Claims 2 and 6 – 8 are canceled.
Response to Amendment
Applicant’s amendment filed on 11/17/2025 has been entered.
Response to Arguments
Applicant's remarks filed on 11/17/2025 have been fully considered but are not persuasive.
Regarding the claim rejection under 35 U.S.C. § 101, Applicant states that the “claimed approach provides an improvement in a technology or technical field, and is therefore within the statutory reach of 35 U.S.C. § 101”, that “the amended claims now clarify the generation of a data structure defining nodes and levels, and transitioning nodes in response to evaluation of a sensed intrusion possibility, for allowing probabilistic evaluation and subsequent action”, and thus “demonstrates a practical application”. Examiner respectfully disagrees. Examiner notes that “An inventive concept ‘cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself’”. Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016); MPEP 2106.05. The recited steps of generating nodes, identifying a successive state, designating nodes to levels, comparing nodes, evaluating/computing data, establishing/determining transitions of nodes, and evaluating a probability can practically be performed in the human mind, with or without physical aid, and thus fall under the mental processes grouping of abstract ideas. The asserted improvement is to the probability evaluation, which is the identified abstract idea, not to a technology.
Regarding the claim rejection under 35 U.S.C. §§ 102/103, Applicant states that the Wang reference “teaches a system specific to Underwater Unmanned Vehicles”. Examiner notes that an unmanned vehicle is considered an autonomous robotic system and is thus analogous to the claimed “autonomous system” of Claim 1 and “robotic system” of Claim 17.
Applicant further states that “Wang '087 formula refers to a probability of an event, not a signal of intrusion. A sensor signal is a definite, physical external occurrence, not a mere computed probability” and that “Wang '087 does not show, teach or disclose, alone or in combination, the claimed steps of gathering sensor input, loading or adding the sensor based input as a node into a graph representation, and then computing the risk metric.” Examiner notes that DiLuoffo teaches using “Dynamic BN where temporal events are represented in the model” (page 14), “having the capability to reason about new evidence gained from sensors” (page 24), and mentions at least an “Intrusion Detection System (IDS) sensor” (page 14), thus fulfilling the claimed limitation of “receiving, from one or more sensors, a signal indicative of an intrusion; evaluating at one of the nodes, the signal”. Wang also teaches intrusion analysis in a robotic system using a dynamic BN technique and, more specifically, describes a “time state transition probability” (translation page 8) calculated for the transition to successive nodes. Thus, the combination renders obvious the claimed limitation.
The remaining arguments are essentially the same as those addressed above and are unpersuasive for at least the same reasons. The corresponding rejections are therefore maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3 – 5, and 9 – 17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1,
Step 1 Analysis
Claim 1 is directed to a method, which is one of the statutory categories.
Step 2A Prong One Analysis:
Claim 1 recites the abstract ideas in the following limitations:
developing a model for identifying a transition from a prior machine state to a current machine state;
generating a set of nodes, each node of the set of nodes indicative of a relevant state;
identifying a set of nodes indicative of a successive state;
designating, for each node, a level, the level indicative of an intrusion point in the autonomous system, the levels including system, hardware, software, AI robustness and supply chain, each level configured for determining an intrusion probability associated with an attack directed to the respective level, the probability based on: i) an assurance value of the level, ii) a potential reward to an adversary, iii) a probability of adversary exploit damage, and iv) a probability of an adversary taking action to exploit;
comparing the generated set of nodes with the nodes indicative of the successive state to identify a probability of a security intrusion;
evaluating, at one of the nodes, the signal;
computing and establishing a transition to the successive node based on a result of the evaluation;
evaluating a probability that the current machine state is indicative of a breach.
The steps of developing, identifying, designating, comparing, and evaluating recite observation, evaluation, and judgment, can practically be performed in the human mind with or without physical aid, and thus fall under the mental processes grouping of abstract ideas.
The steps of generating nodes and computing/establishing a transition to a successive node, under the broadest reasonable interpretation (BRI), involve drawing a graph and calculating on paper, which also fall under the observation, evaluation, and judgment mental processes and may involve mathematical calculations. Thus, these steps also fall under abstract ideas.
Thus, the claim recites a judicial exception (an abstract idea) and requires further analysis under Step 2A Prong Two.
Step 2A Prong Two Analysis:
Claim 1 recites the following additional elements along with the abstract ideas:
an autonomous system;
deploying the model in an autonomous system;
receiving, from one or more sensors, a signal indicative of an intrusion.
The additional element of an autonomous system is recited at a high level of generality and generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).
The steps of receiving and deploying are recited at a high level of generality and add insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)).
Claim 1 therefore does not integrate the abstract idea into a practical application and is directed to an abstract idea.
Step 2B Analysis:
The additional element of an autonomous system is recited at a high level of generality and generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).
The steps of receiving and deploying are well-understood, routine, and conventional activity recognized in MPEP 2106.05(d)(II) - receiving or transmitting data over a network.
Considered individually and as an ordered combination, the additional elements do not amount to significantly more than the abstract idea. Claim 1 therefore does not recite an inventive concept.
Regarding Claims 3 – 5 and 9 – 16,
Claims 3 – 5 and 9 – 16 fail to remedy these deficiencies and are rejected for the same reasons.
Regarding Claim 17, Claim 17 is the system claim corresponding to Claim 1. The recited element of using a memory to store nodes and relations in a Bayesian network is recited at a high level of generality and amounts to no more than a recitation of the words "apply it" (or an equivalent), or no more than mere instructions to implement an abstract idea or other exception on a computer (MPEP 2106.05(f)). Thus, Claim 17 is rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3 – 5, and 9 – 17 are rejected under 35 U.S.C. 103 as being unpatentable over DiLuoffo et al. (hereinafter DiLuoffo), “A Survey on Trust Metrics for Autonomous Robotic Systems”, in view of Wang et al. (hereinafter Wang), CN109711087.
Regarding Claim 1, DiLuoffo discloses: A method for internal cognitive assurance of an autonomous system (page 1, “security assurance … robotic (and other) systems”), comprising:
developing a model (page 24, “A probabilistic approach to analysis using BNs (model) provides a natural way to reason about uncertainty”) for identifying a transition from a prior machine state to a current machine state (page 5, “We summarize some of the system search results that are attractive to incorporate into our holistic security model. These findings include the need to: support attack paths; determine behavioral states of an autonomous system, and integrate the concepts of trust, resilience, and agility”; page 24, “BNs provides a natural way to reason about uncertainty. BN-based models allow for efficient factorization of the set of system states (machine state)”; page 25, “conditional probability given the parents”; the parent-child probability is the link/transition between the states);
generating a set of nodes, each node of the set of nodes indicative of a relevant state; identifying a set of nodes indicative of a successive state (refer to the mapping above, Bayesian model include nodes represents a system state with parent/child relationship);
designating, for each node, a level, the level indicative of an intrusion point in the autonomous system, the levels including system, hardware, software, AI robustness and supply chain (page 25, “In order to represent an autonomous robotic system architecture and assess the security of it, we have discussed the different layers (system, hardware, software, Cognitive/AI, and supplier chain) as independent trust metrics”), each level configured for determining an intrusion probability associated with an attack directed to the respective level, the probability based on: i) an assurance value of the level, ii) a potential reward to an adversary, iii) a probability of adversary exploit damage, and iv) a probability of an adversary taking action to exploit (DiLuoffo, page 25, “TM = LV*AER*AED*ATA, where TM = trust metric, LV = level value (assurance value of the level), AER = probability of adversary exploit reward, AED = probability of adversary exploit damage, and ATA = likelihood of an adversary taking action to exploit”).
deploying the model in an autonomous system (refer to the mapping above & page 1, “autonomous robotic system”; the model is to be deployed/operated in an autonomous system); and
receiving, from one or more sensors, a signal indicative of an intrusion; evaluating, at one of the nodes, the signal (refer to the mapping above & page 14, “Intrusion Detection System (IDS) sensor providing false or negative readings or a file system integrity checker such as Tripwire that alerts to a file being changed”, “uncertainty is related to an attack being successful, the uncertainty of an attacker’s path choice, and/or the uncertainty from imperfect IDS sensors”; page 24, “BN model … technique is well suited for autonomous robotics system because of their nature in uncertainty and having the capability to reason about new evidence gained from sensors about tasks or environment”; BN model is a graph model with nodes that representing states and calculate probabilities of intrusion. Some of the input are sensor signals);
evaluating a probability that the current machine state is indicative of a breach (page 23, “BN are Probabilistic Graphical Models … that level of belief can be formulated to indicate the level of trust that is placed in a system”; trust is the inverse of breach; page 24, “Posterior is after an observation has occurred, Likelihood is the probability that the event will happen”, “BN model can update the posterior probabilities of other states of the system when evidence is set”).
DiLuoffo does not explicitly teach:
comparing the generated set of nodes with the nodes indicative of the successive state to identify a probability of a security intrusion
computing and establishing a transition to the successive node based on a result of the evaluation
Wang, in the same field of endeavor, explicitly teaches:
comparing the generated set of nodes with the nodes indicative of the successive state to identify a probability of a security intrusion (Wang, formula 1 & translation page 7, “P (x | y) is the probability of x event occurrence under the probability of y event occurrence, and P (xy) is the probability of x and y events occurring simultaneously”; the probability is calculated by P(xy) over P(y), which is a form of comparison)
computing and establishing a transition to the successive node based on a result of the evaluation (Wang, translation page 6, “dynamic threat situation assessment method comprises … adding time state transition probability on the basis of a static Bayesian network, establishing a dynamic Bayesian network model”; i.e., based on the state transition probability, the system can compute/calculate the threat of the successive nodes)
DiLuoffo and Wang both teach the use of a Bayesian network for threat assessment of an autonomous robot and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further include the conditional probability calculation of Wang in the system of DiLuoffo to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to obtain the conditional probability.
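For reference, the two cited calculations, DiLuoffo's trust metric TM = LV*AER*AED*ATA and Wang's conditional probability P(x|y) = P(xy)/P(y), can be sketched as follows. This is a minimal illustration only; the function names and numeric values are hypothetical and appear in neither reference:

```python
# Illustrative sketch only; names and values are hypothetical.

def trust_metric(lv: float, aer: float, aed: float, ata: float) -> float:
    """DiLuoffo trust metric TM = LV * AER * AED * ATA:
    LV  = level value (assurance value of the level),
    AER = probability of adversary exploit reward,
    AED = probability of adversary exploit damage,
    ATA = likelihood of an adversary taking action to exploit."""
    return lv * aer * aed * ata

def conditional_probability(p_xy: float, p_y: float) -> float:
    """Wang formula 1: P(x | y) = P(xy) / P(y)."""
    if p_y == 0:
        raise ValueError("P(y) must be nonzero")
    return p_xy / p_y

# Hypothetical values for a single level and a single state transition.
tm = trust_metric(lv=0.8, aer=0.5, aed=0.4, ata=0.6)
p_next = conditional_probability(p_xy=0.06, p_y=0.3)
```

The sketch only shows how the two formulas compose: the trust metric scores a level, while the conditional probability relates the joint occurrence of two events to the occurrence of the conditioning event.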
Regarding Claim 3, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: the model includes a Bayesian Network (BN) (page 24, “Bayesian Network”).
Regarding Claim 4, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: the autonomous system includes an untethered robotic element in free space (page 1, “Autonomous robotic systems, such as autonomous vehicles”; an autonomous vehicle is untethered and operates in free space).
Regarding Claim 5, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: a plurality of levels in the autonomous system, each level susceptible to an intrusion (page 25, “In order to represent an autonomous robotic system architecture and assess the security of it, we have discussed the different layers (system, hardware, software, Cognitive/AI, and supplier chain) as independent trust metrics”).
Regarding Claim 9, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: for each level, denoting one or more nodes, each node representing a variable concerning an intrusion and a causal relation to at least one other node, the causal relation being either a cause or effect of an intrusion based on the variable (DiLuoffo, page 23, “BN are Probabilistic Graphical Models (PGM) that represent causality inference using variable conditional dependence.”).
Regarding Claim 10, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: each node includes a CPT (Conditional Probability Table) indicative of a transition to a successive node, the CPT generated based on the intrusion probability corresponding to the level on which the node resides (DiLuoffo, page 25, “The BN will provide the casual inference by linking these components and the values for the conditional probability tables.”).
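As a minimal illustration of the CPT-driven transition recited here (the states, values, and table structure are hypothetical and not taken from DiLuoffo):

```python
# Hypothetical CPT for a single node: probability of transitioning to the
# successive "compromised" state, conditioned on the node's current state.
cpt = {
    "secure": 0.05,
    "under_attack": 0.60,
}

def transition_probability(current_state: str) -> float:
    """Look up the CPT entry for the node's current state."""
    return cpt[current_state]
```

In a full Bayesian network each node would carry one such table per combination of parent states; the sketch keeps a single conditioning variable for brevity.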
Regarding Claim 11, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: defining, for each level, a score indicative of the probability of intrusion for each node on the respective level (DiLuoffo, Table 4; each level has a value range of [0, 1], which represents a probability).
Regarding Claim 12, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: for the system level, generating a system score based on the assurance value of the system level, a cost for development to achieve that level, a time taken to achieve the level (DiLuoffo, page 7, “Common Criteria (CC) … Evaluation Assurance Levels (EAL) are from 1 to 7 … Depending on the EAL, the time and cost can range considerably.”), a collateral damage resulting from the intrusion, a potential reward to an adversary, and a likelihood of an adversary taking action to exploit (DiLuoffo, refer to the mapping in Claim 7).
Regarding Claim 13, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: for the hardware level, generating a hardware score based on a hardware design trust metric (DiLuoffo, page 10, “determining a design integrity trust metric for hardware design”; page 11, “having a trust metric based on hardware design properties provides a degree of assurance”), a collateral damage resulting from the intrusion, a potential reward to an adversary, and a likelihood of an adversary taking action to exploit (DiLuoffo, refer to the mapping in Claim 7).
Regarding Claim 14, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: for the software level, generating a software score based on a technical impact from an intrusion (DiLuoffo, fig. 8 & page 13, “exploitability defines the complexity to achieve the exploit and the impact defines the result of the exploit”), a collateral damage resulting from the intrusion, a potential reward to an adversary, and a likelihood of an adversary taking action to exploit (DiLuoffo, refer to the mapping in Claim 7).
Regarding Claim 15, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: for the supply chain level, generating a supplier score based on a supplier trust metric (DiLuoffo, page 20, “the supplier is an important entity to incorporate into the holistic security architecture”; “defining supplier trust metrics”), a collateral damage resulting from the intrusion, a potential reward to an adversary, and a likelihood of an adversary taking action to exploit (DiLuoffo, refer to the mapping in Claim 7).
Regarding Claim 16, the combination of DiLuoffo and Wang renders obvious all the limitations of Claim 1. DiLuoffo further teaches: for the AI robustness level, generating an AI robustness score based on a distance function of an AI implementation employed (DiLuoffo, page 19, “Since each defense technique has a distance/error from the certified area (boundary) where perturbations can be detected, we can consider these values as trust metrics”), a collateral damage resulting from the intrusion, a potential reward to an adversary, and a likelihood of an adversary taking action to exploit (DiLuoffo, refer to the mapping in Claim 7).
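Claims 12 – 16 recite a common per-level pattern: a level-specific metric combined with collateral damage, a potential reward to an adversary, and a likelihood of an adversary taking action. A minimal sketch, assuming a multiplicative combination and hypothetical values (neither the combination rule nor the numbers appear in the references):

```python
# Hypothetical per-level score: a level-specific metric multiplied by
# collateral damage, adversary reward, and likelihood of adversary action.
def level_score(level_metric: float, damage: float,
                reward: float, likelihood: float) -> float:
    return level_metric * damage * reward * likelihood

# E.g., a hardware score from a hardware design trust metric (Claim 13).
hardware_score = level_score(level_metric=0.9, damage=0.5,
                             reward=0.4, likelihood=0.3)
```

The same function would be called once per level (system, hardware, software, supply chain, AI robustness), swapping in the level-specific metric each time.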
Regarding Claim 17, Claim 17 is the autonomous robotic system claim corresponding to the combination of Claims 1 – 3. Wang further teaches: a memory and a processor (translation page 3, “the field computer”; i.e., the system is implemented as a computer with processors and memory, where the instructions and data are stored in the memory).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further implement the system as program logic stored in a memory as taught by Wang to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to implement an inference model.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIEN MING CHOU whose telephone number is (571)272-9354. The examiner can normally be reached Monday- Friday 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HELAL ALGAHAIM can be reached on (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIEN MING CHOU/Examiner, Art Unit 3666
/HELAL A ALGAHAIM/SPE, Art Unit 3666