DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), of which papers have been placed in the file wrapper.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a system for monitoring a function for providing data, the function model being configured to determine, and the system being configured to perform the following steps” in claims 20-24.
The “system for monitoring” and the “function model” are disclosed in applicants’ specification on p. 8: “it may be provided, for example, that the function model is executed on an application processor, which is safe according to ISO standard 26262 and includes a safe operating system, and the monitoring model is located on the same system”.
Because these claim limitation(s) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 13-15, 18, and 20-24 are rejected under 35 U.S.C. 103 as being unpatentable over Olgiati et al. (US Pub. No. 2021/0097432 A1) in view of Baum et al. (US Pub. No. 2022/0100601 A1).
Regarding claim 13, Olgiati discloses, a computer-implemented method for monitoring a function model for providing data for at least one function (See Olgiati ¶53, “FIG. 5 is a flowchart illustrating a method for automated problem detection for machine learning models, according to some embodiments. As shown in 500, inferences (predictions) may be generated using a machine learning model and based (at least in part) on inference input data.”)
the function model being configured to determine at least one intermediate result based on input data in at least one first processing step, and the function model being configured to determine an output of the function model based on the intermediate result in at least one further processing step, (See Olgiati ¶53, “As shown in 510, data may be collected that is associated with the use of the machine learning model. The collected inference data may represent data associated with the use of the machine learning model to produce inferences over some window of time (e.g., a twenty-four hour period). The data collection may, for individual inference requests, collect inference data such as the inference input data, the resulting inference output, and/or various elements of model metadata (e.g., a model identifier, model version identifier, endpoint identifier, a timestamp, a container identifier, and so on). The data collection may, for individual inference requests, collect model data artifacts representing intermediate results before the final prediction is generated.”)
the method comprising the following steps: providing the intermediate result and the output of the function model to a monitoring model for anomaly detection; carrying out anomaly detection based on the intermediate result and the output of the function model; (See Olgiati ¶57, “As shown in 550, the data may be analyzed in an attempt to detect one or more types of problems, e.g., with the model or the input data to the model. As will be discussed in greater detail with respect to FIG. 6 through FIG. 11, the analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift.”)
and validating a functionality of the function model and/or a functionality of the monitoring model. (See Olgiati ¶58, “FIG. 6 illustrates further aspects of the example system environment for automated problem detection for machine learning models, including golden example discrepancy analysis for a machine learning model, according to some embodiments. In one embodiment, the analysis 172 may automatically detect problems or anomalies such as models that fail verified or “golden” examples.”)
Olgiati discloses the above limitations but fails to disclose a computer-implemented method for monitoring a function model for providing data for at least one function of a computer-controlled machine, the at least one function including an image recognition algorithm.
However, Baum discloses, a computer-implemented method for monitoring a function model for providing data for at least one function of a computer-controlled machine, (See Baum ¶345, “In one embodiment, the NN processor incorporates several functional safety concepts which reduce its risk of failure that occurs during operation from going unnoticed. The safety mechanisms disclosed herein function to detect and promptly flag (i.e. report) the occurrence of an error and with some of the safety mechanisms correction of the error is also possible. These features are highly desired or even mandatory in certain applications such as use in autonomous vehicles.”)
the at least one function including an image recognition algorithm, (See Baum ¶13, “Examples include industrial factories where machine vision is used on the assembly line in the manufacture of goods, autonomous vehicles where machine vision is used to detect objects in the path of and surrounding the vehicle, etc.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the detection of errors in the neural network of an autonomous vehicle, as suggested by Baum, into Olgiati’s detection of anomalies and outliers in a neural network. This could be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that detecting faults in the neural networks of autonomous vehicles is critical for ensuring passenger safety, preventing accidents, and maintaining system reliability.
Regarding claim 14, Olgiati and Baum disclose, the method as recited in claim 13, further comprising: determining a reference output based on a reference input using the function model; and checking the reference output including comparing the reference output to ground truth data, using the monitoring model. (See Olgiati ¶58, “In one embodiment, the analysis 172 may automatically detect problems or anomalies such as models that fail verified or “golden” examples. Golden example discrepancy analysis 672 may use a repository of testing data 114 associated with one or more golden examples. The testing data 114 may be regularly executed in batch or against the endpoint to check that the model 135 continues to work as expected with the testing data 114, e.g., by comparing results of the inference production 152A to expected results 115 of the testing data.”)
Regarding claim 15, Olgiati and Baum disclose, the method as recited in claim 14, wherein the determination of a reference output based on a reference input using the function model, and the comparing of the reference output to the ground truth data using the monitoring model, are executed periodically. (See Olgiati ¶58, “The testing data 114 may be regularly executed in batch or against the endpoint to check that the model 135 continues to work as expected with the testing data 114, e.g., by comparing results of the inference production 152A to expected results 115 of the testing data.”)
Regarding claim 18, Olgiati and Baum disclose, the method as recited in claim 13, further comprising: monitoring a communication between a first instance on which the function model is executed and a further instance on which the monitoring model is executed. (See Olgiati ¶51, “In some embodiments, two different versions of a model may be trained, tested, and used to produce inferences in parallel or serially. … The two versions may be used to produce inferences 156 during the same window of time or during different windows of time. In one embodiment, both models may be applied to the same set of inference input data 116, e.g., when two alternative versions are tested in parallel. The collected inference data may be stored in the data store 160 and used by the analysis system 170 to perform a comparison 472 of the two model versions.” Whereby serial inference and inference comparison requires monitoring of communication between instances.)
Regarding claim 20, Olgiati and Baum disclose, a system for monitoring a function model for providing data for at least one function (See Olgiati ¶53, “FIG. 5 is a flowchart illustrating a method for automated problem detection for machine learning models, according to some embodiments. As shown in 500, inferences (predictions) may be generated using a machine learning model and based (at least in part) on inference input data.”
Further see Olgiati ¶104, “In various embodiments, computing device 3000 may be a uniprocessor system including one processor or a multiprocessor system including several processors.”)
the function model being configured to determine at least one intermediate result based on input data in at least one first processing step, and the function model being configured to determine an output of the function model based on the intermediate result in at least one further processing step, (See Olgiati ¶53, “As shown in 510, data may be collected that is associated with the use of the machine learning model. The collected inference data may represent data associated with the use of the machine learning model to produce inferences over some window of time (e.g., a twenty-four-hour period). The data collection may, for individual inference requests, collect inference data such as the inference input data, the resulting inference output, and/or various elements of model metadata (e.g., a model identifier, model version identifier, endpoint identifier, a timestamp, a container identifier, and so on). The data collection may, for individual inference requests, collect model data artifacts representing intermediate results before the final prediction is generated.”)
the system being configured to perform the following steps using a monitoring model: providing the intermediate result and the output of the function model to a monitoring model for anomaly detection, carrying out anomaly detection based on the intermediate result and the output of the function model, (See Olgiati ¶57, “As shown in 550, the data may be analyzed in an attempt to detect one or more types of problems, e.g., with the model or the input data to the model. As will be discussed in greater detail with respect to FIG. 6 through FIG. 11, the analysis may automatically detect problems or anomalies such as models that fail golden examples, outliers in input data, inference data distribution changes, label distribution changes, label changes for individual entities, ground truth discrepancies, and/or other forms of data drift or model drift.”)
and validating a functionality of the function model and/or a functionality of the monitoring model, (See Olgiati ¶58, “FIG. 6 illustrates further aspects of the example system environment for automated problem detection for machine learning models, including golden example discrepancy analysis for a machine learning model, according to some embodiments. In one embodiment, the analysis 172 may automatically detect problems or anomalies such as models that fail verified or “golden” examples.”)
at least those of the steps that are executed using the monitoring model being executed on at least one first instance of the system, and the function model being executed on at least one second instance of the system. (See Olgiati ¶51, “In some embodiments, two different versions of a model may be trained, tested, and used to produce inferences in parallel or serially. For example, one version of a model may be represented using trained model 125A and tested model 135A, and another version of the model may be represented using trained model 125B and tested model 135B. … The two versions may be deployed to one or more endpoints such as endpoint 150A and 150B. For example, the model 135A may be used for inference production 152A and data collection 154A at one endpoint 150A, and the model 135B may be used concurrently for inference production 152B and data collection 154B at another endpoint 150B. The two versions may be used to produce inferences 156 during the same window of time or during different windows of time.”)
Olgiati discloses the above limitations but fails to disclose a function model for providing data for at least one function of a computer-controlled machine, the at least one function including an image recognition algorithm.
However, Baum discloses, a function model for providing data for at least one function of a computer-controlled machine, (See Baum ¶345, “In one embodiment, the NN processor incorporates several functional safety concepts which reduce its risk of failure that occurs during operation from going unnoticed. The safety mechanisms disclosed herein function to detect and promptly flag (i.e. report) the occurrence of an error and with some of the safety mechanisms correction of the error is also possible. These features are highly desired or even mandatory in certain applications such as use in autonomous vehicles.”)
the at least one function including an image recognition algorithm, (See Baum ¶13, “Examples include industrial factories where machine vision is used on the assembly line in the manufacture of goods, autonomous vehicles where machine vision is used to detect objects in the path of and surrounding the vehicle, etc.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the detection of errors in the neural network of an autonomous vehicle, as suggested by Baum, into Olgiati’s detection of anomalies and outliers in a neural network. This could be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that detecting faults in the neural networks of autonomous vehicles is critical for ensuring passenger safety, preventing accidents, and maintaining system reliability.
Regarding claim 21, Olgiati and Baum disclose, the system as recited in claim 20, wherein at least the first instance meets an Automotive Safety Integrity Level (ASIL) according to ISO standard 26262. (See Baum ¶345, “In one embodiment, the NN processor incorporates several functional safety concepts which reduce its risk of failure that occurs during operation from going unnoticed. The safety mechanisms disclosed herein function to detect and promptly flag (i.e. report) the occurrence of an error and with some of the safety mechanisms correction of the error is also possible. These features are highly desired or even mandatory in certain applications such as use in autonomous vehicles as dictated by the ISO 26262 standard.”)
Regarding claim 22, Olgiati and Baum disclose, the system as recited in claim 20, wherein the first instance and the second instance are each an instance of a common System-on-Chip (SoC), an operating system that is assigned to the first instance being executable on a separate computing core of the System-on-Chip. (See Baum ¶301-302, “A diagram illustrating a first example multi-NN processor SoC system of the present invention is shown in FIG. 24. In one embodiment, the NN processor core (or engine) as described supra and shown in FIGS. 4 and 5 can be replicated and implemented as a System on Chip (SoC). The intellectual property (IP) for the NN processor core can be used to implement a monolithic integrated circuit (IC). Alternatively, physical NN processor core dies can be integrated and implemented on an SoC. [0302] Implemented as a monolithic semiconductor or an SoC, the NN processor SoC, generally referenced 700, comprises a plurality of NN processor cores 706 interconnected via an internal bus 710, one or more external interface circuits 702, one or more ‘external’ L5 memory circuits 708, bootstrap and preprocess circuit 704, and postprocess circuit 712.”)
Regarding claim 23, Olgiati and Baum disclose, the system as recited in claim 20, wherein a hypervisor virtualizes a hardware level, and the first instance is a first domain provided by the hypervisor and the second instance is a further domain provided by the hypervisor. (See Olgiati ¶47, “The compute resources may, in some embodiments, be offered to clients in units called “instances,” such as virtual or physical compute instances. A virtual compute instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).”)
Regarding claim 24, Olgiati and Baum disclose, the system as recited in claim 20, wherein: the system is used in a computer-controlled machine, the computer-controlled machine being: i) an E/E system of a motor vehicle for providing functions of autonomous driving, semiautonomous driving, and/or driver assistance functions, or ii) a robot, or iii) a domestic appliance, or iv) a power tool, or v) a manufacturing machine, or vi) a device for automatic optical inspection, or vii) an access system; (See Baum ¶16, “Further, modern ANNs are complex computational graphs that are prone to random errors and directed deception using adversarial strategies. This is especially acute when ANNs are used in critical roles such as autonomous vehicles, robots, etc. Thus, there is a need for mechanisms that attempts to provide a level of safety to improve system immunity.”)
the function model in the computer-controlled machine providing data for at least one function of the computer-controlled machine based on input data including image data of an image sensor including a video, or radar, or LiDAR, or ultrasonic, or movement, or thermal imaging sensor data; (See Baum ¶13, “Today, a common application for neural networks is in the analysis of video streams, i.e. machine vision. Examples include industrial factories where machine vision is used on the assembly line in the manufacture of goods, autonomous vehicles where machine vision is used to detect objects in the path of and surrounding the vehicle, etc.”)
at least one control signal for executing the function of the computer-controlled machine being provided based on the provided data; (See Baum ¶352, “In this embodiment, the NN processor serves as a dedicated NN accelerator. It receives processed sensor data or other processed data from other sources. The NN processor device outputs insights back to the main application processor, which may be used for decisions on actuation or steering in connected systems.”)
and at least a part of the computer-controlled machine, and/or a function of the computer-controlled machine being transferred to a defined state. (See Baum ¶365, “A diagram illustrating example fault tolerance, detection, and reaction timing is shown in FIG. 44. In the scheme shown, generally referenced 1000, at some point during normal operation 1002 a fault 1008 occurs in the NN processor 1004. The fault is detected 1010 within a fault detection time 1012. Within a fault reaction time 1014, the system enters a safe state 1006. The combined fault detection time 1012, fault reaction time 1014 and time in the system safe state 1006 is the system fault tolerant time interval 1016.”)
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Olgiati et al. (US Pub. No. 2021/0097432 A1) in view of Baum et al. (US Pub. No. 2022/0100601 A1) and in further view of Huang et al. (US Pub. No. 2022/0043441 A1).
Regarding claim 16, Olgiati and Baum disclose, the method as recited in claim 13, but they fail to disclose, further comprising: providing a hash value using the monitoring model; signing the hash value using the function model; providing the signed hash value to the monitoring model; and checking the signed hash value including comparing the hash value to the hash value calculated from the signed hash value, using the monitoring model.
However, Huang discloses, further comprising: providing a hash value using the monitoring model; (See Huang ¶27, “In an aspect, the data recorded to the blocks of the blockchain 134 may include a hash of the operational data (e.g., the sensor data, the navigation data, the data output by the AI/ML algorithms, the data provided as inputs to the AI/ML algorithms, etc.).”)
signing the hash value using the function model; (See Huang ¶43, “Each block of the blockchain 134 may include a plurality of fields, such as a block identifier (ID) or block number, a timestamp that records the time the block was created, a hash value derived from the previous block, a data packet and a digital signature which, in one implementation may be a hash value of the entire block.”)
providing the signed hash value to the monitoring model; (See Huang ¶27, “To illustrate, the sensor data, the navigation data, the outputs of the AI/ML algorithms, etc. may be recorded to a record of the database 136 and a hash of the data recorded to the database 136 may be stored to a block of the blockchain 134 (along with the hash of the previous block).”)
and checking the signed hash value including comparing the hash value to the hash value calculated from the signed hash value, using the monitoring model. (See Huang ¶27, “For example, to verify the authenticity of the data recorded to the database 136, a record of the database may be used to calculate a hash value and that hash value may be compared to the hash recorded to the blockchain. If the hash values match, the data recorded to the database 136 may be authenticated and if the hash values do not match the data may be determined to have been altered or manipulated.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the verification of data of a machine learning model using signed hash values, as suggested by Huang, into Olgiati and Baum’s machine learning validation method. This could be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that hash values ensure data integrity and authenticity throughout the model’s pipeline, enable immediate detection of data corruption or unauthorized tampering, and ensure consistency between training and production.
Regarding claim 17, Olgiati, Baum, and Huang disclose, the method as recited in claim 16, wherein the signing of the hash value includes: adding a signature to the hash value in the first processing step of the function model, and adding a further signature in at least one further processing step of the function model. (See Huang ¶43, “Each block of the blockchain 134 may include a plurality of fields, such as a block identifier (ID) or block number, a timestamp that records the time the block was created, a hash value derived from the previous block, a data packet and a digital signature which, in one implementation may be a hash value of the entire block.”)
Allowable Subject Matter
Claim 19 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 19, the method as recited in claim 16, wherein, as a function of a result of the anomaly detection and/or as a function of a result of the monitoring of the communication, at least one of the following steps is executed:
a) checking a result of the comparison of the reference output to the ground truth data using the monitoring model,
b) checking a result of the comparison of the hash value to the hash value calculated from the signed hash value, using the monitoring model,
c) providing a control signal for activating at least a part of the computer-controlled machine, and/or a function of the computer-controlled machine,
d) transferring at least a part of the computer-controlled machine, and/or a function of the computer-controlled machine, to a defined state,
e) transferring at least the part of the computer-controlled machine, and/or the function of the computer-controlled machine to the defined state as a function of a result from step a) and/or step b).
(The disclosed prior art of record fails to disclose all of the limitations of claim 19.)
Conclusion
The following prior art, made of record and not relied upon, is considered pertinent to applicant’s disclosure.
Kimura (US Pub. No. 2022/0261307 A1) A monitoring-data processing method is executable by at least one processor included in a monitoring-data processing system that includes a control apparatus that retrieves monitoring data indicative of a surrounding of a vehicle. The monitoring-data processing method includes performing processing of the monitoring data retrieved by the control apparatus, and determining whether there is a malfunction in the monitoring-data processing system. The monitoring-data processing method additionally includes performing, in response to a malfunction determination that there is a malfunction in the monitoring-data processing system, a task of (i) switching the control apparatus to be in a reset state and thereafter (ii) restarting the control apparatus while holding data indicative of the malfunction in an internal storage of the control apparatus.
Kopetz (US Pub. No. 2022/0236762 A1) The invention is part of the field of computer technology. It describes the architecture of a secure automation system and a method for safe autonomous operation of a technical apparatus, in particular a motor vehicle. The architecture disclosed herein solves the problem that any Byzantine error in one of the complex subsystems of a distributed real-time computer system, regardless of whether the error was triggered by a random hardware failure, a design error in the software or an intrusion, must be recognized and controlled in such a way that no security-relevant incident occurs. The architecture includes four largely independent subsystems which are arranged hierarchically and each form an isolated Fault-Containment Unit (FCU). At the top of the hierarchy is a secure subsystem, which executes simple software on fault-tolerant hardware. The other three subsystems are insecure because they contain complex software executed on non-fault-tolerant hardware.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID PERLMAN whose telephone number is (571) 270-1417.
The examiner can normally be reached Monday - Friday, 10:00 am - 6:30 pm.
Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/DAVID PERLMAN/Primary Examiner, Art Unit 2673