Detailed Action
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The RCE filed on 02/05/2026 is acknowledged. Claims 1-17 are currently pending and have been considered below. Claim 1 is an independent claim and has been amended. Claim 2 is canceled.
Priority
This application claims the benefit of provisional application No. 63/331,400, filed 04/15/2022, and provisional application No. 63/327,902, filed 04/06/2022.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/05/2026 has been entered.
Response to Arguments
Applicant's arguments in the amendment filed on 02/05/2026 have been fully considered but are moot in view of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 3-15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US Patent Application Publication No. 2021/0165883 A1) in view of Chase (US Patent Application Publication No. 2022/0269796 A1), further in view of Linton (US Patent Application Publication No. 2022/0321332 A1), and further in view of Ponsini (US Patent Application Publication No. 2016/0134660 A1).
Regarding Claim 1, Zhang discloses an apparatus, comprising:
a first secured processor (Zhang, Fig-1, ¶[0094], the CPU, the AI processor, the memory controller and a TZPC);
two or more secured applications embedded in the first secured processor, each of the secured applications associated with an artificial intelligence (AI) model (Zhang, ¶[0034]-¶[0037], the controller determines an AI model and AI operator library code required to process the AI processing request. The AI processor processes the AI processing request in the target mode by using the AI model and the AI operator library code);
two or more first secured memories coupled to the first secured processor, each of the first secured memories configured to store an AI executable binary that is associated with a corresponding one of the AI models (Zhang, ¶[0038], the AI processor may extract the AI model and the AI operator library code from the target address);
a second secured processor coupled to the first secured memories, the second secured processor configured to execute the AI executable binaries stored in the first secured memories (Zhang, ¶[0039], the AI model and the AI operator library code that are required to process the AI processing request are loaded to the AI processor so that the AI processor processes the AI processing request by using the AI model and the AI operator library code);
a sub-system coupled to the second secured processor (Zhang, ¶[0039], the AI model and the AI operator library code that are required to process the AI processing request are loaded to the AI processor so that the AI processor processes the AI processing request by using the AI model and the AI operator library code); and
an AI session manager coupled to the sub-system and the secured applications, the AI session manager configured to receive from the sub-system an AI session that identifies one of the AI models, and prepare and store an AI executable binary associated with the AI model to one of the first secured memories that corresponds to the AI executable binary (Zhang, ¶[0041], the controller switches the running environment from the REE to the TEE, to initiate the AI processing request to the AI processor by using the driver in the TEE. ¶[0043], the controller switches the running environment from the TEE to the REE, to initiate the AI processing request to the AI processor),
wherein the sub-system triggers the second secured processor to execute the AI executable binary stored in the first secured memory (Zhang, ¶[0110], the AI processor cannot access software and hardware resources in the TEE in the protection mode, but software and hardware in the TEE can access the AI processor in the protection mode);
wherein a secure operating system (OS) is embedded in the first secured processor, the secure OS configured to provide a trusted execution environment (TEE) within which the secured applications are protected (Zhang, ¶[0030], the controller initiates the AI processing request to the AI processor by using the driver in the TEE, to improve security of processing the AI processing request. ¶[0092], the TEE side components at the application layer may include an AI trusted application configured to provide a security AI algorithm service).
Zhang does not explicitly disclose the following limitations, which Chase teaches:
a secured processor to execute an AI executable binary (Chase, ¶[0068], ML frameworks can include boosted trees, neural networks. The binary classification model can be trained on a variety of raw features. The binary classification model can be exposed via API and be called by sending the primary classification model the required data);
and the first secured memories, the second secured processor, and the sub-system are simultaneously protected by multiple firewalls (Chase, ¶[0031], the firewall can protect the AI model from being deceived by external data. The firewall can patch loopholes identified by the model assessment engine to create an additional layer of security. ¶[0033], assessment engine and firewall can be integrated. ¶[0049], Fig-5, firewall can use machine learning model for a detection engine. ¶[0066], if the firewall detects suspicious data in the external data, the firewall can flag that).
Zhang and Chase are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area”. Namely, they pertain to the field of “securely deploying artificial intelligence models”. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang in view of Chase to include the idea of securely deploying AI models because fraudsters are now executing algorithmic attacks on AI. Those attacks are automated, enabling fraudsters to counteract defensive updates much more quickly. Those attacks can be used not only to spoof the AI models, but also to steal sensitive user data or information about the AI systems (Chase, ¶[0004]).
Zhang in view of Chase does not explicitly disclose the following limitation, which Linton teaches:
wherein the AI model is protected by an isolated execution environment (Linton, ¶[0037], AI model inferences generator module generates AI model inferences using a GPU in a trusted execution environment (TEE). Fig-4, ¶[0049]);
Zhang, Chase, and Linton are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area”. Namely, they pertain to the field of “securely deploying artificial intelligence models”. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang in view of Chase and Linton to include the idea of securely deploying AI models because fraudsters are now executing algorithmic attacks on AI. Those attacks are automated, enabling fraudsters to counteract defensive updates much more quickly. Those attacks can be used not only to spoof the AI models, but also to steal sensitive user data or information about the AI systems (Linton, ¶[0002]).
Zhang in view of Chase and Linton does not explicitly disclose the following limitation, which Ponsini teaches:
wherein in response to any of the two or more secured applications requesting to access the secure OS, the secure OS returns a handle to said any of the two or more secured applications, for acting as a token to access the secure OS (Ponsini, ¶[0015], TEE is a secure area in the client device. ¶[0018], the authorization entity includes functionality to generate authorization tokens. The authorization entity authorizes remote operations of the TEE by signing related authorization tokens. ¶[0019], the TSM receives the authorization token generated by the authorization entity and passes the authorization token and related administrative operations to the counterpart component representing the authorization entity within the TEE).
Zhang, Chase, Linton, and Ponsini are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area”. Namely, they pertain to the field of “enforcing secure processes”. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang in view of Chase, Linton, and Ponsini to include the idea of authenticating the user and enforcing user-specific and device-specific security (Ponsini, Abstract).
Regarding claim 3, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 2, wherein the AI session manager is embedded in the first secured processor and protected within the TEE (Zhang, ¶[0031], when the target mode is the first mode, the CPU initiates the AI processing request to the AI processor by using a driver in the TEE).
Regarding claim 4, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 2, wherein the first secured memories and the second secured processor are protected by a first firewall (Chase, ¶[0031], the firewall can protect the AI model from being deceived by external data. The firewall can patch loopholes identified by the model assessment engine to create an additional layer of security. ¶[0066], the firewall can alert with dynamically evolving behaviors of fraudsters to keep fraud detection up to date).
Regarding Claim 5, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 4, wherein the sub-system is protected by a second firewall (Chase, ¶[0163], modifications on more important features will result in a higher attack perceptibility, allowing the attack to be more easily detected by a human reviewer. ¶[0166], the reviewer can use the results from model security systems on the given data to more easily determine the validity of the given data. The erroneous data can be passed back to the intake machine to be rejected).
Regarding Claim 6, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 5, wherein the first firewall provides a higher security level than the second firewall (Chase, ¶[0163], modifications on more important features will result in a higher attack perceptibility, allowing the attack to be more easily detected by a human reviewer. ¶[0166], the reviewer can use the results from model security systems on the given data to more easily determine the validity of the given data. The erroneous data can be passed back to the intake machine to be rejected).
Regarding Claim 7, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 2, further comprising: a second memory coupled to the second secured processor, the second memory configured to store data on which the second secured processor executes the AI executable binary (Zhang, ¶[0110], the AI processor cannot access software and hardware resources in the TEE in the protection mode, but software and hardware in the TEE can access the AI processor in the protection mode).
Regarding Claim 8, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 7, further comprising:
an image signal processor (ISP) coupled to the second memory, the ISP configured to process images and store the processed images into the second memory (Chase, ¶[0098], facial recognition algorithms can involve extraction of features and classification of features), and
a facial biometric pattern secured within the TEE (Chase, ¶[0165], the model security system can receive non tabular data like handwriting samples, images, audio and biometric inputs),
wherein the second secured processor executes the AI executable binary to determine whether any one of the processed images matches the facial biometric pattern (Chase, ¶[0099], feature extraction algorithm can include binary patterns which can return vectors of the face. ¶[0100], some techniques can use deep learning models).
Regarding Claim 9, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 1, wherein the first secured memories and the second secured processor are protected by a first firewall (Chase, ¶[0031], the firewall can protect the AI model from being deceived by external data. The firewall can patch loopholes identified by the model assessment engine to create an additional layer of security. ¶[0066], the firewall can alert with dynamically evolving behaviors of fraudsters to keep fraud detection up to date).
Regarding Claim 10, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 9, wherein the AI session manager is embedded in the second secured processor (Zhang, ¶[0034]-¶[0037], the controller determines an AI model and AI operator library code required to process the AI processing request. The AI processor processes the AI processing request in the target mode by using the AI model and the AI operator library code).
Regarding Claim 11, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 9, wherein the AI session manager is protected by the first firewall (Chase, ¶[0031], the firewall can protect the AI model from being deceived by external data. The firewall can patch loopholes identified by the model assessment engine to create an additional layer of security. ¶[0066], the firewall can alert with dynamically evolving behaviors of fraudsters to keep fraud detection up to date).
Regarding Claim 12, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 9, wherein the sub-system is protected by a second firewall (Chase, ¶[0163], modifications on more important features will result in a higher attack perceptibility, allowing the attack to be more easily detected by a human reviewer. ¶[0166], the reviewer can use the results from model security systems on the given data to more easily determine the validity of the given data. The erroneous data can be passed back to the intake machine to be rejected).
Regarding Claim 13, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 12, wherein the first firewall provides a higher security level than the second firewall (Chase, ¶[0163], modifications on more important features will result in a higher attack perceptibility, allowing the attack to be more easily detected by a human reviewer. ¶[0166], the reviewer can use the results from model security systems on the given data to more easily determine the validity of the given data. The erroneous data can be passed back to the intake machine to be rejected).
Regarding Claim 14, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 1, wherein the sub-system includes a sensor hub (Zhang, ¶[0207], the peripheral system may include a sensor management module).
Regarding Claim 15, Zhang in view of Chase, Linton and Ponsini discloses the apparatus of claim 1, wherein the first secured processor includes a secured central processing unit (CPU) (Zhang, ¶[0012], the controller may be a CPU, a GPU or another processor).
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US Patent Application Publication No. 2021/0165883 A1) in view of Chase (US Patent Application Publication No. 2022/0269796 A1), Linton (US Patent Application Publication No. 2022/0321332 A1), and Ponsini (US Patent Application Publication No. 2016/0134660 A1), and further in view of Dwivedi (US Patent Application Publication No. 2023/0097169 A1).
Regarding Claim 16, Zhang in view of Chase, Linton and Ponsini does not explicitly disclose the following limitation, which Dwivedi teaches:
the apparatus of claim 1, wherein the second secured processor includes a secured deep learning accelerator (DLA) (Dwivedi, ¶[0180], accelerators may include deep learning accelerator (DLA)).
Zhang, Chase, Linton, Ponsini, and Dwivedi are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area”. Namely, they pertain to the field of “securely deploying artificial intelligence models and deriving an AI model from a plurality of AI models”. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang in view of Chase, Linton, Ponsini, and Dwivedi to include the idea of reformatting a first AI model and a second AI model. The reformatted first AI model and second AI model can be analyzed by the AI model manager to identify a common architecture, common layers, etc. (Dwivedi, ¶[0060]).
Regarding Claim 17, Zhang in view of Chase, Linton, Ponsini and Dwivedi discloses the apparatus of claim 16, wherein the DLA includes an accelerated processing unit (APU) (Dwivedi, ¶[0062], the derived AI model can be processed by another type of processor, such as at least one accelerated processing unit (APU), to enhance the processing capability of the CPU).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-Form 892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WASIKA NIPA whose telephone number is (571)272-8923. The examiner can normally be reached on M-F, 8 am to 5 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Pwu can be reached on 571-272-6798. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WASIKA NIPA/ Primary Examiner, Art Unit 2433