Prosecution Insights
Last updated: April 19, 2026
Application No. 19/050,038

ADVERSARIAL-ROBUST VECTOR QUANTIZED VARIATIONAL AUTOENCODER WITH SECURE LATENT SPACE FOR TIME-SERIES DATA

Status: Non-Final OA (§103)
Filed: Feb 10, 2025
Examiner: BOSTWICK, SIDNEY VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: AtomBeam Technologies Inc.
OA Round: 3 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 7m
Grant Probability With Interview: 90%

Examiner Intelligence

Grants 52% of resolved cases.
Career Allow Rate: 52% (71 granted / 136 resolved; -2.8% vs TC avg)
Interview Lift: strong, +38.2% for resolved cases with interview
Typical Timeline: 4y 7m avg prosecution; 68 currently pending
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 136 resolved cases
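The headline figures above are simple arithmetic on the examiner's career counts. A minimal sketch, assuming the interview lift is additive in percentage points (the page does not state the formula, but 52% + 38.2 points matches the 90% with-interview figure shown):

```python
granted, resolved = 71, 136
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # ~52.2%, shown as 52%

# Assumption: the +38.2% "interview lift" is additive percentage points.
interview_lift = 0.382
with_interview = career_allow_rate + interview_lift
print(f"With interview: {with_interview:.0%}")         # ~90%, matching the tile
```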

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.

Remarks

This Office Action is responsive to Applicant's Amendment filed on December 22, 2025, in which claims 1 and 11 are currently amended. Claims 1-20 are currently pending.

Response to Arguments

Applicant's arguments with respect to rejection of claims 1-20 under 35 U.S.C. 101 based on amendment have been considered and are persuasive. The rejection under 35 U.S.C. 101 is withdrawn as necessitated by applicant's amendments and remarks made to the rejections.

Applicant's arguments with respect to rejection of claims 1-20 under 35 U.S.C. 103 based on amendment have been considered. The argument is moot in view of a new ground of rejection set forth below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 4, 6-11, 13, 14, and 16-20 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Hu ("Robust Semantic Communications With Masked VQ-VAE Enabled Codebook", 2023) and Morin (US20200145816A1).

[FIG. 2 of Hu]

Regarding claim 1, Hu teaches:

A system for adversarial-robust compression and reconstruction of data using a vector quantized variational autoencoder (VQ-VAE) ([p. 8708] "We propose a DL-enabled end-to-end robust semantic communication system to combat the semantic noise […] the masked vector quantized-variational autoencoder (VQ-VAE) with vision Transformer (ViT) blocks [37], [38] is designed as the architecture of the robust semantic communication system. A novel strategy is proposed to mask a portion of the original image, where the semantic noise appears with a high probability");

detect potential adversarial attacks in input data through multi-channel monitoring ([p. 8710] "Semantic Noise at Transmitter: This kind of semantic noise is generated in the encoding stage. Consider the scenario where a malicious attacker downloads the image dataset, adds semantic noise to each image, and then uploads the modified dataset. The semantic noise has serious impact on the encoding process and will mislead the DL models to generate wrong results"; [p. 8711] "we need to assume that the attacker knows: (i) the exact channel between the attacker and the receiver, Ha [...] we first generate N channel realizations". Hu explicitly detects noise and ties the noise to malicious attackers (adversaries) in a multi-channel monitoring system. Hu uses the FIM to suppress noise-related features and a masking strategy to target patches where noise appears frequently);

by processing the input data through a neural network trained using backpropagation of gradients ([p. 8712] "Update ν to maximize L(·) by one step forward and backward propagation in (11) with fixed θ and ∆si"; [p. 8713] "in back propagation, we approximate the gradient by straight-through estimator […] during the back propagation, the gradient […] is passed unaltered to the encoder");

compress validated input data into a discrete latent representation using adaptive defensive parameters ([p. 8712] "we randomly mask patches from the input images and aim to reconstruct the missing patches […] the encoder only needs to process a small portion of the unmasked patches and maps them to the encoded features for transmission"; [p. 8713] "The model takes an input s and it passes through an encoder to produce the encoded feature vector, ze(s)". Hu's masking strategy is explicitly designed against semantic noise and varies based on noise statistics, providing adaptive defense behavior integrated into the compression path. See also FIG. 2);

by processing the validated input data through a neural network encoder to generate a continuous latent representation ([p. 8713] "The model takes an input s and it passes through an encoder to produce the encoded feature vector, ze(s)". Input s is interpreted as validated input data; the encoded feature vector ze(s) is interpreted as a continuous latent representation);

and quantizing the continuous latent representation by mapping the continuous latent representation to the discrete latent representation by performing nearest-neighbor lookup in a codebook of learnable vector embeddings ([p. 8713] "Codebook Design: […] The model takes an input s and it passes through an encoder to produce the encoded feature vector, ze(s). Then, it is mapped to a basis vector, zb(s), by the nearest neighbor look-up". See also FIG. 2);

store the discrete latent representation within bounded constraints that maintain security against adversarial manipulation ([p. 8713] "we denote the code book of the encoded features as E ≜ e1,e2,··· ,eJ ∈ RJ×D, which consists of J basis vectors {ej ∈ RD,j ∈ 1, 2,··· ,J} and D is the dimension of each basis vector, ej […] the encoded feature vector, ze(s). Then, it is mapped to a basis vector, zb(s), by the nearest neighbor look-up". zb(s) is necessarily and by definition bounded by the finite codebook set {ej ∈ RD,j ∈ 1, 2,··· ,J} to maintain security against adversarial manipulation);

monitor latent code distributions and enforce consistency constraints through hierarchical projection ([p. 8714] "Based on semantic similarity, we add term ∥ETE∥2 into loss function (13) and try to make the basis vectors in the codebook, E, mutually orthogonal, i.e., the semantic similarity between two basis vectors is small and their distance is large"; [p. 8711] "To increase the impact of ∆si on the system, we propose to employ the iterative process [See Eqn. 7] where k denotes the iteration index and Π is the projection operator"; [p. 8720] "the semantic similarity term of loss function proposed in Section III-C.2, ∥ETE∥2, can make the basis vectors in the codebook nearly mutually orthogonal". Examiner notes that while Hu explicitly uses hierarchical projection for semantic noise generation and boundary enforcement during training, the autoencoder architecture could similarly reasonably be interpreted as hierarchical projection, which explicitly enforces the consistency constraints by backpropagating the loss, the loss being explicitly used to make the basis vectors mutually orthogonal);

reconstruct the discrete latent representation using multi-stage reconstruction with progressive validation by processing the discrete latent representation through a neural network decoder ([p. 8710] "zi denotes the target associated with si, e.g., the true label for the classification task and the original image for image reconstruction task, etc."; [p. 8712] "the lightweight decoder reconstructs the image from the encoded features"; [p. 8713] "zb(s)=argmin||ze(s) […] zb(s) is input to the decoder". As stated above, zb(s) is interpreted as the discrete latent representation, which is explicitly processed through a neural network decoder per Eqn. 12. See also FIG. 2, which explicitly shows image reconstruction by the decoder);

coordinate defensive responses across compression and reconstruction processes when potential attacks are detected ([p. 8710] "Semantic Noise at Transmitter: This kind of semantic noise is generated in the encoding stage. Consider the scenario where a malicious attacker downloads the image dataset, adds semantic noise to each image, and then uploads the modified dataset. The semantic noise has serious impact on the encoding process and will mislead the DL models to generate wrong results"; [p. 8711] "we need to assume that the attacker knows: (i) the exact channel between the attacker and the receiver, Ha [...] we first generate N channel realizations". Hu anticipates semantic noise originating from malicious attackers and treats such noise as potential attacks. In response, Hu coordinates defensive mechanisms across both compression and reconstruction by jointly applying masked VQ-VAE encoding, feature-importance suppression, discrete codebook representations, and end-to-end adversarially robust training, ensuring that both encoder and decoder respond coherently to attack-induced noise);

and train the neural network encoder and the neural network decoder by: calculating a joint loss function combining reconstruction loss and defensive parameters ([p. 8716] "Execute Algorithm 2 to do adversarial training via employing the sum of Lc, Lf, and Ls"; [p. 8713] "Differentiable Loss Function: Our designed loss function consists of three components representing different parts of parameters [...] The first term is the reconstruction loss that trains the parameters of encoder and decoder." Hu explicitly discloses a reconstruction loss term training the encoder and decoder, with additional defensive loss parameters (FIM loss Lf and codebook semantic similarity loss Ls) combined into a joint objective);

and updating parameters of the neural network encoder and the neural network decoder using backpropagation of gradients and an optimization algorithm ([p. 8713] "in back propagation, we approximate the gradient by straight-through estimator"; [p. 8711] "the trainable parameters, θ, and semantic noise, ∆si, are updated iteratively to improve the model robustness.").
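The VQ-VAE pipeline the rejection maps onto claim 1 (encoder output ze(s), nearest-neighbor lookup into a finite codebook E to obtain zb(s), straight-through gradient back to the encoder) can be sketched in a few lines. This is a minimal NumPy illustration of the general technique, not Hu's implementation; the codebook here is random rather than learned:

```python
import numpy as np

rng = np.random.default_rng(0)
J, D = 8, 4                     # codebook size J, basis-vector dimension D
E = rng.normal(size=(J, D))     # codebook {e_j}; learned in Hu, random here

def quantize(z_e):
    """Nearest-neighbor lookup: map continuous z_e(s) to basis vector z_b(s)."""
    dists = np.sum((E - z_e) ** 2, axis=1)   # squared distance to each e_j
    j = int(np.argmin(dists))
    return j, E[j]

z_e = rng.normal(size=D)        # continuous latent from the encoder
j, z_b = quantize(z_e)

# Straight-through estimator, conceptually: the forward pass uses z_b, while
# the backward pass copies the gradient unaltered to z_e. This is usually
# written z_q = z_e + stop_gradient(z_b - z_e); numerically z_q equals z_b.
z_q = z_e + (z_b - z_e)
```

Only the index j needs to be transmitted, which is why the finite codebook both compresses and bounds the latent representation.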
However, Hu does not explicitly teach: a computing device comprising at least a memory and a processor; a plurality of programming instructions stored in the memory and operable on the processor, wherein the plurality of programming instructions, when operating on the processor, cause the computing device to:; and implement recovery mechanisms to restore system integrity when security violations are detected.

Morin, in the same field of endeavor, teaches a computing device comprising at least a memory and a processor; a plurality of programming instructions stored in the memory and operable on the processor, wherein the plurality of programming instructions, when operating on the processor, cause the computing device to: ([¶0028] "any suitable computer usable or computer readable medium (or media) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-usable, or computer-readable, storage medium (including a storage device associated with a computing device or client electronic device) may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a digital versatile disk (DVD), a static random access memory (SRAM), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, a media such as those supporting the internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be a suitable medium upon which the program is stored, scanned, compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of the present disclosure, a computer-usable or computer-readable, storage medium may be any tangible medium that can contain or store a program for use by or in connection with the instruction execution system, apparatus, or device");

and implement recovery mechanisms to restore system integrity when security violations are detected ([¶0073] "Security process 10 may orchestrate, integrate, and synchronize data and information inputs and outputs between systems and an AI expert system"; [¶0115] "By offensive, security process 10 may provide sensing, understanding, decision, or action output to take proactive or reactive adversarial action(s) that reduces the capability or strength of an unfriendly system"; [¶0123] "element 2 may include a kernel-level binary system integrity validation and restoration software of security process 10"; [¶0127] "If a bit is "out of place" or "flipped" since its last check, this will be detected, reported, recorded, then executes response per policy, user guidance, or approved user automation. It may execute up to a full system restoration based applying a pre-stored baseline. Security process 10 may interface with, e.g., IT, OT, and IoT hardware via dongle module hardware direct connection, cellular, WiFi or disconnected Over-the-Air ("OTA") interface").

Hu as well as Morin are directed towards system integrity and maintenance; therefore, Hu and Morin are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Hu with the teachings of Morin by using the adversarial attack model in Hu as the AI expert system in Morin ([¶0068] "security process 10 may provide 318 action for one or more automated functions of an integrative platform associated with the centralized topology of containers. For example, security process 10 may include an AI expert system applied to assist AI vendors to attribute automated decision and action from machine learning data that bifurcates between network dependent and network independent data"). Morin provides additional motivation for combination ([¶0117] "Applying security process 10's information transport and network capability but non-network dependency and security as a complement or supplement may improve the application of AI and robotic capability and sharing of information between these capabilities."). This motivation for combination also applies to the remaining claims which depend on this combination.

Regarding claim 3, the combination of Hu and Morin teaches the system of claim 1, wherein adaptive defensive parameters are automatically adjusted based on: detected threat levels (Hu [p. 8711] "we select the scaling factor, α, satisfying Kα>ϵ to ensure that we can take full advantage of the noise power, ϵ. To maximize the received power of semantic noise and effectively fool the decoder,"; [p. 8713] "to reduce the impact of semantic noise, we increase the masking probability of the patches effected severely by the semantic noise based on its statistics"; [p. 8714] "The FIM dynamically learns and incorporates the feature importance into the training phase to train a DNN model that inherently suppresses those noise-related and task-unrelated features." Semantic-noise strength is interpreted as threat level: Hu ties adaptation to severity ("patches effected severely") and models semantic noise strength via a power constraint parameter); historical attack patterns (Hu [p. 8713] "to reduce the impact of semantic noise, we increase the masking probability of the patches effected severely by the semantic noise based on its statistics". Semantic noise statistics over past samples are interpreted as historical attack patterns); and current system performance metrics (Hu [p. 8714] "the SNR is incorporated into the FIM, which ensures that the proposed system can successfully operate with different SNR levels". SNR is interpreted as a current system performance metric).

Regarding claim 4, the combination of Hu and Morin teaches the system of claim 1, wherein bounded constraints comprise: micro-constraints governing individual latent codes (Hu [p. 8713] "we denote the code book of the encoded features as E ≜ e1,e2,··· ,eJ ∈ RJ×D, which consists of J basis vectors {ej ∈ RD,j ∈ 1, 2,··· ,J} and D is the dimension of each basis vector, ej […] the encoded feature vector, ze(s). Then, it is mapped to a basis vector, zb(s), by the nearest neighbor look-up". The boundary {ej ∈ RD,j ∈ 1, 2,··· ,J} of the finite codebook set is interpreted as micro-constraints governing individual latent codes); meso-constraints regulating local neighborhoods of codes (Hu [p. 8714] "Hence, the (i,j)-th element of matrix ETE denotes the semantic similarity of basis vectors ei and ej. [...] Based on semantic similarity, we add term ∥ETE∥2 into loss function (13) and try to make the basis vectors in the codebook, E, mutually orthogonal, i.e., the semantic similarity between two basis vectors is small and their distance is large". Pairwise semantic similarity between codebook entries via ETE is interpreted as meso-constraints regulating local neighborhoods of codes); and macro-constraints enforcing global properties of the latent space (Hu [p. 8714] "we add term ∥ETE∥2 into loss function (13) and try to make the basis vectors in the codebook, E, mutually orthogonal, i.e., the semantic similarity between two basis vectors is small and their distance is large". A loss term that pushes all basis vectors toward mutual orthogonality is a quintessential global ("macro") constraint: it enforces a global geometric property of the full codebook/latent representation space, improving robustness by increasing inter-code distances).

Regarding claim 6, the combination of Hu and Morin teaches the system of claim 1, wherein multi-stage reconstruction comprises: evaluating confidence levels across multiple dimensions (Hu [p. 8710] "the mean square error for image reconstruction and the cross entropy for classification task". See FIG. 1: 86.7% and 98.6% confidence values for semantic inference outcomes (classification confidence) across multiple dimensions (image reconstruction and classification)); implementing progressive reconstruction through increasing resolution levels (Hu [p. 8712] "we randomly mask patches from the input images and aim to reconstruct the missing patches. The masked VQ-VAE belongs to the autoencoder, but can reconstruct the original image from partial observations"); and validating reconstruction quality at each stage (Hu [p. 8710] "The goal of the semantic communication system is to minimize the loss function for serving a specific task, e.g., the mean square error for image reconstruction").
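The ∥ETE∥2 semantic-similarity term cited for the meso- and macro-constraints can be illustrated concretely. A minimal sketch that penalizes off-diagonal Gram-matrix entries (the row-vector convention and the exclusion of the diagonal are assumptions; Hu's exact term may also constrain basis-vector norms):

```python
import numpy as np

def orthogonality_loss(E):
    """Entry (i, j) of the Gram matrix is the inner product (semantic
    similarity) of basis vectors e_i and e_j, here taken as rows of E.
    Shrinking the off-diagonal entries pushes the codebook toward the
    mutual orthogonality that Hu's ||E^T E||^2 term targets."""
    G = E @ E.T
    off_diag = G - np.diag(np.diag(G))
    return float(np.sum(off_diag ** 2))

# An orthogonal codebook incurs zero penalty; highly similar (here,
# identical) basis vectors are penalized.
assert orthogonality_loss(np.eye(4)) == 0.0
assert orthogonality_loss(np.ones((4, 4))) > 0.0
```

Driving this loss down increases inter-code distances, which is the robustness property the rejection reads as a global latent-space constraint.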
Regarding claim 7, the combination of Hu and Morin teaches the system of claim 1, wherein coordinating defensive responses comprises: sharing threat information across compression and reconstruction processes (Hu [p. 8713] "Transmitting the encoded features of the unmasked patches and the mask tokens to the decoder at the receiver leads to a large reduction in transmission overhead. […] we increase the masking probability of the patches effected severely by the semantic noise based on its statistics." Hu's masking strategy targets patches where semantic noise appears more frequently, and the system then sends mask tokens to the decoder. The mask tokens therefore communicate (share) the defense-relevant threat information from the compression side to the reconstruction side); implementing synchronized defensive actions (Hu [p. 8708] "Both the transmitter and receiver are represented by DNNs […] a discrete codebook shared by the transmitter and the receiver is designed for encoded feature representation"; [p. 8709] "we design a feature importance module (FIM) that dynamically learns and incorporates the feature importance to the masked VQ-VAE. It inherently suppresses the task-unrelated and noise-related features". Hu's defenses are architected as an end-to-end synchronized transmitter/receiver pair with a shared codebook and coordinated encoder/decoder behavior); and maintaining system stability during defensive operations (Hu [p. 8715] "different SNR levels result in different importance of features. Thus, to ensure that the proposed semantic communication system can operate in a wide range of SNR levels". Hu explicitly conditions defenses on SNR (a system performance metric) to ensure operation across varying channel conditions).
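Hu's adaptive masking idea, as the rejection reads it, is to mask more often the patches whose semantic-noise statistics mark them as severely affected. A hedged sketch; the normalization rule and `base_p` parameter are assumptions, since the cited passages do not give a closed-form formula:

```python
import numpy as np

def masking_probabilities(noise_stats, base_p=0.5):
    """noise_stats[i]: how often/strongly patch i is hit by semantic noise
    (hypothetical statistics). Patches hit harder get masked more often."""
    stats = np.asarray(noise_stats, dtype=float)
    weights = stats / stats.sum()            # relative severity per patch
    p = base_p + (1.0 - base_p) * weights    # raise probability with severity
    return np.clip(p, 0.0, 1.0)

p = masking_probabilities([1.0, 1.0, 8.0, 2.0])
# the severely affected patch (index 2) receives the highest masking probability
assert p[2] == p.max()
```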
Regarding claim 8, the combination of Hu and Morin teaches the system of claim 1, wherein implementing recovery mechanisms comprises: local repair of affected data regions (Morin [¶0112] "the use of the security process 10 capability to overcome their local vulnerabilities and gain local strengths"); neighborhood reconstruction when necessary (Hu [p. 8712] "we randomly mask patches from the input images and aim to reconstruct the missing patches. The masked VQ-VAE belongs to the autoencoder, but can reconstruct the original image from partial observations"); and global reorganization for severe security violations (Morin [¶0087] "large threat surface communications […] Symmetrical topology generally applies omnidirectional tactical communications within the radio frequency spectrum or high-power outer space-based communications, emanating transmission with geographic signature. This symmetry presents single points of failure and presents simple and greater threat surfaces for competing systems to degrade"; [¶0088] "security process 10 may be executed on an asymmetric redundancy that may complement a more agile tactical data communication solution that may both share and apply broadband (e.g., IT, OT, IoT) cybersecurity and relatively much larger big data sets with AI attribution and auditability. Security process 10 may complement by, e.g., providing a better data and cyber delivery then supplied by current data communication capability. Generally, the implementation of security process 10 may reduce SWaP and threat surface, and may present greater ubiquity and greater self-healing opportunity". A large threat surface is interpreted as a severe security violation).
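Morin's integrity validation and restoration (¶0123, ¶0127: detect a flipped bit since the last check, then restore from a pre-stored baseline) amounts to fingerprint comparison plus rollback. A rough Python analogue using SHA-256; the data and function names are hypothetical, not Morin's implementation:

```python
import hashlib

def fingerprint(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Pre-stored trusted baseline and its recorded fingerprint (hypothetical data).
baseline = b"trusted binary image v1"
baseline_hash = fingerprint(baseline)

def validate_and_restore(current: bytes) -> bytes:
    """Detect any changed bit via hash mismatch; on violation, restore the
    pre-stored baseline, loosely mirroring Morin's full system restoration."""
    if fingerprint(current) != baseline_hash:
        return baseline     # integrity violated: roll back to the baseline
    return current          # integrity intact: leave as-is

assert validate_and_restore(b"trusted binarY image v1") == baseline  # restored
assert validate_and_restore(baseline) == baseline                    # unchanged
```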
Regarding claim 9, the combination of Hu and Morin teaches the system of claim 1, further comprising maintaining comprehensive audit trails of: detected threats (Morin [¶0073] "Attribution may be generally described as, e.g., data or other informational input (e.g., cyber threat data)"); defensive actions taken (Hu [p. 8715] "the feature relationship is captured and different weights are generated for different features to increase or suppress their connection strength to the next layer." Hu's maintaining of parameters that encode defensive actions is interpreted as maintaining an audit trail of defensive actions); system performance metrics (Hu [p. 8715] "different SNR levels result in different importance of features. Thus, to ensure that the proposed semantic communication system can operate in a wide range of SNR levels"); and recovery operations (Morin [¶0067] "Data signals showing validation or lack of validation and/or restoration per node").

Regarding claim 10, the combination of Hu and Morin teaches the system of claim 1, wherein the system continuously refines defensive strategies through: analysis of threat detection effectiveness (Morin [¶0083] "security process 10 may process the relevant information, which may be stored and accessed in an AI expert system, and may provide the critical information and knowledge base to efficiently and effectively provide continuous education and training of AI knowledge to technical support staff, decision makers, their staffs, and subordinate users."); evaluation of defensive response outcomes (Hu [p. 8721] "Simulation results show that our proposed method can be applied in many downstream tasks and significantly improve the robustness of semantic communication systems against semantic noise with much reduced transmission overhead."); and adaptation of security parameters (Morin [¶0063] "Security process may apply the mesh topology across the nodal points of a centralized topology using containerization (e.g., Docker containerization) to synch system data and applying updates to data deltas efficiently").

Regarding claims 11, 13, 14, and 16-20: these claims are directed towards the method performed by claims 1, 3, 4, and 6-10. Therefore, the rejections applied to claims 1, 3, 4, and 6-10 also apply to claims 11, 13, 14, and 16-20.

Claims 2 and 12 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Hu and Morin, and in further view of Chen ("Latent Regularized Generative Dual Adversarial Network For Abnormal Detection", 2021).

Regarding claim 2, the combination of Hu and Morin teaches: calculating threat scores using weighted combinations of detection results (Hu [p. 8715] "It is employed as the weight of importance for the n-th feature in the l-th layer associated with output z. We apply a "softmax" layer to scale these weights into range [0,1] […] the feature relationship is captured and different weights are generated for different features to increase or suppress their connection strength to the next layer". The softmax output is interpreted as a threat score using a weighted combination of detection results); and dynamically adjusting detection parameters based on historical attack patterns (Morin [¶0136] "Security process 10 may ingest and store logistics and sensor data with historical information"). However, the combination of Hu and Morin does not explicitly teach the system of claim 1, wherein detecting potential adversarial attacks comprises: monitoring input data across multiple time horizons.
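The softmax-scaled importance weights that the claim 2 rejection reads as a "threat score" (Hu's FIM scales per-feature weights into [0, 1]) can be reproduced generically. A sketch only: the detection scores below are made-up inputs, and the weighted combination is one plausible reading, not Hu's formula:

```python
import math

def softmax(scores):
    """Scale raw per-feature scores into [0, 1] weights that sum to 1,
    as a softmax layer does."""
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical detection results for three monitored features.
scores = [0.2, 1.5, 0.4]
weights = softmax(scores)
threat_score = sum(w * s for w, s in zip(weights, scores))

assert abs(sum(weights) - 1.0) < 1e-12           # weights form a distribution
assert 0.0 <= threat_score <= max(scores)        # a convex combination
```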
Chen, in the same field of endeavor, teaches the system of claim 1, wherein detecting potential adversarial attacks comprises: monitoring input data across multiple time horizons ([p. 764] "Figure 4: Training loss comparison between single autoencoder and dual autoencoder." The two graphs show time horizons of 20, 40, 60, 80, and 100 epochs).

The combination of Hu and Morin, as well as Chen, is directed towards system integrity and maintenance; therefore, they are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Hu and Morin with the teachings of Chen by running the training for varying epochs (different time horizons). While this would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, FIG. 4 of Chen reinforces this intuition by illustrating the effect of training over different numbers of epochs.

Regarding claim 12, claim 12 is directed towards the method performed by the system of claim 2. Therefore, the rejection applied to claim 2 also applies to claim 12.

Allowable Subject Matter

Claims 5 and 15 are objected to as being dependent upon rejected base claims, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Below are the closest cited references, each of which discloses various aspects of the claimed invention:

Hu ("Robust Semantic Communications With Masked VQ-VAE Enabled Codebook", 2023)
Morin (US20200145816A1)
Chen ("Latent Regularized Generative Dual Adversarial Network For Abnormal Detection", 2021)
Marimont ("ANOMALY DETECTION THROUGH LATENT SPACE RESTORATION USING VECTOR QUANTIZED VARIATIONAL AUTOENCODERS", 2021)

Hu, Chen, and Marimont all disclose VQ-VAE for anomaly detection.
However, Hu does not explicitly disclose "tracking real-time usage patterns; analyzing temporal evolution of representations; detecting anomalous transitions"; Marimont does not explicitly disclose detecting adversarial attacks; and none of Hu, Chen, or Marimont explicitly discloses implementing recovery mechanisms to restore system integrity when security violations are detected. While Morin explicitly discloses implementing recovery mechanisms to restore system integrity when security violations are detected, and explicitly anticipates using a machine learning model as a detection agent, it would not have been obvious to combine Morin with Hu, Chen, or Marimont to arrive at the claimed invention before the effective filing date.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Marimont ("ANOMALY DETECTION THROUGH LATENT SPACE RESTORATION USING VECTOR QUANTIZED VARIATIONAL AUTOENCODERS", 2021) is directed towards a VQ-VAE with codebook for temporal anomaly detection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK, whose telephone number is (571) 272-4720. The examiner can normally be reached M-F, 7:30am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SIDNEY VINCENT BOSTWICK/
Examiner, Art Unit 2124

Prosecution Timeline

Feb 10, 2025: Application Filed
Apr 21, 2025: Non-Final Rejection — §103
Jul 29, 2025: Response Filed
Aug 18, 2025: Final Rejection — §103
Dec 22, 2025: Request for Continued Examination
Jan 10, 2026: Response after Non-Final Action
Feb 23, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604
SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING
2y 5m to grant; granted Feb 24, 2026

Patent 12547878
Highly Efficient Convolutional Neural Networks
2y 5m to grant; granted Feb 10, 2026

Patent 12536426
Smooth Continuous Piecewise Constructed Activation Functions
2y 5m to grant; granted Jan 27, 2026

Patent 12518143
FEEDFORWARD GENERATIVE NEURAL NETWORKS
2y 5m to grant; granted Jan 06, 2026

Patent 12505340
STASH BALANCING IN MODEL PARALLELISM
2y 5m to grant; granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 90% (+38.2%)
Median Time to Grant: 4y 7m
PTA Risk: High
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
