DETAILED ACTION
1. This office action is in response to Application No. 18914240 filed on 11/17/2025. Claims 2 and 6 have been cancelled. Claims 1, 3-5, 7, and 8 are presented for examination and are currently pending.
Priority
2. The Examiner notes that the following applications: 15975741 filed 05/09/2018, 16200466 filed 11/26/2018, 16455655 filed 06/27/2019, 16716098 filed 12/16/2019, 16923039 filed 05/19/2020, 16923039 filed 07/07/2020, 17180439 filed 01/21/2021, 17180439 filed 02/19/2021, 17234007 filed 04/19/2021, 18305305 filed 08/11/2021, 17404699 filed 08/17/2021, 17458747 filed 08/27/2021, 17514913 filed 10/29/2021, 17727913 filed 04/25/2022, 18190044 filed 07/12/2022, 17875201 filed 07/27/2022, 18305305 filed 02/16/2023, 18190044 filed 03/24/2023, 18305305 filed 04/21/2023, and 18503135 filed 11/06/2023 have no support for the following limitations:
- “deep learning architecture”.
- “homomorphically compressed and encrypted data”.
- “quantize the input data into discrete intervals”.
- “latent transformer”.
- “plurality of inference or prediction”.
Furthermore, the above listed applications appear to disclose “codewords”, but they do not appear to disclose “codewords corresponding to the discrete intervals”.
However, application 18755627 filed 06/26/2024 discloses “deep learning architecture”, “homomorphically compressed and encrypted data”, “quantize the input data into discrete intervals”, and “compress the input data into a plurality of compressed codewords by dividing the input data into sections”, but does not disclose an “inference” or prediction and has no support for “transformer” or “latent transformer”.
Application 18770652 filed 07/12/2024 discloses “homomorphically compressed and encrypted data” but does not disclose “deep learning architecture” or “quantize the input data into discrete intervals”. Although the application discloses “codewords”, it does not appear to disclose “codewords corresponding to the discrete intervals”.
However, application 18737906 filed 06/07/2024 has support for “deep learning architecture”, “homomorphically compressed and encrypted data”, “quantize the input data into discrete intervals”, “codewords corresponding to the discrete intervals”, a latent transformer architecture, and the production of a plurality of inferences.
As a result, for the purpose of prosecution, the effective filing date of application 18737906 filed 06/07/2024 has been used for the prior art rejection.
Response to Arguments
3. On page 4 of the remarks, the Applicant argued that “The cited references, even when considered together, fail to teach or suggest all limitations of the claimed invention. Most significantly, neither Li nor Milton teaches or suggests the claimed step of processing encrypted and compressed codewords through a machine learning core while the data remains encrypted”.
The Examiner notes that the argument above is not persuasive because the Office Action clearly cites Li’s teaching of a compression encoder 215A … → encrypter 225 … → compression decoder 250 in Figure 2, and Li’s teaching that decoders 250 may decode the compression encoding of various types of data 126 [0123], as reading on the claimed limitation of “process the plurality of encrypted and compressed codewords through a machine learning core of the deep learning architecture”.
Furthermore, the Examiner notes that the phrase “while the data remains encrypted” in the argument above is not a claimed limitation, and as a result, the Applicant is arguing limitations that are not claimed.
On page 4 of the remarks, the Applicant argued that “Claim 1 recites: "process the plurality of encrypted and compressed codewords through a machine learning core of the deep learning architecture, wherein the machine learning core comprises a latent transformer architecture and produces a plurality of inferences based on the encrypted and compressed codewords." This limitation requires that the machine learning core operate on and produce inferences from data that remains in its encrypted and compressed form throughout processing. Li fails to teach processing encrypted data through a machine learning core. Li explicitly describes a decrypt-then-process workflow. As stated in Li at [0083]: "a decrypter 240, a bit-stream deshuffler 245, a compression decoder 250, and a data mixer 255, all of which process and reconstruct data 126." Li's architecture requires decryption (via decrypter 240) before any processing occurs. The deep- learning components cited by the Examiner, such as the autoencoder and CNN, operate on unencrypted data after this decryption step. This is the opposite of the claimed invention, where the machine learning core performs inference while the data remains encrypted”.
The above arguments are not persuasive because Li teaches that an autoencoder can include three parts: encoder, code, and decoder [0242], and shows a compression encoder 215A … → encrypter 225 … → compression decoder 250 in Figure 2. This indicates that Li’s autoencoder, as a machine learning core, processes a plurality of encrypted and compressed data, even though Li also teaches a decryption process before the decoder. As a result, the broadest reasonable interpretation of the limitation “process the plurality of encrypted and compressed codewords through a machine learning core of the deep learning architecture” is taught by Li, since encrypted and compressed data passes through the autoencoder even though a decryption process also occurs.
Furthermore, Li’s teaching that compression encoder 215A may determine a prediction model of the data ([0086]; compression encoder 215A … → encrypter 225 in Figure 2) indicates that Li’s teaching reads on the claimed “produces a plurality of inferences based on the encrypted and compressed codewords”.
In addition, the phrases “produce inferences from data that remains in its encrypted and compressed form throughout processing” and “where the machine learning core performs inference while the data remains encrypted” in the Applicant’s arguments above are not claimed limitations, and as a result, the Applicant is arguing what is not claimed.
On page 5 of the remarks, the Applicant argued that “Milton does not remedy Li's deficiencies. While Milton mentions homomorphic encryption among several possible methods ( [0070]), it does not teach or suggest applying homomorphic encryption in conjunction with machine-learning processing. Milton's transformer model, described at [0094], operates on standard unencrypted inputs. The Office Action cites no portion of Milton showing that its transformer processes homomorphically encrypted data or produces inferences from encrypted inputs. Milton's references to homomorphic encryption and to transformer models appear in entirely separate contexts and are never linked within its disclosure”.
The above argument is not persuasive because Milton’s [0070], cited in the Office Action, actually teaches using encrypted data in a machine learning process. Milton teaches that some embodiments may include encryption methods adapted for machine-learning methods, and that example encryption methods may include partially homomorphic encryption methods [0070].
Furthermore, since Milton teaches a transformer model for machine-learning [0094] and that some embodiments may implement a fully homomorphic encryption method to increase robustness and adaptability with machine-learning operations [0070], a person having ordinary skill in the art would have been motivated to modify Li’s encrypted data with homomorphic encryption as taught by Milton, because Li teaches data compression and encryption as shown in Figure 2 cited in the Office Action, while Milton similarly teaches that data may be encrypted and compressed [0044].
In addition, Milton as the secondary reference teaches encrypted sensor data processed through a machine learning operation for prediction purposes. Milton teaches that some embodiments may apply a machine-learning operation trained to predict the riskiest or most operationally vulnerable portion of a vehicle based on sensor data [0037], and that the sent data may include either or both the unprocessed sensor data provided by the set of vehicle sensors 104 or analysis results generated from the one or more vehicle agents 108, wherein the data may be encrypted, compressed, or feature-reduced [0044].
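For general context on the homomorphic property discussed above, the following is a minimal illustrative sketch, not drawn from Li, Milton, or the instant application, showing how a homomorphic scheme permits computation directly on ciphertexts. Textbook RSA is used only because it happens to be multiplicatively homomorphic; this toy form (tiny primes, no padding) is not secure and is offered purely as background.

```python
# Toy textbook RSA (small primes, no padding): insecure, but it demonstrates
# the multiplicative homomorphic property E(a) * E(b) mod n == E(a * b mod n).
p, q = 61, 53
n = p * q                  # modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse of e)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 12
c_prod = (enc(a) * enc(b)) % n       # multiply ciphertexts only
assert dec(c_prod) == (a * b) % n    # decrypts to the product of plaintexts
```

The point of the sketch is that the multiplication happens entirely on encrypted values; only the final result is decrypted.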
On page 5 of the remarks, the Applicant argued that “Accordingly, the combination of Li and Milton fails to teach the critical limitation that the machine learning core processes encrypted data and produces inferences while the data remains encrypted. This is not a routine design choice but a distinct system architecture requiring specialized training of the model to operate on encrypted data, a capability that neither reference teaches or suggests”.
On page 5 of the remarks, the Applicant argued that “The Examiner appears to interpret the limitation "process the plurality of encrypted and compressed codewords through a machine learning core" too broadly. Under that interpretation, any system that encrypts data, transmits it, decrypts it, and then processes it with machine learning would satisfy the limitation. Such an interpretation disregards the claim's express requirement that the machine learning core actually operate on the encrypted codewords themselves, not merely on data that was once encrypted. The claim requires that inferences be produced based on the encrypted and compressed codewords, not based on decrypted data. A person of ordinary skill in the art would understand that processing encrypted data through a machine learning core means the core performs computations directly on encrypted representations, functionality uniquely enabled by homomorphic encryption”.
On pages 5-6 of the remarks, the Applicant argued that “Furthermore, the combination lacks a reasoned motivation supported by evidence. The Office Action asserts that the modification would be obvious "for the benefit of homomorphic encryption to increase the speed of computational operations based on the data model" (Milton [0125]). This rationale is technically unsound and unsupported by the cited references. Homomorphic encryption increases computational cost and latency compared to operations on plaintext. The cited portions of Milton neither state nor imply that such encryption improves computational speed. Moreover, Li's explicit teaching is to decrypt before processing (Li [0083], [0123]); thus, Li directs away from maintaining encryption during machine-learning operations. A skilled artisan following Li would have no reason to retain encryption during processing”.
The arguments above are not persuasive because the broadest reasonable interpretation of the claimed limitation “process the plurality of encrypted and compressed codewords through a machine learning core” requires that encrypted and compressed codewords are received by a machine learning core, and the Applicant is reminded that the machine learning core’s processing operations can include decryption. As a result, Li as the primary reference meets the limitation because Li’s machine learning core receives the encrypted and compressed codewords and processes them, even though Li also performs decryption during processing.
Furthermore, Li as the primary reference also meets the claimed limitation “produces a plurality of inferences based on the encrypted and compressed codewords” because the prediction obtained by Li’s machine learning core was initially based on the received encrypted and compressed codewords, even though Li also performs decryption before prediction.
In addition, a person having ordinary skill in the art would have been clearly motivated to modify Li with the teachings of Milton. This is because Li as the primary reference desires data compression and encryption as shown in Figure 2 cited in the Office Action, while Milton similarly teaches that data may be encrypted and compressed [0044]. As a result, a person having ordinary skill in the art has a reason to modify Li with Milton.
Since Li as the primary reference desires lossless data compression [0086] and Huffman coding [0087], and Milton as the secondary reference similarly teaches lossless data compression methods such as Huffman coding methods [0036], it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Milton for the benefit of a homomorphic encryption method that increases adaptability with machine-learning operations [0070] and increases the speed of computational operations based on the data model (Milton [0125]).
On page 6 of the remarks, the Applicant argued that “The Examiner has also not addressed the separate limitation requiring "a latent transformer architecture.' Milton discloses a standard transformer model with encoder and decoder modules ([0094]), but Applicant's latent transformer architecture, described in the specification at Figures 27-28, operates on latent-space vectors produced by a VAE Encoder Subsystem and processes those latent representations before decoding. This structural configuration differs fundamentally from the conventional transformer of Milton and is neither taught nor suggested by the cited combination”.
The argument above is not persuasive because the Applicant is arguing what is not claimed. There are no claim limitations directed to the VAE Encoder Subsystem or decoding as argued above. The transformer of Milton reads on the claimed latent transformer architecture of the Applicant. This is because Milton teaches neural network operations on the latent space of various previously disparate records or data [0123] and discloses a neural network having an attention mechanism, and a transformer includes attention mechanisms (Some embodiments may include attention mechanisms by using an agent executing program code to run a transformer model for machine-learning … [0094]).
On page 6 of the remarks, the Applicant argued that “Accordingly, the cited references do not teach or suggest processing encrypted and compressed codewords through a machine learning core while the data remains encrypted, nor do they disclose or suggest the latent-transformer architecture recited in Claim 1. The Examiner's stated motivation to combine is unsupported by the art and contradicts Li's own teachings. Because the combination fails to disclose each and every limitation of the claim, the rejection does not establish a prima facie case of obviousness”.
The Applicant’s argument above regarding “processing encrypted and compressed codewords through a machine learning core while the data remains encrypted” is not persuasive because “while the data remains encrypted” is not a claimed limitation. The teachings of Li as modified by Milton do not contradict each other because they are analogous art. Furthermore, a person having ordinary skill in the art would have been clearly motivated to modify Li with the teachings of Milton. This is because Li as the primary reference desires data compression and encryption as shown in Figure 2 cited in the Office Action, while Milton similarly teaches that data may be encrypted and compressed [0044]. As a result, a person having ordinary skill in the art has a reason to modify Li with Milton in order to increase adaptability with machine-learning operations [0070] and to increase the speed of computational operations based on the data model (Milton [0125]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1, 3-5, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Li (US20200128307, filed 06/28/2019) in view of Milton (US20200017117, filed 07/14/2019).
Regarding claim 1, Li teaches a system for operating a deep learning architecture on … compressed and encrypted data (Deep learning technology can be applied to the embodiments for lossless data compression [0242]; One embodiment may use a CNN (Convolutional Neural Network) classifier [0245]; With reference to FIG. 20, remote unit 110 may also include a data classifier 2010, a one-dimensional telemetry data compression encoder 2015, a bit stream shuffler 2020, an encrypter 2025 [0126]),
comprising one or more computers with executable instructions that, when executed, cause a deep learning system to (The embodiments include combining an autoencoder, an unsupervised deep learning algorithm, and a convolutional neural network (CNN) to generate prediction models automatically that can reach higher compression ratios [0198]):
sense input data from a data source (… data 126 may include data from various sources (e.g., measurement data 126 from multiple transducers 125, some of the measurement data being specifically image data 261 [0098]);
quantize the input data into discrete intervals (Remote unit 110 may convert measurement data 126 from transducers 125 into PCM-encoded measurement data 260 … Linear PCM, in which signal levels are quantized and indexed based on constant quantization intervals, will be referred to as LPCM. Logarithmic PCM, in which signal levels are segmented into unequal quantization segments, and each quantization segment is divided and indexed based on equal quantization intervals within the segment, will be referred to as logarithmic PCM. The term PCM will generally refer to either or both LPCM and logarithmic PCM. Such PCM-encoded data 260 may be a component of data 126 [0085]);
generate a codebook comprising codewords corresponding to the discrete intervals (Segment encoder 420 may receive a segment data stream 422 comprising the bits of logarithmic PCM data 410 that correspond to the quantization segments [0089]; In FIG. 30, the multi-state encoder 2110 may input a segment sample X at Step 3010, and at Step 3015 may calculate the number of bits needed for Huffman coding [0172]; At Step 3125 the multi-state encoder 2110 computes the Huffman code tree, and at 3130 the multi-state encoder 2110 builds a Huffman codebook [0175]);
compress the input data into a plurality of compressed codewords (The input data contains N blocks of data 4506; the state-of-the-art algorithm 4504 must wait 4508 to receive the entire dataset with a total input data duration of 4510 before starting the compression process and generates a compressed bit-stream [0193]) by dividing the input data into sections (In some embodiments, image compression encoder 215B2 may analyze a video frame or a sequence of video frames and divide its contents into different shapes or object regions [0093]) and
allocating codewords from the codebook to sections of the input data (With reference back to FIG. 2, after data 126 has been compressed and encoded according to the characteristics of the data, bit-stream shuffler 220 may receive the compression-encoded data as a bit stream and proceed to shuffle the bit stream so as to remove possibly predictable patterns added by the compression encodings [0119]; The fourth level of randomness may be introduced by a parameter shuffler 1825 such as parameter shufflers 1825 1, 1825 2, and/or 1825 3, which may randomly assign bit allocation in individual encoding processes, based on segment states. For example, parameter shuffler 1825 1 may randomly assign bit allocation in the encoding process of segments in the transition state, parameter shuffler 1825 2 may do the same for segments in the periodical state, and parameter shuffler 1825 3 may do the same for segments in the slow-varying state [0120]);
encrypt the plurality of compressed codewords into a plurality of encrypted and compressed codewords (This processing may include compressing data 126. By classifying data 126 via data classifier 210, as discussed in further detail below, data 126 may be able to be compressed by compression encoders 215 with relatively high compression ratios and relatively low delay and computational costs. The compression-encoded data may be output as bit-streams, and bit-stream shuffler 220 may shuffle these bit-streams to remove possibly predictable patterns added by compression encoders 215. Encrypter 225 may encrypt the shuffled bit-stream [0083]);
process the plurality of encrypted and compressed codewords through a machine learning core of the deep learning architecture (deep learning based autoencoder [0237]; An autoencoder can include three parts: encoder, code, and decoder [0242]; a compression decoder 250, and a data mixer 255, all of which process and reconstruct data 126 [0083]; Decoders 250 may decode the compression encoding of various types of data 126 [0123]),
wherein the machine learning core … and produces a plurality of inferences based on the encrypted and compressed codewords (The embodiments include combining an autoencoder, an unsupervised deep learning algorithm, and a convolutional neural network (CNN) to generate prediction models automatically that can reach higher compression ratios [0198]); and
modify the data source or the input data based on the plurality of inferences (a decrypter 240, a bit-stream deshuffler 245, a compression decoder 250, and a data mixer 255, all of which process and reconstruct data 126 [0083]).
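As general background on the kind of codebook generation and codeword allocation cited in the mapping above (a minimal sketch, not drawn from Li's disclosure itself), a Huffman codebook assigning variable-length codewords to symbols such as quantization-interval indices can be built as follows:

```python
import heapq
from collections import Counter

def huffman_codebook(symbols):
    """Build a Huffman codebook mapping each symbol (e.g., a
    quantization-interval index) to a prefix-free, variable-length
    codeword; more frequent symbols receive shorter codewords."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, tiebreaker, {symbol: codeword-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

book = huffman_codebook("aaaabbc")
# "a" occurs most often, so it gets the shortest codeword.
```

Allocating codewords to sections of input data then amounts to looking up each section's symbol in this codebook.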
Li is silent about using a homomorphic encryption algorithm and a latent transformer architecture.
Milton teaches a homomorphic encryption algorithm (Furthermore, these machine-learning results may be compressed, reduced, or encrypted before transmission [0038]; Example encryption methods may also include fully homomorphic encryption methods [0070]).
wherein the machine learning core comprises a latent transformer architecture and produces a plurality of inferences (Some embodiments may include attention mechanisms by using an agent executing program code to run a transformer model for machine-learning. A transformer model may include an encoder module, wherein the encoder module may include a first multi-head self-attention layer and a feed forward layer … The output of the decoder portion of transformer can be used to categorize an input or generate inferences [0094]).
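As general background on the attention mechanism that Milton's transformer model includes (a minimal sketch in plain Python, not drawn from Milton's disclosure), scaled dot-product attention — the core operation of a transformer layer — computes softmax(QKᵀ/√d)·V:

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention on plain nested lists:
    each query attends over all keys; the output is a weighted
    combination of the value vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [x / total for x in exps]
        # Convex combination of value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
result = attention(Q, K, V)
```

Because the query matches the first key more closely, the output row is weighted toward the first value vector.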
Since Li as the primary reference desires lossless data compression [0086] and Huffman coding [0087], and Milton as the secondary reference similarly teaches lossless data compression methods such as Huffman coding methods [0036], it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Li to incorporate the teachings of Milton for the benefit of a homomorphic encryption method that increases robustness and adaptability with machine-learning operations [0070] and increases the speed of computational operations based on the data model (Milton [0125]).
Regarding claim 3, Li and Milton teach the system of claim 1. Milton further teaches wherein the machine learning core produces a plurality of predictions based on the plurality of encrypted and compressed codewords (analysis results generated from the one or more vehicle agents 108, wherein the data may be encrypted, compressed, or feature-reduced [0044]; Some embodiments may include attention mechanisms by using an agent executing program code to run a transformer model for machine-learning. A transformer model may include an encoder module, wherein the encoder module may include a first multi-head self-attention layer and a feed forward layer … The output of the feed forward layer can then be used by a decoder portion of the transformer … The output of the decoder portion of transformer can be used to categorize an input or generate inferences [0094]).
The same motivation to combine as for independent claim 1 applies here.
Regarding claim 4, Li and Milton teach the system of claim 1. Li further teaches wherein the input data is quantized before allocating codewords to sections of the input data (Remote unit 110 may convert measurement data 126 from transducers 125 into PCM-encoded measurement data 260 … Linear PCM, in which signal levels are quantized and indexed based on constant quantization intervals, will be referred to as LPCM. Logarithmic PCM, in which signal levels are segmented into unequal quantization segments, and each quantization segment is divided and indexed based on equal quantization intervals within the segment, will be referred to as logarithmic PCM. The term PCM will generally refer to either or both LPCM and logarithmic PCM. Such PCM-encoded data 260 may be a component of data 126 [0085]).
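As general background on quantization into discrete intervals of the kind PCM performs (a minimal sketch assuming uniform, LPCM-style intervals of constant width; not drawn from Li's disclosure itself):

```python
def quantize(samples, lo, hi, levels):
    """Map each sample to the index of a uniform quantization interval,
    analogous to linear PCM indexing with constant interval width."""
    step = (hi - lo) / levels
    indices = []
    for x in samples:
        # Clamp to the representable range, then index the interval.
        x = min(max(x, lo), hi)
        idx = min(int((x - lo) / step), levels - 1)
        indices.append(idx)
    return indices

# Five samples in [-1, 1] mapped onto 4 discrete intervals.
print(quantize([-1.0, -0.3, 0.0, 0.7, 1.0], lo=-1.0, hi=1.0, levels=4))
# → [0, 1, 2, 3, 3]
```

The resulting interval indices are the symbols to which codewords from a codebook would subsequently be allocated.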
Regarding claim 5, claim 5 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 7, claim 7 is similar to claim 3 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 8, claim 8 is similar to claim 4 and is rejected in the same manner, with the same reasoning applying.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148