Prosecution Insights
Last updated: April 19, 2026
Application No. 17/677,009

COMPRESSING INFORMATION IN AN END NODE USING AN AUTOENCODER NEURAL NETWORK

Status: Non-Final OA (§103)
Filed: Feb 22, 2022
Examiner: LEY, SALLY THI
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Silicon Laboratories Inc.
OA Round: 3 (Non-Final)

Grant Probability: 15% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 10m
Grant Probability With Interview: 44%

Examiner Intelligence

Career Allow Rate: 15% (5 granted / 33 resolved; -39.8% vs TC avg)
Interview Lift: +28.8% among resolved cases with interview
Avg Prosecution (typical timeline): 3y 10m; 35 applications currently pending
Total Applications (career history): 68 across all art units
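The headline percentages above follow directly from the raw counts. As a quick sanity check (a throwaway sketch; the 44% with-interview allowance rate from the summary is assumed, and the lift is taken as the difference against the career allow rate):

```python
# Recomputing the examiner statistics above from the raw counts shown.
granted, resolved = 5, 33

career_allow_rate = granted / resolved      # grants as a share of resolved cases
with_interview_rate = 0.44                  # allowance rate when an interview was held (from summary)
interview_lift = with_interview_rate - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.1%}")   # 15.2%
print(f"Interview lift:    {interview_lift:+.1%}")     # +28.8%
```

The 15% chip above is simply 5/33 rounded down; the +28.8% lift is consistent with the 44% with-interview rate minus the unrounded career rate.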

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Comparisons are against the Tech Center average estimate • Based on career data from 33 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 30 Oct 2025 has been entered.

Status of Claims

This Office Action is in response to the communication filed 30 Oct 2025. Claims 1-23 are being considered on the merits.

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 8, 12-14, and 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Kamath, Ajith M.
(US 2014/0185862 A1; hereinafter “Kamath”) in view of Song, et al. (US 2020/0043241 A1; hereinafter “Song”) and further in view of García-Ordás, et al. (“Detecting Respiratory Pathologies Using Convolutional Neural Networks and Variational Autoencoders for Unbalancing Data.” (2020) Sensors. 20. 10.3390/s20041214; hereinafter “Garcia-Ordás”).

Claim 1, Kamath as modified teaches:

At least one non-transitory computer readable storage medium having stored thereon instructions, which if performed by a machine cause the machine to perform a method comprising: (Kamath, para. 0081: “The methods and processes described above may be implemented in programs executed from a system's memory (a computer readable medium, such as an electronic, optical or magnetic storage device). The methods, instructions and circuitry operate on electronic signals, or signals in other electromagnetic forms.”)

generating, in at least one cloud server (Kamath, para. 0077) comprising the machine, an autoencoder comprising an encoder and a decoder, and (Song, para. 0292: “More specifically, FIG. 9(a) illustrates a general structure of the artificial neural network model, and FIG. 9(b) illustrates an autoencoder, that performs decoding after encoding and goes through a reconstruction step, among the artificial neural network model.”)

generating a classifier, (Song, para. 0276: “Furthermore, the memory 25 may store the neural network model (e.g. the deep learning model 26) generated through a learning algorithm for classifying/recognizing data in accordance with the embodiment of the present disclosure.”)

wherein the encoder is to encode a spectrogram into a compressed spectrogram, (Kamath, para. 0049: “After writing the message into the spectrogram, the spectrogram is converted to an audio signal suitable for play out, transmission or storage (converted to a standard audio signal and file format, possibly compressed to reduce its size).”)

the decoder is to decode the compressed spectrogram into a recovered spectrogram, and (Kamath, para. 0031: “The metadata may be encoded in the machine readable information, embedded within the audio signal, such that only intended recipients can decode it”)

the classifier is to identify one or more properties of real world information from the recovered spectrogram; (Song, para. 0008: “In an aspect, a method of controlling an intelligent device includes obtaining sound information from a photographed image; learning the obtained sound information and recognizing a sound based on the result of the learned sound information; and classifying the image based on the recognized sound.”)

calculating a first loss of the autoencoder (Garcia-Ordás, sec. 3.2: “For that reason, the loss function used to train a VAE is made up of two terms: ‘reconstruction term’, like in the vanilla autoencoder, that tends to make the encoder-decoder work accurately; and a ‘regularization term’ applied over the latent layer that tends to make the distributions created by the encoder close to a standard normal distribution using the Kulback-Leibler divergence”) and independently calculating a second loss of the classifier; (Garcia-Ordás, sec. 4.2.1: “We used Adam as the optimization algorithm and categorical crossentropy as the loss function” Examiner notes that Garcia-Ordás teaches crossentropy for the classifier independent of the loss calculated for the autoencoder.)

jointly training the autoencoder and the classifier to minimize a loss function based at least in part on the first loss (Garcia-Ordás, sec. 3.2, quoted above) and the second loss (Garcia-Ordás, sec. 4.2.1, quoted above); (Garcia-Ordás, sec. 4.2.1 and Fig. 5: “Furthermore, an augmentation of the less representative classes was done to balance the dataset. This augmentation step was carried out using our proposed VAE. A convolutional VAE scheme has been implemented in order to generate more samples for the Non-Chronic and healthy classes. In Figure 5, we can see the network configuration.” Examiner notes that Figure 5 shows the interaction between the autoencoder and the classifier.)

storing the trained autoencoder and the trained classifier in a non-transitory storage medium for delivery of at least the encoder of the trained autoencoder to at least one end node wireless device to cause the encoder of the trained autoencoder to execute on the at least one end node wireless device to compress the real world information sensed by the at least one end node wireless device, the compressed real world information to be sent from the at least one end node wireless device to the at least one cloud server for processing; (Kamath, para. 0077: “Different of the functionality can be implemented on different devices. For example, in a system in which a cell phone communicates with a server at a remote service provider, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. For example, messages can be authored and communicated to other devices by servers in a cloud computing service by uploading message and host image content to servers in a cloud service or authored in a mobile device via a script program downloaded from an online authoring service provided from a network server. Also, messages and host signals may be stored on the cell phone—allowing the cell phone to write messages into host signals, transmit them, receive them, and render them—all without reliance on externals devices. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a cell phone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated. (Moreover, more than two devices may commonly be employed. E.g., a service provider may refer some tasks, functions or operations, to servers dedicated to such tasks.) In like fashion, data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.”)

receiving, in the at least one cloud server, the compressed real world information comprising speech information of a user; (Kamath, para. 0055: “In particular, as depicted in FIG. 6 for example, one user authors a message (e.g., an image) on her phone with a mobile application program, and the mobile application writes it into the spectrogram of a host audio signal, as shown in block 112. The application converts the spectrogram to an output audio signal format (e.g., wav file) in block 114 and plays that audio signal.” Examiner notes Kamath, para. 0077, above teaches that any functionality, including receiving, can be implemented on different devices including “e.g. a remote server”.)

processing, in the at least one cloud server, the compressed real world information to determine, based at least in part on the compressed real world information, an operation requested by the user to be performed by the at least one end node wireless device; and (Song, para. 0079: “At this time, the AI server 16 may receive input data from the AI device (11 to 15), infer a result value from the received input data by using the learning model, generate a response or control command based on the inferred result value, and transmit the generated response or control command to the AI device (11 to 15).” Examiner notes Song teaches generating an inferred result, i.e., processing. Examiner further notes Song teaches any device being a wireless device in para. 0138 and including a robot as taught in para. 0090.)

sending, from the at least one cloud server, command information to the at least one end node wireless device to cause the at least one end node wireless device to perform the operation responsive to the command information. (Song, para. 0090: “Also, the robot 11 may perform the operation or navigate the space by controlling its locomotion platform based on the control/interaction of the user. At this time, the robot 11 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.” Examiner notes Song teaches an end node being a robot where Kamath teaches an end node being a cell phone. Moreover, Song teaches devices being wireless at para. 0138.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Song into Kamath, as modified. Kamath teaches communicating a message between devices by passing an audio signal with the message written into the spectrogram of the audio signal. Song teaches an intelligent device and a method of controlling the same.
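For orientation only, the claimed training arrangement in claim 1 (an encoder compressing a spectrogram, a decoder recovering it, a classifier reading the recovered spectrogram, and a joint objective combining the independently calculated reconstruction and classification losses) can be sketched with placeholder linear layers and invented dimensions. Nothing below is taken from Kamath, Song, or Garcia-Ordás; it is a minimal illustration of the claim language itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): a flattened 32x32 spectrogram
# compressed to a 16-value latent code, with 3 candidate properties.
SPEC_DIM, LATENT_DIM, NUM_CLASSES = 32 * 32, 16, 3

# Linear maps standing in for the encoder, decoder, and classifier networks.
W_enc = rng.normal(0, 0.01, (SPEC_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.01, (LATENT_DIM, SPEC_DIM))
W_cls = rng.normal(0, 0.01, (SPEC_DIM, NUM_CLASSES))

def softmax(z):
    z = z - z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

spectrogram = rng.random(SPEC_DIM)  # stand-in for sensed real world information
label = np.eye(NUM_CLASSES)[1]      # one-hot "property" the classifier should identify

compressed = spectrogram @ W_enc    # encoder: compressed spectrogram
recovered = compressed @ W_dec      # decoder: recovered spectrogram
probs = softmax(recovered @ W_cls)  # classifier operates on the recovered spectrogram

first_loss = np.mean((spectrogram - recovered) ** 2)   # autoencoder reconstruction loss
second_loss = -np.sum(label * np.log(probs + 1e-12))   # independently calculated classifier cross-entropy
joint_loss = first_loss + second_loss                  # objective minimized during joint training

print("compression ratio:", compressed.size / spectrogram.size)  # 0.015625
```

In an actual implementation the two losses would drive gradient updates of all three networks together; the point here is only that the first and second losses are computed independently and then combined into one training objective, matching the claim's "jointly training ... based at least in part on the first loss and the second loss."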
One of ordinary skill would have been motivated to combine the teachings of Song into Kamath, as modified, in order to increase reliability and reduce latency to support smart grid control, industry automation, robotics, and drone control and coordination (Song, para. 0062).

Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Garcia-Ordás into Kamath, as modified. Garcia-Ordás teaches the use of variational convolutional autoencoders to process the sound of breaths in the form of spectrograms for diagnosing various ailments. One of ordinary skill would have been motivated to combine the teachings of Garcia-Ordás into Kamath, as modified, in order to train a computer to classify respiratory sounds into healthy, chronic, and non-chronic disease classes.

Claim 4, Kamath as modified teaches claim 1. Kamath as modified further teaches:

wherein the method further comprises sending a trained encoder portion of the autoencoder to one or more end node wireless devices to enable the one or more end node wireless devices to compress spectrograms using the trained encoder portion and (Kamath, para. 0077, quoted in full above with respect to claim 1: functionality can be distributed between devices such as a cell phone and a remote server, and data can be stored on the local device, a remote device, or in the cloud.)

send the compressed spectrograms to the at least one cloud server. (Kamath, para. 0049: “After writing the message into the spectrogram, the spectrogram is converted to an audio signal suitable for play out, transmission or storage (converted to a standard audio signal and file format, possibly compressed to reduce its size).” Examiner notes that Kamath teaches uploading and downloading operations between services and devices at para. 0077.)

Claim 5, Kamath as modified teaches claim 4. Kamath as modified further teaches:

wherein the method further comprises: receiving, in the at least one cloud server, uncompressed spectrograms from the one or more end node wireless devices; and (Kamath, para. 0077, quoted in full above with respect to claim 1. Examiner notes Kamath teaches encoding and decoding signals at para. 0031 and compression of signals at para. 0049.)

incrementally training, in the at least one cloud server, one or more of the autoencoder or the classifier based at least in part on the uncompressed spectrograms. (Song, paras. 0297 and 0298: “Referring to FIG. 9(b), an artificial neural network model according to an embodiment of the present disclosure may include an autoencoder.” “The artificial neural network model that has been learned repeatedly several times may stop the learning and may be stored in a memory of an AI device if an error value is less than a reference value.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Song into Kamath as set forth above with respect to claim 1.

Claim 6, Kamath as modified teaches claim 5.
Kamath as modified further teaches: wherein the method further comprises sending an incrementally trained encoder portion of the autoencoder from the at least one cloud server to the one or more end node wireless devices. (Kamath, para. 0077, quoted in full above with respect to claim 1. Examiner notes Kamath teaches encoding and decoding signals at para. 0031 and compression of signals at para. 0049.)

Claim 8, Kamath as modified teaches: A method comprising: generating, in at least one cloud server comprising the machine, an autoencoder comprising an encoder and a decoder, and (Song, para. 0292) generating a classifier, (Song, para. 0276) wherein the encoder is to encode a spectrogram into a compressed spectrogram, (Kamath, para. 0049) the decoder is to decode the compressed spectrogram into a recovered spectrogram, and (Kamath, para. 0031) the classifier is to identify one or more properties of real world information from the recovered spectrogram; (Song, para. 0008) calculating a first loss of the autoencoder (Garcia-Ordás, sec. 3.2) and independently calculating a second loss of the classifier; (Garcia-Ordás, sec. 4.2.1) jointly training the autoencoder and the classifier to minimize a loss function based at least in part on the first loss and the second loss; (Garcia-Ordás, secs. 3.2 and 4.2.1 and Fig. 5) storing the trained autoencoder and the trained classifier in a non-transitory storage medium for delivery of at least the encoder of the trained autoencoder to at least one end node wireless device to cause the encoder of the trained autoencoder to execute on the at least one end node wireless device to compress the real world information sensed by the at least one end node wireless device, the compressed real world information to be sent from the at least one end node wireless device to the at least one cloud server for processing; (Kamath, para. 0077) receiving, in the at least one cloud server, the compressed real world information comprising speech information of a user; (Kamath, para. 0055) processing, in the at least one cloud server, the compressed real world information to determine, based at least in part on the compressed real world information, an operation requested by the user to be performed by the at least one end node wireless device; and (Song, para. 0079) sending, from the at least one cloud server, command information to the at least one end node wireless device to cause the at least one end node wireless device to perform the operation responsive to the command information. (Song, para. 0090) Each of the foregoing citations is quoted in full, with accompanying examiner notes, above with respect to claim 1.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Song into Kamath, as set forth above with respect to claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Garcia-Ordás into Kamath, as modified, as set forth above with respect to claim 1.

Claim 12, Kamath as modified teaches claim 8. Kamath as modified further teaches: further comprising sending a trained encoder portion of the autoencoder from the computer system to one or more end node wireless devices. (Kamath, para. 0077, quoted in full above with respect to claim 1.)

Claim 13, Kamath as modified teaches claim 12. Kamath as modified further teaches: further sending a trained decoder portion of the autoencoder from the computer system to at least some of the one or more end node wireless devices. (Kamath, para. 0077, quoted in full above with respect to claim 1.)

Claim 14, Kamath as modified teaches claim 12. Kamath as modified further teaches: further comprising receiving, in the computer system, compressed spectrograms from at least some of the one or more end node wireless devices, the compressed spectrograms compressed using the trained encoder portion. (Kamath, para. 0077, quoted in full above with respect to claim 1.)

Claim 17, Kamath as modified teaches claim 12. Kamath as modified further teaches: further comprising: requesting, by the computer system, one or more uncompressed spectrograms from at least some of the one or more end node wireless devices; and (Kamath, para. 0077, quoted in full above with respect to claim 1.) incrementally training, in the computer system, at least one of the autoencoder or the classifier based at least in part on the one or more uncompressed spectrograms. (Song, paras. 0297 and 0298, quoted above with respect to claim 5.)

Claim 18, Kamath as modified teaches: A system comprising: at least one processor; memory coupled to the at least one processor; and one or more non-transitory storage media, wherein the one or more non-transitory storage media comprises instructions which if performed by the system cause the system to perform a method comprising (Kamath, para. 0081, quoted above with respect to claim 1) generating, in at least one cloud server comprising the machine, an autoencoder comprising an encoder and a decoder, and (Song, para. 0292, quoted above with respect to claim 1) generating a classifier, (Song, para.
0276: “Furthermore, the memory 25 may store the neural network model (e.g. the deep learning model 26) generated through a learning algorithm for classifying/recognizing data in accordance with the embodiment of the present disclosure.”) wherein the encoder is to encode a spectrogram into a compressed spectrogram, (Kamath, para. 0049: “After writing the message into the spectrogram, the spectrogram is converted to an audio signal suitable for play out, transmission or storage (converted to a standard audio signal and file format, possibly compressed to reduce its size).”) the decoder is to decode the compressed spectrogram into a recovered spectrogram, and (Kamath, para. 0031: “The metadata may be encoded in the machine readable information, embedded within the audio signal, such that only intended recipients can decode it”) the classifier is to identify one or more properties of real world information from the recovered spectrogram; (Song, para. 0008: “In an aspect, a method of controlling an intelligent device includes obtaining sound information from a photographed image; learning the obtained sound information and recognizing a sound based on the result of the learned sound information; and classifying the image based on the recognized sound.”) calculating a first loss of the autoencoder (Garcia-Ordás, sec. 3.2: “For that reason, the loss function used to train a VAE is made up of two terms: ‘reconstruction term’, like in the vanilla autoencoder, that tends to make the encoder-decoder work accurately; and a ‘regularization term’ applied over the latent layer that tends to make the distributions created by the encoder close to a standard normal distribution using the Kulback-Leibler divergence”) and independently calculating a second loss of the classifier; (Garcia-Ordás, sec.
4.2.1: “We used Adam as the optimization algorithm and categorical crossentropy as the loss function” Examiner notes that Garcia-Ordás teaches crossentropy for the classifier independent of the loss calculated for the autoencoder); jointly training the autoencoder and the classifier to minimize a loss function based at least in part on the first loss (Garcia-Ordás, sec. 3.2: “For that reason, the loss function used to train a VAE is made up of two terms: ‘reconstruction term’, like in the vanilla autoencoder, that tends to make the encoder-decoder work accurately; and a ‘regularization term’ applied over the latent layer that tends to make the distributions created by the encoder close to a standard normal distribution using the Kulback-Leibler divergence”) and the second loss (Garcia-Ordás, sec. 4.2.1: “We used Adam as the optimization algorithm and categorical crossentropy as the loss function”); (Garcia-Ordás, sec. 4.2.1 and Fig. 5: “Furthermore, an augmentation of the less representative classes was done to balance the dataset. This augmentation step was carried out using our proposed VAE. A convolutional VAE scheme has been implemented in order to generate more samples for the Non-Chronic and healthy classes. In Figure 5, we can see the network configuration.” Examiner notes that Figure 5 shows the interaction between the autoencoder and the classifier). storing the trained autoencoder and the trained classifier in a non-transitory storage medium for delivery of at least the encoder of the trained autoencoder to at least one end node wireless device to cause the encoder of the trained autoencoder to execute on the at least one end node wireless device to compress the real world information sensed by the at least one end node wireless device, the compressed real world information to be sent from the at least one end node wireless device to the at least one cloud server for processing; (Kamath, para.
0077: “Different of the functionality can be implemented on different devices. For example, in a system in which a cell phone communicates with a server at a remote service provider, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. For example, messages can be authored and communicated to other devices by servers in a cloud computing service by uploading message and host image content to servers in a cloud service or authored in a mobile device via a script program downloaded from an online authoring service provided from a network server. Also, messages and host signals may be stored on the cell phone—allowing the cell phone to write messages into host signals, transmit them, receive them, and render them—all without reliance on externals devices. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a cell phone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated. (Moreover, more than two devices may commonly be employed. E.g., a service provider may refer some tasks, functions or operations, to servers dedicated to such tasks.) In like fashion, data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.”) receiving, in the at least one cloud server, the compressed real world information comprising speech information of a user; (Kamath, para. 0055: “ In particular, as depicted in FIG. 6 for example, one user authors a message (e.g., an image) on her phone with a mobile application program, and the mobile application writes it into the spectrogram of a host audio signal, as shown in block 112. The application converts the spectrogram to an output audio signal format (e.g., way file) in block 114 and plays that audio signal.” Examiner notes Kamath para. 
0077 above teaches that any functionality, including receiving can be implemented on different devices including, “e.g. a remote server”). processing, in the at least one cloud server, the compressed real world information to determine, based at least in part on the compressed real world information, an operation requested by the user to be performed by the at least one end node wireless device; and (Song, para. 0079: “At this time, the AI server 16 may receive input data from the AI device (11 to 15), infer a result value from the received input data by using the learning model, generate a response or control command based on the inferred result value, and transmit the generated response or control command to the AI device (11 to 15).” Examiner notes Song teaches generating an inferred result, i.e., processing. Examiner further notes Song teaches any device being a wireless device in para. 0138 and including a robot as taught in para. 0090) sending, from the at least one cloud server, command information to the at least one end node wireless device to cause the at least one end node wireless device to perform the operation responsive to the command information. (Song, para. 0090: “ Also, the robot 11 may perform the operation or navigate the space by controlling its locomotion platform based on the control/interaction of the user. At this time, the robot 11 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.” Examiner notes Song teaches an end node being a robot where Kamath teaches an end node being a cell phone. Moreover, Song teaches devices being wireless at para. 0138) Claim 19, Kamath as modified teaches claim 18.
Kamath as modified further teaches: wherein the system comprises a remote cloud server, the remote cloud server to send the trained autoencoder (Song, para. 0292) to one or more end nodes coupled to the remote cloud server via a network. (Kamath, para. 0077: “Different of the functionality can be implemented on different devices. For example, in a system in which a cell phone communicates with a server at a remote service provider, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. For example, messages can be authored and communicated to other devices by servers in a cloud computing service by uploading message and host image content to servers in a cloud service or authored in a mobile device via a script program downloaded from an online authoring service provided from a network server. Also, messages and host signals may be stored on the cell phone—allowing the cell phone to write messages into host signals, transmit them, receive them, and render them—all without reliance on externals devices. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a cell phone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated. (Moreover, more than two devices may commonly be employed. E.g., a service provider may refer some tasks, functions or operations, to servers dedicated to such tasks.) In like fashion, data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.”) Claim 20, Kamath as modified teaches claim 19. Kamath as modified further teaches: wherein the remote cloud server is to receive compressed spectrograms (Kamath, para.
0049: “After writing the message into the spectrogram, the spectrogram is converted to an audio signal suitable for play out, transmission or storage (converted to a standard audio signal and file format, possibly compressed to reduce its size).”) from at least some of the one or more end node devices and process the compressed spectrograms. (Kamath, para. 0077: “Different of the functionality can be implemented on different devices. For example, in a system in which a cell phone communicates with a server at a remote service provider, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. For example, messages can be authored and communicated to other devices by servers in a cloud computing service by uploading message and host image content to servers in a cloud service or authored in a mobile device via a script program downloaded from an online authoring service provided from a network server. Also, messages and host signals may be stored on the cell phone—allowing the cell phone to write messages into host signals, transmit them, receive them, and render them—all without reliance on externals devices. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a cell phone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated. (Moreover, more than two devices may commonly be employed. E.g., a service provider may refer some tasks, functions or operations, to servers dedicated to such tasks.) In like fashion, data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.”) Claim 21, Kamath as modified teaches claim 1. Kamath as modified further teaches: wherein the command information is to cause the at least one end node wireless device to perform the operation comprising a playing of a media file. 
(Kamath, para. 0077: “Different of the functionality can be implemented on different devices. For example, in a system in which a cell phone communicates with a server at a remote service provider, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. For example, messages can be authored and communicated to other devices by servers in a cloud computing service by uploading message and host image content to servers in a cloud service or authored in a mobile device via a script program downloaded from an online authoring service provided from a network server. Also, messages and host signals may be stored on the cell phone—allowing the cell phone to write messages into host signals, transmit them, receive them, and render them—all without reliance on externals devices. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a cell phone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated. (Moreover, more than two devices may commonly be employed. E.g., a service provider may refer some tasks, functions or operations, to servers dedicated to such tasks.) In like fashion, data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.”) Claim 22, Kamath as modified teaches claim 1. Kamath as modified further teaches: wherein the command information is to cause the at least one end node wireless device to perform the operation comprising an operation in an automation network. (Song, para. 0073: “Referring to FIG. 1, in the AI system, at least one or more of an AI server 16, robot 11, self-driving vehicle 12, XR device 13, smartphone 14, or home appliance 15 are connected to a cloud network 10. 
Here, the robot 11, self-driving vehicle 12, XR device 13, smartphone 14, or home appliance 15 to which the AI technology has been applied may be referred to as an AI device (11 to 15).”) Claim 23, Kamath as modified teaches claim 8. Kamath as modified further teaches: wherein the command is to cause the at least one end node wireless device to turn on a light in the local environment. (Song, para. 0220: “The output unit 150 may be configured to output various types of information, such as audio, video, tactile output, and the like. The output unit 150 may include at least one of a display unit 151, an audio output unit 152, a haptic module 153, or an optical output unit 154.”) Claims 2-3 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kamath, in view of Song, in view of García-Ordás and further in view of De la Cruz Jr, et al. (“Jointly Pre-training with supervised, autoencoder, and value losses for deep reinforcement learning”, arXiv:1904.02206v1 [cs.LG] 3 Apr 2019; hereinafter, “de la Cruz”) Claim 2, Kamath as modified teaches claim 1. Kamath as modified further teaches: wherein the method further comprises jointly training the autoencoder and the classifier based on a weighted sum of the first loss and the second loss. (De la Cruz sec. 3.2: “To obtain extra information in addition to the supervised features, we take inspiration from the supervised autoencoder framework which jointly trains a classifier and an autoencoder Le et al. (2018); we believe this approach will retain the important features learned through supervised pre-training and at the same time, learns additional general features from the added autoencoder loss.
Finally, we blend in the value loss loss_v^saev with the supervised and autoencoder losses as L^saev = L_s^saev(W_S; F(X_i), y_i) + L_ae^saev(W_ae; F(X_i), X_i) + L_v^saev(W_v; F(X_i), x_i)”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De la Cruz into Kamath, as modified. De la Cruz teaches a pre-training strategy that jointly trains on a weighted combination of a supervised classification loss, an unsupervised reconstruction loss, and an expected return loss. One of ordinary skill would have been motivated to combine the teachings of De la Cruz into Kamath, as modified, in order to enable discovery of more useful features compared to independently training in supervised or unsupervised fashion (De la Cruz, Abstract). Claim 3, Kamath as modified teaches claim 1. Kamath as modified further teaches: wherein the method further comprises: calculating the first loss according to a correlation coefficient; and (Garcia-Ordás, sec. 3.2: “For that reason, the loss function used to train a VAE is made up of two terms: ‘reconstruction term’, like in the vanilla autoencoder, that tends to make the encoder-decoder work accurately; and a ‘regularization term’ applied over the latent layer that tends to make the distributions created by the encoder close to a standard normal distribution using the Kulback-Leibler divergence” Examiner notes that Garcia-Ordás teaches a first loss using a regularization term, i.e. a coefficient that correlates the distributions to normal). calculating the second loss according to a binary cross-entropy. (De la Cruz, sec. 2.5: “assume the non-optimal human actions as the true labels for each game state. The network is pre-trained with the cross-entropy loss” Examiner notes that De la Cruz teaches multi-label cross-entropy, i.e. binary cross-entropy).
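As context for the claim 2/claim 3 mappings, the recited training objective (a first autoencoder loss computed from a correlation coefficient, an independently computed binary cross-entropy classifier loss, and a weighted sum of the two) can be sketched numerically. Everything below is a hypothetical illustration: the network shapes, random weights, and loss weights are assumptions, not drawn from the claims or from any cited reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: 4 "spectrograms", each flattened to 16 bins (values hypothetical).
x = rng.random((4, 16))
y = np.array([0.0, 1.0, 1.0, 0.0])      # binary labels for the classifier

# Hypothetical tiny linear autoencoder (16 -> 4 -> 16) plus a classifier head
# on the 4-dimensional latent code; weights are random for illustration only.
W_enc = rng.normal(0.0, 0.1, (16, 4))
W_dec = rng.normal(0.0, 0.1, (4, 16))
W_cls = rng.normal(0.0, 0.1, (4,))

z = x @ W_enc                            # compressed representation
x_hat = z @ W_dec                        # reconstruction
p = 1.0 / (1.0 + np.exp(-(z @ W_cls)))  # sigmoid classifier output

# First loss: autoencoder fidelity via a correlation coefficient
# (claim 3: "calculating the first loss according to a correlation coefficient").
r = np.corrcoef(x.ravel(), x_hat.ravel())[0, 1]
loss_ae = 1.0 - r                        # 0 when input and reconstruction correlate perfectly

# Second loss, computed independently: binary cross-entropy of the classifier.
eps = 1e-12
loss_cls = -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

# Claims 2/9: joint training minimizes a weighted sum of the two losses.
w_ae, w_cls = 0.7, 0.3
loss_joint = w_ae * loss_ae + w_cls * loss_cls
print(loss_joint)
```

In a real training loop both terms would be differentiated with respect to the shared encoder weights, so the latent code is shaped by the reconstruction and classification objectives at once; the weights w_ae and w_cls control that trade-off.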
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Garcia-Ordás into Kamath, as modified, as set forth above with respect to claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De la Cruz into Kamath, as modified, as set forth above with respect to claim 2. Claim 9, Kamath as modified teaches claim 8. Kamath as modified further teaches: wherein the method further comprises jointly training the autoencoder and the classifier based on a weighted sum of the first loss and the second loss. (De la Cruz sec. 3.2: “To obtain extra information in addition to the supervised features, we take inspiration from the supervised autoencoder framework which jointly trains a classifier and an autoencoder Le et al. (2018); we believe this approach will retain the important features learned through supervised pre-training and at the same time, learns additional general features from the added autoencoder loss. Finally, we blend in the value loss loss_v^saev with the supervised and autoencoder losses as L^saev = L_s^saev(W_S; F(X_i), y_i) + L_ae^saev(W_ae; F(X_i), X_i) + L_v^saev(W_v; F(X_i), x_i)”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De la Cruz into Kamath, as modified, as set forth above with respect to claim 2. Claim 10, Kamath as modified teaches claim 8. Kamath as modified further teaches: The at least one non-transitory computer readable storage medium of claim 1, wherein the method further comprises: calculating the first loss according to a correlation coefficient; and (Garcia-Ordás, sec.
3.2: “For that reason, the loss function used to train a VAE is made up of two terms: ‘reconstruction term’, like in the vanilla autoencoder, that tends to make the encoder-decoder work accurately; and a ‘regularization term’ applied over the latent layer that tends to make the distributions created by the encoder close to a standard normal distribution using the Kulback-Leibler divergence” Examiner notes that Garcia-Ordás teaches a first loss using a regularization term, i.e. a coefficient that correlates the distributions to normal). calculating the second loss according to a binary cross-entropy. (De la Cruz, sec. 2.5: “assume the non-optimal human actions as the true labels for each game state. The network is pre-trained with the cross-entropy loss” Examiner notes that De la Cruz teaches multi-label cross-entropy, i.e. binary cross-entropy). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Garcia-Ordás into Kamath, as modified, as set forth above with respect to claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of De la Cruz into Kamath, as modified, as set forth above with respect to claim 2. Responses to Applicant Remarks and Arguments 35 USC §101 The previously asserted rejection pursuant to 35 U.S.C. §101 has been withdrawn. 35 USC §103 As to the rejections pursuant to §103, applicant argues that Garcia-Ordás, Paraskevopoulos and Yang, in combination, do not teach the amended claims. Applicant further argues that Dong similarly does not teach the amended claims. However, such arguments are moot, as the independent claims now stand rejected over Kamath, in view of Song, in view of Garcia-Ordás. Dependent claims 2-3 and 9-10 additionally stand rejected over Kamath, in view of Song, in view of Garcia-Ordás, and further in view of de la Cruz.
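For reference, the two-term VAE objective quoted repeatedly from Garcia-Ordás, sec. 3.2 (a reconstruction term plus a Kullback-Leibler regularization term pulling the encoder's latent distribution toward a standard normal) is commonly written in the closed form below. The shapes and random inputs are illustrative assumptions only, not Garcia-Ordás's actual configuration.

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """Two-term VAE objective: reconstruction error plus a KL 'regularization
    term' toward the standard normal N(0, 1)."""
    recon = np.mean((x - x_hat) ** 2)  # reconstruction term (MSE stand-in)
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent
    # dimensions and averaged over the batch.
    kl = 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
    return recon + kl

rng = np.random.default_rng(1)
x = rng.random((8, 32))                   # batch of flattened frames (made up)
x_hat = x + rng.normal(0, 0.01, x.shape)  # pretend reconstruction
mu = rng.normal(0, 0.1, (8, 5))           # latent means from the encoder
log_var = rng.normal(0, 0.1, (8, 5))      # latent log-variances from the encoder

print(vae_loss(x, x_hat, mu, log_var))
```

Both terms are non-negative, and the KL term vanishes exactly when the encoder outputs mu = 0 and log_var = 0, i.e. a standard normal latent distribution.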
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley whose telephone number is (571)272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /STL/Examiner, Art Unit 2147 /VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Feb 22, 2022
Application Filed
Mar 25, 2025
Non-Final Rejection — §103
Aug 07, 2025
Response Filed
Aug 19, 2025
Final Rejection — §103
Oct 09, 2025
Interview Requested
Oct 17, 2025
Examiner Interview Summary
Oct 17, 2025
Applicant Interview (Telephonic)
Oct 30, 2025
Request for Continued Examination
Nov 05, 2025
Response after Non-Final Action
Feb 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443830
COMPRESSED WEIGHT DISTRIBUTION IN NETWORKS OF NEURAL PROCESSORS
2y 5m to grant — Granted Oct 14, 2025
Patent 12135927
EXPERT-IN-THE-LOOP AI FOR MATERIALS DISCOVERY
2y 5m to grant — Granted Nov 05, 2024
Patent 11880776
GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE
2y 5m to grant — Granted Jan 23, 2024
Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
15%
Grant Probability
44%
With Interview (+28.8%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
