DETAILED ACTION
1. The present application, 18/427,160, filed on 03/29/2023, is being examined under the first inventor to file provisions of the AIA. A preliminary amendment filed on 03/29/2023 is acknowledged. Claims 1-6 and 8 are amended. Claim 7 is canceled. Claims 1-6 and 8 are pending.
Drawings
2. The drawings received on 03/29/2023 are accepted by the Examiner.
Information Disclosure Statement
3. The information disclosure statement (IDS) submitted on 03/29/2023 has been considered by the examiner.
Priority
4. This application makes reference to or appears to claim subject matter disclosed in Application No. PCT/JP2020/038, which was filed on 10/08/2020.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Ferreira (US 2021/0232968 A1) in view of Coulombe (CA 2703048 C).
Referring to claims 1 and 8, Ferreira discloses a decompression availability determination method executed by a computer (See para. [0011], a compression/decompression method relating to AI, machine learning, prediction models, and neural networks), the method comprising:
generating compressed data by compressing input data using an encoder of an auto-encoder which has completed learning (See para. [0043] and Figure 5, the system receives input data at a compressor, each compressor generates compressed data (504), and the input data is compressed such that the data can be used for machine learning training);
generating decompressed data by decompressing the compressed data using a decoder of the auto-encoder (See para. [0044] and para. [0045], the system's decompressor decompresses the compressed data received from multiple nodes or compressors);
determining whether the input data has been learned by the auto-encoder […] (See para. [0046], the system allows a compressor and/or decompressor to check whether the compressed or decompressed data of the input is up-to-date or needs further training); and transmitting the compressed data via a network if it is determined that the input data has been learned, and transmitting the input data via the network if it is determined that the input data has not been learned (See para. [0043]-para. [0046] and Figure 5, the system transmits the compressed data to a central node based on the latest retrained model using the most up-to-date data, or transmits the compressed or decompressed input data to perform further training or retraining).
Ferreira does not explicitly disclose determining that the input data has been learned by an encoder based on a difference between the input data and the decompressed data.
Coulombe discloses determining that the input data has been learned by an encoder based on a difference between the input data and the decompressed data (See attached PDF, pages 1-3, an encoder determines, from a set of images, input images that have been learned based on quality metrics for the input images, which are based on the Maximum Difference; the Maximum Difference value is obtained from a decompression module).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the compression/decompression system of Ferreira to determine whether the input data has been learned by an encoder based on a difference between the input data and the decompressed data, as taught by Coulombe. A skilled artisan would have been motivated to improve the quality of the training data corresponding to the input data, since any defect in the training data could cause dramatic defects in the learning process and result in biases or low accuracy. In addition, both references (Ferreira and Coulombe) teach features directed to analogous art and to the same field of endeavor, namely encoding and decoding using training models. This close relation between the references strongly suggests a reasonable expectation of success.
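For illustration only, the combined teaching mapped above (compress, decompress, compare the reconstruction against the input, and choose what to transmit) can be sketched as follows. The stand-in encoder/decoder, the function names, and the threshold are hypothetical placeholders and are not drawn from Ferreira or Coulombe:

```python
import numpy as np

# Assumed threshold on the maximum-difference metric; illustrative only.
THRESHOLD = 0.1

def encode(x: np.ndarray) -> np.ndarray:
    # Stand-in "encoder": keep only the first half of the vector.
    return x[: len(x) // 2]

def decode(z: np.ndarray) -> np.ndarray:
    # Stand-in "decoder": pad the code back to the original length.
    return np.concatenate([z, np.zeros(len(z))])

def has_been_learned(x: np.ndarray) -> bool:
    """True when the input is reconstructed within the tolerated error."""
    reconstruction = decode(encode(x))
    error = np.max(np.abs(x - reconstruction))  # maximum difference
    return error <= THRESHOLD

def choose_payload(x: np.ndarray):
    """Transmit the compressed data if learned, the raw input otherwise."""
    if has_been_learned(x):
        return ("compressed", encode(x))
    return ("raw", x)
```

Under this sketch, an input the toy auto-encoder reconstructs exactly is sent in compressed form, while an input with a large reconstruction difference is sent raw, mirroring the transmitting step of claims 1 and 8.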
As to claim 2, Ferreira discloses causing the auto-encoder to perform learning using the data transmitted at the transmitting (See para. [0017] and Figure 1, the auto-encoder is an example of a deep neural network that is configured to learn to compress and decompress high-dimensional data).
Referring to claim 6, Ferreira discloses a decompression availability determination device comprising: a processor; and a memory (See para. [0029] and Figure 2, the system comprises a processor and memory) that includes instructions, which when executed, cause the processor to execute:
generating compressed data by compressing input data using an encoder of an auto-encoder which has completed learning (See para. [0043] and Figure 5, the system receives input data at a compressor, each compressor generates compressed data (504), and the input data is compressed such that the data can be used for machine learning training);
generating decompressed data by decompressing the compressed data using a decoder of the auto-encoder (See para. [0044] and para. [0045], the system's decompressor decompresses the compressed data received from multiple nodes or compressors);
determining whether the input data has been learned by the auto-encoder […] (See para. [0046], the system allows a compressor and/or decompressor to check whether the compressed or decompressed data of the input is up-to-date or needs further training); […] (See para. [0043]-para. [0046] and Figure 5, the system transmits the compressed data to a central node based on the latest retrained model using the most up-to-date data, or transmits the compressed or decompressed input data to perform further training or retraining).
Ferreira does not explicitly disclose determining that the input data has been learned by an encoder based on a difference between the input data and the decompressed data.
Coulombe discloses determining that the input data has been learned by an encoder based on a difference between the input data and the decompressed data (See attached PDF, pages 1-3, an encoder determines, from a set of images, input images that have been learned based on quality metrics for the input images, which are based on the Maximum Difference; the Maximum Difference value is obtained from a decompression module).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the compression/decompression system of Ferreira to determine whether the input data has been learned by an encoder based on a difference between the input data and the decompressed data, as taught by Coulombe. A skilled artisan would have been motivated to improve the quality of the training data corresponding to the input data, since any defect in the training data could cause dramatic defects in the learning process and result in biases or low accuracy. In addition, both references (Ferreira and Coulombe) teach features directed to analogous art and to the same field of endeavor, namely encoding and decoding using training models. This close relation between the references strongly suggests a reasonable expectation of success.
Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Ferreira (US 2021/0232968 A1) in view of Liang (US 2016/0004530 A1).
Referring to claim 3, Ferreira discloses a decompression availability determination method executed by a computer (See para. [0011], a compression/decompression method relating to AI, machine learning, prediction models, and neural networks), the method comprising:
generating decompressed data by decompressing first compressed data generated by compressing input data using an encoder of an auto-encoder that has completed learning, by using a decoder of the auto-encoder (See para. [0044] and para. [0045], the system's decompressor decompresses the compressed data received from multiple nodes or compressors);
generating second compressed data […] using an encoder of the auto-encoder; and determining whether the input data has been learned by the auto-encoder […] (See para. [0043]-para. [0046] and Figure 5, the system transmits the compressed data to a central node based on the latest retrained model using the most up-to-date data, or transmits the compressed or decompressed input data to perform further training or retraining).
Ferreira does not explicitly disclose generating second compressed data from the decompressed data by using an encoder.
Liang discloses generating second compressed data from the decompressed data by using an encoder (See para. [0104], para. [0112] and Figure 3, the system compresses an old version file to obtain a second compressed file, concatenates unmatched block data and the local second compressed file (i.e., the compressed file of the old version) to generate a concatenated compressed file, decompresses the concatenated compressed file to obtain the new version file, and upgrades the old version file according to the new version file), and determining whether the input data has been learned based on a difference between the first compressed data and the second compressed data (See para. [0107] and para. [0108], the system determines new data [e.g., an unmatched block] that needs to be learned or updated).
Therefore, it would have been obvious to a person of ordinary skill in the computer art to modify the compression/decompression system of Ferreira to generate second compressed data from the decompressed data by using an encoder, as taught by Liang. A skilled artisan would have been motivated to generate a new version from a local old version file to reduce data traffic and occupied bandwidth resources (See Liang, abstract). In addition, both references (Ferreira and Liang) teach features directed to analogous art and to the same field of endeavor, namely data compression and decompression. This close relation between the references strongly suggests a reasonable expectation of success.
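For illustration only, the claim 3 variant mapped above (re-encode the decompressed data and compare the first and second compressed data, rather than comparing in the input domain) can be sketched as follows. The linear toy encoder/decoder pair (matrix W) and the threshold are hypothetical placeholders, not taken from Ferreira or Liang:

```python
import numpy as np

# Toy 3-to-2 linear "auto-encoder" weights; illustrative only.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
THRESHOLD = 1e-6  # assumed tolerance on the code-domain difference

def encode(x: np.ndarray) -> np.ndarray:
    # 3-dimensional input -> 2-dimensional code.
    return W.T @ x

def decode(z: np.ndarray) -> np.ndarray:
    # 2-dimensional code -> 3-dimensional reconstruction.
    return W @ z

def learned_from_codes(first_code: np.ndarray) -> bool:
    """Decompress the first code, re-encode the result, and compare
    the two codes; a small difference indicates learned input."""
    second_code = encode(decode(first_code))
    return np.max(np.abs(first_code - second_code)) <= THRESHOLD
```

With these toy weights, an input whose code is preserved by a decode/re-encode round trip is treated as learned, while an input whose re-encoded code drifts from the first code is not, mirroring the comparison between the first and second compressed data recited in claim 3.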
As to claim 4, Ferreira discloses acquiring the input data via a network when it is determined that the input data has not been learned; and causing the auto-encoder to perform learning using the decompressed data of the input data determined to have been learned and the input data acquired at the acquiring (See para. [0043]-para. [0046] and Figure 5, the system transmits the compressed data to a central node based on the latest retrained model using the most up-to-date data, or transmits the compressed or decompressed input data to perform further training or retraining).
As to claim 5, Ferreira discloses receiving the first compressed data generated by compressing the input data using the encoder of the auto-encoder that has completed learning via a network (See para. [0043]-para. [0046] and Figure 5, the system receives input data at a compressor; each compressor generates compressed data (504), and the input data is compressed and can serve as up-to-date training data).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Takagi et al. (US 2022/0141135 A1) discloses a technique for compressing and transmitting data without hampering real-time performance. In a data compression transmission system for collecting data generated by a plurality of devices at a central server via a network, an intermediate server is arranged between the devices and the central server. Each of the devices includes a packet cache processing unit for converting the generated data to a hash value based on a cache. The intermediate server includes a packet cache processing unit for decoding the hash value to original data based on the cache, a buffering unit for aggregating the data and outputting the data as a long packet, and a compression encoding unit for compressing the data and generating encoded data.
Van et al. (US 2022/0103839 A1) discloses compressing data using machine learning systems and tuning machine learning systems for compressing the data. An example process can include receiving, by a neural network compression system (e.g., trained on a training dataset), input data for compression by the neural network compression system. The process can include determining a set of updates for the neural network compression system, the set of updates including updated model parameters tuned using the input data. The process can include generating, by the neural network compression system using a latent prior, a first bitstream including a compressed version of the input data. The process can further include generating, by the neural network compression system using the latent prior and a model prior, a second bitstream including a compressed version of the updated model parameters. The process can include outputting the first bitstream and the second bitstream for transmission to a receiver.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUK TING CHOI whose telephone number is (571)270-1637. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, AMY NG, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YUK TING CHOI/Primary Examiner, Art Unit 2164