DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Guntoro et al. (US Patent Application Publication 2019/0279095), hereinafter "Guntoro", in view of Sekiguchi et al. (US Patent Application Publication 2024/0219895), hereinafter "Sekiguchi", and further in view of Jeon et al. (KR 20080112000 A, "Encoding and Decoding Using the Resemblance of a Tonality"), hereinafter "Jeon".
Regarding claim 1, Guntoro teaches a memory; and at least one processor coupled to the memory (Please note, figure 1), the at least one processor being configured to: acquire a target image to be processed; and process the target image using the neural network including convolution processing, wherein: when an output feature map to be an output of the convolution processing is output, the processor outputs, to the memory, respective small regions dividing the output feature map. (Please note paragraph 0044. As indicated, FIGS. 4A through 4F show an input feature map of a convolutional neural network, an input feature map divided into data tiles, an input feature map transformed into the frequency domain, an input feature map which has been transformed into the frequency domain and to which a compression matrix has been applied, a feature map which has been back-transformed from the frequency domain and to which the compression matrix has been applied, and a continuously shown input feature map.).
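For illustration only, and not as part of the rejection, a minimal sketch of this kind of tile-wise output with frequency-domain compression might read as follows; the tile size, the DCT transform, and the crude coefficient mask are assumptions offered for discussion, not Guntoro's disclosed implementation:

    # Hypothetical sketch; tile size and compression scheme are assumptions.
    import numpy as np
    from scipy.fft import dctn

    TILE = 8  # hypothetical small-region edge length

    def tile_and_compress(feature_map, keep=16):
        """Divide a 2-D output feature map into TILE x TILE small regions,
        transform each region to the frequency domain, and zero out all but
        `keep` coefficients (a crude stand-in for a compression matrix)."""
        h, w = feature_map.shape  # assumed divisible by TILE for brevity
        tiles = {}
        for i in range(0, h, TILE):
            for j in range(0, w, TILE):
                region = feature_map[i:i + TILE, j:j + TILE]
                coeffs = dctn(region, norm="ortho")   # to the frequency domain
                mask = np.zeros_like(coeffs)
                mask.flat[:keep] = 1.0                # keep low-order terms only
                tiles[(i, j)] = coeffs * mask         # compressed small region
        return tiles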
Guntoro does not expressly teach, when each of the small regions is output to the memory, in a case in which a feature included in the small region is the same as a predetermined feature or a feature of a small region output in the past, the processor compresses and outputs the predetermined feature or the feature of the small region output in the past to the memory.
Sekiguchi teaches, when each of the small regions is output to the memory, in a case in which a feature included in the small region is the same as a predetermined feature, the processor compresses and outputs the predetermined feature to the memory. (Please note, paragraph 0370. As indicated, the feedback information generation unit 170 generates identification graph data indicating similarity or a degree of difference between latent feature data generated by compressing feature data acquired from a camera-captured image corresponding to movement of the robot based on the teaching data applied to the user operation executed in the teaching data execution unit 140, and latent feature data generated by compressing feature data acquired from a camera-captured image corresponding to movement of the robot based on teaching data executed in the past, and outputs the identification graph data to the output unit.).
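For illustration only, a minimal sketch of writing out a small region as a compressed reference when it matches a predetermined feature might look as follows; the all-zero predetermined tile, the hash-based equality test, and the token format are assumptions, not Sekiguchi's implementation:

    # Hypothetical sketch; the predetermined feature and token format are assumed.
    import hashlib
    import numpy as np

    PREDETERMINED = np.zeros((8, 8), dtype=np.float32)  # assumed all-zero tile
    PRED_DIGEST = hashlib.sha256(PREDETERMINED.tobytes()).hexdigest()

    def write_tile(tile, memory):
        """Append a small region to `memory`; if it equals the predetermined
        feature, write a short reference token instead of the raw data."""
        digest = hashlib.sha256(np.ascontiguousarray(tile).tobytes()).hexdigest()
        if digest == PRED_DIGEST:
            memory.append(("REF_PREDETERMINED",))  # compressed output
        else:
            memory.append(("RAW", tile.copy()))    # store the region itself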
Guntoro and Sekiguchi are combinable because they are from the same field of endeavor.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize the small-region output of Sekiguchi in Guntoro's invention.
The suggestion/motivation for doing so would have been, as indicated in paragraph 0370, "indicating similarity or a degree of difference between latent feature data generated by compressing feature data acquired from a camera-captured image corresponding to movement of the robot based on the teaching data applied to the user operation executed in the teaching data".
Guntoro and Sekiguchi do not expressly recite, in a case in which a feature included in the small region is the same as a feature of a small region output in the past, the processor compresses and outputs the feature of the small region output in the past to the memory.
Jeon recites, in a case in which a feature included in the small region is the same as a feature of a small region output in the past, the processor compresses and outputs the feature of the small region output in the past to the memory. (Please note, page 2, 5th paragraph. As indicated, the coding apparatus includes: a storage unit storing timbre characteristics of past input frames; a tone feature extraction unit that calculates difference information between a timbre characteristic of a current frame included in an input signal and a past-frame timbre characteristic in the storage unit and, if the calculated difference information is less than or equal to a threshold value, outputs the corresponding past frame information and difference information; and a bit packing unit for generating a bitstream using the past frame information and the difference information.).
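For illustration only, a minimal sketch of this past-frame scheme, carried over to feature data generally, might look as follows; the feature vectors, the Euclidean distance metric, and the threshold value are assumptions, not Jeon's disclosed coder:

    # Hypothetical sketch; distance metric and threshold are assumptions.
    import numpy as np

    THRESHOLD = 0.05  # hypothetical tolerance for "the same" feature

    class PastFrameCoder:
        def __init__(self):
            self.past = []  # storage unit holding past-frame features

        def encode(self, feature):
            """If `feature` is within THRESHOLD of a stored past feature,
            emit (past index, difference) for bit packing; otherwise store
            the feature and emit it raw."""
            for idx, old in enumerate(self.past):
                diff = feature - old
                if np.linalg.norm(diff) <= THRESHOLD:
                    return ("PAST_REF", idx, diff)
            self.past.append(feature.copy())
            return ("RAW", feature)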
Guntoro, Sekiguchi, and Jeon are combinable because they are from the same field of endeavor.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize Jeon's technique of compressing and outputting, to the memory, the feature of a small region output in the past when a feature included in the small region is the same as that past feature, in the combined invention of Guntoro and Sekiguchi.
The suggestion/motivation for doing so would have been, as indicated on page 2, 4th paragraph, "for generating an audio signal by compensating the decoding information of the past frame according to the extracted difference information."
Therefore, it would have been obvious to combine Guntoro and Sekiguchi with Jeon to obtain the invention as specified in claim 1.
Regarding claim 4, Guntoro teaches wherein the predetermined feature includes features in the small region which are the same. (Please note, paragraph 0029. As indicated, this method may be applied particularly well to convolutional neural networks (CNN) which were trained by a conventional, non-specialized training. The convolutions are carried out during the execution of a convolutional neural network in the spatial domain. For the possibly lossy compression in the frequency domain, the data to be compressed are therefore divided into tiles of a smaller and more defined size.).
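For illustration only, one concrete reading of a small region whose features are all the same is the uniform-tile case sketched below; representing such a region by a single scalar is an assumption offered for discussion, not a teaching of the cited references:

    # Hypothetical sketch; a uniform tile stands in for the predetermined feature.
    import numpy as np

    def compress_if_uniform(tile):
        """If every feature value in the small region is the same, the whole
        region can be represented by one scalar plus its shape."""
        first = tile.flat[0]
        if np.all(tile == first):
            return ("UNIFORM", float(first), tile.shape)
        return ("RAW", tile)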
Regarding claims 5-6, analysis similar to that presented for claim 1 is applicable.
Allowable Subject Matter
Claims 2-3 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the closest applied prior art of record fails to disclose or reasonably suggest wherein: when the convolution processing is performed using the neural network including continuous convolution processing, the at least one processor reads an output feature map of a previous convolution processing from the storage unit, and performs the convolution processing for each of small regions obtained by dividing an input feature map constituting an input of the convolution processing, and when the convolution processing is performed for each of the small regions, in a case in which a feature included in the small region is the same as a predetermined feature or a feature of a small region processed in the past, the at least one processor does not perform the convolution processing on the small region, and outputs a result of processing on the predetermined feature or a result of processing in the past as a result of processing the small region.
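For illustration only, and not as a characterization of the claims, the indicated subject matter resembles memoizing a tile-wise convolution, as in the sketch below; the hash-keyed cache and the use of scipy's convolve2d are assumptions, not the applicant's disclosed method:

    # Hypothetical sketch; cache keying and the convolution routine are assumed.
    import numpy as np
    from scipy.signal import convolve2d

    def convolve_tiles(input_tiles, kernel):
        """Convolve each small region of an input feature map, skipping the
        convolution when an identical region was processed in the past and
        reusing the earlier result (tile-boundary halos ignored for brevity)."""
        cache = {}
        results = []
        for tile in input_tiles:              # tiles assumed to share one shape
            key = tile.tobytes()              # identical regions share a key
            if key not in cache:              # convolve only unseen content
                cache[key] = convolve2d(tile, kernel, mode="same")
            results.append(cache[key])        # reuse the past result otherwise
        return results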
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Examiner’s Note
The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI whose telephone number is (571)272-7386. The examiner can normally be reached on M-F from 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIR ALAVI/Primary Examiner, Art Unit 2668 Thursday, March 19, 2026