DETAILED ACTION
1. Claims 1-18 are pending in the application.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
3. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
4. Claims 1, 6-10, and 15-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6-9, and 14-16 of copending Application No. 17/718,323. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application recite the same limitations as the corresponding claims of the ’323 application; the only differences between the claims are minor wording differences. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented. An example mapping is provided below:
Instant Application 17/718,333 vs. Copending Application 17/718,323 (in each pairing below, the claim(s) of the instant ’333 application appear first, followed by the corresponding claim(s) of the ’323 application):
1. A data processing method based on convolution computation, comprising: providing a sum register; reading a first convolution kernel group among a plurality of convolution kernels according to a size of the sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily storing a first convolution computation result of input data and the first convolution kernel group into the sum register through first input first output (FIFO).
7. The data processing method based on convolution computation according to claim 1, further comprising: reading a first convolution kernel group among a plurality of convolution kernels according to a size of a sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily storing a first convolution computation result of the input data and the first convolution kernel group into the sum register through first input first output (FIFO).
6. The data processing method based on convolution computation according to claim 1, further comprising: judging that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly providing the input data for the convolution kernels to perform convolution computation.
8. The data processing method based on convolution computation according to claim 7, further comprising: judging that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly providing the input data for the convolution kernels to perform convolution computation.
7. The data processing method based on convolution computation according to claim 1, further comprising: reading the input data from one of at least one memory according to location information, wherein the location information comprises a size of the input data and coordinates of at least one element in the input data.
8. The data processing method based on convolution computation according to claim 7, further comprising: in response to a coordinate of one of the at least one element being located outside the size of the input data, determining that a value of the element is one of the input data according to a padding mode.
1. A data processing method based on convolution computation, comprising: extending input data according to a padding mode to generate extended input data, wherein the input data is used for convolution computation; providing coordinates of a two-dimensional coordinate system to a plurality of elements in the extended input data; and reading the elements in the extended input data according to location information, wherein the location information comprises a size of non-extended input data and coordinates of the elements in the extended input data, and the step of reading the elements in the extended input data comprises: in response to a coordinate of one of the elements in the location information being located outside the non-extended input data in the two-dimensional coordinate system, converting the coordinate in the location information according to the padding mode, wherein the coordinate in the location information is mapped to a coordinate of the non-extended input data.
9. The data processing method based on convolution computation according to claim 7, wherein the at least one memory comprises a plurality of memories, and the data processing method further comprises: storing a plurality of third partial data in the input data into the memories according to a size of a storage space of a single address of each of the memories, wherein coordinates of at least one of the third partial data at each address in two-dimensional coordinates of the input data of any channel are different, and the address stores elements of a plurality of channels with same coordinates in the input data.
6. The data processing method based on convolution computation according to claim 1, wherein the input data is stored in a plurality of memories, and the data processing method further comprises: according to a size of a storage space of a single address of each of the memories, storing a plurality of first partial data in the input data into the memories, wherein coordinates of at least one of the first partial data at each address in two-dimensional coordinates of the input data of any channel are different, and the address stores elements of a plurality of channels with same coordinates in the input data.
10. A data processing circuit based on convolution computation, comprising: at least one memory, used to store a code; and a processor, coupled to the at least one memory and configured to load and execute the code to: provide a sum register; read a first convolution kernel group among a plurality of convolution kernels according to a size of the sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily store a first convolution computation result of input data and the first convolution kernel group into the sum register through first input first output.
15. The data processing circuit based on convolution computation according to claim 9, wherein the processor is further configured to: read a first convolution kernel group among a plurality of convolution kernels according to a size of a sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily store a first convolution computation result of the input data and the first convolution kernel group into the sum register through first input first output.
15. The data processing circuit based on convolution computation according to claim 10, wherein the processor is further configured to: judge that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly provide the input data for the convolution kernels to perform convolution computation.
16. The data processing circuit according to claim 15, wherein the processor is further configured to: judge that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly provide the input data for the convolution kernels to perform convolution computation.
16. The data processing circuit based on convolution computation according to claim 10, wherein the processor is further configured to: read the input data from one of the at least one memory according to location information, wherein the location information comprises a size of the input data and coordinates of at least one element in the input data.
17. The data processing circuit based on convolution computation according to claim 16, wherein the processor is further configured to: in response to a coordinate of one of the at least one element being located outside the size of the input data, determine that a value of the element is one of the input data according to a padding mode.
9. A data processing circuit based on convolution computation, comprising: at least one memory, used to store a code; and a processor, coupled to the at least one memory and configured to load and execute the code to: extend input data according to a padding mode to generate extended input data, wherein the input data is used for convolution computation; provide coordinates of a two-dimensional coordinate system to a plurality of elements in the extended input data; and read the elements in the extended input data according to location information, wherein the location information comprises a size of non-extended input data and coordinates of the elements in the extended input data, and the step of reading the elements in the extended input data comprises: in response to a coordinate of one of the elements in the location information being located outside the non-extended input data in the two-dimensional coordinate system, converting the coordinate in the location information according to the padding mode, wherein the coordinate in the location information is mapped to a coordinate of the non-extended input data.
18. The data processing circuit based on convolution computation according to claim 16, wherein the at least one memory comprises a plurality of memories, and the processor is further configured to: store a plurality of third partial data in the input data into the memories according to a size of a storage space of a single address of each of the memories, wherein coordinates of at least one of the third partial data at each address in two-dimensional coordinates of the input data of any channel are different, and the address stores elements of a plurality of channels with same coordinates in the input data.
14. The data processing circuit based on convolution computation according to claim 9, wherein the at least one memory comprises a plurality of memories, the input data is stored in the memories, and the processor is further configured to: according to a size of a storage space of a single address of each of the memories, store a plurality of first partial data in the input data into the memories, wherein coordinates of at least one of the first partial data at each address in two-dimensional coordinates of the input data of any channel are different, and the address stores elements of a plurality of channels with same coordinates in the input data.
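The coordinate-conversion limitation recited in the ’323 claims (in response to a coordinate falling outside the non-extended input data, converting it according to the padding mode so that it maps to a coordinate of the non-extended input) can be illustrated with a short sketch. This is an illustrative reconstruction only, not the applicant's disclosed implementation; the padding-mode names ("replicate", "reflect") and the function signature are hypothetical.

```python
def map_padded_coordinate(x, y, width, height, padding_mode="replicate"):
    """Map a coordinate that lies outside the non-extended input (of the
    given width and height) back to a coordinate inside it, according to
    the padding mode. Modes shown here are assumptions for illustration."""
    if padding_mode == "replicate":
        # Clamp to the nearest edge element of the non-extended input.
        return min(max(x, 0), width - 1), min(max(y, 0), height - 1)
    if padding_mode == "reflect":
        # Mirror about the border (assumes the overshoot is less than one
        # full period of the input size).
        x = -x if x < 0 else (2 * (width - 1) - x if x >= width else x)
        y = -y if y < 0 else (2 * (height - 1) - y if y >= height else y)
        return x, y
    raise ValueError(f"unknown padding mode: {padding_mode}")
```

In-range coordinates pass through unchanged; only coordinates outside the two-dimensional extent of the non-extended input are converted.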
5. Claims 1, 6-10, and 15-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6-10, and 15-18 of copending Application No. 17/718,340. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application recite the same limitations as the corresponding claims of the ’340 application; the only differences between the claims are minor wording differences. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented. An example mapping is provided below:
Instant Application 17/718,333 vs. Copending Application 17/718,340 (in each pairing below, the claim(s) of the instant ’333 application appear first, followed by the corresponding claim(s) of the ’340 application):
1. A data processing method based on convolution computation, comprising: providing a sum register; reading a first convolution kernel group among a plurality of convolution kernels according to a size of the sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily storing a first convolution computation result of input data and the first convolution kernel group into the sum register through first input first output (FIFO).
8. The data processing method based on convolution computation according to claim 6, further comprising: reading a first convolution kernel group among a plurality of convolution kernels according to a size of a sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily storing a first convolution computation result of the input data and the first convolution kernel group into the sum register through first input first output (FIFO).
6. The data processing method based on convolution computation according to claim 1, further comprising: judging that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly providing the input data for the convolution kernels to perform convolution computation.
9. The data processing method based on convolution computation according to claim 8, further comprising: judging that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly providing the input data for the convolution kernels to perform convolution computation.
7. The data processing method based on convolution computation according to claim 1, further comprising: reading the input data from one of at least one memory according to location information, wherein the location information comprises a size of the input data and coordinates of at least one element in the input data.
8. The data processing method based on convolution computation according to claim 7, further comprising: in response to a coordinate of one of the at least one element being located outside the size of the input data, determining that a value of the element is one of the input data according to a padding mode.
6. The data processing method based on convolution computation according to claim 1, further comprising: reading the input data from one of the memories according to location information, wherein the location information comprises a size of the input data and coordinates of at least one element in the input data.
7. The data processing method based on convolution computation according to claim 6, further comprising: in response to a coordinate of one of the at least one element being located outside the size of the input data, determining that a value of the element is one of the input data according to a padding mode.
9. The data processing method based on convolution computation according to claim 7, wherein the at least one memory comprises a plurality of memories, and the data processing method further comprises: storing a plurality of third partial data in the input data into the memories according to a size of a storage space of a single address of each of the memories, wherein coordinates of at least one of the third partial data at each address in two-dimensional coordinates of the input data of any channel are different, and the address stores elements of a plurality of channels with same coordinates in the input data.
1. A data processing method based on convolution computation, comprising: according to a size of a storage space of a first address of a first memory among a plurality of memories, storing first partial data in input data into the first address of the first memory, wherein a size of the first partial data is not greater than the size of the storage space of the first address; and according to a size of a storage space of a second address of a second memory among the memories, storing second partial data in the input data into the second address of the second memory, wherein a size of the second partial data is not greater than the size of the storage space of the second address, coordinates of the first partial data stored at the first address in two-dimensional coordinates of the input data of any channel are different from coordinates of the second partial data stored at the second address, and the first address stores elements of a plurality of channels with same coordinates in the input data.
10. A data processing circuit based on convolution computation, comprising: at least one memory, used to store a code; and a processor, coupled to the at least one memory and configured to load and execute the code to: provide a sum register; read a first convolution kernel group among a plurality of convolution kernels according to a size of the sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily store a first convolution computation result of input data and the first convolution kernel group into the sum register through first input first output.
17. The data processing circuit according to claim 15, wherein the processor is further configured to: read a first convolution kernel group among a plurality of convolution kernels according to a size of a sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily store a first convolution computation result of the input data and the first convolution kernel group into the sum register through first input first output.
15. The data processing circuit based on convolution computation according to claim 10, wherein the processor is further configured to: judge that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly provide the input data for the convolution kernels to perform convolution computation.
18. The data processing circuit according to claim 17, wherein the processor is further configured to: judge that a size of one of the convolution kernels is less than a computation amount of convolution computation; and repeatedly provide the input data for the convolution kernels to perform convolution computation.
16. The data processing circuit based on convolution computation according to claim 10, wherein the processor is further configured to: read the input data from one of the at least one memory according to location information, wherein the location information comprises a size of the input data and coordinates of at least one element in the input data.
17. The data processing circuit based on convolution computation according to claim 16, wherein the processor is further configured to: in response to a coordinate of one of the at least one element being located outside the size of the input data, determine that a value of the element is one of the input data according to a padding mode.
15. The data processing circuit according to claim 10, wherein the processor is further configured to: read the input data from one of the memories according to location information, wherein the location information comprises a size of the input data and coordinates of at least one element in the input data.
16. The data processing circuit according to claim 15, wherein the processor is further configured to: in response to a coordinate of one of the at least one element being located outside the size of the input data, determine that a value of the element is one of the input data according to a padding mode.
18. The data processing circuit based on convolution computation according to claim 16, wherein the at least one memory comprises a plurality of memories, and the processor is further configured to: store a plurality of third partial data in the input data into the memories according to a size of a storage space of a single address of each of the memories, wherein coordinates of at least one of the third partial data at each address in two-dimensional coordinates of the input data of any channel are different, and the address stores elements of a plurality of channels with same coordinates in the input data.
10. A data processing circuit based on convolution computation, comprising: a plurality of memories, used to store a code; and a processor, coupled to the memories and configured to load and execute the code to: according to a size of a storage space of a first address of a first memory among the memories, store first partial data in input data into the first address of the first memory, wherein a size of the first partial data is not greater than the size of the storage space of the first address; and according to a size of a storage space of a second address of a second memory among the memories, store second partial data in the input data into the second address of the second memory, wherein a size of the second partial data is not greater than the size of the storage space of the second address, coordinates of the first partial data stored at the first address in two-dimensional coordinates of the input data of any channel are different from coordinates of the second partial data stored at the second address, and the first address stores elements of a plurality of channels with same coordinates in the input data.
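The multi-memory storage scheme recited in the claims above (each address stores elements of a plurality of channels sharing the same coordinates, and the partial data stored at different addresses have different coordinates in the two-dimensional coordinates of any channel) can be sketched as follows. The round-robin assignment of coordinates to memories is an assumption for illustration; the claims do not specify how coordinates are distributed across the memories.

```python
def store_across_memories(input_data, num_memories):
    """Sketch: distribute per-coordinate channel vectors across memories.
    input_data is indexed as input_data[channel][y][x]; each address holds
    the elements of every channel that share one (x, y) coordinate."""
    channels = len(input_data)
    height, width = len(input_data[0]), len(input_data[0][0])
    memories = [{} for _ in range(num_memories)]  # address -> channel vector
    for y in range(height):
        for x in range(width):
            # All channels at one coordinate go to a single address
            # (the claimed "elements of a plurality of channels with
            # same coordinates").
            element = [input_data[c][y][x] for c in range(channels)]
            index = y * width + x
            mem = index % num_memories    # round-robin across memories
            addr = index // num_memories  # address within that memory
            memories[mem][addr] = element
    return memories
```

With this layout, no two addresses hold partial data with the same coordinates, matching the claimed constraint that coordinates of the partial data at each address differ.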
Allowable Subject Matter
6. Claims 1, 6-10, and 15-18 would be allowable if rewritten or amended to overcome the nonstatutory double patenting rejections set forth in this Office action.
Claims 2-5 and 11-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The claims recite at least reading a first convolution kernel group among a plurality of convolution kernels according to a size of the sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily storing a first convolution computation result of input data and the first convolution kernel group into the sum register through first input first output (FIFO).
The closest prior art of record, US Pub. 2018/0046900, relates to convolutional neural networks, and more particularly to primitive operations of a sparse convolutional neural network accelerator. However, the prior art of record does not teach or suggest at least reading a first convolution kernel group among a plurality of convolution kernels according to a size of the sum register, wherein a number of the convolution kernels in the first convolution kernel group is the same as the size of the sum register; and temporarily storing a first convolution computation result of input data and the first convolution kernel group into the sum register through first input first output (FIFO).
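The allowed feature (reading a kernel group whose count matches the sum-register size, then temporarily buffering the group's convolution results through a FIFO) can be sketched as follows. This is an illustrative sketch under assumed simplifications, not the applicant's disclosed circuit: one-dimensional dot products stand in for the convolution computation, and all names and the grouping policy are hypothetical.

```python
from collections import deque

def convolve_with_sum_register(input_data, kernels, sum_register_size):
    """Sketch: process kernels in groups sized to the sum register,
    buffering each group's convolution results in FIFO order."""
    sum_register = deque(maxlen=sum_register_size)  # FIFO-style sum register
    results = []
    # Read kernel groups whose count equals the sum-register size.
    for start in range(0, len(kernels), sum_register_size):
        group = kernels[start:start + sum_register_size]
        for kernel in group:
            # A 1-D dot product stands in for the convolution computation.
            partial = sum(x * k for x, k in zip(input_data, kernel))
            sum_register.append(partial)  # temporary FIFO storage
        results.extend(sum_register)      # drain the register in FIFO order
        sum_register.clear()
    return results
```

For example, with four kernels and a sum-register size of two, the kernels are read as two groups of two, and each group's partial results pass through the register first-in, first-out.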
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Pub. 2021/0125041 – related to neural networks, and more specifically to a neural engine circuit of a neural network processor that performs three dimensional convolution operations.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL D YAARY whose telephone number is (571)270-1249. The examiner can normally be reached Mon-Fri 9-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo can be reached at (571)272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL D. YAARY/Primary Examiner, Art Unit 2151