Detailed Action
Response to Amendment
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-34 are presented for examination.
Claims 10, 23-26, and 30 are originally presented.
Claim 34 is previously presented.
Claims 1-9, 11-22, 27-29, 31, and 33 are amended.
Claims 1-34 are rejected.
This Action is Final.
Response to Arguments
Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-34 are rejected under 35 U.S.C. 103 as being unpatentable over Ripple et al. (US Patent Application Pub. No. 20180176578 A1) in view of Stone et al. (US Patent Application Pub. No. 20200034976 A1).
As per claim 1, Ripple teaches one or more processors [Paragraph 0082, …, a single processor or may be architectures employing multiple processor designs for increased computing capability.], comprising:
circuitry to: generate one or more contracted tensors from one or more tensors generated from image data [The content may be, for example, images, videos, or text.], by at least reducing dimensionality of the one or more tensors [Abstract, Paragraphs 0039;0047;0051, The encoder receives content and generates a tensor as a compact representation of the content. The content may be, for example, images, videos, or text. The decoder receives a tensor and generates a reconstructed version of the content. In one embodiment, the compression system trains one or more encoding components such that the encoder can adaptively encode different degrees of information for regions in the content that are associated with characteristic objects, such as human faces, texts, or buildings.]; and
generate the image data, whose results are generated using the one or more contracted tensors [Paragraphs 0039; 0047; 0051, The corresponding weighted map m_(i=1) indicates weights 228 for elements of the tensor that are associated with the human faces 212 in the training content x_(i=1). The modified tensor 216 for the first piece of training content x_(i=1) contains a subset of weighted elements 218 corresponding to the human faces 212 in the content that contain a higher degree of information than the remaining elements.].
Ripple discloses compressing tensors and generating the image data, but does not explicitly disclose generating a feature map corresponding to the image data by at least performing one or more convolutional operations.
Stone discloses generating a feature map corresponding to the image data by at least performing one or more convolutional operations [Abstract, Paragraphs 0015-0017, … receiving a first image input, generating a number of feature maps from the first image input using a number of convolution filters, generating a first number of fully connected layers directly based on the number of feature maps, and detecting a number of objects in the first image and determining a set of features for each object from the first number of fully connected layers.].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Stone’s neural network system into Ripple’s encoder for adaptively encoding content, for the benefit of facilitating efficient and stable training of a neural network system to improve tracking efficiency of objects (Stone, [0065]), to obtain the invention as specified in claim 1.
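For context on the combined teaching, the feature-map generation attributed to Stone can be sketched as follows. This is an illustrative NumPy example only, not code from either cited reference; the image size, filter count, and values are hypothetical:

```python
import numpy as np

# Hypothetical 8x8 single-channel "image" and two 3x3 convolution filters.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
filters = rng.standard_normal((2, 3, 3))  # two filters -> two feature maps

def conv2d_valid(img, filt):
    """Direct 'valid'-mode 2-D convolution (cross-correlation) of img with filt."""
    H, W = img.shape
    kH, kW = filt.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * filt)
    return out

# One feature map per filter, as in Stone's "number of feature maps ...
# using a number of convolution filters" (Paragraphs 0015-0017).
feature_maps = np.stack([conv2d_valid(image, f) for f in filters])
print(feature_maps.shape)  # (2, 6, 6): two 6x6 feature maps
```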
As per claim 2, Ripple and Stone teach all the limitations of claim 1 above, where Ripple and Stone teach, the one or more processors, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to generate a first feature map represented by an output tensor [Ripple, Paragraphs 0015-0017; 0066, Convolution engine 502A may generate a first number of feature maps from the first image input using a number of convolution filters. Convolution engine 502B may generate a second number of feature maps from the second image input using the same number of convolution filters that are used by convolution engine 502A.], and the circuitry is to:
construct a second activation tensor that has a higher number of modes than the first activation tensor [LEE, Paragraphs 0007; 0080,… when executed by processing circuitry, causes the processing circuitry to perform a convolution operation on an input image to generate a feature map, to extract a region of interest based on an objectness score associated with an existence of an object from the feature map, to align the extracted region of interest to a region of interest having a reference size, ….]; and
generate the first feature map by performing a tensor contraction with the second activation tensor and the filter tensor [Ripple, Abstract, Paragraphs 0005-0006, The encoder receives content and generates a tensor as a compact representation of the content. The content may be, for example, images, videos, or text.].
As per claim 3, Ripple and Stone teach all the limitations of claim 2 above, where Ripple and Stone teach, the one or more processors, wherein the circuitry is to construct the second activation tensor based at least in part on:
identifying a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor [LEE, Paragraph 0085, identifying individuals and individual biometric features in a biometric authentication system; object focusing in a camera; identification of objects in extended reality presentations, such as augmented reality and virtual reality applications; and three-dimensional modeling, such as for digital animation and manufacturing via three-dimensional printing.]; and
replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor [Ripple, Abstract, Paragraphs 0005-0006, 0077, The encoder receives content and generates a tensor as a compact representation of the content. The content may be, for example, images, videos, or text.].
As per claim 4, Ripple and Stone teach all the limitations of claim 3 above, where Ripple and Stone teach, the one or more processors, wherein the circuitry is to construct the second activation tensor such that the first mode [Stone, Paragraph 0013, If the determined features of one detected object in a first frame are the same as the determined features of another detected object in a second frame, then the detected object in the first frame is the same object as the detected object in the second frame.], and the second mode of the second activation tensor have overlapping strides [Ripple, Abstract, Paragraphs 0005-0006, 0077, The sender system 110 applies the tensor generator to content 910 to output the tensor 912 for the content 810. The sender system 110 identifies a weighted map 926 for the content 810.].
As per claim 5, Ripple and Stone teach all the limitations of claim 4 above, where Ripple and Stone teach, the one or more processors, wherein the identified mode of the first activation tensor has an identified stride [Stone, Paragraph 0013, If the determined features of one detected object in a first frame are the same as the determined features of another detected object in a second frame, then the detected object in the first frame is the same object as the detected object in the second frame.], and the circuitry is to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride [Ripple, Abstract, Paragraphs 0005-0006, 0077, The sender system 110 applies the tensor generator to content 910 to output the tensor 912 for the content 810. The sender system 110 identifies a weighted map 926 for the content 810.].
As per claim 6, Ripple and Stone teach all the limitations of claim 2 above, where Ripple and Stone teach, the one or more processors [Stone, Paragraph 0024, Processing resource 101 may, for example, be in the form of a graphics processing unit (GPU), central processing unit (CPU), a semiconductor-based microprocessor, a digital signal processor (DSP) such as a digital image processing unit, ….], wherein the circuitry is to construct the second activation tensor using data elements of the first activation tensor without adding additional data elements [Ripple, Abstract, Paragraphs 0005-0006, 0077, The sender system 110 applies the tensor generator to content 910 to output the tensor 912 for the content 810. The sender system 110 identifies a weighted map 926 for the content 810.].
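Claims 2-6 as mapped above describe expressing a convolution as a tensor contraction over a higher-mode activation tensor whose new output mode and filter mode share overlapping strides, built from the original data elements without copying. A minimal NumPy sketch of that general technique (illustrative only; the shapes and variable names are hypothetical and not drawn from either cited reference):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))   # first activation tensor (2 modes)
w = rng.standard_normal((3, 3))   # filter tensor
oH, oW = 8 - 3 + 1, 8 - 3 + 1     # output spatial extent ("valid" mode)

# Second activation tensor: 4 modes instead of 2. Each spatial mode of x
# is replaced by an output mode and a filter mode that are given the SAME
# stride as the original mode, so the windows overlap and no data elements
# are added -- x4 is a view over x's existing elements.
sH, sW = x.strides
x4 = as_strided(x, shape=(oH, oW, 3, 3), strides=(sH, sW, sH, sW))

# The feature map (output tensor) is then a tensor contraction of the
# second activation tensor with the filter tensor over the filter modes.
y = np.einsum('pqij,ij->pq', x4, w)

# Sanity check against a direct sliding-window computation.
y_ref = np.array([[np.sum(x[p:p + 3, q:q + 3] * w) for q in range(oW)]
                  for p in range(oH)])
print(np.allclose(y, y_ref))  # True
```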
As per claims 7-13, claims 7-13 are rejected in accordance with the same rationale and reasoning as claims 1-6 above, wherein claims 7-13 are the system claims corresponding to the apparatus of claims 1-6.
As per claims 14-20, claims 14-20 are rejected in accordance with the same rationale and reasoning as claims 1-6 above, wherein claims 14-20 are the device claims for the apparatus of claims 1-6.
As per claims 21-26, claims 21-26 are rejected in accordance with the same rationale and reasoning as claims 1-6 above, wherein claims 21-26 are the device claims for the apparatus of claims 1-6.
As per claims 27-33, claims 27-33 are rejected in accordance with the same rationale and reasoning as claims 7-13 above, wherein claims 27-33 are the method claims for the system of claims 7-13.
As per claim 34, Ripple and Stone teach all the limitations of claim 2 above, where Ripple and Stone teach, the one or more processors [Stone, Paragraph 0024, Processing resource 101 may, for example, be in the form of a graphics processing unit (GPU), central processing unit (CPU), a semiconductor-based microprocessor, a digital signal processor (DSP) such as a digital image processing unit, ….], wherein contracting the one or more tensors comprises reducing a number of dimensions used to represent the one or more tensors [Ripple, Paragraphs 0051-0053, Specifically, a tensor y for content x may be generated by: y = f_g(x; θ_g) ∈ ℝ^(C×H×W), where f_g(·) denotes the functions of the tensor generator 452 associated with a set of parameters θ_g. The tensor y has dimensions of width W, height H, and depth C, in which y_chw denotes an element of the tensor at channel depth c = 1, 2, …, C, height h = 1, 2, …, H, and width w = 1, 2, …, W. The tensor y is a compact representation of the content with respect to the structural features of the content.].
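The C×H×W tensor notation quoted above, and claim 34's reduction in the number of dimensions used to represent a tensor, can be illustrated with a small contraction. This is an assumed example for illustration only; the channel-weight vector v is hypothetical and does not come from either cited reference:

```python
import numpy as np

# Tensor y with depth C, height H, and width W, following the quoted notation.
C, H, W = 4, 6, 6
y = np.random.default_rng(2).standard_normal((C, H, W))

# Contracting y against a vector over the channel mode reduces the
# 3-mode (C, H, W) tensor to a 2-mode (H, W) tensor.
v = np.ones(C)
y_contracted = np.einsum('chw,c->hw', y, v)
print(y.ndim, '->', y_contracted.ndim)  # 3 -> 2
```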
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
RELEVANT ART CITED BY THE EXAMINER
The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c).
References Considered Pertinent but not relied upon
KIM (US Patent Application Pub. No. 20180136844 A1) teaches a device having a first operation control circuit to control a first cell array so that first read data and second read data stored in the first cell array are output based on a read signal and a read address. KIM discloses an arithmetic circuit that performs a predetermined arithmetic operation to generate first write data and second write data based on the first read data and the second read data. KIM suggests a second operation control circuit that controls a second cell array so that the first write data and the second write data are stored in the second cell array based on a write signal and a write address.
SUN et al. (US Patent Application Pub. No. 20180157976 A1) teaches a device having a first determiner configured to determine the complexity of a database including multiple samples. SUN discloses a second determiner configured to determine a classification capability of a CNN model applicable to the database based on the complexity of the database. SUN suggests a third determiner configured to acquire a classification capability of each candidate CNN model. SUN further discloses a matcher configured to determine the CNN model applicable to the database based on the classification capability of each candidate CNN model and the classification capability of the CNN model applicable to the database.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GETENTE A YIMER, whose telephone number is (571) 270-7106. The examiner can normally be reached Monday-Friday, 6:30-3:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, IDRISS N ALROBAYE, can be reached at 571-270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GETENTE A YIMER/Primary Examiner, Art Unit 2181