Prosecution Insights
Last updated: April 19, 2026
Application No. 16/559,544

PROCESSOR AND SYSTEM TO CONVERT TENSOR OPERATIONS IN MACHINE LEARNING

Final Rejection §103
Filed: Sep 03, 2019
Examiner: YIMER, GETENTE A
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 5 (Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 6-7
Time to Grant: 2y 6m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% (above average; 522 granted / 592 resolved; +33.2% vs TC avg)
Interview Lift: +9.3% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 6m typical timeline (24 currently pending)
Total Applications: 616 across all art units
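The headline numbers in this panel are simple ratios of the case counts shown; a quick sanity check, assuming the dashboard rounds to the nearest percent (the counts and lift figure come from the panel above; nothing else is assumed):

```python
# Reproduce the examiner statistics shown above from the raw counts.
granted = 522    # granted cases (from the panel)
resolved = 592   # resolved cases (from the panel)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~88.2%, displayed as 88%

# The dashboard reports a +9.3% lift when an interview is held.
interview_lift = 0.093
print(f"With interview: {allow_rate + interview_lift:.1%}")  # ~97.5%, displayed as 98%
```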

Statute-Specific Performance

§101: 8.6% (-31.4% vs TC avg)
§103: 82.6% (+42.6% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 592 resolved cases.

Office Action

§103
Detailed Action

Response to Amendment

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claims 1-34 are presented for examination. Claims 10, 23-26, and 30 are originally presented. Claim 34 is previously presented. Claims 1-9, 11-22, 27-29, 31, and 33 are amended. Claims 1-34 are rejected. This Action is Final.

Response to Arguments

Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-34 are rejected under 35 U.S.C. 103 as being unpatentable over Ripple et al. (US Patent Application Pub. No. 20180176578 A1) in view of Stone et al. (US Patent Application Pub. No. 20200034976 A1).
As per claim 1, Ripple teaches one or more processors [Paragraph 0082, …, a single processor or may be architectures employing multiple processor designs for increased computing capability.], comprising: circuitry to: generate one or more contracted tensors from one or more tensors generated from image data [The content may be, for example, images, videos, or text.], by at least reducing dimensionality of the one or more tensors [Abstract, Paragraphs 0039; 0047; 0051, The encoder receives content and generates a tensor as a compact representation of the content. The content may be, for example, images, videos, or text. The decoder receives a tensor and generates a reconstructed version of the content. In one embodiment, the compression system trains one or more encoding components such that the encoder can adaptively encode different degrees of information for regions in the content that are associated with characteristic objects, such as human faces, texts, or buildings.]; and generate the image data, whose results are generated using the one or more contracted tensors [Paragraphs 0039; 0047; 0051, The corresponding weighted map m.sub.i=1 indicates weights 228 for elements of the tensor that are associated with the human faces 212 in the training content x.sub.i=1. The modified tensor 216 for the first piece of training content x.sub.i=1 contains a subset of weighted elements 218 corresponding to the human faces 212 in the content that contain a higher degree of information than the remaining elements.].

Ripple discloses compression of tensors and generating the image data, but does not explicitly disclose generating a feature map corresponding to the image data by at least performing one or more convolutional operations.
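For readers outside the art, the "contracted tensor" limitation describes an ordinary tensor contraction: summing over a shared mode removes it, which reduces the dimensionality (mode count) of the result. A minimal NumPy sketch of the general operation, with illustrative shapes and a hypothetical per-channel weight vector (not the claimed circuitry or either reference's implementation):

```python
import numpy as np

# A rank-4 tensor generated from image data: (batch, channel, height, width).
activations = np.random.rand(2, 3, 8, 8)

# A hypothetical rank-1 tensor with one weight per channel.
weights = np.random.rand(3)

# Contracting over the shared channel mode 'c' sums it away, so the
# result has one fewer mode -- dimensionality is reduced from 4 to 3.
contracted = np.einsum('bchw,c->bhw', activations, weights)
print(contracted.shape)  # (2, 8, 8)
```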
Stone discloses generate a feature map corresponding to the image data by at least performing one or more convolutional operations [Abstract, Paragraphs 0015-0017, …receiving a first image input, generating a number of feature maps from the first image input using a number of convolution filters, generating a first number of fully connected layers directly based on the number of feature maps, and detecting a number of objects in the first image and determining a set of features for each object from the first number of fully connected layers.]. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Stone's neural network system into Ripple's encoder for adaptively encoding content, for the benefit of facilitating efficient and stable training of a neural network system to improve tracking efficiency of objects (Stone, [0065]), to obtain the invention as specified in claim 1.

As per claim 2, Ripple and Stone teach all the limitations of claim 1 above, where Ripple and Stone teach the one or more processors, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to generate a first feature map represented by an output tensor [Ripple, Paragraphs 0015-0017; 0066, Convolution engine 502A may generate a first number of feature maps from the first image input using a number of convolution filters.
Convolution engine 502B may generate a second number of feature maps from the second image input using the same number of convolution filters that are used by convolution engine 502A.], and the circuitry is to: construct a second activation tensor that has a higher number of modes than the first activation tensor [LEE, Paragraphs 0007; 0080,… when executed by processing circuitry, causes the processing circuitry to perform a convolution operation on an input image to generate a feature map, to extract a region of interest based on an objectness score associated with an existence of an object from the feature map, to align the extracted region of interest to a region of interest having a reference size, ….]; and generate the first feature map by performing a tensor contraction with the second activation tensor and the filter tensor [Ripple, Abstract, Paragraphs 0005-0006, The encoder receives content and generates a tensor as a compact representation of the content. The content may be, for example, images, videos, or text.]. 
As per claim 3, Ripple and Stone teach all the limitations of claim 2 above, where Ripple and Stone teach the one or more processors, wherein the circuitry is to construct the second activation tensor based at least in part on: identifying a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor [LEE, Paragraph 0085, identifying individuals and individual biometric features in a biometric authentication system; object focusing in a camera; identification of objects in extended reality presentations, such as augmented reality and virtual reality applications; and three-dimensional modeling, such as for digital animation and manufacturing via three-dimensional printing.]; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor [Ripple, Abstract, Paragraphs 0005-0006, 0077, The encoder receives content and generates a tensor as a compact representation of the content. The content may be, for example, images, videos, or text.]. As per claim 4, Ripple and Stone teach all the limitations of claim 3 above, where Ripple and Stone teach the one or more processors, wherein the circuitry is to construct the second activation tensor such that the first mode [Stone, Paragraph 0013, If the determined features of one detected object in a first frame are the same as the determined features of another detected object in a second frame, then the detected object in the first frame is the same object as the detected object in the second frame.], and the second mode of the second activation tensor have overlapping strides [Ripple, Abstract, Paragraphs 0005-0006, 0077, The sender system 110 applies the tensor generator to content 910 to output the tensor 912 for the content 810. The sender system 110 identifies a weighted map 926 for the content 810.].
As per claim 5, Ripple and Stone teach all the limitations of claim 4 above, where Ripple and Stone teach the one or more processors, wherein the identified mode of the first activation tensor has an identified stride [Stone, Paragraph 0013, If the determined features of one detected object in a first frame are the same as the determined features of another detected object in a second frame, then the detected object in the first frame is the same object as the detected object in the second frame.], and the circuitry is to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride [Ripple, Abstract, Paragraphs 0005-0006, 0077, The sender system 110 applies the tensor generator to content 910 to output the tensor 912 for the content 810. The sender system 110 identifies a weighted map 926 for the content 810.]. As per claim 6, Ripple and Stone teach all the limitations of claim 2 above, where Ripple and Stone teach the one or more processors [Stone, Paragraph 0024, Processing resource 101 may, for example, be in the form of a graphics processing unit (GPU), central processing unit (CPU), a semiconductor-based microprocessor, a digital signal processor (DSP) such as a digital image processing unit, ….], wherein the circuitry is to construct the second activation tensor using data elements of the first activation tensor without adding additional data elements [Ripple, Abstract, Paragraphs 0005-0006, 0077, The sender system 110 applies the tensor generator to content 910 to output the tensor 912 for the content 810. The sender system 110 identifies a weighted map 926 for the content 810.]. As per claims 7-13, claims 7-13 are rejected in accordance with the same rationale and reasoning as claims 1-6 above, wherein claims 7-13 correspond to the limitations of claims 1-6.
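Claims 2-6, as characterized above, recite building a second activation tensor with more modes than the first, reusing the same data elements, giving two modes overlapping strides, and then producing the feature map by tensor contraction with the filter tensor. A minimal NumPy sketch of that general technique, using an im2col-style strided view (illustrative only; the array shapes and filter are invented for the example, and this is not asserted to be the applicant's or any reference's implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# First activation tensor: a single-channel 6x6 image (2 modes).
x = np.arange(36, dtype=np.float64).reshape(6, 6)
k = 3                                   # filter size
out_h = x.shape[0] - k + 1              # 4
out_w = x.shape[1] - k + 1              # 4

# Second activation tensor: 4 modes instead of 2, built as a strided view,
# so it reuses x's data elements without adding any.  The output-row mode
# and the filter-row mode are both given x's row stride (overlapping
# strides), and likewise for the column modes.
sh, sw = x.strides
x4 = as_strided(x, shape=(out_h, out_w, k, k), strides=(sh, sw, sh, sw))

# The feature map is then a tensor contraction with the filter tensor:
# the filter modes i, j are summed away, leaving the output modes p, q.
w = np.random.rand(k, k)
feature_map = np.einsum('pqij,ij->pq', x4, w)
print(feature_map.shape)  # (4, 4)
```

Each entry `feature_map[p, q]` is the elementwise product of `w` with the 3x3 window of `x` at offset (p, q), i.e. a valid cross-correlation, computed without materializing any copy of the input data.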
As per claims 14-20, claims 14-20 are rejected in accordance with the same rationale and reasoning as claims 1-6 above, wherein claims 14-20 are the device claims for the apparatus of claims 1-6. As per claims 21-26, claims 21-26 are rejected in accordance with the same rationale and reasoning as claims 1-6 above, wherein claims 21-26 are the device claims for the apparatus of claims 1-6. As per claims 27-33, claims 27-33 are rejected in accordance with the same rationale and reasoning as claims 7-13 above, wherein claims 27-33 are the method claims for the system of claims 7-13.

As per claim 34, Ripple and Stone teach all the limitations of claim 2 above, where Ripple and Stone teach the one or more processors [Stone, Paragraph 0024, Processing resource 101 may, for example, be in the form of a graphics processing unit (GPU), central processing unit (CPU), a semiconductor-based microprocessor, a digital signal processor (DSP) such as a digital image processing unit, ….], wherein contracting the one or more tensors comprises reducing a number of dimensions used to represent the one or more tensors [Rippel, Paragraphs 0051-0053, Specifically, a tensor y for content x may be generated by: y = f.sub.g(x; θ.sub.g) ∈ ℝ.sup.C×H×W, where f.sub.g(⋅) denotes the functions of the tensor generator 452 associated with a set of parameters θ.sub.g. The tensor y has dimensions of width W, height H, and depth C, in which y.sub.chw denotes an element of the tensor at channel depth c = 1, 2, . . . , C, height h = 1, 2, . . . , H, and width w = 1, 2, . . . , W. The tensor y is a compact representation of the content with respect to the structural features of the content.].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

RELEVANT ART CITED BY THE EXAMINER

The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure. See MPEP 707.05(c).

References Considered Pertinent but not relied upon

KIM (US Patent Application Pub. No. 20180136844 A1) teaches a device having a first operation control circuit to control a first cell array so that first read data and second read data stored in the first cell array are output based on a read signal and read address. KIM discloses an arithmetic circuit that performs a predetermined arithmetic operation to generate first write data and second write data based on the first read data and second read data. KIM suggests a second operation control circuit that controls a second cell array so that the first write data and second write data are stored in the second cell array based on a write signal and write address.

SUN et al. (US Patent Application Pub. No. 20180157976 A1) teaches a device having a first determiner configured to determine the complexity of a database including multiple samples. SUN discloses a second determiner configured to determine a classification capability of a CNN model applicable to the database based on the complexity of the database. SUN suggests a third determiner configured to acquire a classification capability of each candidate CNN model.
SUN further discloses a matcher configured to determine the CNN model applicable to the database based on the classification capability of each candidate CNN model and the classification capability of the CNN model applicable to the database.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GETENTE A YIMER, whose telephone number is (571) 270-7106. The examiner can normally be reached Monday-Friday, 6:30-3:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, IDRISS N ALROBAYE, can be reached at 571-270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GETENTE A YIMER/
Primary Examiner, Art Unit 2181

Prosecution Timeline

Sep 03, 2019: Application Filed
Aug 13, 2022: Non-Final Rejection — §103
Jan 05, 2023: Applicant Interview (Telephonic)
Jan 05, 2023: Examiner Interview Summary
Feb 21, 2023: Response Filed
Jun 29, 2023: Non-Final Rejection — §103
Sep 19, 2023: Interview Requested
Sep 25, 2023: Applicant Interview (Telephonic)
Oct 10, 2023: Examiner Interview Summary
Dec 05, 2023: Response Filed
Mar 02, 2024: Final Rejection — §103
Aug 07, 2024: Notice of Allowance
Mar 07, 2025: Request for Continued Examination
Mar 14, 2025: Response after Non-Final Action
Jul 12, 2025: Non-Final Rejection — §103
Nov 12, 2025: Response Filed
Nov 28, 2025: Final Rejection — §103
Jan 21, 2026: Interview Requested
Jan 27, 2026: Applicant Interview (Telephonic)
Jan 27, 2026: Examiner Interview Summary
Apr 02, 2026: Request for Continued Examination
Apr 07, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603988
END-TO-END SAFETY MECHANISM FOR DISPLAY SYSTEM
2y 5m to grant · Granted Apr 14, 2026
Patent 12596661
SYSTEM AND METHOD FOR SYNCHRONIZED WAKE UP AND PACKET POLLING SYSTEM FOR WIRELESS INPUT/OUTPUT (IO) DEVICES AND WIRELESS RADIO SYSTEM OF AN INFORMATION HANDLING SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12591531
TECHNIQUE FOR LIMITING TRANSMISSION OF PARTIAL SYMBOLS IN REPEATER DEVICE
2y 5m to grant · Granted Mar 31, 2026
Patent 12585317
Single USB Type-C connector with data port mode and console port mode
2y 5m to grant · Granted Mar 24, 2026
Patent 12579405
NEURAL PROCESSING UNIT INCLUDING AN INTERNAL MEMORY INCLUDING A PLURALITY OF MEMORY UNITS
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 88%
With Interview: 98% (+9.3%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 592 resolved cases by this examiner. Grant probability derived from career allow rate.
