DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 23, 2026 has been entered.
Claims 1-20 are pending in this application and have been amended. No claims have been newly added or cancelled. This action is made Non-Final.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vincent et al. (US 2022/0179703).
As to claim 1, Vincent et al. disclose one or more processors (Figure 8A, arithmetic logic unit(s) (ALUs) 810), comprising: circuitry ([0119] notes ALUs may include integer and/or floating point units) to, in response to an application programming interface (API) call (Figure 1, step 104, [0057]-[0060] notes application programming interface (API) 104 may be a function call performed by one or more software programs to interface with a deep neural network library, e.g. such that the operations, e.g. first through seventh stages of respective steps 106-118, of the deep neural network library may be performed): cause, with a single invocation of the API (e.g. cause with a single API 104 call or invocation), a plurality of different types of tensor operations (e.g. group of operations as an operation set) to be performed using one or more tensors (e.g. to be performed using one or more tensors) based, at least in part, on one or more indications of the plurality of different types of tensor operations by the API (e.g. based on operation descriptors, which are further based on parameter descriptors, e.g. tensor descriptors and/or convolution descriptors)(Figure 1, stages of deep neural network library, e.g.
first stage 106, [0061]-[0062] notes is a parameter descriptors stage which processes one or more parameter descriptors 106 provided to a deep neural network library 102 through an API 104 which describe properties of one or more operations to be performed by the deep neural network library 102 executed by one or more processors, where Figure 2, first stage 202, [0081]-[0087] further notes parameter descriptors may be tensor descriptors 204, which comprise at least data and/or one or more attributes about one or more tensors ([0084]-[0086]), and convolution descriptors 206, which comprise at least data and/or one or more attributes about one or more convolution operations to be performed by the deep neural network library using one or more processors ([0087]); second stage 108, [0063]-[0065] notes creates one or more descriptors, e.g. operation descriptors, that indicate one or more operations to be performed by the deep neural network library 102, where an operation is one or more computations to perform one or more computational steps to facilitate deep learning, such as convolution operations or any other deep learning or other operations, where Figure 2, second stage 208, [0088]-[0089] further notes is an operations stage which creates one or more operation descriptors 210, 212 using parameter descriptors, e.g.
tensor descriptors 204 and convolution descriptors 206, which indicate operations to be performed (see claim 2 regarding different types of operations); third stage 110, [0066]-[0068] notes creates one or more operation sets using one or more operation descriptors from the second stage 108, where an operation set is a group of operations to be performed or computed, where Figure 2, third stage 214, [0090]-[0092] further notes is an operation set stage which uses one or more operation descriptors 210, 212 to create operation set 216, which is a grouping of one or more operation descriptors 218, 220; fourth stage 112, [0069]-[0071] notes is an engine configuration stage which computes or determines one or more computational options available to one or more operation sets or other groups of operations provided by the deep neural network library 102 and further organizes computational options into engines and knobs, where Figure 3, fourth stage 302, [0093]-[0098] notes is an engine configuration stage which constructs an engine configuration 314 as a result of one or more queries 306 performed by one or more users; fifth stage 114, [0072]-[0076] notes is an execution plan stage which configures or otherwise determines an execution plan using an engine configuration and one or more handles, where Figure 4, fifth stage 402, [0099]-[0104] further notes is an execution plan stage which determines one or more intermediates 412 by the API in conjunction with engine configuration 404 comprising operation set 406 as well as engine and knob selection 408; sixth stage 116, [0077]-[0078] notes is a variant pack stage which receives one or more pointers through an API 104 from one or more users, where Figure 5, sixth stage 502, [0105]-[0107] further notes is a variant pack stage which receives or determines a variant pack 512 associated with one or more data values that vary across one or more executions performed by the deep neural network library 102, e.g.
as pointers to memory; seventh stage 118, [0079] notes is an execution stage that performs one or more deep neural network library functions to execute the one or more operation sets in conjunction with the execution plan and variant pack, where Figure 6, seventh stage 602, [0108]-[0109] notes is an execute stage which performs an execution plan 604 in conjunction with variant pack 612 using the deep neural network library, e.g. executes one or more computations 620 as a result of one or more function calls to the API provided by the deep neural network library)(NOTE: as described above, all of the stages of the deep neural network library are performed in response to one or more API calls, where the operation descriptors indicate operations to be performed, further based on tensor descriptors, which at least define tensors, and convolution descriptors, which at least define convolution operations, thus the operations are further considered “tensor operations,” where [0067], [0091], and [0092] further note operations may be performed or computed using a single API call or invocation).
As noted above, Vincent et al. describe operation descriptors that indicate operations to be performed, the operations described as convolution operations. However, these operations are defined by convolution descriptors and are performed on tensors, which are in turn defined by tensor descriptors, where it would have been obvious to one of ordinary skill in the art that the convolution operations may be further described as “tensor operations,” yielding predictable results, without changing the scope of the invention.
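For clarity of the record, the staged workflow mapped above may be sketched as follows. This is a minimal, hypothetical Python sketch: the names (TensorDescriptor, ConvolutionDescriptor, OperationDescriptor, build_operation_set, execute) are invented for illustration and are not the actual interface of the Vincent et al. reference; it only illustrates how a single invocation, given descriptors indicating a plurality of operations, may cause each operation in an operation set to be performed.

```python
from dataclasses import dataclass

@dataclass
class TensorDescriptor:        # first stage: parameter descriptor describing a tensor
    uid: str                   # optional unique identification value ([0086])
    dims: tuple                # attributes, e.g. dimensions ([0085])

@dataclass
class ConvolutionDescriptor:   # parameter descriptor describing a convolution ([0087])
    name: str
    stride: int = 1

@dataclass
class OperationDescriptor:     # second stage: operation descriptor created from
    conv: ConvolutionDescriptor    # parameter descriptors ([0088]-[0089])
    inputs: tuple
    output: TensorDescriptor

def build_operation_set(*ops):  # third stage: group operation descriptors ([0090]-[0092])
    return list(ops)

def execute(operation_set):
    # Stages four through seven collapsed for illustration: configure an engine,
    # form an execution plan, bind a variant pack, then execute each operation.
    return [f"ran {op.conv.name} -> {op.output.uid}" for op in operation_set]

# A single invocation causing a plurality of operations to be performed, based on
# indications (descriptors) of those operations, per Figure 2 of the reference.
xa, wa, ya = (TensorDescriptor(n, (1, 3, 8, 8)) for n in ("xa", "wa", "ya"))
xb, wb, yb = (TensorDescriptor(n, (1, 3, 8, 8)) for n in ("xb", "wb", "yb"))
op_a = OperationDescriptor(ConvolutionDescriptor("conv_a"), (xa, wa), ya)
op_b = OperationDescriptor(ConvolutionDescriptor("conv_b"), (xb, wb), yb)
results = execute(build_operation_set(op_a, op_b))
```

The sketch mirrors the mapping applied to the claims: the operation descriptors (op_a, op_b) serve as the indications of the operations, and the single execute call corresponds to the single API invocation causing the grouped operations to be performed.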
As to claim 2, Vincent et al. disclose the one or more indications (e.g. operation descriptors) of the plurality of different types of tensor operations (e.g. group of operations of an operation set) by the API is a selection of one or more tensor operations (e.g. as noted in claim 1, [0064] notes operation descriptors indicate one or more operations, e.g. operations of the operation set, to be performed, where [0088] notes operation descriptors 210, 212 are created using one or more parameter descriptors, e.g. tensor descriptors 204 and convolution descriptors 206, where [0084]-[0087] further notes tensor descriptors 204 and convolution descriptors 206 comprise further information such that these operations may be performed, e.g. as illustrated in Figure 2, first stage 202, tensor descriptors 204 with xa, wa, ya and xb, wb, yb and convolution descriptors 206 with conva and convb, where second stage 208, tensor descriptors xa, wa, ya and convolution descriptor conva are used to create operation descriptor 210 and tensor descriptors xb, wb, yb and convolution descriptor convb are used to create operation descriptor 212, where third stage 214, creates operation set 216 with two different operation descriptors 218 and 220, thus it may be considered that the operation descriptors allow a selection of the operations further using the information of the parameter descriptors, e.g. tensor descriptors and convolution descriptors).
As to claim 3, Vincent et al. disclose the API to cause the plurality of different types of tensor operations to be performed (e.g. operations of the operation set) using the one or more tensors (e.g. using one or more tensors) is further based, at least in part, on one or more tensor operation descriptors (e.g. based on one or more tensor descriptors)(as noted in claims 1 and 2, operation descriptors indicate one or more operations, e.g. operations of the operation set, to be performed, where operation descriptors are created using one or more parameter descriptors, e.g. tensor descriptors and convolution descriptors, where [0084] notes a tensor descriptor 204 is data comprising information about one or more tensors, where a tensor is a data object comprising information or data values corresponding to a relationship between one or more algebraic objects in vector space or a data object or data container comprising n-dimensional data in conjunction with linear operations, [0085] further notes a tensor descriptor 204 may comprise one or more attributes, e.g. tensor dimensions, strides, data type, and byte alignment, [0086] further notes a tensor descriptor 204 may optionally comprise a unique identification (ID) value, thus operations to be performed based on the information of a tensor descriptor).
As to claim 4, Vincent et al. disclose the API to cause the plurality of different types of tensor operations to be performed (e.g. operations of the operation set) using the one or more tensors (e.g. using one or more tensors) is further based, at least in part, on one or more tensor data identification (e.g. based on one or more tensor identifications, e.g. as described in a tensor descriptor)(e.g. as noted in claim 3, [0086] notes tensor descriptor 204 may comprise a unique identification value, which is a data value comprising a unique numerical value).
As to claim 5, Vincent et al. disclose the API to cause the plurality of different types of tensor operations to be performed (e.g. operations of the operation set) using the one or more tensors (e.g. using one or more tensors) is further based, at least in part, on one or more operation parameters (e.g. operation descriptors)(e.g. as noted in claims 1 and 2, operation descriptors indicate one or more operations, e.g. operations of the operation set, to be performed, where operation descriptors are created using one or more parameter descriptors, e.g. tensor descriptors and convolution descriptors, which further comprise information of the operations to be performed).
As to claim 6, Vincent et al. disclose the API to cause the plurality of different types of tensor operations to be performed (e.g. operations of the operation set) using the one or more tensors (e.g. using one or more tensors) responds by providing one or more tensor operation algorithm outputs (e.g. as noted in claim 1, Figures 1 and 3, [0069]-[0071] and [0093]-[0098] notes engine configuration stage which computes or determines one or more computational options available to one or more operation sets or other groups of operations provided by the deep neural network library 102 and further organizes computational options into engines and knobs; and Figures 1 and 4, [0072]-[0076] and [0099]-[0104] notes execution plan stage which configures or otherwise determines an execution plan using an engine configuration and one or more handles).
As to claim 7, Vincent et al. disclose the one or more indications (e.g. operation descriptors) of the plurality of different types of tensor operations (e.g. operations of the operation set) by the API is a selection of at least two tensor operations (e.g. selection of at least two operations), wherein the two tensor operations are not of a same tensor operation type (e.g. wherein the two operations are not the same type)(e.g. as noted in claim 2, operation descriptors indicate one or more operations, e.g. operations of the operation set, to be performed, where operation descriptors are created using one or more parameter descriptors, e.g. tensor descriptors and convolution descriptors, which further comprise information of the operations to be performed, e.g. as illustrated in Figure 2, first stage 202, tensor descriptors 204 with xa, wa, ya and xb, wb, yb and convolution descriptors 206 with conva and convb, where second stage 208, tensor descriptors xa, wa, ya and convolution descriptor conva are used to create operation descriptor 210 and tensor descriptors xb, wb, yb and convolution descriptor convb are used to create operation descriptor 212, where third stage 214, creates operation set 216 with two different operation descriptors 218 and 220, thus it may be considered that the operation descriptors allow a selection of the operations further using the information of the parameter descriptors, e.g. tensor descriptors and convolution descriptors, where the operations of the operation set are not the same, e.g. of different types).
As to claim 8, Vincent et al. disclose a system (Figure 8A, training logic/hardware structure 815), comprising: one or more processors to cause circuitry (arithmetic logic unit(s) (ALUs) 810, where [0119] notes ALUs may include integer and/or floating point units), similar to the circuitry of the one or more processors of claim 1, to perform the method as described. Please see the rejection and rationale of claim 1.
Claims 9-14 are similar in scope to claims 2-7, respectively, and are therefore rejected under similar rationale.
Claims 15-20 are similar in scope to claims 1-6, respectively, and are therefore rejected under similar rationale.
Response to Arguments
Applicant’s arguments, see pages 5 and 6, filed February 23, 2026, with respect to claims 1, 3-5, 8, 10-12, 15, and 17-19 have been fully considered and are persuasive. The claims of the present application and the co-pending application have been amended and now distinguish from each other. Therefore, the Double Patenting rejection of claims 1, 3-5, 8, 10-12, 15, and 17-19 has been withdrawn.
Applicant’s remaining arguments have been fully considered but are not persuasive. Applicant amends independent claims 1, 8 and 15 to similarly recite, “…circuitry to, in response to an application programming interface (API) call: cause, with a single invocation of the API, a plurality of different types of tensor operations to be performed using one or more tensors based, at least in part, on one or more indications of the plurality of different types of tensor operations by the API…” Applicant argues on pages 6 and 7 of the Amendment filed that the prior art of record, Liu et al., fails to teach or suggest the limitations of the claims as now amended.
In light of the amendments of claims 1, 8, and 15, the claims are now rejected in view of newly found reference Vincent et al. (US 2022/0179703). Please see the rejection and notes regarding the claims above.
Applicant further argues on page 7 of the Amendment filed regarding dependent claims 2-7, 9-14, and 16-20 that these claims are allowable for depending upon allowable independent claims 1, 8, and 15, respectively.
In reply, in light of the amendments to claims 1, 8, and 15, the claims are now rejected in view of newly found reference Vincent et al. (US 2022/0179703). Therefore, neither claims 1, 8, and 15 nor their respective dependent claims are in condition for allowance.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACINTA M CRAWFORD whose telephone number is (571)270-1539. The examiner can normally be reached 8:30 a.m. to 4:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y. Poon can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACINTA M CRAWFORD/Primary Examiner, Art Unit 2617