Prosecution Insights
Last updated: April 19, 2026
Application No. 18/914,395

PADDING IN A STREAM OF MATRIX ELEMENTS

Final Rejection (§102, §103, §112)
Filed: Oct 14, 2024
Examiner: VICARY, KEITH E
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Texas Instruments Incorporated
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 99%
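The "With Interview" figure is consistent with treating the interview lift as additive percentage points on top of the base grant probability. The report does not state the formula, so the additive relationship below is an assumption:

```python
# Assumed relationship (not stated in the report): the interview lift
# adds percentage points to the base grant probability, capped at 100%.
base_rate = 58.0        # base grant probability, %
interview_lift = 41.2   # reported interview lift, percentage points
with_interview = min(base_rate + interview_lift, 100.0)
print(round(with_interview))  # 99
```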

Examiner Intelligence

Career Allow Rate: 58% (393 granted / 683 resolved; +2.5% vs TC avg)
Interview Lift: +41.2% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 3y 8m avg prosecution; 41 applications currently pending
Career History: 724 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§103: 34.0% (-6.0% vs TC avg)
§112: 37.6% (-2.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 683 resolved cases.

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-22 are pending in this office action and presented for examination. Claims 1-2, 4, 6, 12-13, 15, and 17 are newly amended, and claims 21-22 are newly added, by the response received January 5, 2026.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-11, 17, and 21-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation “the size of the data portion of the array” in lines 13-14. However, it is indefinite whether this limitation is to be interpreted as “the size of the data portion of the array in the dimension,” as “a size of the data portion of the array,” as “a size of a data portion of the array,” or as something else. Claims 2-11 and 21 are rejected for failing to alleviate the rejection of claim 1 above.

Claim 17 recites the limitation “determining that a third portion of the array partially exceeds the size of the data portion of the array” in lines 4-5.
However, it is indefinite what it means for a third portion of the array to “partially” exceed the size of the data portion of the array. In general, it is indefinite what it means for a first entity to “partially” exceed the size of a second entity. Claim 22 is rejected for failing to alleviate the rejection of claim 17 above.

Claim 22 recites the limitation “the processor” in line 1. However, there is insufficient antecedent basis for this limitation in the claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 6-8, 10-13, 16-19, and 21-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Liao et al. (Liao) (US 20020026569 A1).
Consider claim 1, Liao discloses a circuit device comprising: a processor ([0040], line 2, control unit 18) configured to receive an instruction ([0076], line 2, vector load instruction) that specifies a size of an array in a dimension ([0076], line 5, primary op code; [0076], lines 6-7, destination vector register location) and a size of a data portion of the array in the dimension ([0076], lines 7-9, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends); a memory control circuit coupled to the processor ([0039], lines 13-14, Load/store unit (LSU) 24); and a memory coupled to the memory control circuit ([0077], line 4, memory); wherein the processor is configured to, based on the instruction, cause the memory control circuit to: determine whether a first portion of the array exceeds the size of the data portion of the array in the dimension; based on the first portion of the array not exceeding the size of the data portion of the array in the dimension, request the first portion of the array from the memory ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. 
In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size does not exceed the size of the useful data, the vector data is requested from the memory); determine whether a second portion of the array exceeds the size of the data portion of the array in the dimension; and based on the second portion of the array exceeding the size of the data portion of the array, generate a set of predetermined values ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated) without accessing the memory ([0077], lines 5-6, eliminating the need to load filler data from memory; [0077], lines 11-14, the improved vector load format of FIG. 6a frees bandwidth and memory by not requiring that filler data (such as zeros) be stored in memory or loaded from memory). 
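The padded vector load that the Office Action attributes to Liao can be sketched in a few lines. This is an illustrative model only; the function name, the dict-based memory, and the Python setting are assumptions for clarity, not Liao's hardware:

```python
def vector_load(memory, base, vector_len, useful_len, fill):
    """Load `useful_len` elements starting at `base`; positions beyond
    the useful data are set to the constant `fill` without reading
    filler data from memory (the role of Liao's value field 128)."""
    loaded = [memory[base + i] for i in range(min(useful_len, vector_len))]
    padding = [fill] * max(0, vector_len - useful_len)  # generated, never fetched
    return loaded + padding

mem = {100 + i: v for i, v in enumerate([3, 1, 4, 1, 5])}
# 8-wide vector, 5 useful elements; fill 0 suits additions, fill 1 multiplications
print(vector_load(mem, 100, 8, 5, 0))  # [3, 1, 4, 1, 5, 0, 0, 0]
print(vector_load(mem, 100, 8, 5, 1))  # [3, 1, 4, 1, 5, 1, 1, 1]
```

Only the useful elements ever touch `memory`; the padding is synthesized from the fill constant, mirroring the "without accessing the memory" limitation.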
Consider claim 6, Liao discloses the circuit device of claim 1 (see above), wherein: the set of predetermined values is a first set of predetermined values; and the processor is configured to, based on the instruction, cause the memory control circuit to: determine whether a third portion of the array exceeds the size of the data portion of the array in the dimension; and based on a first subset of the third portion not exceeding the size of the data portion of the array in the dimension and a second subset of the third portion exceeding the size of the data portion of the array in the dimension: request a set of data from the memory; and replace a subset of the set of data requested from the memory with a second set of predetermined values ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated).

Consider claim 7, Liao discloses the circuit device of claim 1 (see above), wherein each of the set of predetermined values is a null element value ([0078], lines 1-11, the value field 128 in the vector load instruction format of FIG. 6a may designate any suitable constant, and the constant may vary depending on the particular application in which the invention is embodied. For example, a value field of "1" may be used if the vector will be involved with a multiplication operation, so as not to cause a change in the values of a vector being multiplied therewith. Similarly, a value field of "0" may be used if the vector will be used in an addition operation for the same reason explained above. However, any constant may be indicated by the value field in accordance with the instant invention).

Consider claim 8, Liao discloses the circuit device of claim 1 (see above), wherein each of the set of predetermined values is zero ([0078], lines 1-11, the value field 128 in the vector load instruction format of FIG. 6a may designate any suitable constant, and the constant may vary depending on the particular application in which the invention is embodied. For example, a value field of "1" may be used if the vector will be involved with a multiplication operation, so as not to cause a change in the values of a vector being multiplied therewith. Similarly, a value field of "0" may be used if the vector will be used in an addition operation for the same reason explained above. However, any constant may be indicated by the value field in accordance with the instant invention).

Consider claim 10, Liao discloses the circuit device of claim 1 (see above), wherein the memory control circuit is configured to provide the array to the processor as a set of vectors ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated; in other words, for example, the useful data is a vector and the set of predetermined values is a vector).
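The boundary-straddling case mapped to claims 6 and 17 (request a whole set of data, then replace the out-of-bounds subset) can be modeled in the same spirit. This sketch assumes one plausible reading of "partially exceeds": the portion straddles the end of the data region. All names are invented for illustration:

```python
def load_straddling(memory, start, length, data_size, fill):
    """Fetch the whole requested span, then overwrite the subset that
    falls at or beyond `data_size` with the predetermined value."""
    data = list(memory[start:start + length])        # request the set of data
    keep = max(0, min(length, data_size - start))    # elements still in-bounds
    return data[:keep] + [fill] * (length - keep)    # replace the rest

row = list(range(12))          # backing storage; valid data occupies [0, 10)
print(load_straddling(row, 8, 4, 10, 0))  # [8, 9, 0, 0]
```

The first two elements come from memory and the tail is the predetermined value, matching "replace a subset of the set of data requested from the memory."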
Consider claim 11, Liao discloses the circuit device of claim 1 (see above), wherein the memory is a cache memory (FIG. 3, data cache 34).

Consider claim 12, Liao discloses a method comprising: receiving an instruction ([0076], line 2, vector load instruction) that specifies a size of an array in a dimension ([0076], line 5, primary op code; [0076], lines 6-7, destination vector register location) and a size of a data portion of the array in the dimension ([0076], lines 7-9, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends); determining that a first portion of the array does not exceed the size of the data portion of the array in the dimension; requesting the first portion of the array from a memory ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size does not exceed the size of the useful data, the vector data is requested from the memory); determining that a second portion of the array exceeds the size of the data portion of the array in the dimension; and generating a set of predetermined values ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated) without accessing the memory ([0077], lines 5-6, eliminating the need to load filler data from memory; [0077], lines 11-14, the improved vector load format of FIG. 6a frees bandwidth and memory by not requiring that filler data (such as zeros) be stored in memory or loaded from memory).

Consider claim 13, Liao discloses the method of claim 12 (see above), wherein the instruction specifies a set of parameters that specifies the size of the array in the dimension ([0076], line 5, primary op code; [0076], lines 6-7, destination vector register location) and the size of the data portion of the array in the dimension ([0076], lines 7-9, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends).

Consider claim 16, Liao discloses the method of claim 13, wherein the set of parameters specifies a value of the set of predetermined values ([0076], lines 9-12, the value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector).
Consider claim 17, Liao discloses the method of claim 12, wherein: the set of predetermined values is a first set of predetermined values; and the method further comprises: determining that a third portion of the array partially exceeds the size of the data portion of the array in the dimension; requesting a set of data from the memory; and replacing a subset of the set of data requested from the memory with a second set of predetermined values ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated).

Consider claim 18, Liao discloses the method of claim 12 (see above), wherein each of the set of predetermined values is a null element value ([0078], lines 1-11, the value field 128 in the vector load instruction format of FIG. 6a may designate any suitable constant, and the constant may vary depending on the particular application in which the invention is embodied. For example, a value field of "1" may be used if the vector will be involved with a multiplication operation, so as not to cause a change in the values of a vector being multiplied therewith. Similarly, a value field of "0" may be used if the vector will be used in an addition operation for the same reason explained above. However, any constant may be indicated by the value field in accordance with the instant invention).
Consider claim 19, Liao discloses the method of claim 12 (see above), wherein each of the set of predetermined values is zero ([0078], lines 1-11, the value field 128 in the vector load instruction format of FIG. 6a may designate any suitable constant, and the constant may vary depending on the particular application in which the invention is embodied. For example, a value field of "1" may be used if the vector will be involved with a multiplication operation, so as not to cause a change in the values of a vector being multiplied therewith. Similarly, a value field of "0" may be used if the vector will be used in an addition operation for the same reason explained above. However, any constant may be indicated by the value field in accordance with the instant invention).

Consider claim 21, Liao discloses the circuit device of claim 6 (see above), wherein the memory control circuit is configured to provide the first portion of the array, the first set of predetermined values, the second set of predetermined values, and another subset of the set of data requested from the memory that was not replaced by the second set of predetermined values to the processor ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated).
Consider claim 22, Liao discloses the method of claim 17 (see above), comprising providing to the processor the first portion of the array, the first set of predetermined values, the second set of predetermined values, and another subset of the set of data requested from the memory that was not replaced by the second set of predetermined values ([0076], lines 7-15, the position bit(s) are used by the load/store unit 24 to identify where the useful data in memory beginning at the source address ends. The value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector. In other words, if the value field is a "1", then all locations in the vector register beyond the position indicated by the position bit(s) are set to "1"; in other words, if the position bit(s) indicate that the vector data size exceeds the size of the useful data, a set of predetermined values corresponding to the value field is generated).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Liao as applied to claims 1 and 13 above, and further in view of Anderson et al. (Anderson) (US 20150019840 A1).
Consider claim 2, Liao discloses the circuit device of claim 1 (see above), but does not disclose that the circuit device further comprises a register configured to store a set of parameters of the array, wherein the instruction specifies the size of the array in the dimension and the size of the data portion of the array in the dimension by specifying the register.

On the other hand, Anderson discloses a register configured to store a set of parameters of an array, wherein an instruction specifies a size of the array in a dimension and a size of a data portion of the array in the dimension by specifying the register ([0159], lines 5-7, the STROPEN specifies a stream template register which stores the stream template as described above; [0147], Table 9, ICNT0 Iteration count for loop 0 (innermost), ICNT1 Iteration count for loop 1, ICNT2 Iteration count for loop 2, ICNT3 Iteration count for loop 3 (outermost), DIM1 Signed dimension for loop 1, DIM2 Signed dimension for loop 2, DIM3 Signed dimension for loop 3; [0110], lines 1-4, this form of addressing allows programs to specify regular paths through memory in a small number of parameters. Table 5 lists the addressing parameters of a basic stream; Table 5, ELEM_BYTES, ICNT0, ICNT1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Anderson with the invention of Liao in order to decrease instruction size, by storing parameters in a register specified by an instruction rather than in the instruction itself.

Consider claim 3, the overall combination entails the circuit device of claim 2 (see above).
In addition, Anderson further discloses that the set of parameters includes a set of counts associated with a set of loops that defines the size of the array in the dimension (Anderson, [0159], lines 5-7, the STROPEN specifies a stream template register which stores the stream template as described above; [0147], Table 9, ICNT0 Iteration count for loop 0 (innermost), ICNT1 Iteration count for loop 1, ICNT2 Iteration count for loop 2, ICNT3 Iteration count for loop 3 (outermost)). Anderson’s teaching enables greater capabilities and varieties relative to single dimensions (Anderson, [0122], lines 1-2), and increases performance and is useful for real-time digital filtering operations (Anderson, [0051], last 4 lines, streaming engines are thus useful for real-time digital filtering operations on well-behaved data. Streaming engines free these memory fetch tasks from the corresponding CPU, enabling other processing functions). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the further teaching of Anderson with the previously explained combination of Liao and Anderson in order to enable greater capabilities and varieties, to increase performance, and in view of its usefulness for real-time digital filtering operations.

Consider claim 4, the overall combination entails the circuit device of claim 3, wherein: the dimension is a first dimension; and the set of loops defines a size of the array in a second dimension (Anderson, [0159], lines 5-7, the STROPEN specifies a stream template register which stores the stream template as described above; [0147], Table 9, ICNT0 Iteration count for loop 0 (innermost), ICNT1 Iteration count for loop 1, ICNT2 Iteration count for loop 2, ICNT3 Iteration count for loop 3 (outermost); [0122], line 1, two dimensional).
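The stream template Anderson is cited for (ICNTn iteration counts, DIMn signed byte strides) amounts to nested loops over memory addresses. The sketch below is a rough model: the parameter names mirror Table 9, but the traversal order and function shape are assumptions, not Anderson's actual streaming engine:

```python
def stream_addresses(base, elem_bytes, icnt0, loops):
    """Generate the byte addresses a stream touches.  `loops` is a
    list of (icnt, dim) pairs for loops 1..n, innermost first; loop 0
    steps through icnt0 consecutive elements of elem_bytes each."""
    addrs = []
    def walk(level, offset):
        if level < 0:                  # loop 0: consecutive elements
            addrs.extend(offset + i * elem_bytes for i in range(icnt0))
            return
        icnt, dim = loops[level]
        for j in range(icnt):          # outer loop: stride by DIM bytes
            walk(level - 1, offset + j * dim)
    walk(len(loops) - 1, base)
    return addrs

# A 3x4 sub-matrix of 4-byte elements inside rows 64 bytes wide:
print(stream_addresses(0, 4, 4, [(3, 64)]))
# [0, 4, 8, 12, 64, 68, 72, 76, 128, 132, 136, 140]
```

A handful of parameters (base, ELEM_BYTES, ICNT0/ICNT1, DIM1) thus describes a regular two-dimensional path through memory, which is the point of storing them in a template register rather than in the instruction.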
Consider claim 5, the overall combination entails the circuit device of claim 2, wherein the set of parameters specifies a value of the set of predetermined values (Liao, [0076], lines 9-12, the value field 128 provides the constant (x) that is to be used by the load/store unit for setting the vector locations beyond the end of the useful data in the vector).

Consider claim 14, the overall combination entails the method of claim 13 (see above). However, Liao does not disclose that the set of parameters includes a set of counts of a set of loops that defines the size of the array in the dimension. On the other hand, Anderson discloses a set of parameters that includes a set of counts of a set of loops that defines a size of the array in a dimension (Anderson, [0159], lines 5-7, the STROPEN specifies a stream template register which stores the stream template as described above; [0147], Table 9, ICNT0 Iteration count for loop 0 (innermost), ICNT1 Iteration count for loop 1, ICNT2 Iteration count for loop 2, ICNT3 Iteration count for loop 3 (outermost)). Anderson’s teaching enables greater capabilities and varieties relative to single dimensions (Anderson, [0122], lines 1-2), and increases performance and is useful for real-time digital filtering operations (Anderson, [0051], last 4 lines, streaming engines are thus useful for real-time digital filtering operations on well-behaved data. Streaming engines free these memory fetch tasks from the corresponding CPU, enabling other processing functions). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Anderson with the invention of Liao in order to enable greater capabilities and varieties, to increase performance, and in view of its usefulness for real-time digital filtering operations.
Consider claim 15, the overall combination entails the method of claim 14, wherein: the dimension is a first dimension; and the set of loops defines a size of the array in a second dimension (Anderson, [0159], lines 5-7, the STROPEN specifies a stream template register which stores the stream template as described above; [0147], Table 9, ICNT0 Iteration count for loop 0 (innermost), ICNT1 Iteration count for loop 1, ICNT2 Iteration count for loop 2, ICNT3 Iteration count for loop 3 (outermost); [0122], line 1, two dimensional).

Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liao as applied to claims 1 and 12 above, and further in view of Sato (US 4491911).

Consider claim 9, Liao discloses the circuit device of claim 1 (see above), but does not disclose that the circuit device further comprises a table look-aside buffer, wherein: the requesting of the first portion of the array by the memory control circuit includes: generating a first set of addresses; translating the first set of addresses using the table look-aside buffer to generate a second set of addresses; and retrieving the first portion of the array from the memory using the second set of addresses; and the generating of the second portion of the array does not access the table look-aside buffer.

On the other hand, Sato discloses a circuit device further comprising a table look-aside buffer, wherein: a requesting of data by a memory control circuit includes: generating a first set of addresses; translating the first set of addresses using the table look-aside buffer to generate a second set of addresses; and retrieving the data from a memory using the second set of addresses (col. 1, lines 32-37, a translation look aside buffer (TLB) 13 for improving the speed of the address translation by storing a recently accessed portion of the page table 15 therein. This associative addressing system and the hardware thereof is well known to those skilled in the art).
Sato’s TLB, and its associated memory system environment, is used to increase the amount of memory available to the user beyond that which is actually present in the main memory of the system, with the TLB itself improving the speed of address translation by storing a recently accessed portion of the page table therein (Sato, col. 1, lines 10-17 and 32-35). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sato with the invention of Liao in order to increase the amount of memory available to the user beyond that which is actually present in the main memory of the system, with the TLB itself improving the speed of address translation by storing a recently accessed portion of the page table therein.

Alternatively, this modification merely entails combining prior art elements (the prior art elements of Liao as cited above, and Sato’s TLB) according to known methods (Examiner submits that TLBs are well known and are implemented in many different processors) to yield predictable results (the invention of Liao, further entailing a TLB), which is an example of a rationale that may support a conclusion of obviousness per MPEP 2143.

Note that Sato's teaching of a TLB, when applied to the invention of Liao, which entails data being a portion of an array in particular and entails generating a set of predetermined values without accessing memory, results in the overall limitation that a circuit device further comprises a table look-aside buffer, wherein: a requesting of data by a memory control circuit includes: generating a first set of addresses; translating the first set of addresses using the table look-aside buffer to generate a second set of addresses; and retrieving the data from a memory using the second set of addresses.
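The look-aside mechanism Sato is cited for (cache recently used page-table entries so most translations skip the page-table walk) can be sketched as a toy model. The class, field names, and page size are invented for illustration and are not Sato's circuit:

```python
class TLB:
    """Toy translation look-aside buffer: caches page-table entries so
    repeated translations of the same page skip the page-table walk."""
    def __init__(self, page_table, page_size=4096):
        self.page_table = page_table   # full virtual-page -> physical-frame map
        self.page_size = page_size
        self.entries = {}              # the look-aside buffer itself
        self.hits = self.misses = 0

    def translate(self, vaddr):
        vpage, offset = divmod(vaddr, self.page_size)
        if vpage in self.entries:
            self.hits += 1
        else:                          # miss: walk the page table, then cache
            self.misses += 1
            self.entries[vpage] = self.page_table[vpage]
        return self.entries[vpage] * self.page_size + offset

tlb = TLB({0: 5, 1: 9})
addrs = [tlb.translate(v) for v in (8, 16, 4096)]
print(addrs, tlb.hits, tlb.misses)   # [20488, 20496, 36864] 1 2
```

In the claimed combination, only addresses of data actually fetched from memory would pass through `translate`; the generated padding values involve no addresses at all, which is how "does not access the table look-aside buffer" follows.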
Consider claim 20, Liao discloses the method of claim 12 (see above), but does not disclose that: the requesting of the first portion of the array includes: generating a first set of addresses; translating the first set of addresses using a table look-aside buffer to generate a second set of addresses; and retrieving the first portion of the array from the memory using the second set of addresses; and the generating of the second portion of the array does not access the table look-aside buffer.

On the other hand, Sato discloses that requesting of data includes: generating a first set of addresses; translating the first set of addresses using the table look-aside buffer to generate a second set of addresses; and retrieving the data from a memory using the second set of addresses (col. 1, lines 32-37, a translation look aside buffer (TLB) 13 for improving the speed of the address translation by storing a recently accessed portion of the page table 15 therein. This associative addressing system and the hardware thereof is well known to those skilled in the art).

Sato’s TLB, and its associated memory system environment, is used to increase the amount of memory available to the user beyond that which is actually present in the main memory of the system, with the TLB itself improving the speed of address translation by storing a recently accessed portion of the page table therein (Sato, col. 1, lines 10-17 and 32-35). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sato with the invention of Liao in order to increase the amount of memory available to the user beyond that which is actually present in the main memory of the system, with the TLB itself improving the speed of address translation by storing a recently accessed portion of the page table therein.
Alternatively, this modification merely entails combining prior art elements (the prior art elements of Liao as cited above, and Sato’s TLB) according to known methods (Examiner submits that TLBs are well known and are implemented in many different processors) to yield predictable results (the invention of Liao, further entailing a TLB), which is an example of a rationale that may support a conclusion of obviousness per MPEP 2143.

Note that Sato's teaching of a TLB, when applied to the invention of Liao, which entails data being a portion of an array in particular and entails generating a set of predetermined values without accessing memory, results in the overall limitation that the requesting of the first portion of the array includes: generating a first set of addresses; translating the first set of addresses using a table look-aside buffer to generate a second set of addresses; and retrieving the first portion of the array from the memory using the second set of addresses; and the generating of the second portion of the array does not access the table look-aside buffer.

Response to Arguments

Applicant on page 8 argues: “The Office Action objects to the Abstract and paragraph [0001] of the Specification in view of minor informalities. Applicant thanks the Examiner for his helpful suggestions stated on page 2 of the Office Action. Specifically, the Abstract is amended to correct a minor typographical error, and paragraph [0001] is amended to further list the issued patent corresponding to Application No. 17/583,380 in the priority claim. In view of these amendments, Applicant respectfully requests withdrawal of the objections to the specification.”

In view of the aforementioned amendments, the previously presented objections to the specification are withdrawn.

Applicant on page 8 argues: “Applicant notes that claims 1, 2, 4, 6, 12, 13, 15, and 17 are presently amended to address the concerns noted by the Examiner on pages 2-7 of the Office Action.
Applicant respectfully submits that the present amendments to these claims fully address and thus render moot the 35 U.S.C. § 112(b) rejections. Accordingly, withdrawal of the rejection and allowance of claims 1-20 are respectfully requested.”

Most previously presented rejections of the claims under 35 U.S.C. § 112(b) are withdrawn in view of the amendments to the claims. However, two presented rejections of the claims under 35 U.S.C. § 112(b) remain applicable, and in one case the amendments to the claims introduce an additional issue under 35 U.S.C. § 112(b) — see the Claim Rejections - 35 USC § 112 section above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEITH E VICARY, whose telephone number is (571) 270-1314. The examiner can normally be reached Monday to Friday, 9:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached at (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEITH E VICARY/
Primary Examiner, Art Unit 2183
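The address path recited in claim 20 (generate virtual addresses, translate them through a TLB, load the data portion, then emit padding without touching the TLB) can be sketched in a few lines. This is a minimal illustrative model, not code from Liao, Sato, or the application; all names, page sizes, and values are assumptions.

```python
# Minimal model of claim 20's address path: elements inside the data portion
# are fetched through TLB-translated addresses, while padding elements are
# generated directly and never touch the TLB. All names/sizes are assumptions.

PAGE_SIZE = 4096

class TLB:
    """Tiny translation cache: virtual page number -> physical page number."""
    def __init__(self, page_table):
        self.page_table = page_table   # backing page table (dict)
        self.entries = {}              # cached translations
        self.lookups = 0

    def translate(self, vaddr):
        self.lookups += 1
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.entries:    # miss: walk the backing page table
            self.entries[vpn] = self.page_table[vpn]
        return self.entries[vpn] * PAGE_SIZE + offset

def stream_row(memory, tlb, base_vaddr, data_len, padded_len, pad_value=0):
    """Emit padded_len elements: translated loads for the data portion
    (the "first portion"), generated pad values (the "second portion")
    with no address generation and no TLB access."""
    out = []
    for i in range(padded_len):
        if i < data_len:
            # first set of addresses -> TLB -> second set of addresses
            paddr = tlb.translate(base_vaddr + i)
            out.append(memory[paddr])
        else:
            out.append(pad_value)      # padding: no memory or TLB access
    return out

# One virtual page mapped to physical page 1, holding a 6-element row.
memory = {PAGE_SIZE + i: 10 + i for i in range(6)}
tlb = TLB(page_table={0: 1})
row = stream_row(memory, tlb, base_vaddr=0, data_len=6, padded_len=8)
# row == [10, 11, 12, 13, 14, 15, 0, 0]; tlb.lookups == 6
```

Note that only the six in-bounds elements ever reach the TLB; the two pad elements are produced without any address generation, which is the distinction the claim draws.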

Prosecution Timeline

Oct 14, 2024
Application Filed
Sep 02, 2025
Non-Final Rejection — §102, §103, §112
Jan 05, 2026
Response Filed
Jan 27, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602349
HANDLING DYNAMIC TENSOR LENGTHS IN A RECONFIGURABLE PROCESSOR THAT INCLUDES MULTIPLE MEMORY UNITS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12572360
Cache Preload Operations Using Streaming Engine
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554507
SYSTEMS AND METHODS FOR PROCESSING FORMATTED DATA IN COMPUTATIONAL STORAGE
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12554494
APPARATUSES, METHODS, AND SYSTEMS FOR INSTRUCTIONS TO REQUEST A HISTORY RESET OF A PROCESSOR CORE
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12547401
Load Instruction Fusion
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+41.2%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 683 resolved cases by this examiner. Grant probability derived from career allow rate.
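The headline probabilities in this panel follow from the career data quoted above (393 granted of 683 resolved). The page does not state its exact methodology, so the sketch below makes one labeled assumption: the with-interview figure is the base allow rate plus the quoted lift, in percentage points.

```python
# Reconstructing the panel's figures from the examiner's career data.
# Assumption (methodology not stated on the page): the "with interview"
# probability is the career allow rate plus the quoted interview lift,
# taken as percentage points.

granted, resolved = 393, 683
interview_lift_pts = 41.2

base_pct = round(100 * granted / resolved)                      # -> 58
with_interview_pct = round(100 * granted / resolved + interview_lift_pts)  # -> 99
```

Under that assumption the arithmetic reproduces both the 58% grant probability and the 99% with-interview figure shown above.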
