Prosecution Insights
Last updated: April 19, 2026
Application No. 17/489,998

GENERIC IMAGE RESIZER USING MATRIX MULTIPLIER ACCELERATOR

Non-Final OA (§101, §103)
Filed: Sep 30, 2021
Examiner: BUI, KENNY KIM
Art Unit: 2182
Tech Center: 2100 — Computer Architecture & Software
Assignee: Texas Instruments Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 60% (6 granted / 10 resolved; +5.0% vs TC avg)
Interview Lift: +25.0% (strong lift; resolved cases with interview vs without)
Typical Timeline: 4y 0m avg prosecution; 27 currently pending
Career History: 37 total applications across all art units

Statute-Specific Performance

§101: 29.8% (-10.2% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 10 resolved cases

Office Action

§101 §103
DETAILED ACTION

This Office Action is sent in response to Applicant’s Communication received on 12/16/2025 for application number 17/489,998. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

The examiner notes the following: Claims 1, 4-5, 7, 9, 11-13, 18, and 20 have been amended. Claims 17 and 19 have been canceled. Claim 21 has been newly added. Claims 1-16, 18, and 20-21 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims.
Therefore, the following limitations must be shown or the feature(s) canceled from the claim(s):

In claims 5 and 13, “Replacing the data values in the first portion of the fourth vector with the data values in the second portion of the third vector to generate the second vector,” whether performed before or after “Replacing the data values in the second portion of the third vector with the data values in the first portion of the fourth vector to generate the first vector,” does not imply swapping; it causes the data to be duplicated between both vectors and is not shown in Figure 5.

In claim 21, “replace the data values in the second portion of the third vector with the data values in the first portion of the fourth vector, and replace the data values in the first portion of the fourth vector with the data values in the second portion of the third vector to generate fifth and sixth vectors” does not imply swapping; it causes the data to be duplicated between both vectors and is not shown.

No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d).
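The duplication concern raised in the objection above can be illustrated with a short sketch. The vector names, sizes, and values below are illustrative only (they are not taken from the application): performing the two “replace” steps sequentially and in place copies the same data into both vectors, whereas an actual swap must buffer one operand before overwriting it.

```python
# Hypothetical 4-element vectors split into two 2-element portions
# (names and sizes are illustrative, not from the application).
third = [1, 2, 3, 4]   # portions: [1, 2] and [3, 4]
fourth = [5, 6, 7, 8]  # portions: [5, 6] and [7, 8]

# The two "replace" steps performed sequentially and in place:
third[2:4] = fourth[0:2]   # second portion of third <- first portion of fourth
fourth[0:2] = third[2:4]   # reads data that was just overwritten
print(third, fourth)       # [1, 2, 5, 6] [5, 6, 7, 8] -- [5, 6] is duplicated

# A true swap buffers one operand before overwriting it:
a, b = [1, 2, 3, 4], [5, 6, 7, 8]
tmp = a[2:4]
a[2:4] = b[0:2]
b[0:2] = tmp
print(a, b)                # [1, 2, 5, 6] [3, 4, 7, 8] -- data exchanged
```

The first result shows why the claim language as written can be read as duplicating rather than swapping data between the vectors.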
If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16, 18, and 20-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Under the Alice Framework Step 2A prong 1, claim 1 recites: A method for resizing data, comprising: receiving input data values for resizing; generating, by the processing circuitry, a first vector in which a first number of data values from a first line of data values of the input data values is placed in a first portion of a first vector, and the first number of data values from a second line of data values of the input values is placed in a second portion of the first vector; generating, by the processing circuitry, a second vector in which the first number of data values from the first line of data values of the input values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; receiving, by the processing circuitry, a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; receiving, by the processing circuitry, a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on the second line of the set of resized data; multiplying, by the
processing circuitry, the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiplying, by the processing circuitry, the second vector and the second matrix of weights to determine data values for the second line of the set of resized data and outputting the set of resized data; wherein generating the first and second vectors includes generating a third vector that includes a second number of data values from the first line of data values of the input data values, and generating a fourth vector that includes the second number of data values from the second line of data values of the input data values, the second number being twice the first number, and each of the third and fourth vectors including first and second portions and generating each of the first and second vectors based on the third and fourth vectors.

The above underlined limitations are related to calculating, processing and organizing data for matrix multiplication operations, which amount to mathematical relationships/calculations and organizing data falling within the “mathematical concepts” (see paragraphs [20,31-37,42,45,46,49-52,54]) and/or “mental processes” grouping of abstract ideas. Accordingly, the claim is directed to an abstract idea.

Under the Alice Framework Step 2A prong 2, claim 1 recites the following additional elements: “A processing circuitry”, “receiving input data values for resizing”, “receiving a first matrix of weights”, and “receiving a second matrix of weights”. However, the additional element of “a processing circuitry” is recited at a high-level of generality (i.e., as a generic computer component for organizing and multiplying data) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component.
The additional elements of “receiving input data values for resizing”, “receiving a first matrix of weights”, and “receiving a second matrix of weights” are merely adding insignificant extra-solution activities. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claim is not integrated into a practical application.

Under the Alice Framework Step 2B, claim 1 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “a processing circuitry” is recited at a high-level of generality (i.e., as a generic computer component for organizing and multiplying data) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component. The additional elements of “receiving input data values for resizing”, “receiving a first matrix of weights”, and “receiving a second matrix of weights” are merely adding insignificant extra-solution activities.

See MPEP 2106.05(d)(II), which states that the courts have recognized computer functions such as “Storing and retrieving information in memory” as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claim does not amount to significantly more than the abstract idea.
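Mechanically, the method recited in claim 1 is a resize implemented as a vector-matrix product: input samples are packed into a vector, and each column of a weight matrix holds the interpolation weights for one output point. The sketch below is a minimal illustration under assumed sizes and weights (a 2-to-3 linear-interpolation resize of one line; none of the values come from the application).

```python
# Pack two samples from each of two input lines into one 4-element
# vector (first portion = line 1, second portion = line 2), as the
# claim describes. Values are illustrative.
line1 = [10.0, 20.0]
line2 = [30.0, 40.0]
vec = line1 + line2                      # [10.0, 20.0, 30.0, 40.0]

# Weight matrix: each column holds the weights for one output point of
# the resized line 1 (illustrative linear-interpolation weights for a
# 2 -> 3 resize; line 2's samples get weight 0 for this output line).
weights = [
    [1.0, 0.5, 0.0],   # weight applied to line1[0]
    [0.0, 0.5, 1.0],   # weight applied to line1[1]
    [0.0, 0.0, 0.0],   # weight applied to line2[0]
    [0.0, 0.0, 0.0],   # weight applied to line2[1]
]

# Vector x matrix: one resized output point per column.
resized = [sum(v * weights[i][j] for i, v in enumerate(vec))
           for j in range(len(weights[0]))]
print(resized)   # [10.0, 15.0, 20.0]
```

The middle output point (15.0) is the average of the two neighboring input samples, which is the interpolation behavior the weight columns encode.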
Under the Alice Framework Step 2A prong 1, claims 2-8 recite further steps and details for calculating, processing, and organizing data for matrix multiplication operations, which amount to mathematical relationships/calculations and organizing data falling within the “mathematical concepts” (see paragraphs [20,31-37,42,45,46,49-52,54]) and/or “mental processes” grouping of abstract ideas. Regarding claim 2, it is directed to use of a matrix multiplier accelerator to perform the matrix multiplication. Accordingly, the claims are directed to an abstract idea.

Under the Alice Framework Step 2A prong 2, claim 2 recites the following additional element: “a matrix multiplier accelerator”. However, the additional element of “a matrix multiplier accelerator” is recited at a high-level of generality (i.e., as a generic computer component for matrix multiplication) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claims are not integrated into a practical application.

Under the Alice Framework Step 2B, claim 2 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “a matrix multiplier accelerator” is recited at a high-level of generality (i.e., as a generic computer component for matrix multiplication) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component. The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claims do not amount to significantly more than the abstract idea.
Regarding claims 3-8, they are directed to limitations that deal with the natural mathematical nature of vectors, permuting data, and formulating the structure for the matrix multiplication operations. In particular, claims 3-8 do not include additional elements that would require further analysis under Step 2A prong 2 and Step 2B. Accordingly, the claims are directed to an abstract idea.

Under the Alice Framework Step 2A prong 1, claim 9 recites: An electronic device, comprising: one or more memories; and matrix multiplier accelerator circuitry operably coupled to the one or more memories, the matrix multiplier accelerator circuitry configured to: receive, from a memory of the one or more memories, input data values for resizing; generate a first vector in which a first number of data values from a first line of data values of the input data values is placed in a first portion of the first vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the first vector; generate a second vector in which the first number of data values from the first line of data values of the input data values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; receive a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; receive a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on the second line of the set of resized data; multiply the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values
for the second line of the set of resized data and output the set of resized data; wherein generating the first and second vectors includes generating a third vector that includes a second number of data values from the first line of data values of the input data values, and generating a fourth vector that includes the second number of data values from the second line of data values of the input data values, the second number being twice the first number, and each of the third and fourth vectors including first and second portions and generating each of the first and second vectors based on the third and fourth vectors.

The above underlined limitations are related to calculating, processing and organizing data for matrix multiplication operations, which amount to mathematical relationships/calculations and organizing data falling within the “mathematical concepts” (see paragraphs [20,31-37,42,45,46,49-52,54]) and/or “mental processes” grouping of abstract ideas. Accordingly, the claim is directed to an abstract idea.

Under the Alice Framework Step 2A prong 2, claim 9 recites the following additional elements: “a matrix multiplier accelerator circuitry”, “one or more memories”, “receiving input data values for resizing”, “receiving a first matrix of weights”, and “receiving a second matrix of weights”. However, the additional elements of “a matrix multiplier accelerator circuitry” and “one or more memories” are recited at a high-level of generality (i.e., as a generic computer component for organizing and multiplying data; and as a generic computer component for storing data) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components. The additional elements of “receiving input data values for resizing”, “receiving a first matrix of weights”, and “receiving a second matrix of weights” are merely adding insignificant extra-solution activities.
The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claim is not integrated into a practical application.

Under the Alice Framework Step 2B, claim 9 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the elements of “a matrix multiplier accelerator circuitry” and “one or more memories” are recited at a high-level of generality (i.e., as a generic computer component for organizing and multiplying data; and as a generic computer component for storing data) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components. The additional elements of “receiving input data values for resizing”, “receiving a first matrix of weights”, and “receiving a second matrix of weights” are merely adding insignificant extra-solution activities.

See MPEP 2106.05(d)(II), which states that the courts have recognized computer functions such as “Storing and retrieving information in memory” as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claim does not amount to significantly more than the abstract idea.
Under the Alice Framework Step 2A prong 1, claims 10-16 recite further steps and details for calculating, processing, and organizing data for matrix multiplication operations, which amount to mathematical relationships/calculations and organizing data falling within the “mathematical concepts” (see paragraphs [20,31-37,42,45,46,49-52,54]) and/or “mental processes” grouping of abstract ideas. Regarding claim 10, it is directed to a chip that computes the matrix multiplication operation using the generic components as disclosed above. Accordingly, the claims are directed to an abstract idea.

Under the Alice Framework Step 2A prong 2, claim 10 recites the following additional element: “a chip”. However, the additional element of “a chip” is recited at a high-level of generality (i.e., as a generic computer component comprising other generic computer components to compute matrix multiplication) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claims are not integrated into a practical application.

Under the Alice Framework Step 2B, claim 10 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “a chip” is recited at a high-level of generality (i.e., as a generic computer component comprising other generic computer components to compute matrix multiplication) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component.
The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claims do not amount to significantly more than the abstract idea.

Regarding claims 11-16, they are directed to limitations that deal with the natural mathematical nature of vectors and formulating the structure for the matrix multiplication operations. In particular, claims 11-16 do not include additional elements that would require further analysis under Step 2A prong 2 and Step 2B. Accordingly, the claims are directed to an abstract idea.

Under the Alice Framework Step 1, claims 18, 20, and 21 each recite a non-transitory program storage device and are therefore directed to an article of manufacture.

Under the Alice Framework Step 2A prong 1, claim 21 recites: A non-transitory program storage device comprising instructions stored thereon to cause matrix multiplier accelerator circuitry to: receive input data values for resizing; place a first number of data values from a first line of data values of the input data values in a first vector; place the first number of data values from a second line of data values of the input data values in a second vector; replace a first value of a second portion of the first vector with a last value of a first portion of the first vector to generate a third vector; replace a first value of a second portion of the second vector with a last value of a first portion of the second vector to generate a fourth vector; replace the data values in the second portion of the third vector with the data values in the first portion of the fourth vector, and replace the data values in the first portion of the fourth vector with the data values in the second portion of the third vector to generate fifth and sixth vectors; multiply a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; and multiply
a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on a second line of the set of resized data; and output the set of resized data.

The above underlined limitations are related to calculating, processing and organizing data for matrix multiplication operations, which amount to mathematical relationships/calculations and organizing data falling within the “mathematical concepts” (see paragraphs [20,31-37,42,45,46,49-52,54]) and/or “mental processes” grouping of abstract ideas. Accordingly, the claim is directed to an abstract idea.

Under the Alice Framework Step 2A prong 2, claim 21 recites the following additional elements: “A non-transitory program storage device comprising instructions stored thereon”, “a matrix multiplier accelerator circuitry”, and “receiving input data values for resizing”. However, the additional elements of “A non-transitory program storage device comprising instructions stored thereon” and “a matrix multiplier accelerator circuitry” are recited at a high-level of generality (i.e., as a generic computer component for storing instructions; and as a generic computer component for organizing and multiplying data) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components. The additional element of “receiving input data values for resizing” merely adds insignificant extra-solution activity. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claim is not integrated into a practical application.

Under the Alice Framework Step 2B, claim 21 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the elements of “A non-transitory program storage device comprising instructions stored thereon” and “a matrix multiplier accelerator circuitry” are recited at a high-level of generality (i.e., as a generic computer component for storing instructions; and as a generic computer component for organizing and multiplying data) such that they amount to no more than mere instructions to implement the abstract idea using generic computer components. The additional element of “receiving input data values for resizing” merely adds insignificant extra-solution activity. See MPEP 2106.05(d)(II), which states that the courts have recognized computer functions such as “Storing and retrieving information in memory” as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claim does not amount to significantly more than the abstract idea.

Under the Alice Framework Step 2A prong 1, claims 18 and 20 recite further steps and details for calculating, processing, and organizing data for matrix multiplication operations, which amount to mathematical relationships/calculations and organizing data falling within the “mathematical concepts” (see paragraphs [20,31-37,42,45,46,49-52,54]) and/or “mental processes” grouping of abstract ideas. Regarding claim 18, it is directed to a chip that computes the matrix multiplication operation using the generic components as disclosed above. Accordingly, the claims are directed to an abstract idea.

Under the Alice Framework Step 2A prong 2, claim 18 recites the following additional element: “a chip”.
However, the additional element of “a chip” is recited at a high-level of generality (i.e., as a generic computer component comprising other generic computer components to compute matrix multiplication) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component. The additional elements do not, individually or in combination, integrate the exception into a practical application. Accordingly, the claims are not integrated into a practical application.

Under the Alice Framework Step 2B, claim 18 does not include additional elements that, individually or in combination, are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “a chip” is recited at a high-level of generality (i.e., as a generic computer component comprising other generic computer components to compute matrix multiplication) such that it amounts to no more than mere instructions to implement the abstract idea using a generic computer component. The claim does not recite additional elements that alone or in combination amount to an inventive concept. Accordingly, the claims do not amount to significantly more than the abstract idea.

Regarding claim 20, it is directed to formulating the structure for the matrix multiplication operations. In particular, claim 20 does not include additional elements that would require further analysis under Step 2A prong 2 and Step 2B. Accordingly, the claim is directed to an abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Redfern et al. (US 2018/0253402 A1), hereinafter Redfern, in view of Li et al. (NPL “Reducing DRAM Image Data Access Energy Consumption in Video Processing”), hereinafter Li, in view of Garg et al. (NPL “A Low-Cost Energy Efficient Image Scaling Processor for Multimedia Applications”), hereinafter Garg, further in view of Hong et al. (US RE47,341 E), hereinafter Hong, and further in view of Eichenberger et al. (US 9,575,753 B2), hereinafter Eichenberger.
Regarding claim 9, Redfern discloses: An electronic device, comprising: one or more memories [Device 100, “FIG. 1 depicts an example device 100 configurable to implement fundamental computational primitives such as those previously mentioned herein using a matrix multiplication accelerator (MMA) 104 coupled to a processor 102." Par. 20; Figure 1 shows various memories]; and matrix multiplier accelerator circuitry operably coupled to the one or more memories ["a processor coupled to the memory, and a matrix multiplication accelerator (MMA) coupled to the processor" Par. 4], the matrix multiplier accelerator circuitry configured to: receive, from a memory of the one or more memories, input data values for resizing ["the processor 102 is configured to receive data vectors for the MMA 104" Par. 31; "The load operation portion of the LSE instruction includes fields identifying the location in the buffer 124 of the data to be loaded into the A matrix buffer 138" Par. 23]; generate a third vector and a fourth vector ["to apply formatting 122 to the data as needed for the fundamental computational primitive being executed by the device 100, and store the data vectors in respective buffers 124, 128 for consumption by the MMA 104" Par. 31; teaches modifying the input; "The streaming engine 108 is configured to read elements of the A matrix from the L2 cache 106 and to provide each row of the A matrix in turn for loading in the A matrix buffer 138. That is, the first vector from the streaming engine 108 will contain the first row, row 0, of the A matrix, the second vector from the streaming engine will contain the second row of the A matrix" Par.
53, teaches loading at least 2 vectors of input data]; place a second number of data values from the first line of data values of the input data values in the third vector ["The streaming engine 108 is configured to read elements of the A matrix from the L2 cache 106 and to provide each row of the A matrix in turn for loading in the A matrix buffer 138. That is, the first vector from the streaming engine 108 will contain the first row, row 0, of the A matrix," Par. 53], place the second number of data values from the second line of data values of the input data values in the fourth vector ["the second vector from the streaming engine will contain the second row of the A matrix" Par. 53]; receive a first matrix, wherein each value of the first matrix corresponds to an amount of value to apply to a data value for a point on a first line of a set of resized data; receive a second matrix, wherein each value of the second matrix corresponds to an amount of value to apply to a data value for a point on the second line of the set of resized data ["The MMA 104 includes sufficient memory to store two 32x32 multiplicand buffers 144 of 16-bit elements for storing two B matrices" Par. 21, teaches loading in at least two matrices; B Matrix, "B matrix stored in a selected B matrix buffer 144" Par. 22, "if a multiplier matrix A is an MxK matrix and a multiplicand matrix B is a KxN matrix, the matrix product of these two matrices is an MxN matrix C" Par. 20, shows taking values from both matrix A and matrix B and computing matrix C, which is Matrix A scaled by Matrix B];
multiply the third vector and the first matrix to determine data values for the first line of the set of resized data; multiply the fourth vector and the second matrix to determine values for the second line of the set of resized data and output the set of resized data ["in a cycle, a vector of data is loaded into the A matrix buffer 138 and a matrix multiplication operation is performed between a B matrix stored in a selected B matrix buffer 144 and the data vector in the A matrix buffer 138. That is, the matrix product of the data vector in the A matrix buffer 138 with each column of the B matrix in the selected B matrix buffer 144 is computed." Par. 22, teaches multiplying a vector of data with a matrix; "The result of the matrix multiplication operation is a row of data elements that is stored in a row of a C matrix in a selected C matrix buffer 134" Par. 22]. However, Redfern does not explicitly disclose: generate a first vector in which a first line of data values of the input data values is placed in a first portion of a first vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the first vector; generate a second vector in which the first number of data values from the first line of data values of the input data values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; receive a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; receive a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on the second line of the set of resized data; multiply the first vector and the first matrix of weights to 
determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values for the second line of the set of resized data and output the set of resized data; wherein generating the first and second vectors includes a third vector, and a fourth vector, the second number being twice the first number, and each of the third and fourth vectors including first and second portions and generating each of the first and second vectors based on the third and fourth vectors. In the analogous art of data manipulation, Li teaches: in which a first line of data values of the input data values is placed in a first vector, and a second line of data values of the input data values is placed in the first vector [Figure 2, Block mapped DRAM, “instead of such a linear translated mapping, a block-by-block mapping geared to video processing should be used, i.e., each row stores the pixel data of one or more blocks” Page 305, Col. 1, Lines 10-12, teaches the mapping of multiple input lines in a relevant block of input data into a single row]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern and Li before him before the effective filing date of the claimed invention to incorporate the data mapping method as taught by Li into the input formatting as disclosed by Redfern, to allow for an improvement of energy efficiency with respect to data structure and access [Li, Page 305, Col. 1].
However, Redfern and Li do not explicitly disclose: generate a first vector in which a first line of data values of the input data values is placed in a first portion of a first vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the first vector; generate a second vector in which the first number of data values from the first line of data values of the input data values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; receive a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; receive a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on the second line of the set of resized data; multiply the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values for the second line of the set of resized data and output the set of resized data; wherein generating the first and second vectors includes a third vector, and a fourth vector, the second number being twice the first number, and each of the third and fourth vectors including first and second portions and generating each of the first and second vectors based on the third and fourth vectors. In the analogous art of data interpolation, Garg teaches: wherein the second number is twice the first number and wherein a vector includes two portions [Section IV, Subsection A.
Register Bank, teaches the method of using a bank (the vector) that has two portions where each portion can hold equal halves of the data values (the first number)] place a first number of data values in a first portion [Figure 7, Register Bank, Reg 0-3] of a vector; and place the first number of data values in a second portion [Figure 7, Register Bank, Reg 4-7] of the vector; ["The register bank receives the input pixels serially and provides eight pixels in(i-1,j), in(i,j), in(i+1,j), in(i+2,j), in(i-1,j+1), in(i,j+1), in(i+1,j+1), in(i+2,j+1)" Section IV, Subsection A. Register Bank, Teaches the method of loading in a portion (a first number) one input row in the first portion, then loading in a portion (a first number) of the next input row into the second portion] It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li and Garg before him before the effective filing date of the claimed invention to incorporate the register structure format as taught by Garg into the input formatting as disclosed by the combination of Redfern and Li, to allow for an improvement of memory requirements and energy efficiency with respect to scaling using bilinear interpolation [Garg, Section IV. Proposed Energy Efficient Image Scalar Architecture, Subsections A, and D]. 
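A minimal sketch of the two-portion vector reading applied to Garg may help; the row values and the portion width of four are hypothetical.

```python
# Two hypothetical input rows; the "first number" here is 4,
# i.e. half the width of an 8-element vector.
row0 = [1, 2, 3, 4]
row1 = [5, 6, 7, 8]

# First portion holds values from row 0, second portion values from
# row 1: one vector carrying both input lines that bilinear
# interpolation between the lines would need.
vec = row0 + row1
print(vec)  # [1, 2, 3, 4, 5, 6, 7, 8]
```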
The combination of Redfern, Li, and Garg discloses: a vector in which a first line of data values of the input data values is placed in a first portion of a vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the vector. However, Redfern, Li, and Garg do not explicitly disclose: generate a first vector in which a first line of data values of the input data values is placed in a first portion of a first vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the first vector; generate a second vector in which the first number of data values from the first line of data values of the input data values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; receive a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; receive a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on the second line of the set of resized data; multiply the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values for the second line of the set of resized data and output the set of resized data. wherein generating the first and second vectors includes a third, and a fourth vector and generating each of the first and second vectors based on the third and fourth vectors. In the analogous art of data interpolation, Hong teaches: a matrix of weights [Bi-linear interpolation filters (B), Col.
5, Line 6, Figure 2, 4x9 Matrix], wherein each weight of the matrix of weights corresponds to an amount of weight to apply to a data value for a point on a set of resized data ["FIG. 2 illustrates the interpolation filter coefficient for getting a twice enlarged image according to the embodiment of the present invention. In other words, the interpolation filter coefficient for interpolating the twice enlarged image of FIG. 1 is depicted in FIG. 2." Col. 4, Lines 5-9, where each of the weights in the matrix corresponds to the scaling factor of each given input for a given output]; multiply the vector and the matrix of weights to determine data values for the set of resized data ["the low resolution image... is z, high resolution image gotten by the bi-linear interpolation method is g...each image can be described as below. g=Bz=Hf+n" Cols. 4-5, Lines 66-67 and 1-6, respectively, where g is the output vector or scaled vector, z is the input vector (first vector) and B is the bi-linear interpolation filters or (the first) weight matrix as shown in Figure 2]; It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg and Hong before him before the effective filing date of the claimed invention to incorporate the interpolation method using matrix multiplication as taught by Hong into the device as disclosed by Redfern, to allow for an improvement of real-time processing and computational complexity during interpolation [Hong, Col. 2, Lines 45-52].
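The g = Bz mapping can be illustrated with a small Python sketch; the packing of two lines into z follows the claim language, and the 0.5/0.5 weights are illustrative only, not Hong's actual filter coefficients.

```python
# z packs 4 pixels of line 0 and 4 pixels of line 1 (values hypothetical).
z = [10.0, 20.0, 30.0, 40.0,    # first portion: line 0
     50.0, 60.0, 70.0, 80.0]    # second portion: line 1

# B (8x4) averages corresponding pixels of the two portions, yielding an
# output line halfway between the input lines; the 0.5/0.5 weights are
# illustrative, as real kernels depend on the output line's position.
B = [[0.0] * 4 for _ in range(8)]
for j in range(4):
    B[j][j] = 0.5        # weight applied to the line-0 pixel
    B[j + 4][j] = 0.5    # weight applied to the line-1 pixel

# g = Bz as a vector-matrix product.
g = [sum(z[i] * B[i][j] for i in range(8)) for j in range(4)]
print(g)  # [30.0, 40.0, 50.0, 60.0]
```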
The combination of Redfern, Li, Garg and Hong discloses receive a first matrix of weights, wherein each weight of the first matrix of weights corresponds to an amount of weight to apply to a data value for a point on a first line of a set of resized data; receive a second matrix of weights, wherein each weight of the second matrix of weights corresponds to an amount of weight to apply to a data value for a point on the second line of the set of resized data; multiply the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values for the second line of the set of resized data and output the set of resized data. However, Redfern, Li, Garg, and Hong do not explicitly disclose: generate a first vector in which a first line of data values of the input data values is placed in a first portion of a first vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the first vector; generate a second vector in which the first number of data values from the first line of data values of the input data values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; multiply the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values for the second line of the set of resized data and output the set of resized data. wherein generating the first and second vectors includes a third, and a fourth vector and generating each of the first and second vectors based on the third and fourth vectors.
In the analogous art of permute logic and data manipulation, Eichenberger teaches: wherein generating the vector [Fig.6, 650] includes a third vector [Fig.6, 660] and a fourth vector [Fig.6, 670], generating the vector based on the third and fourth vectors. [“The vector permute execution unit 448 operates to provide a mechanism for rearranging the data elements in the slots of a vector register. That is, based on one or more input vectors, and a control input, the vector permute execution unit 448 can rearrange the data elements of the one or more vectors such that they are in different slots of a resulting vector register.” Col. 8, Lines 55-61]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, and Eichenberger before him before the effective filing date of the claimed invention to incorporate the vector manipulation as taught by Eichenberger into the input formatting as disclosed by the combination of Redfern, Li, and Garg, to allow for an improvement of permuting logic on basis of dependency and performance [Eichenberger, Col. 6, Lines 1-12, Col. 9 Lines 25-37, and Col 12, Lines 51-64]. 
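A minimal sketch of the vector-permute behavior relied on from Eichenberger follows; the index convention and control pattern here are hypothetical.

```python
# Two input vectors and a control (permute) pattern: indices 0-3 select
# slots of a, indices 4-7 select slots of b (a hypothetical convention).
a = [0, 1, 2, 3]
b = [4, 5, 6, 7]
pool = a + b

# Control vector that interleaves the two inputs into one result vector,
# i.e. rearranges data elements into different slots of a resulting
# vector register based on the control input.
control = [0, 4, 1, 5, 2, 6, 3, 7]
result = [pool[idx] for idx in control]
print(result)  # [0, 4, 1, 5, 2, 6, 3, 7]
```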
The combination of Redfern, Li, Garg, Hong, and Eichenberger discloses generate a first vector in which a first line of data values of the input data values is placed in a first portion of a first vector, and the first number of data values from a second line of data values of the input data values is placed in a second portion of the first vector; generate a second vector in which the first number of data values from the first line of data values of the input data values is placed in a first portion of the second vector, and the first number of data values from the second line of data values of the input data values is placed in a second portion of the second vector; multiply the first vector and the first matrix of weights to determine data values for the first line of the set of resized data; multiply the second vector and the second matrix of weights to determine values for the second line of the set of resized data and output the set of resized data. wherein generating the first and second vectors includes a third, and a fourth vector and generating each of the first and second vectors based on the third and fourth vectors. Regarding claim 10, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 9 above. Redfern further discloses the one or more memories and the one or more MMAs are integrated on a single chip [Figure 1]. Regarding claim 11, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 9 above. Garg further teaches the first number of data values corresponds to half of a width of the first vector [Section IV, Subsection A. Register Bank, teaches the method of loading a portion (a first number) which is one of the two portions shown]. Method claims 1 and 3 correspond to device claims 9 and 11, respectively. A mere change in statutory class is obvious.
Method claims 1 and 3 are therefore rejected for the reasons given above for device claims 9 and 11. Regarding claim 2, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 1 above. Redfern discloses that multiplying the first vector and the first matrix of weights is performed by a matrix multiplier accelerator of the processing circuitry as a matrix multiplication operation [MMA 104]. Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Redfern, Li, Garg, Hong, and Eichenberger, and further in view of Aho et al. (NPL “Block-Level Parallel Processing for Scaling Evenly Divisible Images”), hereinafter Aho. Regarding claim 12, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 9 above. Redfern and Li do not explicitly disclose the additional limitations of claim 12. In the analogous art of data interpolation, Garg teaches: a vector includes two portions [Section IV, Subsection A. Register Bank, teaches the method of using a bank (the vector) that has two portions where each portion can hold equal halves of the data values (the first number)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li and Garg before him before the effective filing date of the claimed invention to incorporate the register structure format as taught by Garg into the input formatting as disclosed by the combination of Redfern and Li, to allow for an improvement of memory requirements and energy efficiency with respect to scaling using bilinear interpolation [Garg, Section IV. Proposed Energy Efficient Image Scalar Architecture, Subsections A, and D].
However, Redfern, Li, Garg, and Hong do not explicitly disclose replace a first data value of the second portion of the third vector with a last data value of the first portion of the third vector to generate a fifth vector having first and second portions; and replace a first data value of the second portion of the fourth vector with a last data value of the first portion of the fourth vector to generate a sixth vector having first and second portions. In the analogous art of permute logic and data manipulation, Eichenberger teaches: replace a data value of the vector with another data value of the vector to generate another vector [“The vector permute execution unit 448 operates to provide a mechanism for rearranging the data elements in the slots of a vector register. That is, based on one or more input vectors, and a control input, the vector permute execution unit 448 can rearrange the data elements of the one or more vectors such that they are in different slots of a resulting vector register.” Col. 8, Lines 55-61]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, and Eichenberger before him before the effective filing date of the claimed invention to incorporate the vector manipulation as taught by Eichenberger into the input formatting as disclosed by Redfern, Li, and Garg, to allow for an improvement of permuting logic on basis of dependency and performance [Eichenberger, Col. 6, Lines 1-12, Col. 9, Lines 25-37, and Col. 12, Lines 51-64].
However, Redfern, Li, Garg, Hong, and Eichenberger do not explicitly disclose replace a first data value of the second portion of the third vector with a last data value of the first portion of the third vector to generate a fifth vector having first and second portions; and replace a first data value of the second portion of the fourth vector with a last data value of the first portion of the fourth vector to generate a sixth vector having first and second portions. Aho discloses: wherein the permute pattern indicates an overlap pattern including neighboring pixels [“An image can be divided row-wise, column-wise, or when extensive parallelism is required utilizing both… When image blocks are scaled separately, the pixels close to the block boundary may need the neighboring block pixels for interpolation. This overlapping amount depends on the utilized interpolation algorithm, image division direction (row- or column-wise), image sizes (original and scaled), and whether scaling up or down is used.” III. Block Boundaries, page 2718, teaches, for subsets of the input image, to use additional neighboring pixels for interpolation i.e. overlapping]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, Eichenberger and Aho before him before the effective filing date of the claimed invention to incorporate the image blocking method taught by Aho into the input formatter disclosed by the combination of Redfern, Li, Garg, and Eichenberger to allow for image interpolation parallelization and improvements with throughput, especially with larger input or more complex interpolation algorithms [Aho: I. Introduction and VII. Results: A. Hardware Complexity].
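For illustration, the third-to-fifth vector generation recited in claim 12 can be sketched as a single-element replacement; the vector width and values below are hypothetical.

```python
# An 8-element vector whose two 4-element portions hold pixels from two
# input lines (values hypothetical).
v3 = [1, 2, 3, 4, 5, 6, 7, 8]

# Fifth vector: copy v3, then overwrite the first value of the second
# portion with the last value of the first portion, creating the
# one-pixel overlap needed to interpolate across the portion boundary.
half = len(v3) // 2
v5 = list(v3)
v5[half] = v3[half - 1]
print(v5)  # [1, 2, 3, 4, 4, 6, 7, 8]
```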
The combination of Redfern, Li, Garg, Hong, Eichenberger and Aho discloses replace a first data value of the second portion of the third vector with a last data value of the first portion of the third vector to generate a fifth vector having first and second portions; and replace a first data value of the second portion of the fourth vector with a last data value of the first portion of the fourth vector to generate a sixth vector having first and second portions. Method claim 4 corresponds to device claim 12. A mere change in statutory class is obvious. Method claim 4 is therefore rejected for the reasons given above for device claim 12. Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Redfern, Li, Garg, Hong, and Eichenberger, in view of Getreuer (NPL “Linear Methods for Image Interpolation”), and further in view of Joshi et al. (US 2011/0317764 A1), hereinafter Joshi. Regarding claim 14, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 9 above. Redfern discloses wherein the second line of data values is after the first line of data values ["the first vector from the streaming engine 108 will contain the first row, row 0, of the A matrix, the second vector from the streaming engine will contain the second row of the A matrix" Par. 53], and wherein the matrix multiplier accelerator circuitry is further configured to: use input data values as a vector and a matrix to determine data values for the third line of the set of resized data and use input data values as a vector and a matrix to determine data values for the fourth line of the set of resized data ["The MMA 104 includes sufficient memory to store two 32x32 multiplicand buffers 144 of 16-bit elements for storing two B matrices" Par.
21, teaches loading in at least two matrices; "if a multiplier matrix A is an MxK matrix and a multiplicand matrix B is a KxN matrix, the matrix product of these two matrices is an MxN matrix C" Par. 20, shows taking values from both matrix A and matrix B and computing matrix C which is Matrix A scaled by Matrix B]. However, Redfern does not explicitly disclose: wherein the resizing comprises a four-times resizing, and wherein the matrix multiplier accelerator circuitry is further configured to: use data values from the second line of data values and the first matrix of weights to determine data values for the third line of the set of resized data; and use data values from the second line of data values and the second matrix of weights to determine data values for a fourth line of the set of resized data. In the analogous art of data manipulation, Li teaches: the first and second lines of data values are placed into one vector [Figure 2, Block mapped DRAM, “instead of such a linear translated mapping, a block-by-block mapping geared to video processing should be used, i.e., each row stores the pixel data of one or more blocks” Page 305, Col. 1, Lines 10-12, teaches the mapping of multiple input lines in a relevant block of input data into a single row]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern and Li before him before the effective filing date of the claimed invention to incorporate the data mapping method as taught by Li into the input formatting as disclosed by Redfern, to allow for an improvement of energy efficiency with respect to data structure and access [Li, Page 305, Col. 1].
However, Redfern and Li do not explicitly disclose: wherein the resizing comprises a four-times resizing, and wherein the matrix multiplier accelerator circuitry is further configured to: use data values from the second line of data values and the first matrix of weights to determine data values for the third line of the set of resized data; and use data values from the second line of data values and the second matrix of weights to determine data values for a fourth line of the set of resized data. In the analogous art of data interpolation, Garg further teaches: bilinear interpolation using two portions and using two lines of inputs [Figure 1, teaches bilinear interpolation which are based on distances from the original pixel to the target pixels; Figure 7, Register Bank, Reg 0-3 and Figure 7, Register Bank, Reg 4-7; "The register bank receives the input pixels serially and provides eight pixels in(i-1,j), in(i,j), in(i+1,j), in(i+2,j), in(i-1,j+1), in(i,j+1), in(i+1,j+1), in(i+2,j+1)" Section IV, Subsection A. Register Bank, teaches the method of loading in a portion (a first number) one input row in the first portion, then loading in a portion (a first number) of the next input row into the second portion]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li and Garg before him before the effective filing date of the claimed invention to incorporate the register structure format as taught by Garg into the input formatting as disclosed by the combination of Redfern and Li, to allow for an improvement of memory requirements and energy efficiency with respect to scaling using bilinear interpolation [Garg, Section IV. Proposed Energy Efficient Image Scalar Architecture, Subsections A, and D]. The combination of Redfern, Li and Garg discloses the use of both the first and second lines of data values to compute the set of resized data.
However, Redfern, Li, and Garg do not explicitly disclose: wherein the resizing comprises a four-times resizing, and wherein the matrix multiplier accelerator circuitry is further configured to: use data values from the second line of data values and the first matrix of weights to determine data values for the third line of the set of resized data; and use data values from the second line of data values and the second matrix of weights to determine data values for a fourth line of the set of resized data. In the analogous art of data interpolation, Hong teaches: a matrix of weights [Bi-linear interpolation filters (B), Col. 5, Line 6, Figure 2, 4x9 Matrix; Col. 4, Lines 5-9, where each of the weights in the matrix corresponds to the scaling factor of each given input for a given output; "g=Bz=Hf+n" Cols. 4-5, Lines 66-67 and 1-6, respectively]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg and Hong before him before the effective filing date of the claimed invention to incorporate the interpolation method using matrix multiplication as taught by Hong into the device as disclosed by Redfern, to allow for an improvement of real-time processing and computational complexity during interpolation [Hong, Col. 2, Lines 45-52].
However, Redfern, Li, Garg, Hong, and Eichenberger do not explicitly disclose: wherein the resizing comprises a four-times resizing, and wherein the matrix multiplier accelerator circuitry is further configured to: use data values from the second line of data values and the first matrix of weights to determine data values for the third line of the set of resized data; and use data values from the second line of data values and the second matrix of weights to determine data values for a fourth line of the set of resized data. In the analogous art of Image Interpolation and Boundary Handling, Getreuer teaches: wherein the resizing comprises a four-times resizing [Figure 18 shows a 4-line scaling, wherein there are four interpolated lines of data between two lines of input data]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, Eichenberger, and Getreuer before him before the effective filing date of the claimed invention to incorporate the interpolation methodology taught by Getreuer into the bilinear interpolation methodology disclosed by the combination of Redfern, Li, and Garg, to allow for symmetric or centered interpolation on the data set [Getreuer: 3. Interpolation Kernels, and 14. Methodology].
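A rough sketch of weight reuse across line pairs, in the spirit of the combination applied to claim 14; the line values and blend fractions below are hypothetical, not taken from any cited reference.

```python
# Three hypothetical input lines of two pixels each.
lines = [[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]]

# Two reusable blend-weight pairs (fractions illustrative). The same
# weights that produce output lines 1-2 from input lines 0-1 are reused
# with input lines 1-2 to produce output lines 3-4, i.e. data values
# from the second input line feed the same weight matrices again.
weights = [(0.75, 0.25), (0.25, 0.75)]

out = []
for top, bottom in zip(lines, lines[1:]):
    for wt, wb in weights:
        out.append([wt * t + wb * b for t, b in zip(top, bottom)])
print(out)  # four output lines from three input lines
```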
However, Redfern, Li, Garg, Hong, and Getreuer do not disclose: wherein the matrix multiplier accelerator circuitry is further configured to: use data values from the second line of data values and the first matrix of weights to determine data values for the third line of the set of resized data; and use data values from the second line of data values and the second matrix of weights to determine data values for a fourth line of the set of resized data. In the analogous art of Interpolation filtering, Joshi teaches reusing interpolation filters (matrix of weights) for different interpolation data points [“For some sub-pixel locations, the video decoder may be able to reuse the coefficients for the interpolation filter at a first sub-pixel location to reconstruct an interpolation filter at a different sub-pixel location”, Par. 66]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, Eichenberger, Getreuer and Joshi before him before the effective filing date of the claimed invention to modify the weight matrix as disclosed by the combination of Redfern and Hong, to include symmetric based filter coefficients, in order to reduce the number of filter coefficients and improve bandwidth [Joshi: Paragraph 66]. The combination of Redfern, Li, Garg, Hong, Eichenberger, Getreuer and Joshi discloses wherein the matrix multiplier accelerator circuitry is further configured to: use data values from the second line of data values and the first matrix of weights to determine data values for the third line of the set of resized data; and use data values from the second line of data values and the second matrix of weights to determine data values for a fourth line of the set of resized data. Method claim 6 corresponds to device claim 14. A mere change in statutory class is obvious. Method claim 6 is therefore rejected for the reasons given above for device claim 14.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Redfern, Li, Garg, Hong, Eichenberger, and further in view of Getreuer. Regarding claim 15, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 9 above. Redfern further discloses adding a pad to the input data values and modifying the first vector based on the locations for values of the first vector indicated by the permute pattern [“input formatting 122 include zero padding, even/odd vector generation, value copying, known matrix creation and linked operations” Par. 32]. However, Redfern, Li, Garg, and Hong do not explicitly disclose: add a top pad to the input data values wherein values for the top pad are based on the input data values; add a bottom pad to the input data values wherein values for the bottom pad are based on the input data values; determine a right pad based on a permute pattern, the permute pattern indicating locations for values of the first vector; modify the vector based on locations for values of the vector indicated by the permute pattern. In the analogous art of permute logic and data manipulation, Eichenberger teaches: modify the vector based on locations for values of the vector indicated by the permute pattern [“the control vector represents the permutation pattern for rearranging the data from one or more input vectors to generate an output data vector in an output vector register” Col. 10, Lines 40-43]. It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, and Eichenberger before him before the effective filing date of the claimed invention to incorporate the vector manipulation as taught by Eichenberger into the input formatting as disclosed by Redfern, Li, and Garg, to allow for an improvement of permuting logic on basis of dependency and performance [Eichenberger, Col. 6, Lines 1-12, Col. 9, Lines 25-37, and Col. 12, Lines 51-64].
However, Redfern, Li, Garg, Hong, and Eichenberger do not explicitly disclose: add a top pad to the input data values, wherein values for the top pad are based on the input data values; add a bottom pad to the input data values, wherein values for the bottom pad are based on the input data values; and determine a right pad based on a permute pattern, the permute pattern indicating locations for values of the first vector.

In the analogous art of image interpolation and boundary handling, Getreuer teaches padding the boundaries based on the adjacent input data values and determining the padding based on a permute pattern, the permute pattern indicating locations for values of the first vector [“The usual approach is to extrapolate (pad) the input image. Several methods for doing this are Constant Extension…”, 14.1 Boundary Handling, Page 250, which teaches which values to repeat (permute pattern) for boundaries of the image, including all sides of the image, wherein constant extension is adjacent padding].

It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, Eichenberger, and Getreuer before him before the effective filing date of the claimed invention, to incorporate the boundary handling methodology taught by Getreuer into the input formatting disclosed by Redfern, to allow for sampling input data outside the data set for interpolating data near the image boundaries and to allow for symmetric or centered interpolation on the data set [Getreuer: 3. Interpolation Kernels, and 14. Methodology].
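The constant-extension boundary handling attributed to Getreuer can be illustrated with a short NumPy sketch. This is a generic demonstration of edge-replication padding (a top pad from the top-most row, a bottom pad from the bottom-most row, a right pad from the right-most column), assuming nothing about the claimed hardware; the function name is hypothetical.

```python
import numpy as np

def pad_constant_extension(img, top, bottom, left, right):
    # np.pad with mode="edge" repeats the nearest border value, which is
    # the "constant extension" boundary handling described in the citation.
    return np.pad(img, ((top, bottom), (left, right)), mode="edge")

img = np.array([[1, 2],
                [3, 4]])
padded = pad_constant_extension(img, top=1, bottom=1, left=0, right=1)
# Top pad repeats the top-most row [1, 2]; bottom pad repeats the
# bottom-most row [3, 4]; right pad repeats the right-most column.
```

Replicating the adjacent border values lets an interpolation kernel sample "outside" the image near its boundaries without reading undefined data.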
The combination of Redfern, Li, Garg, Hong, Eichenberger, and Getreuer discloses: add a top pad to the input data values, wherein values for the top pad are based on the input data values; add a bottom pad to the input data values, wherein values for the bottom pad are based on the input data values; and determine a right pad based on a permute pattern, the permute pattern indicating locations for values of the first vector.

Regarding claim 7, Redfern, Li, Garg, Hong, and Eichenberger disclose the invention substantially as claimed. See the discussion of claim 1 above. Redfern further discloses adding a pad to the input data values and modifying the first vector based on the locations for values of the first vector indicated by the permute pattern [“input formatting 122 include zero padding, even/odd vector generation, value copying, known matrix creation and linked operations”, Par. 32].

However, Redfern, Li, Garg, Hong, and Eichenberger do not explicitly disclose: adding a top pad to the input data values, wherein values for the top pad are based on a top-most line of data values of the input data values; adding a bottom pad to the input data values, wherein values for the bottom pad are based on a bottom-most line of data values of the input data values; and determining a right pad based on a permute pattern, the permute pattern indicating locations for values of the first vector.

In the analogous art of image interpolation and boundary handling, Getreuer teaches padding the boundaries based on the adjacent input data values and determining the padding based on a permute pattern, the permute pattern indicating locations for values of the first vector [“The usual approach is to extrapolate (pad) the input image. Several methods for doing this are Constant Extension…”, 14.1 Boundary Handling, Page 250, which teaches which values to repeat (permute pattern) for boundaries of the image, including all sides of the image, wherein constant extension is adjacent padding].
It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, Eichenberger, and Getreuer before him before the effective filing date of the claimed invention, to incorporate the boundary handling methodology taught by Getreuer into the input formatting disclosed by Redfern, to allow for sampling input data outside the data set for interpolating data near the image boundaries and to allow for symmetric or centered interpolation on the data set [Getreuer: 3. Interpolation Kernels, and 14. Methodology].

The combination of Redfern, Li, Garg, Hong, Eichenberger, and Getreuer discloses: adding a top pad to the input data values, wherein values for the top pad are based on a top-most line of data values of the input data values; adding a bottom pad to the input data values, wherein values for the bottom pad are based on a bottom-most line of data values of the input data values; and determining a right pad based on a permute pattern, the permute pattern indicating locations for values of the first vector.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Redfern, Li, Garg, Hong, Eichenberger, and Getreuer, and further in view of Aho.

Regarding claim 16, Redfern, Li, Garg, Hong, Eichenberger, and Getreuer disclose the invention substantially as claimed. See the discussion of claim 15 above. Redfern, Li, Garg, Hong, and Getreuer do not disclose the additional limitations of claim 16. More specifically, Redfern, Li, Garg, Hong, and Getreuer do not explicitly disclose wherein the permute pattern indicates an overlap pattern for a value of the first vector.
In the analogous art of parallel processing for scaling images, Aho discloses wherein the permute pattern indicates an overlap pattern for a value of the first vector [“An image can be divided row-wise, column-wise, or when extensive parallelism is required utilizing both… When image blocks are scaled separately, the pixels close to the block boundary may need the neighboring block pixels for interpolation. This overlapping amount depends on the utilized interpolation algorithm, image division direction (row- or column-wise), image sizes (original and scaled), and whether scaling up or down is used.”, III. Block Boundaries, page 2718, which teaches, for subsets of the input image, using additional neighboring pixels for interpolation, i.e., overlapping].

It would have been obvious to one of ordinary skill in the art, having the teachings of Redfern, Li, Garg, Hong, Eichenberger, Getreuer, and Aho before him before the effective filing date of the claimed invention, to incorporate the image blocking method taught by Aho into the input formatter disclosed by the combination of Redfern and Getreuer, to allow for image interpolation parallelization and improvements in throughput, especially with larger inputs or more complex interpolation algorithms [Aho: I. Introduction, and VII. Results: A. Hardware Complexity].

Method claim 8 corresponds to device claim 16. A mere change in statutory class is obvious. Method claim 8 is therefore rejected for the reasons given above for device claim 16.

Allowable Subject Matter

Claims 5 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter.

In regards to claims 5 and 13, consider the limitation “replace the data values in the second portion of the fifth vector with the data values in the first portion of the sixth vector to generate the first vector, and replace the data values in the first portion of the sixth vector with the data values in the second portion of the fifth vector to generate the second vector”. The net result of this replacement of information appears to be the discarding of information relevant to the operations of the interpolation method performed by matrix multiplication. The prior art of record does not teach or suggest this discarding of relevant information in combination as claimed in claim 13. None of the references, individually or in combination, explicitly teaches “…replace the data values in the second portion of the fifth vector with the data values in the first portion of the sixth vector to generate the first vector, and replace the data values in the first portion of the sixth vector with the data values in the second portion of the fifth vector to generate the second vector”, in combination with the remaining limitations as required by claim 12.

In regards to claim 21 (and dependent claims 18 and 20), consider the limitation “replace the data values in the second portion of the third vector with the data values in the first portion of the fourth vector, and replace the data values in the first portion of the fourth vector with the data values in the second portion of the of the third vector to generate fifth and sixth vectors;”. The net result of this replacement of information appears to be the discarding of information relevant to the operations of the interpolation method performed by matrix multiplication. The prior art of record does not teach or suggest this discarding of relevant information.
None of the references, individually or in combination, explicitly teaches “…replace the data values in the second portion of the third vector with the data values in the first portion of the fourth vector, and replace the data values in the first portion of the fourth vector with the data values in the second portion of the of the third vector to generate fifth and sixth vectors…”, in combination with the remaining limitations as required.

Response to Arguments

Applicant's arguments, see page 12, filed 12/16/2025, with respect to the objections to the drawings have been fully considered but are not persuasive. The limitations are directed to replacing the data, which causes the data to be duplicated between both vectors, and this duplication is not shown in Figure 5. The examiner respectfully disagrees with the applicant’s assertion to the contrary for at least the reasons given above, and the objection is maintained. See Objections to Drawings.

Applicant’s arguments, see page 12, filed 12/16/2025, with respect to the objections to the claims have been fully considered and are persuasive. The objections to the claims in the Office Action mailed 07/16/2025 have been withdrawn.

Applicant's arguments, see pages 12-13, filed 12/16/2025, with respect to the rejections under 35 U.S.C. 101 have been fully considered but are not persuasive. Regarding claim 1 (and claims 9 and 21), applicant argues that the claims as a whole integrate the claims into a practical application of improving resizing of input data, with multiplication operations executed in parallel. However, the applicant’s arguments are directed to unclaimed features and do not address the reasoning given in the prior office action. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
The examiner respectfully disagrees with the applicant’s assertion to the contrary for at least the reasons given above.

Applicant's arguments, see page 13, filed 12/16/2025, with respect to the rejections under 35 U.S.C. 103 are directed to the new features added via amendment on 12/16/2025. Applicant’s arguments with respect to the references applied in the last Office action are persuasive. However, upon further consideration, a new ground of rejection is made in view of Eichenberger.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kenny K. Bui, whose telephone number is (571) 270-0604. The examiner can normally be reached 8:00 am to 3:00 pm on Monday, and 8:00 am to 4:00 pm Tuesday to Friday, ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew T. Caldwell, can be reached at (571) 272-3702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/KENNY K. BUI/
Patent Examiner, Art Unit 2182
(571) 270-0604

/ANDREW CALDWELL/
Supervisory Patent Examiner, Art Unit 2182

Prosecution Timeline

Sep 30, 2021: Application Filed
Jan 24, 2025: Non-Final Rejection (§101, §103)
Apr 23, 2025: Response Filed
Jul 11, 2025: Final Rejection (§101, §103)
Dec 16, 2025: Request for Continued Examination
Dec 31, 2025: Response after Non-Final Action
Feb 13, 2026: Non-Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12425047
METHODS AND APPARATUS TO PERFORM WEIGHT AND ACTIVATION COMPRESSION AND DECOMPRESSION
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
85%
With Interview (+25.0%)
4y 0m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
