Prosecution Insights
Last updated: April 19, 2026
Application No. 16/422,754

VECTOR FLOATING-POINT SCALE
Non-Final OA (§103, §112)

Filed: May 24, 2019
Examiner: VICARY, KEITH E
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Texas Instruments Incorporated
OA Round: 13 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 13-14
To Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 393 granted / 683 resolved; +2.5% vs TC avg)
Interview Lift: +41.2% (strong; resolved cases with an interview vs without)
Avg Prosecution: 3y 8m (typical timeline; 41 currently pending)
Total Applications: 724 (career history, across all art units)

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 34.0% (-6.0% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 37.6% (-2.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 683 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 4, 2025, has been entered.

Claims 23-38 are pending in this office action and presented for examination. Claims 1-3, 5-8, 12-15, and 17-22 are newly cancelled, and claims 23-38 are newly added, by the response received November 4, 2025.

Claim Objections

Claims 31-38 are objected to because of the following informalities. Appropriate correction is required.

In claim 31, line 22, an "and" should be added to precede the last recited step ("execute the vector floating-point scale instruction…") that the processor is configured to perform. (Note that further recited steps appear to be sub-steps of the aforementioned step.) Claims 32-38 are objected to for failing to alleviate the objection of claim 31 above.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 26, 28, and 31-38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 26 recites the limitation "16-bits" in lines 1-2. However, it is indefinite as to what is being conveyed. For example, while "16 bits" is a noun and "16-bit" is an adjective, it is unclear as to what "16-bits" is. For the purposes of this office action, Examiner is interpreting this limitation as "16 bits".

Claim 28 recites the limitation "a vector floating-point scale instruction" in lines 1-2. However, it is indefinite as to whether this vector floating-point scale instruction is the same as, or different from, "a vector floating-point scale instruction" as recited in claim 23, line 2. If the same, antecedent basis language should be used for clarity. For the purposes of this office action, Examiner is taking the former possibility to be the case.

Claim 28 recites the limitation "the vector floating-point scale instruction" in line 2. However, it is indefinite as to whether this limitation has antecedent basis back to "a vector floating-point scale instruction" as recited in claim 23, line 2, or "a vector floating-point scale instruction" as recited in claim 28, lines 1-2.

Claim 31 recites the limitation "A processor comprising: … wherein the processor is configured to: receive…" in lines 1-16. However, there is no indication about how the recited function (receive…) is performed, as the recited function does not follow from the structure recited in the claim, i.e. the plurality of functional units, the plurality of data paths, the first source register, the second source register, or the destination register. As such, it is unclear whether the function requires some other structure or is simply a result of operating the machine in a certain manner. Specifying a particular structure that performs the recited function, provided such an amendment is supported by the original disclosure, would inform one of ordinary skill in the art of the metes and bounds of the functional limitation.

Claim 31 recites the limitation "the corresponding lane of the second source register" in lines 26-27. However, the antecedent basis for this limitation is indefinite. For example, it is indefinite as to which corresponding lane of the second source register, of multiple corresponding lanes of the second source register previously set forth via the limitation "the second source register includes a plurality of lanes respectively corresponding to the plurality of lanes of the first source register" in claim 31, lines 7-9, is providing the antecedent basis for the aforementioned limitation of claim 31, lines 26-27. Note that this limitation is also recited in claim 31, lines 29-30.

Claim 31 recites the limitation "the lane of the first source register" in line 28. However, the antecedent basis for this limitation is indefinite. For example, it is indefinite as to which lane of the first source register, of multiple lanes of the first source register previously set forth via the limitation "the first source register includes a plurality of lanes" in claim 31, lines 4-5, is providing the antecedent basis for the aforementioned limitation of claim 31, line 28.

Claim 31 recites the limitation "the corresponding lane of the destination register" in lines 38-39. However, the antecedent basis for this limitation is indefinite. For example, it is indefinite as to which corresponding lane of the destination register, of multiple corresponding lanes of the destination register previously set forth via the limitation "the destination register includes a plurality of lanes respectively corresponding to the plurality of lanes of the first source register and respectively corresponding to the plurality of lanes of the second source register" in claim 31, lines 12-14, is providing the antecedent basis for the aforementioned limitation of claim 31, lines 38-39.

Claims 32-38 are rejected for failing to alleviate the rejections of claim 31 above.

Claim 34 recites the limitation "16-bits" in lines 1-2. However, it is indefinite as to what is being conveyed. For example, while "16 bits" is a noun and "16-bit" is an adjective, it is unclear as to what "16-bits" is. For the purposes of this office action, Examiner is interpreting this limitation as "16 bits".
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 23-38 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson et al. (Anderson) (US 20150088946) in view of Lutz et al. (Lutz) (US 20060117082 A1), Gupta et al. (Gupta) (US 5058048), Dockser (US 5649174), Zbiciak et al. (Zbiciak) (US 20170168898 A1), and Sankaranarayanan et al. (Sankaranarayanan) (US 20160125263 A1).

Consider claim 23. Anderson discloses a method comprising:

specifying, in a first field ([0105], lines 1-9, "the instruction format may include … a second source specifier 1435 to explicitly specify a second source operand or storage location … By way of example, each of these specifiers may include an address of a register, memory location, or other storage location") of a vector floating-point scale instruction ([0061], lines 3-4, "floating point scaling instruction"; [0047], lines 3-6, "the packed data registers may be used to store packed floating point data associated with the floating point scaling instruction(s) 103"), a first source register that stores source data ([0061], lines 12-14, "the second source includes a corresponding plurality of N packed floating point data elements B.sub.0-B.sub.N, where N is two or more"), in which the first source register includes a plurality of lanes, each of which contains a floating-point value ([0061], lines 12-14) specified by a first set of bits representing a fraction value ([0066], lines 1-2, "FIGS. 4A-E are block diagrams illustrating example embodiments of suitable floating point formats"; FIGs. 4A-E, significand field 411A-E), a second set of bits representing an exponent value (FIGs. 4A-E, exponent field 412A-E), and a sign bit (FIGs. 4A-E, sign bit 413A-E);

specifying, in a second field ([0105], lines 1-9, "the instruction format may include a first source specifier 1434 to explicitly specify a first source operand or storage location") of the vector floating-point scale instruction ([0061], lines 3-4; [0047], lines 3-6), a second source register ([0061], lines 11-12, "the first source includes a plurality of N packed floating point data elements A.sub.0-A.sub.N"), in which the second source register includes a plurality of lanes respectively corresponding to the plurality of lanes of the first source register ([0061], lines 11-12; [0062], lines 1-6, "the floating point scaling instruction also specifies or otherwise indicates a destination (e.g., a destination storage location). A result 322 including one or more corresponding result floating point data elements may be generated and stored in the destination in response to the floating point scaling instruction"; [0062], lines 6-12, "each of the one or more result floating point data elements (C.sub.i) may represent a scaled floating point result data element that includes a corresponding floating point data element of the second source (B.sub.i) multiplied by a base raised to a power of an integer representative of the corresponding floating point data element (A.sub.i) of the first source (int(A.sub.0))"), and each of the plurality of lanes of the second source register stores bits representing a scale value to be applied to the floating-point value of the corresponding lane of the first source register ([0061], lines 11-12; [0062], lines 6-12);

specifying, in a third field ([0105], lines 1-9, "a destination specifier 1436 to explicitly specify a destination operand or storage location where a result is to be stored") of the vector floating-point scale instruction ([0061], lines 3-4; [0047], lines 3-6), a destination register to store scaled source data ([0062], lines 1-6), in which the destination register includes a plurality of lanes respectively corresponding to the plurality of lanes of the first source register and respectively corresponding to the plurality of lanes of the second source register ([0061], lines 11-12; [0062], lines 1-6; [0062], lines 6-12);

specifying, in a fifth field of the vector floating-point scale instruction, a precision of the floating-point values in the plurality of lanes of the first source register ([0135], lines 1-4, "data element width field 1764--its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions)"; [0090], lines 2-7, "operations on single precision and double precision floating point data have been shown due to the widespread use of these formats. However, in other embodiments floating point scaling operations may operate on other floating point formats (e.g., half precision, quadruple precision, extended double precision, etc.)"); and

executing the vector floating-point scale instruction ([0061], lines 3-4; [0047], lines 3-6), wherein executing the vector floating-point scale instruction includes ([0061], lines 2-3, "a floating point scaling operation 324 that may be performed in response to an embodiment of a floating point scaling instruction"), for each lane in the first source register: reading the scale value from the corresponding lane of the second source register ([0061], lines 11-12; [0062], lines 6-12); scaling the floating-point value in the lane of the first source register, using the scale value read and as stored in the corresponding lane of the second source register, to generate a scaled floating-point value ([0062], lines 6-12); and storing the scaled floating-point value in the corresponding lane of the destination register ([0062], lines 1-6).

To any extent to which Anderson is argued to not disclose the second source register storing bits representing a scale value to be applied to a floating-point value, and scaling the floating-point value using the scale value read and as stored in the second source register, Anderson further discloses, as an alternative to a floating point scaling instruction having the first source (which, in Anderson, stores the scale values) store floating point values (which will be converted to integer values during execution), the use of one or more preceding instructions to generate the integer values as input to the floating point scaling instruction ([0063], lines 1-5, "in some embodiments, the floating point scaling instruction/operation may permit the first source to have non-integer floating point values. This may help to avoid one or more preceding instructions to generate the integer values as input to the floating point scaling instruction/operation"; also note [0042], lines 4-7, "specifically, it is often useful to scale the floating point numbers by multiplying each of them by a base raised to an integer power. Commonly, the integer power is derived from another floating point number").

Examiner submits that Anderson teaches a single embodiment entailing the entirety of the above cited subject matter. Nevertheless, to any extent to which the above cited subject matter is considered to be in two separate embodiments and a rationale of obviousness is needed to combine the further subject matter in the immediately preceding paragraph with the first cited subject matter, Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the aforementioned further subject matter of Anderson with the first cited subject matter of Anderson, as this modification merely entails simple substitution of one known element (converting a floating-point value to an integer value during execution of an instruction that performs processing using the integer value) for another (converting a floating-point value to an integer value prior to execution of an instruction that performs processing using the integer value) to obtain predictable results (the conversion occurring prior to execution of the vector floating-point scale instruction, such that an integer value, rather than a to-be-converted floating-point value, is stored in the recited corresponding lane of the second source register, with this integer value used in the recited scaling, such that the second source register stores bits representing a scale value to be applied to a floating-point value, and scaling the floating-point value uses the scale value read and as stored in the second source register), which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143.

Note that other exemplary rationales that may support a conclusion of obviousness listed in MPEP 2143 may also be applicable. For example, Examiner submits that the aforementioned converting before execution of the vector floating-point scale instruction would have been "obvious to try", as such entails choosing from a finite number of identified, predictable solutions (converting before execution of the vector floating-point scale instruction, and converting during execution of the vector floating-point scale instruction), with a reasonable expectation of success (in both cases, the conversion occurs before the integer value is used in the scaling calculation, so the result of the scaling calculation is valid). Examiner further submits that converting before, rather than during, execution of the vector floating-point scale instruction may decrease the time necessary to execute the vector floating-point scale instruction.
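To ground the operation being mapped, the scaling Anderson describes (each result C_i = B_i multiplied by a base raised to int(A_i), with base 2 for binary formats) can be sketched per lane in C. This is an illustrative model only; the function name, the lane-count parameter, and the choice to hold the scale values as already-converted integers (the "preceding instructions" alternative discussed above) are assumptions, not anything recited by the claims or the reference.

```c
#include <math.h>    /* scalbnf: x * FLT_RADIX^n, an exact exponent adjust */
#include <stddef.h>

/* Illustrative lane-wise scale: dst[i] = src[i] * 2^scale[i],
 * modeling Anderson's C_i = B_i * base^int(A_i) with base 2 and
 * integer scale values already stored in the second source.
 * All names and the int scale type are assumptions for illustration. */
static void vfpscale_lanes(float *dst, const float *src,
                           const int *scale, size_t nlanes)
{
    for (size_t i = 0; i < nlanes; i++)
        dst[i] = scalbnf(src[i], scale[i]);
}
```

On IEEE-754 targets FLT_RADIX is 2, so scalbnf adjusts only the exponent unless the result leaves the normal range, which is exactly where the flush-to-zero and normalization adjustments discussed next come in.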
However, Anderson does not explicitly entail the scaling including determining that the scaled floating-point value is one of below a smallest normal floating-point value and above a largest subnormal floating-point value, the scaling further including adjusting the scaled floating-point value by performing a flush-to-zero operation on the scaled floating-point value when it is determined that the scaled floating-point value is below the smallest normal floating-point value and normalizing a fraction field of the scaled floating-point value when it is determined that the scaled floating-point value is above the largest subnormal floating-point value, and storing the adjusted scaled floating-point value. Anderson also does not explicitly entail executing within a single cycle of the processor. Anderson also does not explicitly entail specifying, in a fourth field of the vector floating-point scale instruction, a functional unit, from among multiple functional units of a processor, for execution of the vector floating-point scale instruction. Anderson also does not entail specifying, in a sixth field of the vector floating-point scale instruction, a datapath, among multiple datapaths of the processor, for execution of the vector floating-point scale instruction.

On the other hand, Lutz discloses determining that a floating-point value is one of below a smallest normal floating-point value and above a largest subnormal floating-point value, and further adjusting the floating-point value by performing a flush-to-zero operation on the floating-point value when it is determined that the floating-point value is below the smallest normal floating-point value ([0023], lines 1-15, "the underflow range is divided into two sub-ranges. The first sub-range is immediately below the minimum normal range and is referred to as the 'subnormal' range. In this range a subnormal result is returned or a predetermined result value is returned. As an example the predetermined result value may, dependent on the rounding mode employed, be a minimum normal positive result, a minimum normal negative result or a signed zero value. When the returned result is a signed zero rather than a subnormal result the mode of operation of the processor may be referred to as a 'flush-to-zero' or 'abrupt underflow' mode. In such a mode all values which are below the minimum normal threshold for the target precision and which are not zero result in a signed zero value returned, and optionally a flag signalling this event is set").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lutz with the invention of Anderson in order to result in reduced cost (Lutz, [0024]). Alternatively, this modification merely entails combining prior art elements (the vector floating-point scale instruction and scaled floating-point value of Anderson, and Lutz's explicit teaching of performing a flush-to-zero operation on a floating-point value when it is determined that the floating-point value is below the smallest normal floating-point value, as cited above) according to known methods (Lutz explicitly teaches performing such a flush-to-zero operation, as cited above) to yield predictable results (the invention of Anderson, further entailing performing a flush-to-zero operation on a floating-point value when it is determined that the floating-point value is below the smallest normal floating-point value), which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143.

Note that Lutz's teaching, when applied to the invention of Anderson wherein the floating-point value is a scaled floating-point value which is stored, results in the overall claim limitation of the scaling including determining that the scaled floating-point value is one of below a smallest normal floating-point value and above a largest subnormal floating-point value, the scaling further including adjusting the scaled floating-point value by performing a flush-to-zero operation on the scaled floating-point value when it is determined that the scaled floating-point value is below the smallest normal floating-point value, and storing the adjusted scaled floating-point value.

However, the combination thus far does not disclose normalizing a fraction field of the scaled floating-point value when it is determined that the scaled floating-point value is above the largest subnormal floating-point value. The combination thus far also does not explicitly entail executing within a single cycle of the processor, the recited fourth field (functional unit), or the recited sixth field (datapath).

On the other hand, Gupta discloses normalizing a fraction field of a floating-point value when it is determined that the floating-point value is above the largest subnormal floating-point value (col. 1, lines 58-68, "a normalization scheme for floating point numbers assures that all floating point numbers with the same value have the same representation. One normalization scheme is to ensure that the position of the most significant bit of the mantissa is one. Accordingly, to normalize a denormalized floating point number, the binary point is shifted to the right until the left most digit in the mantissa has a value of one. The exponent is then decreased so that the value of the combination of the mantissa and the exponent remains constant"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Gupta with the combination of Anderson and Lutz in order to assure that all floating point numbers with the same value have the same representation (Gupta, col. 1, lines 58-68).
Alternatively, this modification merely entails combining prior art elements (the vector floating-point scale instruction and scaled floating-point value of the combination of Anderson and Lutz, and Gupta's explicit teaching of normalizing a fraction field of a floating-point value when it is determined that the floating-point value is above the largest subnormal floating-point value, as cited above) according to known methods (Gupta explicitly teaches such normalizing, as cited above) to yield predictable results (the combination of Anderson and Lutz, further entailing normalizing a fraction field of a floating-point value when it is determined that the floating-point value is above the largest subnormal floating-point value), which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143.

Note that Gupta's teaching, when applied to the combination of Anderson and Lutz, wherein the floating-point value is a scaled floating-point value, results in the overall claim limitation of normalizing a fraction field of the scaled floating-point value when it is determined that the scaled floating-point value is above the largest subnormal floating-point value.
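To make the two adjustments concrete, here is a minimal single-precision sketch in C of the behavior the combination is being read onto. It is illustrative only; the function name and the use of C's classification macros are assumptions, not anything taught by Lutz or Gupta.

```c
#include <math.h>    /* fpclassify, FP_SUBNORMAL, copysignf */
#include <float.h>   /* FLT_MIN: the smallest single-precision normal */

/* Illustrative post-scale adjustment (single precision assumed):
 * - below the smallest normal (a nonzero subnormal result): flush to a
 *   signed zero, as in Lutz's flush-to-zero / abrupt underflow mode;
 * - above the largest subnormal (a normal-range result): the IEEE-754
 *   encoding already stores the fraction normalized with an implicit
 *   leading 1, which is the effect of Gupta's shift-the-fraction and
 *   adjust-the-exponent normalization. */
static float adjust_scaled(float c)
{
    if (fpclassify(c) == FP_SUBNORMAL)   /* 0 < |c| < FLT_MIN */
        return copysignf(0.0f, c);       /* flush to zero, sign kept */
    return c;  /* normal, zero, inf, NaN: stored form already normalized */
}
```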
However, the combination thus far does not explicitly entail executing within a single cycle of the processor, specifying, in a fourth field of the vector floating-point scale instruction, a functional unit, from among multiple functional units of a processor, for execution of the vector floating-point scale instruction, or specifying, in a sixth field of the vector floating-point scale instruction, a datapath, among multiple datapaths of the processor, for execution of the vector floating-point scale instruction.

On the other hand, Dockser discloses executing within a single cycle of the processor (col. 1, lines 52-53, "preferably, all or most instructions are executed within a single instruction cycle"). Dockser's teaching minimizes the circuitry required to manage instructions of varying length (Dockser, col. 1, lines 53-54). In addition, Examiner submits that executing an instruction within a single cycle of the processor may result in increased speed relative to executing the instruction within multiple cycles. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dockser with the combination of Anderson, Lutz, and Gupta in order to minimize circuitry and/or increase speed.

However, the combination thus far does not explicitly entail the recited fourth field (functional unit) or the recited sixth field (datapath).

On the other hand, Zbiciak explicitly discloses specifying, in a field of an instruction, a functional unit, from among multiple functional units of a processor, for execution of the instruction ([0102], lines 4-12, "FIG. 13 illustrates an example of the instruction coding 1300 of functional unit instructions used by this invention. Those skilled in the art would realize that other instruction codings are feasible and within the scope of this invention. Each instruction consists of 32 bits and controls the operation of one of the individually controllable functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225, D2 unit 237, L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246)"; [0108], lines 3-6, "specifies the type of instruction and designates appropriate instruction options. This includes unambiguous designation of the functional unit used and operation performed").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Zbiciak with the combination of Anderson, Lutz, Gupta, and Dockser in order to facilitate correct execution of an instruction via designation of the appropriate functional unit that executes that instruction. Alternatively, this modification merely entails combining prior art elements according to known methods (Zbiciak explicitly discloses specifying, in a field of an instruction, a functional unit, as cited above, and Examiner generally submits that it is known for an instruction to be directed to a functional unit that has the capability of executing that instruction) to yield predictable results (the combination of Anderson, Lutz, Gupta, and Dockser, further entailing specifying, in a field of an instruction, a functional unit, from among multiple functional units of a processor, for execution of the instruction), which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143. Note that Zbiciak's teaching, when applied to the combination of Anderson, Lutz, Gupta, and Dockser, wherein the instruction is a vector floating-point scale instruction, results in the overall claim limitation of specifying, in a fourth field of the vector floating-point scale instruction, a functional unit, from among multiple functional units of a processor, for execution of the vector floating-point scale instruction, and executing the vector floating-point scale instruction by the functional unit specified in the fourth field.

However, the combination thus far does not entail specifying, in a sixth field of the vector floating-point scale instruction, a datapath, among multiple datapaths of the processor, for execution of the vector floating-point scale instruction.

On the other hand, Sankaranarayanan discloses specifying, in a field of an instruction, a datapath, among multiple datapaths of a processor, for execution of the instruction ([0096], lines 1-9, "the s bit 1307 (bit 1) designates scalar datapath side A 115 or vector datapath side B 116. If s=0, then scalar datapath side A 115 is selected. This limits the functional unit to L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226 and the corresponding register files illustrated in FIG. 2. Similarly, s=1 selects vector datapath side B 116 limiting the functional unit to L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, P unit 246 and the corresponding register file illustrated in FIG. 2"). Sankaranarayanan's teaching increases system performance (via, for example, parallelism, relative to a system with just one datapath). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Sankaranarayanan with the combination of Anderson, Lutz, Gupta, Dockser, and Zbiciak in order to increase system performance. Alternatively, this modification merely entails combining prior art elements (the prior art elements of Anderson, Lutz, Gupta, Dockser, and Zbiciak as described above, and Sankaranarayanan's teaching of multiple datapaths, as cited) according to known methods (Sankaranarayanan teaches supporting multiple datapaths via specification in a field of an instruction) to yield predictable results (the combination of Anderson, Lutz, Gupta, Dockser, and Zbiciak, implementing multiple datapaths), which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143. Note that Sankaranarayanan's teaching, when applied to the combination of Anderson, Lutz, Gupta, Dockser, and Zbiciak, wherein the instruction is a vector floating-point scale instruction, results in the overall claim limitation of specifying, in a sixth field of the vector floating-point scale instruction, a datapath, among multiple datapaths of the processor, for execution of the vector floating-point scale instruction.
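A small decode sketch in C of the two instruction fields just discussed may help. The s-bit position (bit 1) comes from the Sankaranarayanan passage quoted above; the functional-unit field's width and position are not given in the quoted passages, so the mask below is a stated assumption.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative field extraction from a 32-bit instruction word.
 * Bit 1 (the s bit) selects scalar datapath side A (s=0) or vector
 * datapath side B (s=1), per the quoted Sankaranarayanan [0096].
 * The functional-unit field below is a hypothetical placement: the
 * quoted passages confirm such a designation exists, but not where. */
typedef struct {
    bool     vector_side;  /* s bit: datapath side B when set */
    uint32_t unit;         /* assumed 4-bit functional-unit field */
} vfps_fields;

static vfps_fields vfps_decode(uint32_t insn)
{
    vfps_fields f;
    f.vector_side = (insn >> 1) & 1u;
    f.unit        = (insn >> 2) & 0xFu;  /* hypothetical position */
    return f;
}
```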
Consider claim 24. The overall combination entails the method of claim 23 (see above), wherein the source data comprises a 512-bit vector (Anderson, [0090], lines 13-18, "floating point scaling operations may operate on packed data having widths of 512-bits or wider (e.g., including at least sixteen 32-bit single precision floating point data elements or at least eight 64-bit double precision floating point data elements)"; [0061], lines 14-17, "commonly, the number N of the packed floating point data elements may be equal to the size in bits of the packed data divided by the size in bits of the floating point data elements").

Consider claim 25. The overall combination entails the method of claim 23 (see above), wherein each floating-point value is a single precision floating-point value (Anderson, [0078], lines 2-3, "floating point scaling operations that may be performed on packed 32-bit single precision floating point data") or a double precision floating-point value (Anderson, [0083], lines 2-3, "floating point scaling operations that may be performed on packed 64-bit double precision floating point data"), as specified by the vector floating-point scale instruction (Anderson, [0135], lines 1-4, "data element width field 1764--its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions)"; [0172], lines 1-4, "data element width field 1764 (EVEX byte 2, bit [7]--W)--is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements)").

Consider claim 26. The overall combination entails the method of claim 23 (see above), wherein each of the scale values is represented by 16 bits (Anderson, [0090], lines 9-11, "in other embodiments the floating point formats of the sources may be different (e.g., mixed-format scaling operations may be performed)"; [0067], lines 2-3, "the half precision floating point format has 16-bits and is also referred to as binary16").

Consider claim 27. The overall combination entails the method of claim 23 (see above), wherein the scale values are signed values (Anderson, [0068], line 6, "1-bit sign 413B in bit [31]"; [0069], line 6, "1-bit sign 413C in bit [63]").

Consider claim 28. The combination thus far entails the method of claim 23 (see above), but does not entail specifying, in a sixth field of a vector floating-point scale instruction, whether the vector floating-point scale instruction is to be executed in parallel with another instruction. On the other hand, Zbiciak further explicitly discloses specifying, in a field of an instruction, whether an instruction is to be executed in parallel with another instruction ([0111], lines 1-11, "the p bit 1308 (bit 0) marks the execute packets. The p-bit determines whether the instruction executes in parallel with the following instruction. The p-bits are scanned from lower to higher address. If p=1 for the current instruction, then the next instruction executes in parallel with the current instruction. If p=0 for the current instruction, then the next instruction executes in the cycle after the current instruction. All instructions executing in parallel constitute an execute packet. An execute packet can contain up to twelve instructions. Each instruction in an execute packet must use a different functional unit").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the further teaching of Zbiciak with the previously presented combination of Anderson, Lutz, Gupta, Dockser, Zbiciak, and Sankaranarayanan in order to increase system performance via parallel execution. Alternatively, this modification merely entails combining prior art elements (the vector floating-point scale instruction of the combination, and Zbiciak's further explicit teaching of specifying, in a field of an instruction, whether an instruction is to be executed in parallel with another instruction, as cited above) according to known methods (Zbiciak explicitly discloses such specifying, as cited above) to yield predictable results (the previously presented combination, further entailing specifying, in a field of an instruction, whether an instruction is to be executed in parallel with another instruction), which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143. Note that Zbiciak's teaching, when applied to the previously presented combination, wherein the instruction is a vector floating-point scale instruction, results in the overall claim limitation of specifying, in a sixth field of a vector floating-point scale instruction, whether the vector floating-point scale instruction is to be executed in parallel with another instruction.
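The p-bit convention Zbiciak describes can be sketched directly. Below is a minimal C routine that groups a stream of 32-bit instruction words into execute packets, assuming only what the quoted [0111] states (p is bit 0, words are scanned from lower to higher address, and a packet holds at most twelve instructions); the function name and the printed output format are illustrative.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative execute-packet scan per the quoted Zbiciak [0111]:
 * p = 1 on the current word means the next word executes in parallel
 * with it; p = 0 closes the packet. Packets are capped at twelve
 * instructions, so a twelfth member also closes the packet here. */
static void print_execute_packets(const uint32_t *insns, size_t n)
{
    size_t start = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned p = insns[i] & 1u;                /* p bit = bit 0 */
        size_t count = i - start + 1;
        if (p == 0 || count == 12 || i == n - 1) { /* packet ends */
            printf("packet: instructions %zu..%zu\n", start, i);
            start = i + 1;
        }
    }
}
```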
Consider claim 29. The overall combination entails the method of claim 23 (see above), wherein at least one of the scale values is different than others of the scale values (Anderson, [0061], lines 11-12, "the first source includes a plurality of N packed floating point data elements A.sub.0-A.sub.N"; also see FIG. 7B, as compared to FIG. 10, for example). Nevertheless, to any extent to which Anderson might not disclose that at least one of the scale values is different than others of the scale values, Examiner submits that a value in a vector being different from another value in the vector would have been obvious to try, which is an exemplary rationale that may support a conclusion of obviousness, as per MPEP 2143.

Consider claim 30. The overall combination entails the method of claim 23 (see above), wherein, when it is determined that the scaled floating-point value is above the largest subnormal floating-point value, executing the vector floating-point scale instruction further comprises: determining a portion of the scale value that is consumed by normalizing the fraction field of the scaled floating-point value; and applying a remaining portion of the scale value to the exponent field of the floating-point value in the corresponding lane (Gupta, col. 1, lines 58-68, as quoted above).

Independent claim 31 is rejected for the same reasons as claim 23 above. Dependent claims 32-38 are rejected for the same reasons as claims 24-30 above, respectively.

Response to Arguments

Applicant on page 7 argues: "None of the references in any combination teach each and every element of claim 23. Claim 23 and its dependent claims are thus allowable over the art of record. Independent claim 31, which is directed to a processor, contains substantially similar recitations as claim 23. Thus, claim 31 and its dependent claims are also allowable."

In view of various newly recited subject matter, Examiner is newly relying upon one or more additional references; see the Claim Rejections - 35 USC § 103 section above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEITH E VICARY, whose telephone number is (571) 270-1314. The examiner can normally be reached Monday to Friday, 9:00 AM to 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached at (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEITH E VICARY/
Primary Examiner, Art Unit 2183

Prosecution Timeline

May 24, 2019: Application Filed
Sep 13, 2020: Non-Final Rejection — §103, §112
Dec 17, 2020: Response Filed
Jan 04, 2021: Final Rejection — §103, §112
Apr 29, 2021: Applicant Interview (Telephonic)
Apr 29, 2021: Examiner Interview Summary
May 10, 2021: Response after Non-Final Action
May 21, 2021: Request for Continued Examination
May 24, 2021: Response after Non-Final Action
Aug 03, 2021: Applicant Interview (Telephonic)
Aug 03, 2021: Examiner Interview Summary
Sep 22, 2021: Response Filed
Oct 04, 2021: Non-Final Rejection — §103, §112
Feb 28, 2022: Interview Requested
Mar 07, 2022: Examiner Interview Summary
Mar 07, 2022: Applicant Interview (Telephonic)
Mar 08, 2022: Response Filed
Mar 24, 2022: Final Rejection — §103, §112
Jun 24, 2022: Request for Continued Examination
Jul 05, 2022: Response after Non-Final Action
Jul 20, 2022: Non-Final Rejection — §103, §112
Jan 26, 2023: Response Filed
Jan 30, 2023: Final Rejection — §103, §112
Jun 05, 2023: Request for Continued Examination
Jun 12, 2023: Response after Non-Final Action
Jun 16, 2023: Non-Final Rejection — §103, §112
Sep 22, 2023: Response Filed
Oct 13, 2023: Final Rejection — §103, §112
Feb 20, 2024: Request for Continued Examination
Feb 27, 2024: Response after Non-Final Action
Mar 25, 2024: Non-Final Rejection — §103, §112
Jul 29, 2024: Response Filed
Aug 09, 2024: Final Rejection — §103, §112
Dec 16, 2024: Request for Continued Examination
Dec 30, 2024: Response after Non-Final Action
Feb 21, 2025: Non-Final Rejection — §103, §112
May 20, 2025: Examiner Interview Summary
May 20, 2025: Applicant Interview (Telephonic)
May 22, 2025: Response Filed
Jun 02, 2025: Final Rejection — §103, §112
Nov 04, 2025: Request for Continued Examination
Nov 14, 2025: Response after Non-Final Action
Dec 16, 2025: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602349: HANDLING DYNAMIC TENSOR LENGTHS IN A RECONFIGURABLE PROCESSOR THAT INCLUDES MULTIPLE MEMORY UNITS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12572360: Cache Preload Operations Using Streaming Engine (granted Mar 10, 2026; 2y 5m to grant)
Patent 12554507: SYSTEMS AND METHODS FOR PROCESSING FORMATTED DATA IN COMPUTATIONAL STORAGE (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554494: APPARATUSES, METHODS, AND SYSTEMS FOR INSTRUCTIONS TO REQUEST A HISTORY RESET OF A PROCESSOR CORE (granted Feb 17, 2026; 2y 5m to grant)
Patent 12547401: Load Instruction Fusion (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 13-14
Grant Probability: 58%
With Interview: 99% (+41.2%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 683 resolved cases by this examiner. Grant probability derived from career allow rate.
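As a check on that derivation, and assuming the with-interview probability is simply the career allow rate plus the interview lift, the displayed figures follow from the counts in the Examiner Intelligence section:

393 granted / 683 resolved ≈ 57.5%, displayed as the 58% grant probability
57.5% + 41.2% interview lift ≈ 98.7%, displayed as the 99% with-interview probability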
