Prosecution Insights
Last updated: April 19, 2026
Application No. 17/958,378

INSTRUCTIONS TO CONVERT FROM FP16 TO FP8

Non-Final OA: §112, §DP
Filed
Oct 01, 2022
Examiner
LAROCQUE, EMILY E
Art Unit
2182
Tech Center
2100 — Computer Architecture & Software
Assignee
Intel Corporation
OA Round
1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 81% (366 granted / 454 resolved), +25.6% vs TC avg, above average
Interview Lift: +12.2% across resolved cases with interview (moderate lift)
Avg Prosecution: 2y 8m typical timeline; 41 applications currently pending
Total Applications: 495 across all art units (career history)

Statute-Specific Performance

§101: 29.3% (-10.7% vs TC avg)
§103: 22.2% (-17.8% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 454 resolved cases.

Office Action

§112, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application IN 202241044437 filed on 08/03/22. It is noted, however, that applicant has not filed a certified copy of the foreign priority application as required by 37 CFR 1.55.

Claim Objections

Claims 1-20 are objected to because of the following informalities:

Claim 1, line 6 and line 11, each recite “the identified source”. This limitation lacks antecedent basis. Antecedent basis is present for “the identified source operand”. Claims 2-7 inherit the same deficiency as claim 1 based on dependence. Claims 8 and 16 each recite substantially the same limitation and are objected to for the same reasons. Claims 9-15 inherit the same deficiency as claim 8 based on dependence. Claims 17-20 inherit the same deficiency as claim 16 based on dependence.

Claim 1, line 9, recites “the decoded instruction”. This limitation lacks antecedent basis. Antecedent basis is present for “the decoded single instruction”. Claims 2-7 inherit the same deficiency as claim 1 based on dependence. Claims 8 and 16 each recite substantially the same limitation and are objected to for the same reasons. Claims 9-15 inherit the same deficiency as claim 8 based on dependence. Claims 17-20 inherit the same deficiency as claim 16 based on dependence.

Claims 1, 4, and 5 recite “packed FP8 data.” At the first instance, Applicant should write out the acronym, as in “packed 8-bit floating point”, as set forth in the specification [0036]. Claims 2-7 inherit the same deficiency as claim 1 based on dependence. Claims 8, 11, and 12, and claims 16-18 each recite substantially the same limitation and are objected to for the same reasons. Claims 9-15 inherit the same deficiency as claim 8 based on dependence.
Claims 17-20 inherit the same deficiency as claim 16 based on dependence.

Furthermore, claims 4-5, 7, 11-12, 14, and 17-18 recite “the FP8 data” or “the converted FP8 data”. This limitation lacks antecedent basis and should recite “the packed FP8 data” or “the converted packed FP8 data”, as appropriate.

Claims 2-3 and claims 9-10 each recite “the first source operand”. This limitation lacks antecedent basis. Antecedent basis is present for “the first packed data source operand”. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 8-9 and 11-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 8-9 and 11-20 of copending Application No. 17958380 (the reference application). Although the claims at issue are not identical, they are not patentably distinct from each other.
Claims 8-9 and 11-20 of the reference application would anticipate claims 8-9 and 11-20 of the present application. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented. See the representative claim comparison below.

Present application (17958378), claim 8:

8. A method comprising: decoding a single instruction, the single instruction to include one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data and store the packed FP8 data into corresponding data element positions of the identified destination operand; and executing the decoded instruction according to the opcode to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data and store the packed FP8 data into corresponding data element positions of the identified destination operand.

Reference application (17958380), claim 8:

8. A method comprising: decoding a single instruction, the single instruction to include one or more fields to identify a first source operand, one or more fields to identify a second source operand, one or more fields to identify a source/destination operand, and one or more fields for an opcode, wherein the opcode is to indicate that execution circuitry is to convert packed half-precision data from the identified first and second source operands to packed 8-bit floating point data using bias terms from the identified source/destination operand and store the packed 8-bit floating point data into corresponding data element positions of the identified source/destination operand, wherein the packed 8-bit floating point data has one bit for a sign, four bits for an exponent, and three bits for a fraction; and executing the decoded instruction according to the opcode to convert packed half-precision data from the identified first and second source operands to the packed 8-bit floating point data using bias terms from the identified source/destination operand and store the packed 8-bit floating point data into corresponding data element positions of the identified source/destination operand.

Claims 1-2 and 4-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 and 4-7 of the Reference Application above (17958380) in view of US 20240329991. Although the claims at issue are not identical, they are not patentably distinct from each other. Claims 1-2 and 4-7 of the present application would have been obvious over the corresponding claims of the reference application. See the representative claim comparison above with respect to the Reference Application. The Reference Application does not explicitly disclose the opcode to indicate that the execution circuitry is to convert single-precision floating point data.
However, US 20240329991 claims an apparatus comprising decoder circuitry to decode an instruction, and execution circuitry to perform operations according to the instruction to indicate at least one source floating-point vector, the source floating-point vector to have a plurality of floating-point data elements, the at least one value to indicate at least one of (a) a number of significant bits of the floating-point data elements; (b) a number of exponent bits of the floating-point data elements; (c) exponent bias information for the floating-point data elements (claim 1), wherein the instruction is a floating-point conversion instruction, with source formats and conversions including double precision, single precision, half precision, bfloat16, and FP8. It would have been obvious to include single precision, in addition to packed half-precision floating-point data, as one of the source floating-point vectors to be converted from, as this is one of the floating-point data formats considered for conversion. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1, lines 10-11, recites “convert packed half-precision floating-point data or single-precision floating point data”. It is unclear whether this refers to the “packed half-precision floating-point data or single-precision floating-point data” recited in lines 5-6 or to different data. For purposes of examination, the Examiner interprets it as the same “packed half-precision floating-point data or single-precision floating-point data” as recited in lines 5-6. Claims 2-7 inherit the same deficiency as claim 1 based on dependence. Claims 8 and 16 recite substantially the same limitation and are rejected for the same reasons. Claims 9-15 inherit the same deficiency as claim 8 based on dependence.
Claims 17-20 inherit the same deficiency as claim 16 based on dependence.

Claims 2-3 and claims 9-10 recite “the first source operand”. This limitation lacks antecedent basis. It is unclear whether “the first source operand” refers to “a source operand” recited in claim 1 or to a different operand. For purposes of examination, the Examiner interprets it as the same.

Claim 7, line 3, and claim 14, line 3, recite “execution circuitry”. It is unclear whether this is the same “execution circuitry” recited in claim 1 or different circuitry. For purposes of examination, the Examiner interprets it as the same “execution circuitry” as recited in claim 1.

Allowable Subject Matter

7. Claims 1-20 would be allowable if rewritten to overcome the rejections under 35 USC 112(b), the provisional nonstatutory double patenting rejection, and the claim objections. The following is a statement of reasons for the indication of allowable subject matter.

Applicant claims apparatus, methods, and non-transitory machine-readable media for decoding and executing a conversion instruction, wherein the apparatus as in claim 1 comprises: decoder circuitry to decode a single instruction, the single instruction to include one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data and store the packed FP8 data into corresponding data element positions of the identified destination operand; and execution circuitry to execute the decoded instruction according to the opcode to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data and store the packed FP8 data into corresponding data element positions of the identified destination operand.
The primary reason for the indication of allowable subject matter is the following limitations, in combination with the remaining limitations: the single instruction including one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data and store the packed FP8 data into corresponding data element positions of the identified destination operand.

US 20220137963 A1 Yang (hereinafter “Yang”) discloses a neural network accelerator, which includes an instruction analyzer instructing an operation including control of a type converter that performs conversion of data stored in internal memory or data generated under the control of the instruction analyzer (abstract, fig. 1). Yang further discloses the accelerator may support data types including 16-bit floating point (FP16), 8-bit floating point (HFP8), and conversion from FP16 to HFP8 ([0041-0042], fig. 3). Yang does not, however, teach or suggest a single instruction including one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data.

US 20190079762 A1 Heinecke et al. (hereinafter “Heinecke”) discloses apparatus and methods for performing instructions to convert to 16-bit floating-point format, wherein a processor includes decode circuitry to decode a fetched instruction, and execution circuitry to respond to the decoded instruction as specified by the opcode (abstract, fig. 1, figs. 2A-D, 3A-C, 5A-B).
Heinecke further discloses that source operands to be operated on may include 32-bit packed data elements, or 32 separate 8-bit data elements ([0003]). Heinecke does not, however, teach or suggest a single instruction including one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data.

US 20200349216 A1 Das Sarma (hereinafter “Das Sarma”) discloses a microprocessor including a control unit configured to provide a matrix processor instruction that specifies a floating-point operand formatted using a first floating-point representation format and calculates an intermediate result in a second floating-point representation format (abstract). Das Sarma further discloses converting floating point formats from a 21-bit floating point format to a 16-bit floating point format, or an 8-bit floating point format, or a 32-bit format, and wherein the apparatus is configurable to operate in multiple floating point formats ([0021], [0036], [0057], [0068], [0072]). Das Sarma does not, however, teach or suggest a single instruction including one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data.

S. Mach et al., FPnew: An Open-Source Multi-Format Floating-Point Unit Architecture for Energy-Proportional Transprecision Computing, arXiv:2007.01530v1 [cs.AR], 2020 (hereinafter “Mach”) discloses a floating point unit capable of supporting a wide range of standard and custom formats, and extending the RISC-V ISA with operations including binary32 (FP32), half-precision, bfloat16, and an 8-bit FP format (abstract, section III.A). Mach further discloses scalar and vector instructions including conversions among supported types (section III.A). Mach does not, however, disclose details of the instructions, and does not teach or suggest a single instruction including one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data.

P. Micikevicius et al., FP8 Formats for Deep Learning, arXiv:2209.05433v2 [cs.LG], 29 Sep 2022 (hereinafter “Micikevicius”) discloses an 8-bit floating point binary interchange format consisting of two encodings, E4M3 (4-bit exponent and 3-bit mantissa) and E5M2 (5-bit exponent and 2-bit mantissa) (abstract). Micikevicius further discloses converting to and from FP8 and FP16 or bfloat16 (section 4). Micikevicius does not, however, teach or suggest a single instruction including one or more fields to identify a source operand, one or more fields to identify a destination operand, and one or more fields for an opcode, the opcode to indicate that execution circuitry is to convert packed half-precision floating-point data or single-precision floating point data from the identified source to packed FP8 data. Furthermore, no motivation to combine could be determined.

Conclusion

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY E LAROCQUE, whose telephone number is (469) 295-9289. The examiner can normally be reached 10:00am-12:00pm and 2:00pm-8:00pm ET, M-F.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Caldwell, can be reached at 571-272-3701. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMILY E LAROCQUE/
Primary Examiner, Art Unit 2182
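The FP8 layout recited in the reference application's claim 8 (one sign bit, four exponent bits, three fraction bits) matches the E4M3 encoding described by Micikevicius. As context for the claimed conversion, the sketch below quantizes floats to E4M3 bytes in software. It is illustrative only, not the claimed hardware datapath, and assumes the conventions from the Micikevicius paper: exponent bias 7, no infinities, NaN at 0x7F/0xFF, and saturation to the largest finite value, 448.

```python
import math

E4M3_BIAS = 7
E4M3_MAX = 448.0   # largest finite E4M3 value: 0b0_1111_110 = 1.75 * 2**8

def fp_to_e4m3(x: float) -> int:
    """Quantize a float to an 8-bit E4M3 pattern (1 sign / 4 exponent / 3 fraction).

    Round-to-nearest with saturation. Illustrative software sketch only;
    E4M3 encodes no infinities and reserves 0x7F/0xFF for NaN.
    """
    sign = 0x80 if math.copysign(1.0, x) < 0.0 else 0x00
    if math.isnan(x):
        return sign | 0x7F
    ax = abs(x)
    if ax == 0.0:
        return sign
    if ax >= E4M3_MAX:
        return sign | 0x7E                                   # saturate to +/-448
    e = max(math.floor(math.log2(ax)), 1 - E4M3_BIAS)        # unbiased exponent, clamped
    q = round(ax / 2.0 ** (e - 3))                           # significand with 3 fraction bits
    if q == 16:                                              # rounding carried out of the fraction
        e, q = e + 1, 8
    if q < 8:                                                # subnormal: exponent field is zero
        return sign | q
    return sign | ((e + E4M3_BIAS) << 3) | (q - 8)

def e4m3_to_float(b: int) -> float:
    """Decode an E4M3 byte back into a Python float."""
    sign = -1.0 if b & 0x80 else 1.0
    e, m = (b >> 3) & 0xF, b & 0x7
    if e == 0xF and m == 0x7:
        return math.nan
    if e == 0:                                               # subnormal range
        return sign * (m / 8.0) * 2.0 ** (1 - E4M3_BIAS)
    return sign * (1.0 + m / 8.0) * 2.0 ** (e - E4M3_BIAS)

# In the spirit of the claimed packed operation: convert a whole vector of
# wider floating-point values into packed FP8 bytes in one call. The function
# name is a hypothetical label, not the claimed mnemonic.
def convert_packed_to_fp8(src):
    return bytes(fp_to_e4m3(v) for v in src)
```

For example, `convert_packed_to_fp8([1.0, -1.5, 0.5])` yields the bytes 0x38, 0xBC, 0x30.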

Prosecution Timeline

Oct 01, 2022
Application Filed
Mar 13, 2023
Response after Non-Final Action
Mar 20, 2026
Non-Final Rejection — §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602202
Finite State Machine-Based Bit-Stream Generator for Low-Discrepancy Stochastic Computing
2y 5m to grant; granted Apr 14, 2026
Patent 12596475
COMPRESSION AND DECOMPRESSION OF MULTI-DIMENSIONAL DATA
2y 5m to grant; granted Apr 07, 2026
Patent 12579414
ARTIFICIAL NEURON
2y 5m to grant; granted Mar 17, 2026
Patent 12579214
AUGMENTING MATHEMATICAL OPTIMIZATION MODELS GENERATED FROM HISTORICAL DATA
2y 5m to grant; granted Mar 17, 2026
Patent 12578923
METHOD AND APPARATUS FOR GENERATING ARCHITECTURE SPECIFIC CONVOLUTION GRADIENT KERNELS
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview (+12.2%): 93%
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 454 resolved cases by this examiner. Grant probability derived from career allow rate.
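The headline projections follow directly from the examiner's career data shown above. A quick sanity check (the additive percentage-point treatment of the interview lift is an assumption about how the tool derives 93%; the page does not state its formula):

```python
# Recompute the page's headline projections from the examiner's career data.
granted, resolved = 366, 454                     # from the Career Allow Rate panel
allow_rate_pct = 100.0 * granted / resolved      # career allow rate, percent
interview_lift_pts = 12.2                        # interview lift, percentage points

grant_probability = round(allow_rate_pct)                      # -> 81
with_interview = round(allow_rate_pct + interview_lift_pts)    # -> 93

print(f"{grant_probability}% baseline, {with_interview}% with interview")
```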
