Application No. 18/456,838

DETAILED ACTION

This is a non-final, first Office action on the merits. Claims 1-20 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 9, and 15 recite a limitation that, as written, is not clear because the claims do not define its meaning. As such, the claims are indefinite for failing to distinctly claim the invention. Dependent claims 2-8, 10-14, and 16-20 inherit the deficiency of their respective parent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Specifically, claims 1-20 are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea.
With respect to Step 2A Prong One of the framework, claims 1, 9, and 15 recite an abstract idea. Claims 1, 9, and 15 include “mapping 8-bit fp8 weight values to unsigned 8-bit integers, wherein most frequently occurring fp8 values map to lowest unsigned 8-bit integer weight values; for every four fp8 weight values: adding a 2-bit descriptor that indicates whether the associated four fp8 weight values are stored as 5-bit, 6-bit, 7-bit, or 8-bit; and storing the four fp8 weight values based on the 2-bit descriptor; and using the stored fp8 weight values for inferencing using the machine learning model”.

The limitations above recite an abstract idea under Step 2A Prong One. More particularly, the elements above recite mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), because the elements describe a process for implementing lossless compression. As a result, claims 1, 9, and 15 recite an abstract idea under Step 2A Prong One. Claims 2-8, 10-14, and 16-20 further describe the process for implementing lossless compression. As a result, claims 2-8, 10-14, and 16-20 recite an abstract idea under Step 2A Prong One for the same reasons as stated above with respect to claims 1, 9, and 15.

With respect to Step 2A Prong Two of the framework, claims 1, 9, and 15 do not include additional elements that integrate the abstract idea into a practical application. Claims 1, 9, and 15 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1, 9, and 15 include a machine learning model, a processing system, a processor, computer-readable media, and computer-executable instructions. When considered in view of the claim as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea.
As a result, claims 1, 9, and 15 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two. Claims 3-4, 11-12, and 17-18 do not include any additional elements beyond those recited with respect to claims 1, 9, and 15. As a result, claims 3-4, 11-12, and 17-18 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two for the same reasons as stated above with respect to claims 1, 9, and 15.

Claims 2, 5-8, 10, 13-14, 16, and 19-20 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 2, 5-8, 10, 13-14, 16, and 19-20 include a machine learning model, a large language model, and Compute Unified Device Architecture (CUDA) kernels. When considered in view of the claims as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. As a result, claims 2, 5-8, 10, 13-14, 16, and 19-20 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two.

With respect to Step 2B of the framework, claims 1, 9, and 15 do not include additional elements amounting to significantly more than the abstract idea. As noted above, claims 1, 9, and 15 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1, 9, and 15 include a machine learning model, a processing system, a processor, computer-readable media, and computer-executable instructions. The additional elements do not amount to significantly more than the abstract idea because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea.
Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, independent claims 1, 9, and 15 do not include additional elements that amount to significantly more than the abstract idea under Step 2B.

Claims 3-4, 11-12, and 17-18 do not include any additional elements beyond those recited with respect to claims 1, 9, and 15. As a result, claims 3-4, 11-12, and 17-18 do not include additional elements that amount to significantly more than the abstract idea under Step 2B for the same reasons as stated above with respect to claims 1, 9, and 15.

Claims 2, 5-8, 10, 13-14, 16, and 19-20 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 2, 5-8, 10, 13-14, 16, and 19-20 include a machine learning model, a large language model, and Compute Unified Device Architecture (CUDA) kernels. The additional elements do not amount to significantly more than the abstract idea because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, claims 2, 5-8, 10, 13-14, 16, and 19-20 do not include additional elements that amount to significantly more than the abstract idea under Step 2B.

Therefore, the claims are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. Accordingly, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Allowable Subject Matter

Claims 1-20 appear to be allowable if rewritten to overcome the 35 U.S.C. § 101 and § 112(b) rejections set forth in this Office action.
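For reference, the compression steps recited in claims 1, 9, and 15 (frequency-ranked mapping of fp8 byte patterns to unsigned 8-bit integers, followed by a 2-bit width descriptor for every group of four values) can be sketched as follows. This is an illustrative reconstruction for discussion purposes only, not Applicant's disclosed implementation; all function and variable names are hypothetical.

```python
from collections import Counter

def build_rank_map(weights):
    """Map each distinct fp8 byte pattern to an unsigned 8-bit integer,
    assigning the lowest codes to the most frequently occurring patterns."""
    freq = Counter(weights)
    ordered = sorted(freq, key=lambda w: (-freq[w], w))  # most frequent first
    return {w: rank for rank, w in enumerate(ordered)}

def pack_groups(weights, rank_map):
    """For every group of four mapped values, emit a 2-bit descriptor that
    selects a 5-, 6-, 7-, or 8-bit field width, followed by the four values
    at that width (returned as (value, bit_width) pairs for bit-packing)."""
    fields = []
    for i in range(0, len(weights), 4):
        group = [rank_map[w] for w in weights[i:i + 4]]
        width = max(5, max(v.bit_length() for v in group))  # clamp to 5..8
        fields.append((width - 5, 2))   # 2-bit descriptor encodes 5/6/7/8
        fields.extend((v, width) for v in group)
    return fields
```

Because the most frequent fp8 patterns receive the smallest integer codes, groups drawn from common weight values tend to fit within the 5-bit width, which is where the compression gain would come from.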
The prior art references most closely resembling Applicant's claimed invention are S Han et al. (Efficient methods and hardware for deep learning), 2017, search.proquest.com (hereinafter Han et al.) in view of Fenney et al. (US Pub No. 2021/0194500) (hereinafter Fenney et al.).

Han et al. discloses mapping weight values, wherein the most frequently occurring values map to the lowest values (page 47, section 4.4: "we can use fewer bits to represent those more frequently appearing weights, and use more bits to represent those less frequently appearing weights"); Han et al. discloses storing the weight values (page 47, last paragraph, model storage); and Han et al. discloses using the weight values for inferencing using the machine learning model (page 6, regarding chapter 6: "performs decompression and inference simultaneously"; page 98, last paragraph: "parallelize the computation and do runtime decompression, rather than having to decompress the model before performing inference").

However, Han et al. does not explicitly disclose mapping 8-bit fp8 weight values to unsigned 8-bit integers, wherein most frequently occurring fp8 values map to lowest unsigned 8-bit integer weight values; for every four fp8 weight values: adding a 2-bit descriptor that indicates whether the associated four fp8 weight values are stored as 5-bit, 6-bit, 7-bit, or 8-bit; and storing the four fp8 weight values based on the 2-bit descriptor; and using the stored fp8 weight values for inferencing using the machine learning model. Moreover, neither Han et al. nor Fenney et al. discloses mapping 8-bit fp8 values to unsigned 8-bit integers, or adding a 2-bit descriptor that indicates whether the associated four fp8 weight values are stored as 5-bit, 6-bit, 7-bit, or 8-bit. Fenney et al.
discloses weights in 8-bit format, grouping them in groups of 4 weights, discarding leading zeros, using a 3-bit header, and storing the weights and the headers without storing the leading zeros, thereby allowing 1-bit to 8-bit formats (see para [0050], [0062], [0066], and [0071]-[0072], wherein the bit depth of the data items, n (which is fixed), is between 4 and 16 and in various examples n=8; the number of data items in a group, N, is 4; in the example shown in FIG. 5, the optimum body portion size is 5 bits (bopt=5), removing three leading zeros; see also Figures 5, 6A, and 8A).

Moreover, since the specific combination of claim elements (mapping 8-bit fp8 weight values to unsigned 8-bit integers, and adding a 2-bit descriptor that indicates whether the associated four fp8 weight values are stored as 5-bit, 6-bit, 7-bit, or 8-bit) recited in claims 1, 9, and 15 cannot be found in the cited prior art and can only be found as recited in Applicant's Specification, any combination of the cited references and/or additional reference(s) to teach all the claim elements, including the aforementioned features not taught by the cited prior art, would be the result of impermissible hindsight reconstruction. Accordingly, a combination of Han et al., Fenney et al., and/or any other additional reference(s) would be improper to teach the claimed invention. While the teachings of Han et al. and Fenney et al. separately address different parts of the claimed invention, these teachings would not be combinable by one of ordinary skill in the art at the time of the invention with a reasonable expectation of success to provide a predictable combination that would render the claimed invention obvious. Thus, the novelty of the claimed invention lies in the combination of limitations rather than in any single limitation.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. X Sun, J Choi, CY Chen, N Wang et al.
(Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks), Advances in Neural Information Processing Systems, 2019, proceedings.neurips.cc, discloses how different FP8 precision formats for activations and weights impact Trans-Precision Inference accuracy.

G Ko, S Yoo, S Ham, S Kim, JY Kim et al. (Super Floating-Point (SuFP): Efficient To All. Multi-Region Piecewise Quantization using Scalable Bias with Hardware Optimization), openreview.net, discloses … of SuFP with that of FP16, FP8, MSFP and BSFP … with 5-bit and 2-bit mantissa, accompanied by 8-bit and 7-bit scale … utilizes a 7-bit multiplier to multiply mantissa, SuFP employs a 6-bit.

E Dupuis, S Filip, O Sentieys, D Novo et al. (Approximations in deep learning), Springer, 2022, discloses … mantissa bits for exponent bits (a 5-bit exponent and 10-bit mantissa for float16 versus an 8-bit exponent and 7-bit …), coupled with a 6-bit exponent 9-bit mantissa format.

Lacey et al. (US Pub No. 2023/0327682) discloses a data compression scheme in which the total size of the compressed data is determined and, based on that determination, the bit depth.

Van Baalen et al. (US Pub No. 2023/0376272) discloses fast floating point simulations with learnable parameters, including receiving a single precision input.

Emmart et al. (US Pub No. 2021/0064338) discloses converting data value types; in at least one embodiment, data value types are converted by adjusting floating point numbers to identify integer values.

Sriram et al. (US Pub No. 2022/0044114) discloses that one or more weights of a trained model are represented by low-bit integer numbers instead of using full floating point precision.

Chalfin et al. (US Pub No. 2018/0239992) discloses an apparatus that processes a set of weight values for an artificial neural network by representing the set of weight values as an array of weight values and by using an image compression scheme to provide compressed weight data for the artificial neural network.
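For reference, the general header-based packing technique for which Fenney et al. is cited above (groups of N=4 fixed-bit-depth data items, shared leading zeros discarded, a small header recording the remaining body size) can be sketched as follows. This is an illustrative reconstruction of the technique, not code from the reference; the function and variable names are hypothetical.

```python
def encode_group(values, n=8):
    """Sketch of a Fenney-style group encoding: for four n-bit data items,
    drop the leading zeros shared by the group and record the remaining
    body size in a 3-bit header (body sizes 1..8 bits when n=8)."""
    assert len(values) == 4 and all(0 <= v < (1 << n) for v in values)
    body = max(max(v.bit_length() for v in values), 1)  # optimum body size
    header = body - 1  # 3-bit header value 0..7 encodes body sizes 1..8
    return header, [(v, body) for v in values]
```

For example, a group whose largest member needs 5 bits packs with three leading zeros removed from each 8-bit item, analogous to the bopt=5 example described with respect to FIG. 5 of Fenney et al.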
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAFIZ A KASSIM, whose telephone number is (571) 272-8534. The examiner can normally be reached 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAFIZ A KASSIM/
Primary Examiner, Art Unit 3623
03/09/2026