Prosecution Insights
Last updated: April 19, 2026
Application No. 18/249,389

Machine-Learned Discretization Level Reduction

Non-Final OA: §101, §102, §103
Filed: Apr 18, 2023
Examiner: BEAN, GRIFFIN TANNER
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 21% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 4m
Grant Probability with Interview: 50%

Examiner Intelligence

Career Allow Rate: 21% (4 granted / 19 resolved; -33.9% vs Tech Center average)
Interview Lift: +28.4% among resolved cases with interview
Typical Timeline: 4y 4m average prosecution
Currently Pending: 45 applications
Total Applications: 64 (across all art units)

Statute-Specific Performance

§101: 37.7% allow rate (-2.3% vs TC avg)
§103: 40.4% allow rate (+0.4% vs TC avg)
§102: 11.2% allow rate (-28.8% vs TC avg)
§112: 9.7% allow rate (-30.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 19 resolved cases.
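The per-statute deltas can be inverted to recover the Tech Center baseline each allow rate was measured against. A minimal sketch of that arithmetic, using the figures shown above (the dictionary layout and variable names are mine, not the dashboard's):

```python
# Implied Tech Center 2100 average allow rate per statute, recovered from
# each examiner allow rate and its stated delta vs the TC average.
stats = {                 # statute: (examiner allow rate, delta vs TC avg)
    "101": (0.377, -0.023),
    "103": (0.404, +0.004),
    "102": (0.112, -0.288),
    "112": (0.097, -0.303),
}
tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
# e.g. the implied TC average for §102 is 0.112 - (-0.288) = 0.400
```

Notably, all four deltas imply the same ~40% Tech Center baseline, which suggests the tool compares every statute against a single TC-wide average rather than per-statute averages.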

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

This Action is responsive to Claims filed 04/18/2023.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted were filed before the mailing date of the first Action on the merits. The submission(s) are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) are being considered by the examiner.

Drawings

Receipt of Drawings filed 04/18/2023 is acknowledged. These Drawings are acceptable.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 14-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019).

Step 1: Claims 14-19 recite a method, which falls under the statutory category of a process.

Step 2A – Prong 1: Claim 14 recites an abstract idea, law of nature, or natural phenomenon. The limitations “determining, by the computing system and based at least in part on the discretization level reduction model, the level-reduced tensor data;”, “determining, by the computing system and based at least in part on the discretization level reduction model, reconstructed input tensor data based at least in part on the level-reduced tensor data;”, “determining, by the computing system, a loss based at least in part on the input tensor data and the reconstructed input tensor data;”, and “adjusting, by the computing system, one or more parameters of the discretization level reduction model based at least in part on the loss”, under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper. These limitations therefore fall within the mental process group. Determining tensor data, determining a reconstructed tensor, determining a loss, and adjusting model parameters are each practically performed within the human mind or with the aid of pen and paper.

Step 2A – Prong 2: The additional elements of claim 14 do not integrate the abstract idea into a practical application. The additional elements “A computer-implemented method”, “tensor data”, and “a computing system comprising one or more computing devices” are recognized as generic computer components recited at a high level of generality (the Specification does not indicate these elements are different from a typical processing unit). Although the computing system has and executes instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it" (see MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application). The additional element recited in the limitation “a discretization level reduction model to provide level-reduced tensor data” is recognized as a non-generic computer component; however, it is found to generally link the abstract idea to a particular technological field (see MPEP 2106.05(h)). The limitations “obtaining, by a computing system comprising one or more computing devices, training data, the training data comprising input tensor data;” and “providing, by the computing system, the training data to a discretization level reduction model, the discretization level reduction model configured to receive tensor data comprising a number of discretization levels and produce, in response to receiving the tensor data, level-reduced tensor data comprising a reduced number of discretization levels;” are found to be pre- or post-extra-solution activity or data transmittal steps (see MPEP 2106.05(g)).

Step 2B: The only limitations on the performance of the described method are those reciting “A computer-implemented method”, “tensor data”, and “a computing system comprising one or more computing devices”. These elements are insufficient to transform a judicial exception into a patentable invention because the recited elements are considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)). The additional element recited in the limitation “a discretization level reduction model to provide level-reduced tensor data” is recognized as a non-generic computer component; however, it is found to generally link the abstract idea to a particular technological field (see MPEP 2106.05(h)). The limitations “obtaining, by a computing system comprising one or more computing devices, training data, the training data comprising input tensor data;” and “providing, by the computing system, the training data to a discretization level reduction model, the discretization level reduction model configured to receive tensor data comprising a number of discretization levels and produce, in response to receiving the tensor data, level-reduced tensor data comprising a reduced number of discretization levels;” are found to be well-understood, routine, or conventional activity (see MPEP 2106.05(d)(II)(i), first list). Taken alone or in ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.

Dependent Claims: Claim 15 recites refinements to the data types operated upon. Claim 16 recites refinements to the loss calculated in Claim 14. Claim 17 recites refinements to the structure of the model additional element. Claim 18 recites refinements to the model additional element(s) of Claim 14, mere pre- or post-extra-solution activity steps (“obtaining…” and “obtaining…”), and an abstract idea/mental process step (“determining…”). Claim 19 recites refinements to the data type of Claim 18.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7 and 9-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hou et al. (Image Companding and Inverse Halftoning using Deep Convolutional Neural Networks, 2017), hereinafter Hou.

In regards to claim 1: The present invention claims: “A computer-implemented method for providing level-reduced tensor data having improved representation of information, the method comprising: obtaining input tensor data;” Hou Figure 2 shows a 3x256x256 input tensor, and Hou teaches “In all the experiment we use 3 bit depths (8 color levels) as input images to train our deep networks for both color and grayscale images.” (Page 6).

“providing the input tensor data as input to a machine-learned discretization level reduction model configured to receive tensor data comprising a number of discretization levels and produce, in response to receiving the tensor data, level-reduced tensor data comprising a reduced number of discretization levels,” The purple blocks of Hou’s Figure 2 show the input tensor data being reduced at each layer. Hou also teaches “The typical digital halftoning process is considered as a technique of converting a continuous-tone grayscale image with 255 color levels (8 bits) into a binary black-and-white image with only 0 and 1 two color levels (1 bit).” (Page 2) and “The details of our model are shown in Fig. 2, we first encode the input image to lower dimension vector by a series of stride convolutions, which consists of 4 x 4 convolution kernels and 2 x 2 stride in order to achieve its own downsampling.” (Page 4).

“wherein the machine-learned discretization level reduction model comprises: at least one input layer configured to receive the tensor data;” Hou Figure 2 shows a 3x256x256 input tensor, and Hou teaches “The details of our model are shown in Fig. 2, we first encode the input image to lower dimension vector by a series of stride convolutions…” (Page 4).

“and one or more level reduction layers connected to the at least one input layer, the one or more level reduction layers configured to receive input having a first number of discretization levels and to provide a layer output having a reduced a number of discretization levels;” Hou Figure 2 shows each layer reducing its respective input tensor, and Hou teaches “The dash-line arrows indicate the features from the encoding layers are directly copied to the decoding layers and form half of the corresponding layers’ features.”

“wherein each level reduction layer is associated with a respective number of discretization levels and the discretization level is reduced at each layer of the one or more level reduction layers based at least in part on a discretized activation function having the respective number of discretization levels associated with the level reduction layer;” Hou teaches “we use 8 bit images as our highest bit depth images in the experiments. The 8 bit images are reduced by different depths as the lower bit depth images, and then expanded back to 8 bits.” (Page 4).

“obtaining, from the machine-learned discretization level reduction model, the level-reduced tensor data;” Hou teaches “to firstly encode the input images through several convolutional layers until a bottleneck layer, followed by a reversed decoding process to produce the output images.” (Page 3) and “we use 8 bit images as our highest bit depth images in the experiments. The 8 bit images are reduced by different depths as the lower bit depth images, and then expanded back to 8 bits.” (Page 4).

“wherein the machine-learned discretization level reduction model is trained using reconstructed input tensor data generated using an output of the machine-learned discretization level reduction model.” Hou teaches “to firstly encode the input images through several convolutional layers until a bottleneck layer, followed by a reversed decoding process to produce the output images.” (Page 3) and “We denote the loss function as L(ŷ, y) to measure the perceptual difference between two images. As illustrated in Fig. 1, both the output image ŷ = T(x) generated by the transformation network and the corresponding target image y are fed into a pretrained deep CNN φ for feature extraction. We use φi(y) to represent the hidden representations of image y at the ith convolutional layer. φi(x) is a 3D array of shape [Ci, Wi, Hi], where Ci is the number of filters, Wi and Hi are the width and height of the given feature map of the ith convolutional layer. The final perceptual loss of two images at the ith layer is the Euclidean distance of the corresponding 3D arrays as following…” (Page 4).

In regards to claim 2: The present invention claims: “wherein the input tensor data comprises image data, and wherein the level-reduced tensor data comprises binarized image data.” Hou teaches “The typical digital halftoning process is considered as a technique of converting a continuous-tone grayscale image with 255 color levels (8 bits) into a binary black-and-white image with only 0 and 1 two color levels (1 bit).” (Page 2).

In regards to claim 3: The present invention claims: “wherein the discretization level reduction model further comprises at least one feature representation layer configured to map the input tensor data from the input layer to a feature representation of the input tensor data.” Hou teaches “We not only use a deep CNN as a nonlinear transformation function to map a low bit depth image to a higher bit depth image or from a halftone image to a continuous tone image, but also employ another pre-trained deep CNN as a feature extractor or convolutional spatial filter to derive visually important features to construct the objective function for the training of the transformation neural network.” (Page 2).

In regards to claim 4: The present invention claims: “wherein the discretization level reduction model further comprises at least one channel reduction layer configured to reduce an input to the at least one channel reduction layer input data having a first number of channels to an output of the at least one channel reduction layer having a reduced number of channels.” See Hou Figure 2 for each layer reducing the size of the input before the next layer. Hou teaches “converting a continuous-tone grayscale image with 255 color levels (8 bits) into a binary black-and-white image with only 0 and 1 two color levels (1 bit).” (Page 2).

In regards to claim 5: The present invention claims: “wherein the one or more level reduction layers are each configured to reduce the number of discretization levels based at least in part on a scaling factor.” Hou teaches “The default approach [3] for converting 8 bit images to 4 bit images is to divide by 16 to quantize the color level from 256 to 16, which will be then scaled up to fill the full range of the display.” (Page 4).

In regards to claim 6: The present invention claims: “wherein the scaling factor is one half.” See the rejection of Claim 5 and Hou Equation 2 (Page 4) for converting 8 bit images to 4 bit images (one half).

In regards to claim 7: The present invention claims: “wherein the one or more level reduction layers progressively and monotonically reduce a number of discretization levels at each of the one or more level reduction layers.” See Hou Figure 2 for each purple block reducing per layer.

In regards to claim 9: The present invention claims: “wherein the machine-learned discretization level reduction model comprises an output layer configured to provide the level-reduced tensor data.” Hou teaches “converting a continuous-tone grayscale image with 255 color levels (8 bits) into a binary black-and-white image with only 0 and 1 two color levels (1 bit).” (Page 2).

In regards to claim 10: The present invention claims: “wherein the reduced number of discretization levels of the level-reduced tensor data is two discretization levels.” See the above rejection of Claim 9 for how Hou reads on reducing input to two discretization levels.

In regards to claim 11: The present invention claims: “wherein the discretization level reduction model comprises one or more reconstruction layers configured to reconstruct the reconstructed input tensor data from the level-reduced tensor data.” See Hou Figure 2 for the orange blocks representing reconstruction layers. Hou teaches “By inverting convolutional features [39], the colors and the rough contours of an image can be reconstructed from activations in pretrained CNNs.” (Page 3).

In regards to claim 12: The present invention claims: “wherein the discretization level reduction model comprises a color bypass network, the color bypass network comprising one or more fully connected hidden units.” See Hou Figure 2 and “The dash-line arrows indicate the features from the encoding layers are directly copied to the decoding layers and form half of the corresponding layers’ features.” (Page 3).

In regards to claim 13: The present invention claims: “wherein the color bypass network comprises between one and ten fully connected hidden units.” See above and Hou Figure 2 for the multiple hidden layers convolving and deconvolving the input tensor.

In regards to claims 14-15 and 17-18: Claims 14-15 and 17-18 recite similar limitations to those found in Claims 1-13, save for the recitation of “A computer-implemented method for training a discretization level reduction model to provide level-reduced tensor data having improved representation of information, the computer-implemented method comprising:” in Claim 14; therefore, these claims are similarly rejected.

In regards to claim 16: The present invention claims: “wherein the loss comprises a pixel-wise difference between the input tensor data and the reconstructed input tensor data.” While Hou does not utilize pixel-wise difference loss directly (see Page 3: “Instead of using per-pixel losses, i.e. measuring pixel-wise difference between the output image and its target (the original) image, we measure the difference between the output image and target image based on the high level features extracted from pretrained deep convolutional neural networks.”), Section II.C of Hou goes into detail regarding the state of the art of per-pixel difference loss at the time of Hou’s writing, as well as the deficiencies such a method has compared to the method used by their system. A person of ordinary skill in the art before the Applicant’s filing date would have been aware of per-pixel difference loss and its relevant benefits and deficiencies.

In regards to claim 19: The present invention claims: “wherein the first reconstructed input tensor data component comprises a reconstructed image and wherein the second reconstructed input tensor data component comprises a color tint for the reconstructed image.” Hou teaches “We not only use a deep CNN as a nonlinear transformation function to map a low bit depth image to a higher bit depth image or from a halftone image to a continuous tone image, but also employ another pre-trained deep CNN as a feature extractor or convolutional spatial filter to derive visually important features to construct the objective function for the training of the transformation neural network.” (Page 2) and “By inverting convolutional features [39], the colors and the rough contours of an image can be reconstructed from activations in pretrained CNNs.” (Page 3).

In regards to claim 20: Claim 20 recites limitations similar in scope to those found in Claims 1 and/or 7, save for the recitation of “One or more non-transitory, computer-readable media storing a machine-learned discretization level reduction model configured to…” in Claim 20; therefore, the claims are similarly rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hou as applied to Claim 1 above, and further in view of Banerjee et al. (An Empirical Study on Generalizations of the ReLU Activation Function, 2019), hereinafter Banerjee.

In regards to claim 8: Hou does not explicitly teach the use of tanh activation functions, instead using LeakyReLU or ReLU (Page 4), as is claimed in “wherein the discretized activation function is a discretized tanh function.” However, Banerjee teaches “However Linear, Sigmoid, Tanh and ReLU are the most commonly used activation functions and they are often selected empirically during the network design phase, rather than through a proper data driven process.” (Abstract) and “A variation of the Tanh activation, called the Leaky-Tanh breaks the symmetry of Tanh by penalizing the negative part [10] and has been shown to be better than ReLU and leaky-ReLU for training deep neural networks.” (Introduction). Banerjee illustrates that the tanh and ReLU activation functions were commonly used and well-known in the art before the Applicant’s filing date. It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to use either activation function to fit their specific needs in a system similar to Hou’s.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN, whose telephone number is (703) 756-1473. The examiner can normally be reached M - F 7:30 - 4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRIFFIN TANNER BEAN/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121
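The claimed technique, as characterized in the rejection above, combines progressive level reduction via a discretized activation function (claims 7 and 8), a one-half scaling factor per layer (claims 5-6), and training against a reconstruction loss (claims 14 and 16). A minimal sketch built only from that quoted claim language follows; the layer structure, the single `gain` parameter, and all numeric choices are illustrative assumptions, not the application's actual model:

```python
import numpy as np

def discretized_activation(x, levels):
    """Quantize values in [0, 1] onto `levels` evenly spaced values --
    a stand-in for the claimed "discretized activation function"."""
    return np.round(x * (levels - 1)) / (levels - 1)

def level_reduction_layer(x, levels):
    """One "level reduction layer": halve the number of discretization
    levels (the claim 5/6 scaling factor of one half), floored at 2."""
    new_levels = max(2, levels // 2)
    return discretized_activation(x, new_levels), new_levels

rng = np.random.default_rng(0)
x = discretized_activation(rng.random((4, 4)), 256)  # 8-bit "input tensor data"

# Progressive, monotonic reduction (claim 7): 256 -> 128 -> ... -> 2 levels.
z, levels = x, 256
while levels > 2:
    z, levels = level_reduction_layer(z, levels)

# Toy training step (claim 14): reconstruct, measure a pixel-wise loss
# (claim 16), and adjust one hypothetical parameter by gradient descent.
gain = 1.0
for _ in range(20):
    recon = gain * z                        # trivial "reconstruction layer"
    loss = np.mean((x - recon) ** 2)        # pixel-wise difference loss
    grad = np.mean(2.0 * (recon - x) * z)   # d(loss)/d(gain)
    gain -= 0.5 * grad

# The final output is binarized (claim 10): exactly two discretization levels.
assert levels == 2 and set(np.unique(z)) <= {0.0, 1.0}
```

Note the contrast the examiner draws at claim 16: Hou trains against a perceptual loss computed on pretrained-CNN features, whereas the pixel-wise loss sketched here is the one the application claims directly.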

Prosecution Timeline

Apr 18, 2023: Application Filed
Mar 09, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12424302: ACCELERATED MOLECULAR DYNAMICS SIMULATION METHOD ON A QUANTUM-CLASSICAL HYBRID COMPUTING SYSTEM (granted Sep 23, 2025; 2y 5m to grant)
Patent 12314861: SYSTEMS AND METHODS FOR SEMI-SUPERVISED LEARNING WITH CONTRASTIVE GRAPH REGULARIZATION (granted May 27, 2025; 2y 5m to grant)
Patent 12261947: LEARNING SYSTEM, LEARNING METHOD, AND COMPUTER PROGRAM PRODUCT (granted Mar 25, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 21%
With Interview (+28.4%): 50%
Median Time to Grant: 4y 4m
PTA Risk: Low
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
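The projection figures are internally consistent under one simple reading, sketched below. This formula is an assumption about the dashboard's methodology (it is not documented on the page), but it reproduces the displayed numbers to within rounding:

```python
# Plausible derivation of the headline projections from the examiner's
# career record stated above (assumed methodology, not a documented one).
granted, resolved = 4, 19
career_allow_rate = granted / resolved    # ~0.211, displayed as 21%
interview_lift = 0.284                    # +28.4 percentage points
with_interview = career_allow_rate + interview_lift  # ~0.495, shown as ~50%
```

If this reading is right, the "50% With Interview" figure is simply the career allow rate plus the interview lift in percentage points, rounded up.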
