Prosecution Insights
Last updated: April 19, 2026
Application No. 18/330,990

ADAPTERS FOR QUANTIZATION

Non-Final OA: §101, §102, §103
Filed
Jun 07, 2023
Examiner
ELARABI, TAREK A
Art Unit
3661
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Qualcomm Incorporated
OA Round
1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (154 granted / 222 resolved), above average (+17.4% vs TC avg)
Interview Lift: +36.9% for resolved cases with interview (strong)
Typical Timeline: 2y 8m average prosecution; 29 applications currently pending
Career History: 251 total applications across all art units

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 34.0% (-6.0% vs TC avg)
§102: 32.3% (-7.7% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 222 resolved cases
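The four deltas above are mutually consistent: each "vs TC avg" figure is the examiner's statute-specific rate minus a common Tech Center average. A quick sanity check, assuming the deltas are simple percentage-point differences (the roughly 40% implied average is back-solved from the panel's numbers, not stated by the source):

```python
# Panel values: statute -> (examiner rate %, delta vs Tech Center average %)
rows = {
    "§101": (10.7, -29.3),
    "§103": (34.0, -6.0),
    "§102": (32.3, -7.7),
    "§112": (17.1, -22.9),
}

for statute, (rate, delta) in rows.items():
    implied_tc_avg = rate - delta  # back-solve the Tech Center average
    print(statute, implied_tc_avg)  # ≈ 40.0 in every row
```

That every row back-solves to the same value suggests the "Tech Center average estimate" is a single figure applied across all four statutes.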

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This office action is in response to application number 18/330,990, filed on 06/07/2023, in which claims 1-28 are presented for examination.

Priority

Acknowledgment is made of applicant's claim for priority of provisional patent application No. 63/355,472, filed on 06/24/2022.

Information Disclosure Statement

The information disclosure statement(s) (IDS(s)) submitted on 11/07/2023 has/have been received and considered.

Examiner Notes

Examiner cites particular paragraphs (or columns and lines) in the references as applied to Applicant's claims for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. The prompt development of a clear issue requires that the replies of the Applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP §2163.06. Applicant is reminded that the Examiner is entitled to give the Broadest Reasonable Interpretation (BRI) to the language of the claims. Furthermore, the Examiner is not limited to Applicant's definition which is not specifically set forth in the claims. See MPEP §2111.01.

Claim Interpretation

The following is a quotation of 35 USC §112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 USC §112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, except as otherwise indicated in an Office action.

“means for receiving ...” in claims 21-28
“means for incorporating ...” in claims 21-28
“means for scaling ...” in claim 22
“means for determining ...” in claim 23
“means for jointly determining ...” in claim 24
“means for re-training ...” in claim 25

Because this/these claim limitation(s) is/are being interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 USC §112(f) or pre-AIA 35 USC §112, sixth paragraph.

Claim Rejections – 35 USC §101

35 USC §101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-28 is/are rejected under 35 USC §101 because the claimed invention is directed to an abstract idea without significantly more. See MPEP 2106(III). The determination of whether a claim recites patent-ineligible subject matter is a two-step inquiry.

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter). See MPEP 2106.03.

STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis (see MPEP 2106.04):

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP 2106.04(II)(A)(1)
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP 2106.04(II)(A)(2)
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP 2106.05

Claim 1. A processor-implemented method, comprising: receiving an artificial neural network (ANN) model, the ANN model having a plurality of channels of target activations [pre-solution activity / particular technological environment or field of use without telling how it is accomplished]; and incorporating a quantization module between a first linear layer of the ANN model and a second linear layer of the ANN model to generate an adapted ANN model [mathematical process/step], the quantization module scaling a first set of weights and biases of the first linear layer based on a learnable quantization module parameter [mathematical process/step] and scaling a second set of weights of the second linear layer based on an inverse of the learnable quantization module parameter [mathematical process/step].

101 Analysis - Step 1: Statutory category – Yes

The claim recites a method that includes at least one step.
The claim falls within one of the four statutory categories. See MPEP 2106.03.

Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes

In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity. The Office submits that the foregoing bolded limitation(s) constitute(s) judicial exceptions in terms of “mental processes” because, under its broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper”. See MPEP 2106.04(a)(2)(III). The claim recites the limitation of (1) incorporating a quantization module between a first linear layer of the ANN model and a second linear layer of the ANN model to generate an adapted ANN model, the quantization module (2) scaling a first set of weights and biases of the first linear layer based on a learnable quantization module parameter and (3) scaling a second set of weights of the second linear layer based on an inverse of the learnable quantization module parameter. The claim recites a method for scaling layers of an artificial neural network (ANN) model by incorporating a quantization module, based on a learnable quantization module parameter and/or an inverse of that parameter, using mathematical techniques. In other words, the claimed method simply describes the concept of scaling sets of weights and/or biases using said parameter and/or its inverse through mathematical relationships. The incorporating and scaling merely employ mathematical relationships to manipulate the existing quantization module without limit to any use of the ANN model.
This idea is similar to the basic concept of manipulating information using mathematical relationships (e.g., converting numerical representation in Benson or calculating parameters in Grams), which has been found by the courts to be an abstract idea. Therefore, the claim is directed to an abstract idea. These limitation(s), as drafted, are simple processes that, under their broadest reasonable interpretation, employ mathematical relationships to manipulate the existing quantization module, but for the recitation of “at least one processor” and “memory” (in base claims 9 & 17). That is, other than reciting “processor”/“memory”, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the processor/memory language, the claim encompasses a person looking at collected data and forming a simple judgement. The mere nominal recitation of a controller does not take the claim limitations out of the mental process grouping. Thus, the claim recites a mental process.

Step 2A Prong two evaluation: Practical Application – No

In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated as to whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception.
The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” The Office submits that the foregoing underlined limitation(s) recite additional elements that do not integrate the recited judicial exception into a practical application. The claim recites the additional step and/or element of receiving an artificial neural network (ANN) model, the ANN model having a plurality of channels of target activations. The receiving step is recited at a high level of generality (i.e., as a general means of gathering a data model for use in the incorporating and scaling steps), and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The “artificial neural network (ANN) model … having a plurality of channels of target activations” element is also recited at a high level of generality, and amounts to merely linking use of a judicial exception to a particular technological environment or field of use without telling how it is accomplished. Examiner notes that, in base claims 9 & 17, the “at least one processor” and “memory” merely and generally automate the incorporating and scaling steps, therefore acting as a generic computer to perform the abstract idea and/or “apply” the otherwise mental judgements using a generic or general-purpose processor, i.e. a computer. The processor/memory system is recited at a high level of generality and merely automates the incorporating and scaling steps. Accordingly, even in combination, these additional elements/steps do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B evaluation: Inventive concept – No

In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. See MPEP 2106.05(f). Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the receiving step(s) and the artificial neural network (ANN) model element(s) were considered to be insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine if they are more than what is well-understood, routine, conventional activity in the field. MPEP 2106.05(d)(II) indicates that merely receiving or obtaining data and/or information-related elements over a network, i.e., an ANN model, is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). The “artificial neural network (ANN) model” element that has “a plurality of channels of target activations” is also recited at a high level of generality, and amounts to merely linking use of a judicial exception to a particular technological environment or field of use without telling how it is accomplished.
The background section of Applicant's Specification recites that the artificial neural network is a conventional neural network, and the Specification does not provide any indication that the said artificial neural network (ANN) model is anything other than a conventional neural network model. The processor/memory of base claims 9 & 17 merely and generally automates the incorporating & scaling steps, therefore acting as a generic computer to perform the abstract idea and/or “apply” the otherwise mathematical concept using a generic or general-purpose processor, i.e. a computer. Accordingly, a conclusion that the receiving step(s) and the artificial neural network (ANN) model element(s) is/are well-understood, routine, conventional activity is supported under Berkheimer. Thus, the claim is ineligible. Independent apparatus, non-transitory computer-readable medium & other apparatus claims 9, 17 & 21, respectively, recite similar limitations performed by the method of claim 1. Therefore, claims 9, 17 & 21 are rejected under the same rationales used in the rejection of claim 1 as outlined above. Dependent claims 2-8, 10-16, 18-20 & 22-28 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application, and amount to mere input and/or output data manipulation. Therefore, dependent claims 2-8, 10-16, 18-20 & 22-28 are not patent eligible under the same rationale as provided in the rejection of claim 1.
Thus, claims 1-28 are ineligible under 35 USC §101.

Claim Rejections - 35 USC §102

In the event the determination of the status of the application as subject to AIA 35 USC §102 and §103 (or as subject to pre-AIA 35 USC §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 USC §102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-6, 8-14, 16-26 & 28 is/are rejected under 35 USC §102(a)(1) as being clearly anticipated by PG Pub. No. US-2020/0302299-A1 by Nagel et al.
(hereinafter “Nagel”), which is found in the IDS submitted on 11/07/2023.

As per claim 1, Nagel discloses a processor-implemented method (Nagel, in at least ¶¶3, 15, 33 & 54, discloses a method for performing quantization in neural networks), comprising: receiving an artificial neural network (ANN) model, the ANN model having a plurality of channels of target activations (Nagel, in at least ¶¶3, 15, 33 & 54, discloses a method for performing quantization in neural networks, and to apply scaling in this manner to many channels/layers in the network); and

[Nagel's specified equation(s), ¶54; image reproduced in the original action. S is a diagonal matrix in which the element Sii is a nonnegative scaling factor for channel i.]

incorporating a quantization module between a first linear layer of the ANN model and a second linear layer of the ANN model to generate an adapted ANN model, the quantization module scaling a first set of weights and biases of the first linear layer based on a learnable quantization module parameter and scaling a second set of weights of the second linear layer based on an inverse of the learnable quantization module parameter (Nagel, in at least ¶¶3, 15, 33 & 54, discloses a method for performing quantization in neural networks, including (1) cross-layer rescaling to reduce quantization errors due to layer weights that vary widely or include outliers, (2) equalizing the ranges of weight tensors or channel weights within a layer of a neural network by scaling each of the output channel weights by a corresponding scaling factor, and scaling the next layer's corresponding input channel weights by the inverse of the corresponding scaling factor, (3) techniques used to determine the corresponding scaling factor, including differential learning using Straight Through Estimator (STE) methods and a local or global loss, and/or by using a metric for the quantization error and a black box
optimizer that minimizes the error metric with respect to the scaling parameters. Nagel further discloses determining that consecutive layers in the neural network have a linear relationship, and shifting scaling factors from one layer to another to improve quantization performance, wherein the linear relationship between layers may be described generally, including bias terms. See the specified equation(s), which has/have been reproduced above for convenience, providing a scaling for a first such layer followed by a rescaling using an inverse of the scaling factor for the subsequent layer).

As per claim 2, Nagel discloses the processor-implemented method of claim 1; accordingly, the rejection of claim 1 above is incorporated. Nagel further discloses comprising scaling a target activation in each channel of the plurality of channels of target activations based on a learnable quantization parameter (Nagel, in at least ¶¶26-33, discloses activation re-quantization loss, wherein techniques used to determine the corresponding scaling factor include differential learning using Straight Through Estimator (STE) methods and a local or global loss, and/or using a metric for the quantization error and a black box optimizer that minimizes the error metric with respect to the scaling parameters).

As per claim 3, Nagel discloses the processor-implemented method of claim 2; accordingly, the rejection of claim 2 above is incorporated. Nagel further discloses comprising determining the learnable quantization parameter based on a task loss of the adapted ANN model (Nagel, in at least Fig.
1B, and ¶¶26-33, discloses types of loss in the fixed-point quantized pipeline, e.g., input quantization loss, weight quantization loss, runtime saturation loss, activation re-quantization loss, and possible clipping loss for certain non-linear operations, wherein techniques used to determine the corresponding scaling factor include differential learning using Straight Through Estimator (STE) methods and a local or global loss, and/or using a metric for the quantization error and a black box optimizer that minimizes the error metric with respect to the scaling parameters).

As per claim 4, Nagel discloses the processor-implemented method of claim 2; accordingly, the rejection of claim 2 above is incorporated. Nagel further discloses in which the learnable quantization parameter and the learnable quantization module parameter are jointly determined (Nagel, in at least ¶¶26-33, discloses performing post-training quantization of weights and activations of a trained neural network model without asymmetric min-max quantization. A computing device may be configured to map large bit-width (e.g., FP32, etc.) weights and activations to small bit-width (e.g., INT8) representations).

As per claim 5, Nagel discloses the processor-implemented method of claim 1; accordingly, the rejection of claim 1 above is incorporated. Nagel further discloses comprising re-training the adapted ANN model using a quantization-aware training process (Nagel, in at least ¶¶26-33, discloses performing post-training quantization of weights and activations of a trained neural network model).

As per claim 6, Nagel discloses the processor-implemented method of claim 1; accordingly, the rejection of claim 1 above is incorporated. Nagel further discloses in which the ANN model comprises a transformer neural network model (Nagel, in at least Fig. 5, and ¶¶14, 24 & 69, discloses that trained neural networks are transformed to lower precision through a process known as quantization of the weight tensors within the neural network. Nagel further discloses using cross-layer rescaling and quantization to transform the neural network into a form suitable for execution on a small bit-width computing device).

As per claim 8, Nagel discloses the processor-implemented method of claim 1; accordingly, the rejection of claim 1 above is incorporated. Nagel further discloses comprising operating the adapted ANN model to generate an inference based on the learnable quantization module parameter (Nagel, in at least ¶¶17, 23, 25, 37, 40, 50 & 72, discloses a process that controls a function of a computing device or generates a neural network inference, wherein neural network quantization techniques are used to reduce size, memory access, and computation requirements of neural network inference by using small bit-width values (e.g., INT8 values) in the weights and activations of a neural network model. Nagel further discloses that the output layer 204 includes a node 242 that operates on the inputs augmented with the weight factors to produce an estimated value 244 as output or neural network inference).

Claim Rejections - 35 USC §103

In the event the determination of the status of the application as subject to AIA 35 USC §102 and §103 (or as subject to pre-AIA 35 USC §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 USC §103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 USC §103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 7, 15 & 27 is/are rejected under 35 USC §103 as being unpatentable over Nagel (US-2020/0302299-A1) in view of IEEE Publication with DOI: 10.1109/CAC53003.2021.9728246 to Li et al. (hereinafter “Li”), which are both found in the IDS submitted on 11/07/2023.

As per claim 7, Nagel discloses the processor-implemented method of claim 6; accordingly, the rejection of claim 6 above is incorporated. While Nagel clearly discloses transforming the neural network, it is silent on the claim 7 limitations.
Li teaches, in at least page 7282, that it was old and well known at the time of filing in the art of neural network systems in which the transformer neural network model comprises one of a bi-directional encoder representations from transformers (BERT), a robustly optimized BERT approach (RoBERTa)-based transformer, an XLNet-based transformer, a Transformer-XL-based transformer, or a generative pre-trained transformer (GPT) (Li, in at least page 7282, discloses the quantization of BERT, being an effective Transformer-based model that can handle various downstream NLP tasks after being pretrained on a large dataset and finetuned on specific tasks). It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Nagel in view of Li, with a reasonable expectation of success, as both inventions are directed to the same field of endeavor, neural network systems, and the combination would handle various downstream tasks with no severe performance drop (see at least Li's page 7282).

As per claims 9-16, the claims are directed towards an apparatus that recites similar limitations performed by the methods of claims 1-8. The cited portions of Nagel & Li used in the rejections of claims 1-8 disclose/teach the same apparatus limitations of claims 9-16. Therefore, claims 9-14 & 16 are rejected under the same rationales used in the rejections of claims 1-8 as outlined above.

As per claims 17-20, the claims are directed towards non-transitory computer-readable medium(s) that recite similar limitations performed by the methods of claims 1-3 & 8. The cited portions of Nagel used in the rejections of claims 1-3 & 8 disclose the same limitations of claims 17-20. Therefore, claims 17-20 are rejected under the same rationales used in the rejections of claims 1-3 & 8 as outlined above.
As per claims 21-28, the claims are directed towards an apparatus that recites similar limitations performed by the methods of claims 1-8. The cited portions of Nagel and/or Li used in the rejections of claims 1-8 disclose/teach the same apparatus limitations of claims 21-28. Therefore, claims 21-28 are rejected under the same rationales used in the rejections of claims 1-8 as outlined above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tarek Elarabi, whose telephone number is (313)446-4911. The examiner can normally be reached Monday through Thursday, 6:00 AM - 4:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Peter Nolan, can be reached at (571)270-7016. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or (571)272-1000. /Tarek Elarabi/Primary Examiner, Art Unit 3661
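The cross-layer rescaling at the heart of claim 1 and of Nagel's disclosure scales one linear layer's weights and biases by a per-channel factor s and the next layer's weights by 1/s, leaving the network's function unchanged for positive-homogeneous activations such as ReLU. A minimal pure-Python sketch under that reading (not the applicant's or Nagel's actual implementation; all names and values are hypothetical):

```python
def cross_layer_rescale(W1, b1, W2, s):
    # Scale layer 1's output channel i (weights and bias) by s[i], and
    # layer 2's matching input channel i by 1/s[i] (claim 1's "learnable
    # quantization module parameter" and its inverse).
    W1s = [[s[i] * w for w in row] for i, row in enumerate(W1)]
    b1s = [s[i] * b for i, b in enumerate(b1)]
    W2s = [[w / s[j] for j, w in enumerate(row)] for row in W2]
    return W1s, b1s, W2s

def forward(W1, b1, W2, x):
    # Two linear layers with a ReLU in between.
    relu = lambda z: max(z, 0.0)
    h = [relu(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

# Hypothetical 3-in / 4-hidden / 2-out network.
W1 = [[0.5, -1.0, 2.0], [1.5, 0.3, -0.7], [-0.2, 0.9, 0.4], [2.0, -0.5, 1.1]]
b1 = [0.1, -0.4, 0.25, 0.0]
W2 = [[1.0, -2.0, 0.5, 0.3], [-0.6, 0.8, 1.2, -1.5]]
s = [2.0, 0.5, 1.7, 0.9]  # positive per-channel scaling factors

W1s, b1s, W2s = cross_layer_rescale(W1, b1, W2, s)
x = [0.7, -1.2, 0.4]
before, after = forward(W1, b1, W2, x), forward(W1s, b1s, W2s, x)
assert all(abs(a - b) < 1e-9 for a, b in zip(before, after))
```

The assert passes because relu(s*z) = s*relu(z) for s > 0, so the factor injected into layer 1 is exactly cancelled by the 1/s folded into layer 2; the rescaling only redistributes the dynamic range between layers, which is what makes the weights friendlier to quantize.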

Prosecution Timeline

Jun 07, 2023
Application Filed
Jan 31, 2026
Non-Final Rejection — §101, §102, §103
Apr 16, 2026
Applicant Interview (Telephonic)
Apr 16, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603008
SEMI-TRUCK DETECTION AND AVOIDANCE
2y 5m to grant Granted Apr 14, 2026
Patent 12592149
METHOD, APPARATUS, AND SYSTEM FOR DETERMINING A BICYCLE LANE DEVIATION FOR AUTONOMOUS VEHICLE OPERATION
2y 5m to grant Granted Mar 31, 2026
Patent 12589974
A COMPUTER-IMPLEMENTED METHOD FOR TRAINING A MACHINE LEARNING MODEL TO DETECT INSTALLATION ERRORS IN AN ELEVATOR, IN PARTICULAR AN ELEVATOR DOOR, A COMPUTER-IMPLEMENTED METHOD FOR CLASSIFYING INSTALLATION ERRORS AND A SYSTEM THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12583450
VEHICLE CONTROL DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12566452
METHOD FOR TRAINING MIGRATION SCENE-BASED TRAJECTORY PREDICTION MODEL AND UNMANNED DRIVING DEVICE
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+36.9%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 222 resolved cases by this examiner. Grant probability derived from career allow rate.
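The headline figure traces directly to the Examiner Intelligence panel: 154 grants out of 222 resolved cases. A one-line check (how the +36.9% interview lift combines with the base rate to yield the 99% figure is not stated by the source, so only the base rate is derived here):

```python
granted, resolved = 154, 222            # from the Examiner Intelligence panel
career_allow_rate = granted / resolved  # displayed as 69%
print(f"{career_allow_rate:.1%}")       # 69.4%
```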
