Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 12/29/2025 have been fully considered but they are not persuasive.
Applicant’s Argument: On pages 17-19 of Applicant’s response to the rejections under 35 U.S.C. 101, applicant states that the claimed invention is a technical improvement in the field of training recurrent neural networks. The technical improvement consists of dynamically acquiring a data variation range of data and dynamically determining a first target iteration interval. Applicant also states that the claimed invention involves specific steps that provide significantly more than the abstract idea and is a substantial improvement to the functionality of computer technology.
Examiner’s Response: Applicant’s argument is not persuasive. The claimed technological improvement of dynamically acquiring a data variation range of data is directed to data gathering, which is understood to be insignificant extra-solution activity - see MPEP 2106.05(g). Dynamically acquiring data only defines the frequency at which data is obtained and does not provide a technological improvement. The claimed technological improvement of dynamically determining a first target iteration interval based on a control function is directed to an abstract idea in the form of a mathematical concept. The claim limitation “wherein the first target iteration interval is shortened when the data variation range increases, and the first target iteration interval is lengthened when the data variation range decreases” defines an inverse relationship between two variables, which is a mathematical concept.
During examination, the examiner should analyze the "improvements" consideration by evaluating the specification and the claims to ensure that a technical explanation of the asserted improvement is present in the specification, and that the claim reflects the asserted improvement (see MPEP §2106.05(a)). The MPEP (§2106.05(a)(II)) also warns, “it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology.” Here, the alleged improvement in the form of “dynamically determining a first target iteration interval based on said variation range” is an improvement to the abstract idea of a mathematical concept.
An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome (see MPEP 2106.05(a)). The amended claims do not provide sufficient detail to describe any technological improvement. If the specification sets forth an improvement only in a conclusory manner (see MPEP 2106.04(d)(1): a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine that the claim improves technology.
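For illustration only, the inverse relationship characterized above as a mathematical concept can be expressed as a short mapping between the two variables. The following Python sketch is hypothetical: the scaling constant, clamping bounds, and function name are not drawn from the claims, the specification, or the cited references.

```python
# Hypothetical illustration of the recited inverse relationship between
# the data variation range and the first target iteration interval.
# BASE and the clamping bounds are assumed values, not from the record.

BASE = 100.0          # hypothetical scaling constant
MIN_INTERVAL = 1      # the interval comprises at least one iteration
MAX_INTERVAL = 1000   # hypothetical upper bound

def target_iteration_interval(data_variation_range: float) -> int:
    """Shorten the interval when the variation range increases;
    lengthen it when the variation range decreases."""
    raw = BASE / max(data_variation_range, 1e-9)
    return max(MIN_INTERVAL, min(int(raw), MAX_INTERVAL))
```

As the sketch makes plain, the limitation defines nothing more than an inverse mapping between two quantities, which is why it is characterized as a mathematical concept.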
Applicant’s Argument: On pages 9-10 of Applicant’s response to the rejections under 35 U.S.C. 103, applicant states that Lian completely lacks the core concept of “iteration interval” and that the interval length is completely irrelevant to data variation. Additionally, Applicant argues that Tsai does not teach the claims because Tsai teaches re-quantization based on physical characteristics of hardware rather than real-time changes in data. Applicant states that the inventive concept is the data variation range of data to be quantized, which relates to real-time data distribution and is irrelevant to hardware.
Examiner’s Response: Applicant’s argument is not persuasive. Lian (par. 81-83) describes different methods of performing the quantization process, such as executing the process periodically or triggering it by a specific condition. Lian teaches an “iteration interval” because the reference describes different embodiments of when to perform the quantization process. In Lian, the interval length and the data variation are handled separately and the two sets of information do not have any direct correlation; Lian in combination with Tsai is used to introduce this correlation. Applicant agrees that Tsai teaches that the intensity of an external factor is negatively correlated with the length of the adjustment interval. Tsai (par. 46) teaches increasing the period for re-quantization when the read/write noise is decreasing. Tsai therefore teaches the correlation between the interval length and the data variation as recited in the amended claims.
Applicant further argues that the type of data taught in Tsai is not what is recited in the claimed invention. Claim 1 does not define what constitutes a data variation range or what constitutes data. Tsai (par. 40) teaches read and write noise of weight data when a neural network is executed. Data that is generated from hardware is still data. Under the broadest reasonable interpretation, Examiner interprets the noise to be the data variation range and the weights to be the data.
Applicant’s Argument: On pages 14-16 of Applicant’s response to the rejections under 35 U.S.C. 103, applicant states that none of Yuan, Gou, Zhu, and Wang teaches the claimed invention of correlating the data variation range with the iteration interval.
Examiner’s Response: Applicant’s argument is not persuasive. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The core of the claimed invention, determining when to adjust quantization parameters according to the data variation range, is taught by Lian in combination with Tsai as described in the remarks above.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copies have been filed in parent Application No. CN201910798228.2, filed on 08/27/2019, and Application No. CN201910888141.4.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18, 20, and 30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites “A quantization parameter adjustment method of a recurrent neural network during training or fine-tuning of the recurrent neural network, ... the method comprising” and is thus a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
“dynamically determining a first target iteration interval according to the data variation range of the data to be quantized, wherein the first target iteration interval is shortened when the data variation range increases, and the first target iteration interval is lengthened when the data variation range decreases, so as to adjust quantization parameters in recurrent neural network computation according to the first target iteration interval, wherein the first target iteration interval comprises at least one iteration, and the quantization parameters of the recurrent neural network are configured to implement quantization of the data to be quantized in the recurrent neural network computation” (a mathematical calculation and relationship, See pg. 31, lines 25-34 in Specification)
Claim 1 therefore recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
“... implemented by a quantization parameter adjustment apparatus, the apparatus comprising: a memory configured to store a computer program; and a processor configured to execute the computer program to implement the method” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
“dynamically obtaining a data variation range of data to be quantized” (This step is directed to data gathering, which is understood to be insignificant extra-solution activity - see MPEP 2106.05(g))
The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they are insignificant extra-solution activity in combination with generic computer functions implemented by generic computer elements at a high level of generality to perform the abstract idea identified above. Therefore, Claim 1 is directed to the abstract idea.
Subject Matter Eligibility Analysis Step 2B:
“... implemented by a quantization parameter adjustment apparatus, the apparatus comprising: a memory configured to store a computer program; and a processor configured to execute the computer program to implement the method” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
“dynamically obtaining a data variation range of data to be quantized” (This step is directed to transmitting or receiving information, which is understood to be insignificant extra-solution activity and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts - see MPEP 2106.05(d))
The additional elements identified above, alone or in combination, do not recite significantly more than the abstract idea itself, as they are insignificant extra-solution activity in combination with generic computer functions implemented by generic computer elements at a high level of generality to perform the abstract idea identified above. Therefore, Claim 1 is subject-matter ineligible.
Regarding Claim 2:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“adjusting the quantization parameters according to a preset iteration interval when a current verify iteration is less than or equal to a first preset iteration” (a mental process that can be performed in the human mind, i.e., judgment)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 3:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining the first target iteration interval according to the data variation range of the data to be quantized when a current verify iteration is greater than a first preset iteration” (a mathematical calculation, See pg. 31, lines 25-29 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 4:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining a second target iteration interval corresponding to a current verify iteration according to the first target iteration interval and a total count of iterations in each cycle when the current verify iteration is greater than or equal to a second preset iteration, and the current verify iteration requires adjustment in quantization parameters” (a mathematical calculation, See pg. 31, lines 25-29 in Specification)
“determining an update iteration corresponding to the current verify iteration according to the second target iteration interval to adjust the quantization parameters in the update iteration, which is an iteration after the current verify iteration” (a mental process that can be performed in the human mind, i.e., judgment)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“wherein the second preset iteration is greater than a first preset iteration, and a quantization adjustment process of the recurrent neural network includes a plurality of cycles, wherein iterations are not consistent in the plurality of cycles in terms of total count” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
Regarding Claim 5:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining an update cycle of the current verify iteration according to an iterative ordering number of the current verify iteration in a current cycle and the total count of iterations in a cycle after the current cycle, wherein the total count of iterations in the update cycle is greater than or equal to an iterative ordering number of the current verify iteration” (a mental process that can be performed in the human mind, i.e., judgment)
“determining the second target iteration interval according to the first target iteration interval, the iterative ordering number and the total count of iterations in the cycle between the current cycle and the update cycle” (a mathematical calculation, See pg. 31, lines 25-29 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 6:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining that the current verify iteration is greater than or equal to the second preset iteration if a convergence degree of the recurrent neural network satisfies a preset condition” (a mental process that can be performed in the human mind, i.e., judgment)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 7:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining the point location(s) corresponding to an iteration(s) in a reference iteration interval according to a target data bit width corresponding to the current verify iteration and the data to be quantized in the current verify iteration to adjust the point location(s) in the recurrent neural network computation” (a mathematical calculation, See pg. 7, lines 17-20 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“wherein the quantization parameters include a point location(s), and the point location(s) is a location of a decimal point number in quantized data corresponding to the data to be quantized” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
“wherein the point location(s) corresponding to iteration(s) in the reference iteration interval are consistent, and the reference iteration interval includes the second target iteration interval or a preset iteration interval” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
Regarding Claim 8:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining a data bit width corresponding to a reference iteration interval according to a target data bit width corresponding to the current verify iteration, wherein data bit widths corresponding to iteration(s) in the reference iteration interval are consistent, and the reference iteration interval includes the second target iteration interval or a preset iteration interval” (a mathematical calculation, See pg. 7, lines 7-20 in Specification)
“adjusting the point location(s) corresponding to an iteration(s) in the reference iteration interval according to an obtained point location iteration interval and the data bit width corresponding to the reference iteration interval to adjust the point location(s) in the recurrent neural network computation” (a mental process that can be performed in the human mind with the aid of pen and paper, i.e., judgment)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“wherein the point location iteration interval includes at least one iteration, and point locations of iterations in the point location iteration interval are consistent” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
Regarding Claim 9:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“wherein the point location iteration interval is less than or equal to the reference iteration interval” (a mathematical relationship)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 10:
Subject Matter Eligibility Analysis Step 2A Prong 1: None
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“wherein the quantization parameters also include a scale factor, and the scale factor is updated synchronously with the point location(s)” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
Regarding Claim 11:
Subject Matter Eligibility Analysis Step 2A Prong 1: None
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“wherein the quantization parameters also include an offset, and the offset is updated synchronously with the point location(s)” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
Regarding Claim 12:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining a quantization error according to the data to be quantized of the current verify iteration and the quantized data of the current verify iteration, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration” (a mathematical calculation, See pg. 25, lines 19-33 in Specification)
“determining the target data bit width corresponding to the current verify iteration according to the quantization error” (a mathematical calculation, See pg. 26, lines 17-29 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 13:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“increasing a data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is greater than or equal to a first preset threshold” (a mathematical calculation)
“decreasing the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is less than or equal to a second preset threshold” (a mathematical calculation)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
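For illustration only, the thresholded adjustment recited in claim 13 can be sketched as follows; the threshold values, the one-bit stride, and the function name are hypothetical and not taken from the record.

```python
# Hypothetical sketch of the claim 13 bit-width adjustment: widen the data
# bit width when the quantization error is large, narrow it when the error
# is small. The thresholds and the one-bit stride are assumed values.

FIRST_PRESET_THRESHOLD = 0.5    # hypothetical first preset threshold
SECOND_PRESET_THRESHOLD = 0.05  # hypothetical second preset threshold

def adjust_bit_width(bit_width: int, quantization_error: float) -> int:
    if quantization_error >= FIRST_PRESET_THRESHOLD:
        return bit_width + 1   # error too large: increase the bit width
    if quantization_error <= SECOND_PRESET_THRESHOLD:
        return bit_width - 1   # error small enough: decrease the bit width
    return bit_width           # otherwise leave the bit width unchanged
```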
Regarding Claim 14:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining a first intermediate data bit width according to a first preset bit width stride if the quantization error is greater than or equal to the first preset threshold” (a mathematical calculation, See pg. 27, lines 31-35 in Specification)
“returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is less than the first preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the first intermediate data” (a mathematical calculation, See pg. 25, lines 19-33 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 15:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining a second intermediate data bit width according to a second preset bit width stride if the quantization error is less than or equal to the second preset threshold” (a mathematical calculation, See pg. 29, lines 2-5 in Specification)
“returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is greater than the second preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the second intermediate data” (a mathematical calculation, See pg. 25, lines 19-33 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 16:
Subject Matter Eligibility Analysis Step 2A Prong 1: None
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“obtaining a variation range of a point location(s), wherein the variation range of the point location(s) is used to characterize the data variation range of the data to be quantized, and the variation range of the point location(s) is positively correlated with the data variation range of the data to be quantized” (This step is directed to data gathering, which is understood to be insignificant extra-solution activity (2106.05(g) in Step 2A Prong 2) and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts (2106.05(d) in Step 2B)).
Regarding Claim 17:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining a first average value according to the point location corresponding to a previous verify iteration before a current verify iteration and point location(s) of historical verify iteration(s) before a previous verify iteration, wherein the previous verify iteration is the verify iteration corresponding to the previous iteration interval before a reference iteration interval” (a mathematical calculation, See pg. 21, lines 30-33 in Specification)
“determining a second average value according to the point location corresponding to the current verify iteration and the point location(s) of the historical verify iteration(s) before the current verify iteration, wherein the point location corresponding to the current verify iteration is determined according to a target data bit width and the data to be quantized corresponding to the current verify iteration” (a mathematical calculation, See pg. 22, lines 8-10 in Specification)
“determining a first error according to the first average value and the second average value, wherein the first error is used to characterize the variation range of the point location(s)” (a mathematical calculation, See pg. 24, lines 5-8 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
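For illustration only, the two-average computation recited in claim 17 can be sketched with simple arithmetic means; the claim does not fix the averaging scheme, so the simple means and names below are assumptions rather than the claimed moving-average implementation.

```python
# Hypothetical sketch of the claim 17 computation: a first average over
# historical point locations, a second average that also incorporates the
# current verify iteration's point location, and a first error that
# characterizes the variation range of the point location(s).
# Simple arithmetic means are an assumed stand-in for moving averages.

def first_error(historical_point_locations: list, current_point: float) -> float:
    m1 = sum(historical_point_locations) / len(historical_point_locations)
    m2 = (sum(historical_point_locations) + current_point) / (
        len(historical_point_locations) + 1)
    return abs(m2 - m1)  # the first error characterizes the variation
```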
Regarding Claim 18:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“determining the second average value according to the point location(s) of the current verify iteration and the preset number of intermediate moving average values” (a mathematical calculation, See pg. 21, lines 30-33 in Specification)
“determining the second average value according to the point location corresponding to the current verify iteration and the first average value” (a mathematical calculation, See pg. 23, lines 33-35 in Specification)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“obtaining a preset number of intermediate moving average values, wherein each intermediate moving average value is determined according to a preset number of verify iterations before the current verify iteration” (This step is directed to data gathering, which is understood to be insignificant extra-solution activity (2106.05(g) in Step 2A Prong 2) and well-understood, routine, and conventional activity of transmitting and receiving data as identified by the courts (2106.05(d) in Step 2B)).
“wherein determining the second average value according to the point location corresponding to the current verify iteration and the point location(s) of the historical verify iteration(s) before the current verify iteration comprises” (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself - see MPEP 2106.05(h)).
Regarding Claim 20:
Subject Matter Eligibility Analysis Step 2A Prong 1:
“updating the second average value according to an obtained data bit width adjustment value of the current verify iteration, wherein the data bit width adjustment value of the current verify iteration is determined from the target data bit width and an initial data bit width of the current verify iteration” (a mathematical calculation)
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B: None
Regarding Claim 30:
Claim 30 recites “the computer readable storage medium stores a computer program”. The specification does not clearly define the claimed storage medium as limited to statutory, non-transitory embodiments, nor does it exclude non-statutory elements. Therefore, under the broadest reasonable interpretation, the claim element “computer readable storage medium” is not limited to statutory elements and can be considered non-statutory. Claim 30 is rejected because it does not fall within at least one of the four categories of patent-eligible subject matter under Step 1 of the 101 subject matter eligibility analysis.
The claim also recites a program product that performs the method as described in claim 1. Therefore, claim 30 is rejected for the same reasons as set forth for claim 1. The limitations for the additional elements of claim 30 are analyzed below.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Please see the Step 2A Prong 1 analysis of claim 1.
Subject Matter Eligibility Analysis Step 2A Prong 2 & 2B:
“A computer readable storage medium, wherein the computer readable storage medium stores a computer program, and when the computer program is executed, the steps of the method of claim 1 are implemented” (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f))
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 16, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Lian (US20210192349A1) in view of Tsai (US20200334525A1).
Regarding claim 1, Lian teaches:
“A quantization parameter adjustment method of a during training or fine-tuning of the recurrent neural network, implemented by a quantization parameter adjustment apparatus, the apparatus comprising: a memory configured to store a computer program; and a processor configured to execute the computer program to implement the method, the method comprising” (abstract, [0064, 0101, 0111], Quantization parameters are calculated based on user calibration data for a neural network model. The neural network model is obtained through training and may consist of initial quantization parameters. The model is fine-tuned by updating the quantization parameter using the method proposed in the reference. The apparatus for quantizing a neural network model is shown in Figure 6 and consists of a calculation module configured to input the user calibration data into the neural network model to calculate a quantization parameter of each of a plurality of layers of the neural network model. The apparatus may store instructions in a memory, and a processor can execute the instructions.)
“dynamically obtaining a data variation range of data to be quantized” ([0015-0016, 0046-0047, 0059, 0081-0082], The quantization parameter calculation process consists of determining the maximum and minimum values in the input data (data variation range of data) of the layer. The input data consists of various forms of data and can be voice data recorded by a recording device or voice calls received by a mobile phone. It is implied that the audio data of voice calls may consist of different lengths and be received dynamically at various times during the day. In some embodiments, the quantization parameter calculation stage can be continuously performed as the user uses the device and based on the data generated by the user's device. Therefore, the quantization parameter calculation can be dynamically performed as input data is received by the device.)
“dynamically determining a first target iteration interval , to adjust quantization parameters in ” ([0043, 0072-0073, 0081-0084], A quantization parameter update time may be based on a specific condition to determine when to calculate the quantization parameter (adjust quantization parameters) based on the maximum and minimum values in the input data of the layer (data variation range of the data). A quantization process (first target iteration interval) may be executed immediately after the quantization parameter calculation stage. The quantization parameter calculation process may be executed dynamically as the user device is collecting the input data to calibrate or update the quantization parameter. When the quantization parameter calculation is executed, it is executed for one or more iterations. The input quantization parameter of each layer is used to quantize the input data of the layer.)
Lian does not explicitly disclose an implementation of “dynamically determining a first target iteration interval according to the data variation range of the data”, “wherein the first target iteration interval is shortened when the data variation range increases, and the first target iteration interval is lengthened when the data variation range decreases”, and where the neural network model is a recurrent neural network. However, Tsai discloses in the same field of endeavor:
“A quantization parameter adjustment method of a recurrent neural network ...” ([abstract, 0003, 0019], The neural network may be a recurrent neural network. A correction factor is applied to the neural network parameters to compensate for conductance drift.)
“dynamically determining a first target iteration interval according to the data variation range of the data to be quantized, wherein the first target iteration interval is shortened when the data variation range increases, and the first target iteration interval is lengthened when the data variation range decreases, so as to adjust quantization parameters in recurrent neural network computation according to the first target iteration interval, wherein the first target iteration interval comprises at least one iteration, and the quantization parameters of the recurrent neural network are configured to implement quantization of the data to be quantized in the recurrent neural network computation” ([0003, 0019, 0040-0042, 0045-0046], A recurrent neural network is a type of neural network and the described process can be implemented into a recurrent neural network architecture. Quantization can be performed on the weights or activation function during the training process. Re-quantization can be performed on the weights to account for both read and write noise (data variation range). Read noise refers to inconsistency in the value read out from a particular weight (data variation range of the data to be quantized). Various types of quantization can be used such as four-bit resolution or two-bit resolution (quantization parameter). The period for re-quantization is determined based on the magnitude of the read/write noise. Lower read/write noise means that a longer period for re-quantization can be used (the first target iteration interval is lengthened when the data variation range decreases). Thus, it is implied that a higher read/write noise means a shorter period for re-quantization (the first target iteration interval is shortened when the data variation range increases).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of “dynamically determining a first target iteration interval according to the data variation range of the data”, “wherein the first target iteration interval is shortened when the data variation range increases, and the first target iteration interval is lengthened when the data variation range decreases”, and where the neural network model is a recurrent neural network from Tsai into the teaching of Lian. Doing so can improve the system by enabling correction of the quantization process in the presence of drift (Tsai, abstract). The implementation in which the neural network model is a recurrent neural network, as taught by Tsai, applies similarly to dependent claims 4, 6, 7, and 8.
Regarding claim 2, Lian teaches:
“adjusting the quantization parameters according to a preset iteration interval when a current verify iteration is less than or equal to a first preset iteration” ([0081-0082], The quantization parameter calculation process may be executed periodically, such as monthly (preset iteration interval) to calibrate or update the quantization parameter. Alternatively, the quantization parameter calculation stage can last for a long time, that is, whenever the user uses the device (current verify iteration). It is implied that there is an embodiment in which the quantization parameter calculation occurs due to user usage before the preset update interval (current verify iteration is less than or equal to a first preset iteration).)
Regarding claim 16, Lian teaches:
“obtaining a variation range of a point location(s), wherein the variation range of the point location(s) is used to characterize the data variation range of the data to be quantized, and the variation range of the point location(s) is positively correlated with the data variation range of the data to be quantized” ([0066-0068], The quantization scale value may be a 32-bit, 64-bit, or 16-bit floating point number, which represents a range of different precisions (variation range of the point locations). Formula 1 discloses a linear relationship (positively correlated) between the quantization scale value and the maximum value in the input data (data variation range of the data).)
Regarding claim 30:
Claim 30 recites a computer readable storage medium that performs the same process as described in claim 1. Therefore, claim 30 is rejected for the same reasons as stated for claim 1. The additional elements of claim 30 are addressed below by Lian:
“A computer readable storage medium, wherein the computer readable storage medium stores a computer program, and when the computer program is executed, the steps of the method of any one of claim 1 are implemented” ([0007, 0029], The computer-readable storage medium stores instructions that can perform the process of calculating the quantization parameters when executed.)
Claims 3-4, 6-9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Lian (US20210192349A1) in view of Tsai (US20200334525A1), and Yuan (US20210374540A1).
Regarding claim 3, Lian in view of Tsai teaches “determining the first target iteration interval according to the data variation range of the data to be quantized” as described above in claim 1.
Lian in view of Tsai does not explicitly disclose an implementation where “quantized when a current verify iteration is greater than a first preset iteration”. However, Yuan discloses in the same field of endeavor:
“determining the first target iteration interval” ([0083-0085, 0088-0089, 0093], The embodiment describes determining jump ratios for predetermined time intervals (preset iterations) within the predetermined time range. The original optimization algorithm of the quantization model is shown in par. 93 and it is a function between the learning parameter (first target iteration interval), an embedding layer parameter value of a current time interval, and an embedding layer parameter value of a previous time interval (first preset iteration). The learning parameter may be determined from the optimization equation after the current and previous iterations have been performed (current verify iteration is greater than the first preset iteration) to obtain a value for the embedding layer parameter.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “quantized when a current verify iteration is greater than a first preset iteration” from Yuan into the teaching of Lian in view of Tsai. Doing so can improve the trained quantization model using an optimization method based on a predetermined time range and time scaling parameters (Yuan, abstract).
Regarding claim 4, Lian in view of Tsai does not explicitly disclose an implementation where “determining a second target iteration interval corresponding to a current verify iteration according to the first target iteration interval and a total count of iterations in each cycle when the current verify iteration is greater than or equal to a second preset iteration, and the current verify iteration requires adjustment in quantization parameters; and determining an update iteration corresponding to the current verify iteration according to the second target iteration interval to adjust the quantization parameters in the update iteration, which is an iteration after the current verify iteration, wherein the second preset iteration is greater than a first preset iteration, and a quantization adjustment process of the recurrent neural network includes a plurality of cycles, wherein iterations are not consistent in the plurality of cycles in terms of total count”. However, Yuan discloses in the same field of endeavor:
“determining a second target iteration interval corresponding to a current verify iteration according to the first target iteration interval and a total count of iterations in each cycle when the current verify iteration is greater than or equal to a second preset iteration, and the current verify iteration requires adjustment in quantization parameters” ([0083-0085, 0093, 0099-0101], The optimized learning rate parameter (second target iteration interval) is determined by the original learning rate parameter and the time scaling parameter, which includes ti, a quantity of update times (total count of iterations) of the embedding layer parameters. Determining the jump ratios (adjustment in quantization parameters) occurs for a predetermined time range and predetermined time intervals. If the time range is 24 hours, the first preset iteration is the time interval from 0:00 to 3:00 and the second preset iteration is from 3:00 to 6:00. The time scaling parameter is obtained after all the jump ratios in the predetermined time range are determined (current verify iteration is greater than or equal to a second preset iteration).)
“determining an update iteration corresponding to the current verify iteration according to the second target iteration interval to adjust the quantization parameters in the update iteration, which is an iteration after the current verify iteration” ([0102-0103], The quantization model is retrained using the optimized target optimization algorithm, which includes the optimized learning rate parameter. Re-training consists of determining the jump ratios for a new predetermined time range for the quantization model.)
“wherein the second preset iteration is greater than a first preset iteration, and a quantization adjustment process of the ” ([0083-0085, 0093, 0102-0103], The plurality of time intervals is used to determine the jump ratios. The quantization model is trained and goes through re-training (plurality of cycles). The system adjusts the time scaling parameter and may be different for each cycle. For example, Figure 2 shows the jump ratios over a cycle of 24 hours and Figure 3 shows the jump ratio determined over a different cycle of multiple days.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “determining a second target iteration interval corresponding to a current verify iteration according to the first target iteration interval and a total count of iterations in each cycle when the current verify iteration is greater than or equal to a second preset iteration, and the current verify iteration requires adjustment in quantization parameters; and determining an update iteration corresponding to the current verify iteration according to the second target iteration interval to adjust the quantization parameters in the update iteration, which is an iteration after the current verify iteration, wherein the second preset iteration is greater than a first preset iteration, and a quantization adjustment process of the recurrent neural network includes a plurality of cycles, wherein iterations are not consistent in the plurality of cycles in terms of total count” from Yuan into the teaching of Lian in view of Tsai. Doing so can improve the trained quantization model using an optimization method based on a predetermined time range, learning rate, and time scaling parameters (Yuan, abstract).
Regarding claim 6, Lian in view of Tsai does not explicitly disclose an implementation where “determining that the current verify iteration is greater than or equal to the second preset iteration if a convergence degree of the recurrent neural network satisfies a preset condition”. However, Yuan discloses in the same field of endeavor:
“determining that the current verify iteration is greater than or equal to the second preset iteration if a convergence degree of the ” ([0121], The confidence (convergence degree) in the current time interval may be compared to a previous confidence in a previous time interval to determine if a preset condition is met. In some embodiments, it is implied that the current time interval may occur after the second preset time interval prior to conducting the comparison of the confidence values.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “determining that the current verify iteration is greater than or equal to the second preset iteration if a convergence degree of the recurrent neural network satisfies a preset condition” from Yuan into the teaching of Lian in view of Tsai. Doing so can improve the trained quantization model using an optimization method based on a predetermined time range and time scaling parameters (Yuan, abstract).
Regarding claim 7, Lian in view of Tsai teaches:
“determining the point location(s) corresponding to an iteration(s) in a network computation” ([Lian, 0066-0068], The quantization scale value may be a 32-bit, 64-bit, or 16-bit floating point number, which represents a range of different precisions (point locations). Formula 1 discloses a function that describes the relationship between the quantization scale value, a quantity of quantized bits, and the maximum value in the input data.)
“wherein the point location(s) corresponding to iteration(s) in the ” ([Lian, 0066-0068], The calculation of the quantization scale value is performed using formula 1 each time the calculation of quantization parameter is required.)
Lian in view of Tsai does not explicitly disclose an implementation where “the reference iteration interval includes the second target iteration interval or the preset iteration interval”. However, Yuan discloses in the same field of endeavor:
“determining ” ([0102-0103], The quantization model is retrained after obtaining the optimized target optimization algorithm, which includes the optimized learning rate parameter.)
“” ([0102-0103], The adjusting of the parameters of the quantization model occurs during re-training (second target iteration interval).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “the reference iteration interval includes the second target iteration interval or the preset iteration interval” from Yuan into the teaching of Lian in view of Tsai. Doing so can improve the trained quantization model using an optimization method based on a predetermined time range and time scaling parameters (Yuan, abstract).
Regarding claim 8, Lian in view of Tsai teaches:
“determining a data bit width corresponding to a ” ([Lian, 0066-0068], During the calculation of the quantization parameter, the quantization process may be based on INTx. The example in the reference uses INT8 as the quantization parameter. INT8 will be used during the quantization process for all input data.)
“adjusting the point location(s) corresponding to an iteration(s) in the computation” ([Lian, 0066-0068], The quantization scale value may be a 32-bit, 64-bit, or 16-bit floating point number, which represents a range of different precisions (point locations). Formula 1 discloses a function that describes the relationship between the quantization scale value, a quantity of quantized bits, and the maximum value in the input data. The quantization scale value may be updated when receiving new input data.)
“wherein the point location iteration interval includes at least one iteration, and point locations of iterations in the point location iteration interval are consistent” ([Lian, 0066-0068], The calculation of the quantization scale value is performed using formula 1 each time the calculation of quantization parameter is required. Running the quantization process includes at least one iteration and it is implied that the quantization scale value may be used for the duration of the quantization process.)
Lian in view of Tsai does not explicitly disclose an implementation where “the reference iteration interval includes the second target iteration interval or the preset iteration interval” and a “reference iteration interval”. However, Yuan discloses in the same field of endeavor:
“determining ” ([0102-0103], The quantization model is retrained after obtaining the optimized target optimization algorithm, which includes the optimized learning rate parameter. Performing re-training with the optimized learning rate parameter is the second target iteration interval or reference iteration interval.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “the reference iteration interval includes the second target iteration interval or the preset iteration interval” and a “reference iteration interval” from Yuan into the teaching of Lian in view of Tsai. Doing so can improve the trained quantization model using an optimization method based on a predetermined time range and time scaling parameters (Yuan, abstract).
Regarding claim 9, Lian in view of Tsai teaches:
“wherein the point location iteration interval ” ([Lian, 0066-0068], The quantization scale value may be a 32-bit, 64-bit, or 16-bit floating point number, which represents a range of different precisions (point locations). Formula 1 discloses a function that calculates the quantization scale.)
Lian in view of Tsai does not explicitly disclose an implementation where “wherein the point location iteration interval is less than or equal to the reference iteration interval”. However, Yuan discloses in the same field of endeavor:
“wherein ” ([0083, 0088], The predetermined time range is 24 hours and the predetermined time intervals are every 3 hours. The training and re-training of the quantization model may use the same predetermined time intervals. Figure 2 shows the time interval of one day and Figure 3 shows the time interval of multiple days having the same predetermined time intervals as Figure 2. Therefore, the interval in the re-training stage is the same as the interval in the initial training stage.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “wherein the point location iteration interval is less than or equal to the reference iteration interval” from Yuan into the teaching of Lian in view of Tsai. Doing so can improve the trained quantization model using an optimization method based on a predetermined time range and time scaling parameters (Yuan, abstract).
Regarding claim 11, Lian teaches:
“wherein the quantization parameters also include an offset, and the offset is updated synchronously with the point location(s)” ([0065, 0072-0075], The quantization offset parameter is calculated by Formula 3. When the quantization scale parameter is updated, the offset is also updated.)
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Lian (US20210192349A1) in view of Tsai (US20200334525A1), Yuan (US20210374540A1), and Gou (US20190303765A1).
Regarding claim 5, Lian in view of Tsai and Yuan teaches:
“determining the second target iteration interval according to the first target iteration interval, ” ([Yuan, 0093, 0099-0101], The optimized learning rate parameter (second target iteration interval) is determined by the original learning rate parameter (first target iteration interval) and the time scaling parameter.)
Lian in view of Tsai and Yuan does not explicitly disclose an implementation where “determining an update cycle of the current verify iteration according to an iterative ordering number of the current verify iteration in a current cycle and the total count of iterations in a cycle after the current cycle, wherein the total count of iterations in the update cycle is greater than or equal to an iterative ordering number of the current verify iteration; and determining the second target iteration interval according to the first target iteration interval, the iterative ordering number and the total count of iterations in the cycle between the current cycle and the update cycle”. However, Gou discloses in the same field of endeavor:
“determining an update cycle of the current verify iteration according to an iterative ordering number of the current verify iteration in a current cycle and the total count of iterations in a cycle after the current cycle, wherein the total count of iterations in the update cycle is greater than or equal to an iterative ordering number of the current verify iteration” ([0134-0135], The number of epochs and the number of iterations for each epoch may be dynamic. Therefore, the number of iterations and epoch may change to meet conditional requirements during the training of the model. The number of iterations in the following epoch (update cycle) may be determined by a cumulative reward for a select number of iterations (iterative ordering number of the current verify iteration in a current cycle) meeting a threshold. In one embodiment, if the accuracy of the current epoch is less than a threshold, the number of iterations may increase in the following epoch.)
“determining the ” ([0134-0135], The number of epochs and the number of iterations for each epoch may be dynamic. Therefore, the number of iterations and epoch may change to meet conditional requirements during the training of the model. The number of iterations in each epoch may be determined by a number of episodes completed (iterative ordering number) and the average reward for all iterations in a given epoch.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “determining an update cycle of the current verify iteration according to an iterative ordering number of the current verify iteration in a current cycle and the total count of iterations in a cycle after the current cycle, wherein the total count of iterations in the update cycle is greater than or equal to an iterative ordering number of the current verify iteration; and determining the second target iteration interval according to the first target iteration interval, the iterative ordering number and the total count of iterations in the cycle between the current cycle and the update cycle” from Gou into the teaching of Lian in view of Tsai and Yuan. Doing so can improve the performance of the neural network model by implementing reinforcement learning to adjust the number of iterations and epochs during training (Gou, abstract).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lian (US20210192349A1) in view of Tsai (US20200334525A1), Yuan (US20210374540A1), and Wang, “Two-Step Quantization for Low-bit Neural Networks”.
Regarding claim 10, Lian in view of Tsai and Yuan does not explicitly disclose an implementation where “wherein the quantization parameters also include a scale factor, and the scale factor is updated synchronously with the point location(s)”. However, Wang discloses in the same field of endeavor:
“wherein the quantization parameters also include a scale factor, and the scale factor is updated synchronously with the point location(s)” ([pg. 4, section 3.2, par. 1; Equation 9 and 13], A floating-point scaling factor is introduced for each convolutional kernel, which are low-bit constraints. As shown in Equation 13, when the low-bit precision weights are updated, the scaling factor will also be updated.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “wherein the quantization parameters also include a scale factor, and the scale factor is updated synchronously with the point location(s)” from Wang into the teaching of Lian in view of Tsai and Yuan. Doing so can improve the optimization of the quantization process by applying low-bit constraints (Wang, abstract).
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Lian (US20210192349A1) in view of Tsai (US20200334525A1), Yuan (US20210374540A1), and Zhu (US20200302283A1).
Regarding claim 12, Lian in view of Tsai and Yuan does not explicitly disclose an implementation where “determining a quantization error according to the data to be quantized of the current verify iteration and the quantized data of the current verify iteration, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration; and determining the target data bit width corresponding to the current verify iteration according to the quantization error”. However, Zhu discloses in the same field of endeavor:
“determining a quantization error according to the data to be quantized of the current verify iteration and the quantized data of the current verify iteration, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration” ([0015-0016, 0072-0073], A quantization error may be calculated by comparing the results of the quantized data with a baseline value, such as results when using the full precision floating point values. The quantization error may be determined over a portion of the training iteration.)
“determining the target data bit width corresponding to the current verify iteration according to the quantization error” ([0072-0073], The first bit width is set based on the calculated quantization error, which is calculated during the portion of the training step that is executed.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “determining a quantization error according to the data to be quantized of the current verify iteration and the quantized data of the current verify iteration, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration; and determining the target data bit width corresponding to the current verify iteration according to the quantization error” from Zhu into the teaching of Lian in view of Tsai and Yuan. Doing so can improve the performance of the quantization process by the use of mixed precision values during training (Zhu, abstract).
Regarding claim 13, Lian in view of Tsai and Yuan does not explicitly disclose an implementation where “increasing the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is greater than or equal to a first preset threshold; or decreasing the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is less than or equal to a second preset threshold”. However, Zhu discloses in the same field of endeavor:
“increasing a data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is greater than or equal to a first preset threshold” ([0072-0073], If the difference in accuracy exceeds a defined threshold, bit width for the set of nodes may be increased.)
“decreasing the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is less than or equal to a second preset threshold” ([0072-0073, 0076], If the difference in accuracy falls below another defined threshold, the bit width for the set of nodes may be decreased in order to save computing resources. In one configuration shown in the example, the first defined bit width is 6-bit and the second defined bit width is 5-bit.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “increasing the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is greater than or equal to a first preset threshold; or decreasing the data bit width corresponding to the current verify iteration to obtain the target data bit width corresponding to the current verify iteration if the quantization error is less than or equal to a second preset threshold” from Zhu into the teaching of Lian in view of Tsai and Yuan. Doing so can improve the performance of the quantization process by the use of mixed precision values during training (Zhu, abstract).
Regarding claim 14, Lian in view of Tsai and Yuan does not explicitly disclose an implementation where “determining a first intermediate data bit width according to a first preset bit width stride if the quantization error is greater than or equal to the first preset threshold; and returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is less than the first preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the first intermediate data”. However, Zhu discloses in the same field of endeavor:
“determining a first intermediate data bit width according to a first preset bit width stride if the quantization error is greater than or equal to the first preset threshold” ([0065-0066, 0072-0074], Epoch 502 is executed with a precision of 5-bits (first intermediate data bit width) allocated to the activation values. The next epoch may seek to increase accuracy (quantization error is greater than or equal to the first preset threshold) and as a result, epoch 504 may increase the bit width by 1 (preset bit width stride) to 6-bit for the activation values. It is implied that the system may continue to increase the bit width in each subsequent epoch by 1 to further increase the accuracy of the model.)
“returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is less than the first preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the first intermediate data” ([0072-0076], The adjustment of the bit width may continue until the quantization error meets certain statistical requirements. In some embodiments, the bit width may be increased until the quantization error falls below a defined threshold.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “determining a first intermediate data bit width according to a first preset bit width stride if the quantization error is greater than or equal to the first preset threshold; and returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is less than the first preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the first intermediate data” from Zhu into the teaching of Lian in view of Tsai and Yuan. Doing so can improve the performance of the quantization process by the use of mixed precision values during training (Zhu, abstract).
Regarding claim 15, Lian in view of Tsai and Yuan does not explicitly disclose an implementation where “determining the second intermediate data bit width according to the second preset bit width stride if the quantization error is less than or equal to the second preset threshold; and returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is greater than the second preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the second intermediate data”. However, Zhu discloses in the same field of endeavor:
“determining a second intermediate data bit width according to a second preset bit width stride if the quantization error is less than or equal to the second preset threshold” ([0065-0066, 0072-0074], Epoch 502 is executed with a precision of 6-bits (second intermediate data bit width) allocated to the weights. Similar to the example when the accuracy is increased, it is implied that if the accuracy is to decrease (quantization error is less than or equal to the second preset threshold), epoch 504 may decrease the bit width by 1 (second preset bit width stride) to 5-bit for the weights. It is implied that the system may continue to decrease the bit width in each subsequent epoch by 1 if it is determined that the accuracy falls below a threshold.)
“returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is greater than the second preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the second intermediate data” ([0072-0076], The adjustment of the bit width may continue until the quantization error meets certain statistical requirements. In some embodiments, the bit width may be decreased until the quantization error exceeds a defined threshold.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of “determining the second intermediate data bit width according to the second preset bit width stride if the quantization error is less than or equal to the second preset threshold; and returning to determine the quantization error according to the data to be quantized in the current verify iteration and the quantized data of the current verify iteration until the quantization error is greater than the second preset threshold, wherein the quantized data of the current verify iteration is obtained by quantizing the data to be quantized of the current verify iteration according to the bit width of the second intermediate data” from Zhu into the teaching of Lian in view of Tsai and Yuan. Doing so can improve the performance of the quantization process by the use of mixed precision values during training (Zhu, abstract).
Remarks
The claims have been searched, but no prior art has been found that teaches all of the limitations of dependent claims 17, 18, and 20. Claims 17, 18, and 20 are allowable over the prior art.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GARY MAC whose telephone number is (703)756-1517. The examiner can normally be reached Monday - Friday 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GARY MAC/Examiner, Art Unit 2127
/ABDULLAH AL KAWSAR/Supervisory Patent Examiner, Art Unit 2127