Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The present application was filed on 01/27/2023. Claims 1-16 are pending and have been examined.
Priority
The examiner acknowledges the present application is a continuation of PCT/CA21/51059 filed on 07/28/2021, which claims priority to U.S. Provisional Application No. 63/057,876 filed on 07/28/2020.
Information Disclosure Statement
As required by MPEP 609(c), the applicant’s submissions of the Information Disclosure Statements dated 04/06/2023 and 01/07/2025 are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP 609(C)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
Specification
The disclosure is objected to because of the following informalities:
Paragraphs [0004] and [0016] recite “sematic segmentation”, which is a typographical error. These recitations should read “semantic segmentation”.
Appropriate correction is required.
Claim Objections
Claims 7-10 are objected to because of the following informalities:
Claim 7 recites “sematic segmentation” which is a typographical error. This recitation should read “semantic segmentation”. Appropriate correction is required.
Claims 8-10 depend from claim 7 and do not cure the deficiency of claim 7; therefore, claims 8-10 are objected to based on their dependency from claim 7.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant, regards as the invention.
Claim 16 recites the limitation "the method" in line 3. There is insufficient antecedent basis for this limitation in the claim, and the recitation appears to be a typographical error. For examination purposes, the examiner has interpreted the limitation to read “the operation”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1:
Step 1: Claim 1 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
computing a total variation loss for use in backpropagation…which individually classifies data points - In the context of the claim limitation, this encompasses a mathematical concept of computing forward and backward passes with mental process of evaluation/judgment/opinion to classify data points based on observing the data points.
predicting…a respective label for each data point in a set of input data points - In the context of the claim limitation, this encompasses a mental process of evaluation/judgment/opinion to predict a label based on observed input data.
determining a variation indicator that indicates a variance between: (i) smoothness of the predicted labels among neighboring data points and (ii) smoothness of the ground truth labels among the same neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining variance between predicted labels and ground truth labels.
computing a total variation loss based on the variation indicator - In the context of the claim limitation, this encompasses a mathematical concept of calculating a total variation loss.
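To illustrate the mathematical character of the limitations identified above (illustration only: the one-dimensional array shapes, the use of absolute adjacent-point differences as the smoothness measure, and all names are assumptions, not recitations of the claim), the recited steps can be sketched as:

```python
import numpy as np

def total_variation_loss(pred, gt):
    """Hypothetical sketch of the recited steps on 1-D label arrays."""
    # Smoothness of the predicted labels among neighboring data points
    pred_smoothness = np.abs(np.diff(pred))
    # Smoothness of the ground truth labels among the same neighboring points
    gt_smoothness = np.abs(np.diff(gt))
    # Variation indicator: variance between the two smoothness measures
    variation = np.abs(pred_smoothness - gt_smoothness)
    # Total variation loss based on the variation indicator
    return float(variation.sum())

pred = np.array([0.1, 0.9, 0.8, 0.2])   # predicted labels
gt   = np.array([0.0, 1.0, 1.0, 0.0])   # ground truth labels
loss = total_variation_loss(pred, gt)   # ≈ 0.7
```

Each step is an arithmetic operation on observed values, consistent with the mathematical-concept characterization above.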
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “during training of a neural network” – this is a mere instruction to apply the judicial exception using a generic computer programmed with instructions/program code/logic. See MPEP 2106.05(f). Regarding the “neural network”, no details of the neural network or its training are recited; the neural network is recited at a high level of generality and, under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper based on a reasonable amount of observed data (i.e., the “data points”). The neural network is therefore interpreted as performing an abstract idea (mental process) on a generic computer. See MPEP 2106.04(a)(2) § III.C, which explains that “a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept” still recites a mental process. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, some of the additional elements are directed to mere instructions to apply the judicial exception. Mere instruction to apply a judicial exception does not amount to significantly more. See MPEP 2106.05(f). Therefore, the claim does not include additional elements which provide an inventive concept nor represent significantly more than the abstract idea, and the claim is not patent eligible.
Claim 2:
Step 1: Claim 2 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein determining the smoothness of the predicted labels among neighboring data points comprises determining differences in the predicted labels between the neighboring data points, and determining the smoothness of the ground truth labels among neighboring data points comprises determining differences in the ground truth labels between the neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining variance between predicted labels and ground truth labels.
Step 2A Prong 2: Please see the analysis of independent claim 1.
Step 2B Analysis: Please see the analysis of independent claim 1.
Claim 3:
Step 1: Claim 3 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein determining the variation indicator comprises determining a norm of a difference between the smoothness of the predicted labels among neighboring data points and the smoothness of the ground truth labels among the same neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining a norm of a difference between the smoothness of the predicted labels and the ground truth labels.
Step 2A Prong 2: Please see the analysis of claim 2.
Step 2B Analysis: Please see the analysis of claim 2.
Claim 4:
Step 1: Claim 4 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the data points are image pixels, and neighboring data points are defined by a defined pixel distance - In the context of the claim limitation, this encompasses a mathematical concept of calculating pixel distance.
Step 2A Prong 2: Please see the analysis of independent claim 1.
Step 2B Analysis: Please see the analysis of independent claim 1.
Claim 5:
Step 1: Claim 5 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the data points are point cloud data points of a point cloud and neighboring data points are defined by a nearest neighbor identification algorithm - In the context of the claim limitation, this encompasses a mathematical concept of defining neighboring data points by a nearest neighbor identification algorithm.
Step 2A Prong 2: Please see the analysis of independent claim 1.
Step 2B Analysis: Please see the analysis of independent claim 1.
Claim 6:
Step 1: Claim 6 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the total variation loss is incorporated into a total loss function…to generate a total loss - In the context of the claim limitation, this encompasses a mathematical concept of computing a total loss function.
the method further comprising determining update values for a plurality of parameters…as part of gradient descent training - In the context of the claim limitation, this encompasses a mathematical concept of determining update values based on gradient descent.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “the neural network” – this is a mere instruction to apply the judicial exception using a generic computer programmed with instructions/program code/logic. See MPEP 2106.05(f). Regarding the “neural network”, no details of the neural network or its training are recited; the neural network is recited at a high level of generality and, under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper based on a reasonable amount of observed data (i.e., the “data points”). The neural network is therefore interpreted as performing an abstract idea (mental process) on a generic computer. See MPEP 2106.04(a)(2) § III.C, which explains that “a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept” still recites a mental process. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, some of the additional elements are directed to mere instructions to apply the judicial exception. Mere instruction to apply a judicial exception does not amount to significantly more. See MPEP 2106.05(f). Therefore, the claim does not include additional elements which provide an inventive concept nor represent significantly more than the abstract idea, and the claim is not patent eligible.
Claim 7:
Step 1: Claim 7 recites a method for training a neural network which performs semantic segmentation; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
predicting…a respective label for each data point in a set of input data points - In the context of the claim limitation, this encompasses a mental process of evaluation/judgment/opinion to classify data points based on observing the data points.
for each data point, determining: (i) a predicted label difference value between the predicted label for the data point and a predicted label for at least one neighbor data point of the data point; and (ii) a ground truth label difference value between a ground truth label for the data point and a ground truth label for the least one neighbor data point of the data point - In the context of the claim limitation, this encompasses a mathematical concept of determining variance between predicted labels and ground truth labels.
for each data point, determining a norm of a difference between the predicted label difference value and the ground truth label difference value - In the context of the claim limitation, this encompasses a mathematical concept of determining a norm of difference between predicted labels and ground truth labels.
computing a total variation loss for the set of input data points based on a sum of the norms - In the context of the claim limitation, this encompasses a mathematical concept of calculating a total variation loss.
performing backpropagation to update a set of parameters…based at least on the total variation loss - In the context of the claim limitation, this encompasses a mathematical concept of computing forward and backward passes.
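To illustrate that the recited steps reduce to mathematical operations, the following is a minimal sketch under stated assumptions: a toy one-parameter predictor stands in for the claimed neural network, a finite-difference gradient stands in for backpropagation, and all names and values are hypothetical:

```python
import numpy as np

def tv_loss(pred, gt):
    # Sum of the norms (here absolute values) of the per-neighbor
    # discrepancies between predicted and ground truth label differences
    return float(np.sum(np.abs(np.diff(pred) - np.diff(gt))))

# Toy one-parameter "network": predicted label for point x is w * x
x  = np.array([0.0, 1.0, 2.0, 3.0])      # input data points
gt = np.array([0.0, 1.0, 2.0, 3.0])      # ground truth labels

w = 0.5
loss_before = tv_loss(w * x, gt)         # 3 * |w - 1| = 1.5

# Gradient of the loss with respect to w by central finite differences,
# standing in for the recited backpropagation step
eps = 1e-6
grad = (tv_loss((w + eps) * x, gt) - tv_loss((w - eps) * x, gt)) / (2 * eps)

w -= 0.1 * grad                           # one parameter update
loss_after = tv_loss(w * x, gt)           # ≈ 0.6, reduced from 1.5
```

The update step is ordinary arithmetic on the single parameter, consistent with the characterization of the limitations as mathematical concepts.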
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “training of a neural network” – this is a mere instruction to apply the judicial exception using a generic computer programmed with instructions/program code/logic. See MPEP 2106.05(f). Regarding the “neural network”, no details of the neural network or its training are recited; the neural network is recited at a high level of generality and, under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper based on a reasonable amount of observed data (i.e., the “data points”). The neural network is therefore interpreted as performing an abstract idea (mental process) on a generic computer. See MPEP 2106.04(a)(2) § III.C, which explains that “a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept” still recites a mental process. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, some of the additional elements are directed to mere instructions to apply the judicial exception. Mere instruction to apply a judicial exception does not amount to significantly more. See MPEP 2106.05(f). Therefore, the claim does not include additional elements which provide an inventive concept nor represent significantly more than the abstract idea, and the claim is not patent eligible.
Claim 8:
Step 1: Claim 8 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
determining the predicted label difference values comprises: for all the data points (i,j) and values Δi and Δj, where (i,j) is a data point index and Δi,Δj are respective step values in the data point index, computing an absolute value of y{(i+Δi),(j)}−y{i,j}, where y{i,j} is the predicted label for data point (i,j), for inclusion in a corresponding location of a tensor variable Y{(Δi),(j)}, and computing the absolute value of y{(i),(j+Δj)}−y{i,j} for inclusion in a corresponding location of a tensor variable Y{(i),(Δj)} - In the context of the claim limitation, this encompasses a mathematical concept of determining difference values.
determining the ground truth label difference values comprises: for all the data points (i,j) and values Δi and Δj, computing the absolute value of ŷ{(i+Δi),(j)}−ŷ{i,j}, where ŷ{i,j} is the ground truth label for data point (i,j), for inclusion in a corresponding location of a tensor variable Ŷ{(Δi),(j)}, and computing the absolute value of ŷ{(i),(j+Δj)}−ŷ{i,j} for inclusion in a corresponding location of a tensor variable Ŷ{(i),(Δj)} - In the context of the claim limitation, this encompasses a mathematical concept of determining difference values.
determining the norm of the difference indicators comprises: computing a first p,q norm of Y{(Δi),(j)} and Ŷ{(Δi),(j)} for all pairs of (Δi), (j) and computing a p,q norm of Y{(i),(Δj)} and Ŷ{(i),(Δj)} for all pairs of (i), (Δj) - In the context of the claim limitation, this encompasses a mathematical concept of determining the norms.
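Rendered in conventional notation for readability (this is one reading of the claim’s flattened subscripts, taking “a p,q norm of Y and Ŷ” to mean the p,q norm of their difference, consistent with the “norm of a difference” recited in claim 7), the computations above are:

```latex
\begin{aligned}
Y_{\Delta i,\, j} &= \bigl|\, y_{i+\Delta i,\, j} - y_{i,j} \,\bigr|, &
Y_{i,\, \Delta j} &= \bigl|\, y_{i,\, j+\Delta j} - y_{i,j} \,\bigr|,\\
\hat{Y}_{\Delta i,\, j} &= \bigl|\, \hat{y}_{i+\Delta i,\, j} - \hat{y}_{i,j} \,\bigr|, &
\hat{Y}_{i,\, \Delta j} &= \bigl|\, \hat{y}_{i,\, j+\Delta j} - \hat{y}_{i,j} \,\bigr|,\\
L_{tv} &= \bigl\lVert Y_{\Delta i,\, j} - \hat{Y}_{\Delta i,\, j} \bigr\rVert_{p,q}
        + \bigl\lVert Y_{i,\, \Delta j} - \hat{Y}_{i,\, \Delta j} \bigr\rVert_{p,q}
\end{aligned}
```

Each line is a closed-form algebraic expression, consistent with the mathematical-concept characterization.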
Step 2A Prong 2: Please see the analysis of independent claim 7.
Step 2B Analysis: Please see the analysis of independent claim 7.
Claim 9:
Step 1: Claim 9 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein determining the variation indicator comprises determining a norm of a difference between the smoothness of the predicted labels among neighboring data points and the smoothness of the ground truth labels among the same neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining a norm of a difference between the smoothness of the predicted labels and the ground truth labels.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim also recites “wherein the set of input data points comprises an image” - which recite the insignificant extra-solution activity of mere data gathering and output. MPEP 2106.05(g). The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “wherein the set…” is directed to insignificant extra-solution activity that is well known, routine and conventional because the limitation is directed to receiving or transmitting data over a network, e.g., using the Internet to gather data. See MPEP 2106.05(d)(II), OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Therefore, the claim does not include additional elements which provide an inventive concept nor represent significantly more than the abstract idea, and the claim is not patent eligible.
Claim 10:
Step 1: Claim 10 recites a method; thus, it is a process, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the set of input data points comprises data points of a point cloud - In the context of the claim limitation, this merely further specifies the type of data points operated on by the recited mathematical concept.
Step 2A Prong 2: Please see the analysis of independent claim 7.
Step 2B Analysis: Please see the analysis of independent claim 7.
Claim 11:
Step 1: Claim 11 recites a computer system; thus, it is a machine, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
perform operations to compute a total variation loss for use in backpropagation…which individually classifies data points - In the context of the claim limitation, this encompasses a mathematical concept of computing forward and backward passes and mental process of evaluation/judgment/opinion to classify data points based on observing the data points.
predicting…a respective label for each data point in a set of input data points - In the context of the claim limitation, this encompasses a mental process of evaluation/judgment/opinion to predict a label based on observed input data.
determining a variation indicator that indicates a variance between: (i) smoothness of the predicted labels among neighboring data points and (ii) smoothness of the ground truth labels among the same neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining variance between predicted labels and ground truth labels.
computing a total variation loss based on the variation indicator - In the context of the claim limitation, this encompasses a mathematical concept of calculating a total variation loss.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “one or more processors and non-volatile memory coupled to the one or more processors, the memory storing instructions that when executed by the one or more processors” and “during training of a neural network” – these are mere instructions to apply the judicial exception using a generic computer programmed with instructions/program code/logic. See MPEP 2106.05(f). Regarding the “neural network”, no details of the neural network or its training are recited; the neural network is recited at a high level of generality and, under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper based on a reasonable amount of observed data (i.e., the “data points”). The neural network is therefore interpreted as performing an abstract idea (mental process) on a generic computer. See MPEP 2106.04(a)(2) § III.C, which explains that “a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept” still recites a mental process. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, some of the additional elements are directed to mere instructions to apply the judicial exception. Mere instruction to apply a judicial exception does not amount to significantly more. See MPEP 2106.05(f). Therefore, the claim does not include additional elements which provide an inventive concept nor represent significantly more than the abstract idea, and the claim is not patent eligible.
Claim 12:
Step 1: Claim 12 recites a computer system; thus, it is a machine, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein determining the smoothness of the predicted labels among neighboring data points comprises determining differences in the predicted labels between the neighboring data points, and determining the smoothness of the ground truth labels among neighboring data points comprises determining differences in the ground truth labels between the neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining variance between predicted labels and ground truth labels.
Step 2A Prong 2: Please see the analysis of independent claim 11.
Step 2B Analysis: Please see the analysis of independent claim 11.
Claim 13:
Step 1: Claim 13 recites a computer system; thus, it is a machine, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein determining the variation indicator comprises determining a norm of a difference between the smoothness of the predicted labels among neighboring data points and the smoothness of the ground truth labels among the same neighboring data points - In the context of the claim limitation, this encompasses a mathematical concept of determining norm of a difference between the smoothness of the predicted labels and ground truth labels.
Step 2A Prong 2: Please see the analysis of claim 12.
Step 2B Analysis: Please see the analysis of claim 12.
Claim 14:
Step 1: Claim 14 recites a computer system; thus, it is a machine, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the data points are image pixels, and neighboring data points are defined by a defined pixel distance - In the context of the claim limitation, this encompasses a mathematical concept of calculating pixel distance.
Step 2A Prong 2: Please see the analysis of independent claim 11.
Step 2B Analysis: Please see the analysis of independent claim 11.
Claim 15:
Step 1: Claim 15 recites a computer system; thus, it is a machine, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the data points are point cloud data points of a point cloud and neighboring data points are defined by a nearest neighbor identification algorithm - In the context of the claim limitation, this encompasses a mathematical concept of defining neighboring data points by a nearest neighbor identification algorithm.
Step 2A Prong 2: Please see the analysis of independent claim 11.
Step 2B Analysis: Please see the analysis of independent claim 11.
Claim 16:
Step 1: Claim 16 recites a computer system; thus, it is a machine, one of the four statutory categories of patentable subject matter.
Step 2A Prong 1: The claim recites the limitations:
wherein the total variation loss is incorporated into a total loss function…to generate a total loss - In the context of the claim limitation, this encompasses a mathematical concept of computing a total loss function.
the method further comprising determining update values for a plurality of parameters…as part of gradient descent training - In the context of the claim limitation, this encompasses a mathematical concept of determining update values based on gradient descent.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “the neural network” – this is a mere instruction to apply the judicial exception using a generic computer programmed with instructions/program code/logic. See MPEP 2106.05(f). Regarding the “neural network”, no details of the neural network or its training are recited; the neural network is recited at a high level of generality and, under the broadest reasonable interpretation (BRI) in light of the specification, could be constructed by hand with pen and paper based on a reasonable amount of observed data (i.e., the “data points”). The neural network is therefore interpreted as performing an abstract idea (mental process) on a generic computer. See MPEP 2106.04(a)(2) § III.C, which explains that “a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept” still recites a mental process. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, some of the additional elements are directed to mere instructions to apply the judicial exception. Mere instruction to apply a judicial exception does not amount to significantly more. See MPEP 2106.05(f). Therefore, the claim does not include additional elements which provide an inventive concept nor represent significantly more than the abstract idea, and the claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kaneko (Deep Monocular Depth Estimation in Partially-Known Environments) in view of Javanmardi (Unsupervised Total Variation Loss for Semi-supervised Deep Learning of Semantic Segmentation).
Claim 1.
Kaneko teaches a method for computing a total variation loss…during training of a neural network which individually classifies data points, comprising (SECTION III. Method, Page 345 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map” teaches a method of computing a total variation loss):
predicting, using the neural network, a respective label for each data point in a set of input data points (SECTION III. Method, Page 345 “Taking an RGB image and a partial depth measurement as inputs, we estimate the full depth map of a scene using a deep convolutional neural network…
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… D^u,v is the predicted depth value at a pixel (u, v)” and SECTION IV. Experiments, Page 346 “where y^i is the predicted depth value of a pixel i,yi is the ground truth” and II. RELATED WORK, Page 345 “depth estimation as a classification problem rather than a regression task and used fully convolutional residual network followed by a fully connected conditional random field to classify discretized depth values as class labels” teaches predicting a depth value for the each of the dataset wherein depth value for a pixel in an image is analogous to classifying a data point):
determining a variation indicator that indicates a variance between: (i) smoothness of the predicted labels among neighboring data points and (ii) smoothness of the ground truth labels among the same neighboring data points (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches predicting a depth value D^u,v (corresponding to predicted label) and D depth value (corresponding to ground truth labels) for the each of the dataset);
and computing a total variation loss based on the variation indicator (SECTION III. Method, Page 346 “We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches computing a total variation loss).
Kaneko does not explicitly teach for use in backpropagation during training of a neural network.
However, in the same field, analogous art, Javanmardi teaches a total variation loss for use in backpropagation during training of a neural network (3. Unsupervised Loss Functions from Spatial Structure & Page 2-3 “we will show how these general spatial loss functions can be minimized using backpropagation. We will then derive the solution for the specific case of the total variation loss in Section 4” teaches a backpropagation during training of a neural network);
Kaneko and Javanmardi are analogous art because they are both directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Javanmardi into the disclosed invention of Kaneko.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “We proposed and showed how to minimize these constraint through learning via backpropagation in any pixel classifier”, as suggested by Javanmardi (Javanmardi, 6. Conclusion, Page 7).
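For context only, the computation recited in claim 1 (a variation indicator comparing the smoothness of predicted labels with the smoothness of ground truth labels, then a total variation loss from that indicator) can be sketched in Python. This is an illustrative sketch prepared by the editor, not code from Kaneko, Javanmardi, or the application; the function names and the use of first differences between neighboring pixels as the smoothness measure are assumptions.

```python
import numpy as np

def variation_indicator(pred, gt):
    # Smoothness is taken here as first differences between neighboring
    # data points (horizontal and vertical neighbors in a 2-D label map).
    dpred_x = np.diff(pred, axis=1)  # smoothness of predicted labels
    dpred_y = np.diff(pred, axis=0)
    dgt_x = np.diff(gt, axis=1)      # smoothness of ground truth labels
    dgt_y = np.diff(gt, axis=0)
    # The indicator is the difference between the two smoothness measures.
    return dpred_x - dgt_x, dpred_y - dgt_y

def total_variation_loss(pred, gt):
    # Total variation loss based on the variation indicator (L1 norm here).
    vx, vy = variation_indicator(pred, gt)
    return float(np.abs(vx).sum() + np.abs(vy).sum())
```

Note that this loss is zero whenever the predictions vary across neighbors exactly as the ground truth does, which distinguishes the claimed formulation from a TV penalty applied to the prediction alone.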
Claim 2.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 1.
Kaneko further teaches wherein determining the smoothness of the predicted labels among neighboring data points comprises determining differences in the predicted labels between the neighboring data points, and determining the smoothness of the ground truth labels among neighboring data points comprises determining differences in the ground truth labels between the neighboring data points (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches predicting a depth value D^u,v (corresponding to predicted label) and depth value (corresponding to ground truth labels) for the each of the dataset);
Claim 3.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 2.
Kaneko further teaches wherein determining the variation indicator comprises determining a norm of a difference between the smoothness of the predicted labels among neighboring data points and the smoothness of the ground truth labels among the same neighboring data points (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches determining a norm of a difference between the predicted labels and ground truth label).
Claim 4.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 1.
Kaneko further teaches wherein the data points are image pixels, and neighboring data points are defined by a defined pixel distance (SECTION III. Method, Page 345 “
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
where M is the foreground mask and D^u,v is the predicted depth value at a pixel (u, v). When computing the TV loss, we apply 3-pixels dilation to M to consider broader context around the boundary” teaches wherein the depth value at pixel (corresponds to image pixels) and distance between the predicted depth value and ground truth value at a pixel (corresponds to neighboring data points)).
Claim 6.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 1.
Kaneko further teaches wherein the total variation loss is incorporated into a total loss function for the neural network to generate a total loss for the neural network (SECTION III. Method, Page 346 “We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches combine total variation loss into loss function to generate a total loss for the neural network),
Kaneko does not explicitly teach the method further comprising determining update values for plurality of parameters of the neural network as part of gradient descent training of the neural network.
However, in the same field, analogous art, Javanmardi teaches the method further comprising determining update values for plurality of parameters of the neural network as part of gradient descent training of the neural network (3. Unsupervised Loss Functions from Spatial Structure & Page 3 “We adopt stochastic gradient descent to minimize this loss function(Bottou (1991); LeCun et al. (2012)). By applying the multivariate chain rule to compute the gradient of EU with respect to w” teaches updating values using gradient descent training of the neural network).
Kaneko and Javanmardi are analogous art because they are both directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Javanmardi into the disclosed invention of Kaneko.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “We proposed and showed how to minimize these constraint through learning via backpropagation in any pixel classifier” (Javanmardi, 6. Conclusion, Page 7).
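For illustration of the claim 6 limitation, a single gradient-descent parameter update can be sketched as follows. This is an editor-supplied Python sketch under the assumption that gradients of the total loss (including the TV term) with respect to each parameter are already available; the function name and list-of-arrays representation are hypothetical.

```python
import numpy as np

def sgd_update(params, grads, lr=0.01):
    # One gradient-descent step: each parameter is updated by moving
    # against the gradient of the total loss by a step of size lr.
    return [w - lr * g for w, g in zip(params, grads)]
```

In stochastic gradient descent as quoted from Javanmardi, this update is applied repeatedly with gradients computed by backpropagation over mini-batches.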
Claim 7.
Kaneko teaches a method for training a neural network which performs sematic segmentation, comprising (B. Dataset, Page 346 “The spatial resolution of images is 320×240. The foreground masks are pixel-wise segmentation images of foreground (foods) and background (appliance) regions” and SECTION III. Method, Page 345 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map” teaches semantic segmentation):
predicting, using the neural network, a respective label for each data point in a set of input data points (SECTION III. Method, Page 345 “Taking an RGB image and a partial depth measurement as inputs, we estimate the full depth map of a scene using a deep convolutional neural network…
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… D^u,v is the predicted depth value at a pixel (u, v)” and SECTION IV. Experiments, Page 346 “where y^i is the predicted depth value of a pixel i,yi is the ground truth” and II. RELATED WORK, Page 345 “depth estimation as a classification problem rather than a regression task and used fully convolutional residual network followed by a fully connected conditional random field to classify discretized depth values as class labels” teaches predicting a depth value for the each of the dataset wherein depth value for a pixel in an image is analogous to classifying a data point);
for each data point, determining: (i) a predicted label difference value between the predicted label for the data point and a predicted label for at least one neighbor data point of the data point; and (ii) a ground truth label difference value between a ground truth label for the data point and a ground truth label for the least one neighbor data point of the data point (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches predicting a depth value D^u,v (corresponding to predicted label) and D depth value (corresponding to ground truth labels) for the each of the dataset);
for each data point, determining a norm of a difference between the predicted label difference value and the ground truth label difference value (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches determining a norm of a difference between the predicted labels and ground truth label);
computing a total variation loss for the set of input data points based on a sum of the norms (SECTION III. Method, Page 346 “We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches computing a total variation loss);
Kaneko does not explicitly teach and performing backpropagation to update a set of parameters of the neural network based at least on the total variation loss.
However, in the same field, analogous art, Javanmardi teaches and performing backpropagation to update a set of parameters of the neural network based at least on the total variation loss (3. Unsupervised Loss Functions from Spatial Structure & Page 2-3 “we will show how these general spatial loss functions can be minimized using backpropagation. We will then derive the solution for the specific case of the total variation loss in Section 4” teaches a backpropagation during training of a neural network).
Kaneko and Javanmardi are analogous art because they are both directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Javanmardi into the disclosed invention of Kaneko.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “We proposed and showed how to minimize these constraint through learning via backpropagation in any pixel classifier”, as suggested by Javanmardi (Javanmardi, 6. Conclusion, Page 7).
Claim 8.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 7.
Kaneko further teaches wherein: determining the predicted label difference values comprises: for all the data points (i,j) and values Δi and Δj, where (i,j) is a data point index and Δi, Δj are respective step values in the data point index, computing an absolute value of y{(i+Δi),(j)}−y{i,j}, where y{i,j} is the predicted label for data point (i,j) for inclusion in a corresponding location of a tensor variable Y{(Δi),(j)}, and computing the absolute value of y{(i),(j+Δj)}−y{i,j} for inclusion in a corresponding location of a tensor variable Y{(Δi),(j)}; determining the ground truth label difference values comprises: for all the data points (i,j) and values Δi and Δj, computing the absolute value of ŷ{(i+Δi),(j)}−ŷ{i,j}, where ŷ{i,j} is the ground truth label for data point (i,j), for inclusion in a corresponding location of a tensor variable Ŷ{(i),(Δj)}, and computing the absolute value of ŷ{(i),(j+Δj)}−ŷ{i,j} for inclusion in a corresponding location of a tensor variable Ŷ{(i),(Δj)}; determining the norm of the difference indicators comprises: computing a first p,q norm of Y{(Δi),(j)} and Ŷ{(Δi),(j)} for all pairs of (Δi), (j) and computing a p,q norm of Y{(i),(Δj)} and Ŷ{(i),(Δj)} for all pairs of (i), (Δj) (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches the predicted data D^u,v and the ground truth label D; equation 1 defines the total variation loss term, which measures the sum of absolute differences between neighboring pixels; equation 2, 𝐿1(𝐷̂,𝐷), measures the difference between predicted values and true values).
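For context, the tensor-style bookkeeping recited in claim 8 (absolute neighbor differences collected per step value for both the predicted and ground truth labels, followed by a norm of the predicted-minus-ground-truth differences) can be sketched as follows. This is an illustrative editor sketch: the step values, the entrywise p-norm, and the function names are assumptions and do not reproduce the claim's exact p,q norm.

```python
import numpy as np

def shifted_abs_diffs(y, steps=(1,)):
    # For each step d, collect |y[i+d, j] - y[i, j]| (row direction, the
    # claim's delta-i) and |y[i, j+d] - y[i, j]| (column direction, the
    # claim's delta-j), one difference tensor per (direction, step) pair.
    diffs = []
    for d in steps:
        diffs.append(np.abs(y[d:, :] - y[:-d, :]))
        diffs.append(np.abs(y[:, d:] - y[:, :-d]))
    return diffs

def tv_loss_pq(pred, gt, p=2, steps=(1,)):
    # Norm of (predicted differences - ground truth differences),
    # summed over all difference tensors.
    total = 0.0
    for dp, dg in zip(shifted_abs_diffs(pred, steps),
                      shifted_abs_diffs(gt, steps)):
        total += np.linalg.norm((dp - dg).ravel(), ord=p)
    return float(total)
```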
Claim 9.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 7.
Kaneko further teaches wherein the set of input data points comprises an image (SECTION III. Method, Page 345 “
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
where M is the foreground mask and D^u,v is the predicted depth value at a pixel (u, v). When computing the TV loss, we apply 3-pixels dilation to M to consider broader context around the boundary” teaches wherein the depth value at pixel (corresponds to image pixels)).
Claim 10.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 7.
Kaneko further teaches wherein the set of input data points comprises data points of a point cloud (SECTION I. Introduction, page 344 “we propose a novel concept of the depth estimation problem, which aims for partially-known environments. Our insight is that we can relax the challenging monocular problem in some cases, especially in industrial products. Many of such products have fixed sizes, same camera positions, and same structures, defined in precise design drawings (i.e. CAD)” teaches images (data points) captured using a camera or other sensor (corresponding to a point cloud)).
Claim 11.
Kaneko teaches A computer system comprising one or more processors and non-volatile memory coupled to the one or more processors, the memory storing instructions that when executed by the one or more processors configure the computer system to perform operations to compute a total variation loss…during training of a neural network which individually classifies data points, the operations comprising (SECTION III. Method, Page 345 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map” teaches a method of computing a total variation loss and I. INTRODUCTION & Page 344 “In such situation, we can simulate the depth of a product itself using computer graphics technique and treat it as a “known background”” and II. RELATED WORK, 345 “multi-modal autoencoders that take RGB images, semantic labels, and partial depth samples coming from LiDAR sensor and stereo cameras” teaches a stereo camera contains a processor):
predicting, using the neural network, a respective label for each data point in a set of input data points (SECTION III. Method, Page 345 “Taking an RGB image and a partial depth measurement as inputs, we estimate the full depth map of a scene using a deep convolutional neural network…
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… D^u,v is the predicted depth value at a pixel (u, v)” and SECTION IV. Experiments, Page 346 “where y^i is the predicted depth value of a pixel i,yi is the ground truth” and II. RELATED WORK, Page 345 “depth estimation as a classification problem rather than a regression task and used fully convolutional residual network followed by a fully connected conditional random field to classify discretized depth values as class labels” teaches predicting a depth value for the each of the dataset wherein depth value for a pixel in an image is analogous to classifying a data point);
determining a variation indicator that indicates a variance between: (i) smoothness of the predicted labels among neighboring data points and (ii) smoothness of the ground truth labels among the same neighboring data points (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches predicting a depth value D^u,v (corresponding to predicted label) and D depth value (corresponding to ground truth labels) for the each of the dataset);
and computing a total variation loss based on the variation indicator (SECTION III. Method, Page 346 “We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches computing a total variation loss).
Kaneko does not explicitly teach for use in backpropagation during training of a neural network.
However, in the same field, analogous art, Javanmardi teaches a total variation loss for use in backpropagation during training of a neural network (3. Unsupervised Loss Functions from Spatial Structure & Page 2-3 “we will show how these general spatial loss functions can be minimized using backpropagation. We will then derive the solution for the specific case of the total variation loss in Section 4” teaches a backpropagation during training of a neural network);
Kaneko and Javanmardi are analogous art because they are both directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Javanmardi into the disclosed invention of Kaneko.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “We proposed and showed how to minimize these constraint through learning via backpropagation in any pixel classifier”, as suggested by Javanmardi (Javanmardi, 6. Conclusion, Page 7).
Claim 12.
As discussed above, Kaneko in view of Javanmardi teaches the computer system of claim 11.
Kaneko further teaches wherein determining the smoothness of the predicted labels among neighboring data points comprises determining differences in the predicted labels between the neighboring data points, and determining the smoothness of the ground truth labels among neighboring data points comprises determining differences in the ground truth labels between the neighboring data points (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches predicting a depth value D^u,v (corresponding to predicted label) and depth value (corresponding to ground truth labels) for the each of the dataset).
Claim 13.
As discussed above, Kaneko in view of Javanmardi teaches the computer system of claim 12.
Kaneko further teaches wherein determining the variation indicator comprises determining a norm of a difference between the smoothness of the predicted labels among neighboring data points and the smoothness of the ground truth labels among the same neighboring data points (SECTION III. Method, Page 345-346 “we introduce the total variation (TV) loss Ltv [24], which encourages spatial smoothness in the output depth map.
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
… We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches determining a norm of a difference between the predicted labels and ground truth label).
Claim 14.
As discussed above, Kaneko in view of Javanmardi teaches the computer system of claim 11.
Kaneko further teaches wherein the data points are image pixels, and neighboring data points are defined by a defined pixel distance (SECTION III. Method, Page 345 “
[equation image: Kaneko's equation 1, defining the TV loss Ltv]
where M is the foreground mask and D^u,v is the predicted depth value at a pixel (u, v). When computing the TV loss, we apply 3-pixels dilation to M to consider broader context around the boundary” teaches wherein the depth value at pixel (corresponds to image pixels) and distance between the predicted depth value and ground truth value at a pixel (corresponds to neighboring data points)).
Claim 16.
As discussed above, Kaneko in view of Javanmardi teaches the computer system of claim 11.
Kaneko further teaches wherein the total variation loss is incorporated into a total loss function for the neural network to generate a total loss for the neural network (SECTION III. Method, Page 346 “We combine the above TV loss Ltv with the traditional L1 error to form the total loss L:
[equation image: Kaneko's equation 2, the total loss L combining the L1 error and the λtv-weighted TV loss]
where D is the ground truth depth map and λtv is a weight that controls the influence of the TV loss” teaches combine total variation loss into loss function to generate a total loss for the neural network),
Kaneko does not explicitly teach the method further comprising determining update values for plurality of parameters of the neural network as part of gradient descent training of the neural network.
However, in the same field, analogous art, Javanmardi further teaches the method further comprising determining update values for plurality of parameters of the neural network as part of gradient descent training of the neural network (3. Unsupervised Loss Functions from Spatial Structure & Page 3 “We adopt stochastic gradient descent to minimize this loss function(Bottou (1991); LeCun et al. (2012)). By applying the multivariate chain rule to compute the gradient of EU with respect to w” teaches updating values using gradient descent training of the neural network).
Kaneko and Javanmardi are analogous art because they are both directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Javanmardi into the disclosed invention of Kaneko.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “We proposed and showed how to minimize these constraint through learning via backpropagation in any pixel classifier”, as suggested by Javanmardi (Javanmardi, 6. Conclusion, Page 7).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kaneko (Deep Monocular Depth Estimation in Partially-Known Environments) in view of Javanmardi (Unsupervised Total Variation Loss for Semi-supervised Deep Learning of Semantic Segmentation) and further in view of Weinberger (Distance Metric Learning for Large Margin Nearest Neighbor Classification).
Claim 5.
As discussed above, Kaneko in view of Javanmardi teaches the method of claim 1.
Kaneko further teaches wherein the data points are point cloud data points of a point cloud (SECTION I. Introduction, page 344 “we propose a novel concept of the depth estimation problem, which aims for partially-known environments. Our insight is that we can relax the challenging monocular problem in some cases, especially in industrial products. Many of such products have fixed sizes, same camera positions, and same structures, defined in precise design drawings (i.e. CAD)” teaches images (data points) captured using a camera or other sensor (corresponding to a point cloud)).
Kaneko in view of Javanmardi does not explicitly teach neighboring data points are defined by a nearest neighbor identification algorithm.
However, Weinberger teaches neighboring data points are defined by a nearest neighbor identification algorithm (2 Model, Page 2 “In addition to the class label yi , for each input ~xi we also specify k “target” neighbors— that is, k other inputs with the same label yi that we wish to have minimal distance to ~xi , as computed by eq. (1). In the absence of prior knowledge, the target neighbors can simply be identified as the k nearest neighbors” teaches inputs defined by the k nearest neighbors).
Kaneko, Javanmardi and Weinberger are analogous art because they are each directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Weinberger into the disclosed invention of Kaneko in view of Javanmardi.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “the Mahalanobis distance metrics learned by semidefinite programming led to significant improvements in kNN classification, both in training and testing. The training error rates reported in Fig. 2 are leave-one-out estimates” (Weinberger, 3 Results, Page 4).
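For illustration, the neighbor definition Weinberger is cited for (identifying each data point's k nearest neighbors) can be sketched with a brute-force Euclidean search. This is an editor-supplied Python sketch suitable only for small point clouds; it is not Weinberger's semidefinite-programming metric-learning method, and the function name is hypothetical.

```python
import numpy as np

def k_nearest_neighbors(points, k):
    # Pairwise Euclidean distances between all points (an N x N matrix).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    # Indices of the k smallest distances in each row.
    return np.argsort(d, axis=1)[:, :k]
```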
Claim 15.
As discussed above, Kaneko in view of Javanmardi teaches the computer system of claim 11.
Kaneko further teaches wherein the data points are point cloud data points of a point cloud (SECTION I. Introduction, page 344 “we propose a novel concept of the depth estimation problem, which aims for partially-known environments. Our insight is that we can relax the challenging monocular problem in some cases, especially in industrial products. Many of such products have fixed sizes, same camera positions, and same structures, defined in precise design drawings (i.e. CAD)” teaches images (data points) captured using a camera or other sensor (corresponding to a point cloud)).
Kaneko in view of Javanmardi does not explicitly teach and neighboring data points are defined by a nearest neighbor identification algorithm.
However, Weinberger teaches and neighboring data points are defined by a nearest neighbor identification algorithm (2 Model, Page 2 “In addition to the class label yi , for each input ~xi we also specify k “target” neighbors— that is, k other inputs with the same label yi that we wish to have minimal distance to ~xi , as computed by eq. (1). In the absence of prior knowledge, the target neighbors can simply be identified as the k nearest neighbors” teaches inputs defined by the k nearest neighbors).
Kaneko, Javanmardi and Weinberger are analogous art because they are each directed to systems computing a total variation loss using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above as taught by Weinberger into the disclosed invention of Kaneko in view of Javanmardi.
One of ordinary skill in the art would have been motivated to make this modification because of the following, “the Mahalanobis distance metrics learned by semidefinite programming led to significant improvements in kNN classification, both in training and testing. The training error rates reported in Fig. 2 are leave-one-out estimates”, as suggested by Weinberger (Weinberger, 3 Results, Page 4).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lokesha Patel whose telephone number is (571)272-6267. The examiner can normally be reached 8 AM - 4 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LOKESHA PATEL/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125