DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgement is made of Applicant’s claim of priority from Foreign Application No. CN202010525713.5, filed June 10, 2020, and from International Application No. PCT/CN2021/081532, filed March 18, 2021.
Status of Claims
Claims 1, 5-7, 9-11, 15-16, and 18-20 are pending. Claims 2-4, 8, 12-14 and 17 have been cancelled.
Response to Arguments
Applicant’s arguments, see pp. 9-11, filed November 19, 2025, with respect to the 35 USC 101 rejection of the claims have been fully considered but are not persuasive. Applicant argues that the claims integrate the abstract idea into a practical application because the subjective quality measurement value is used to optimize the decoder. While the goal of the invention may be to optimize the decoder, the claims merely recite the abstract idea of calculating the quality measurement, and do not recite applying the value to the decoder or how the value is applied. Therefore, the claims, as written, do not recite significantly more than the abstract idea and do not integrate the abstract idea into a practical application. Applicant further argues that the training with a test data set involves preprocessing of test data, training of a model, and evaluation and optimization of model performance, which cannot be completed manually or in the mind. However, Applicant is reminded that the specification is not read into the claims. The mere recitation of “determining a preset vector matrix by training the subjective quality test data set” could be performed mentally or manually, because under its broadest reasonable interpretation, a person could apply test data values to a formula to calculate a preset vector matrix. Thus, the 35 USC 101 rejection of the claims is maintained.
Applicant’s arguments, see pp. 9-11, filed November 19, 2025, with respect to the 35 USC 103 rejection of the claims have been fully considered but are not persuasive. Applicant argues that the previously proposed references do not teach the added limitations. Examiner respectfully disagrees.
Specifically, Applicant argues that Rodrigues and Budagavi do not teach “the first calculation sub-model represents extracting feature values related to Color Fluctuation in Geometric Distance (CFGD)…” and “the second calculation sub-model represents extracting feature values related to Color Block Mean Variance (CBMV)…”, asserting that Budagavi merely discloses the average value of each color and standard deviation values for each color component in the neighborhood of points, which is a value related to CBMV but not to CFGD. Applicant is once again reminded that the specification is not read into the claims. Because the claims do not specifically state what the “feature values related to Color Fluctuation in Geometric Distance” are, under Examiner’s broadest reasonable interpretation, Budagavi’s teaching of standard deviation values for each color component in the neighborhood of points is sufficient to teach this limitation.
Applicant further argues that the references do not teach “performing feature extraction using a first calculation sub-model…” and “…a second calculation sub-model” because in Zhang, the first determining submodule functions on a point cloud and the second submodule functions on processing the result of the first determining submodule, rather than the point cloud. However, Examiner asserts that the first and second determining submodules of Zhang are determining features using a point cloud, even if the second submodule defines a sample data set based on a piece of sample data from a point cloud. Additionally, the Zhang reference is relied upon merely to show that a module for performing feature extraction can contain a first and second submodule, while the feature extraction and exact features extracted are taught by other references in the herein rejection. Thus, the Zhang reference is sufficient in teaching this limitation.
Applicant further argues that none of the proposed references teach the limitation “wherein the preset vector matrix is determined based on: acquiring a subjective quality test data set, and determining a preset vector matrix by training the subjective quality test data set”. However, Chechik teaches that subjective testing data trains the weights of a prediction curve (see Chechik, Col. 9, lines 36-55). The weight vector of Shanableh (i.e., the preset vector matrix) combined with Chechik’s use of subjective testing data to acquire weight values is sufficient to teach “acquiring a subjective quality test data set, and determining a preset vector matrix by training the subjective quality test data set”. Therefore, the 35 USC 103 rejection of the claims is maintained, and consequently, THIS ACTION IS FINAL.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 5-7, 9-11, 15-16 and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite decoding and encoding methods and a decoder and encoder for assessing the quality of a point cloud. Consider method claim 1:
Step 1:
With regard to Step 1, the instant claim is directed to a method or a process; and therefore, the claim is directed to one of the statutory categories of invention.
Step 2A, Prong One:
With regard to 2A, Prong One, the limitations “determining a model parameter of a quality assessment model”, “determining, according to the model parameter and the feature parameter of the PC to be assessed, a subjective quality measurement value of the PC to be assessed by using the quality assessment model”, “wherein the feature parameter of the PC to be assessed comprises a quantization parameter of the PC to be assessed; and the quantization parameter comprises a geometric quantization parameter and a color quantization parameter of the PC to be assessed” and “wherein the determining a model parameter of a quality assessment model comprises: determining a first feature value of the PC to be assessed by performing feature extraction on the PC to be assessed using a first calculation sub-model; determining a second feature value of the PC to be assessed by performing feature extraction on the PC to be assessed using a second calculation sub-model; and determining the model parameter according to the first feature value, the second feature value and a preset vector matrix, wherein the preset vector matrix is determined based on: acquiring a subjective quality test data set, and determining a preset vector matrix by training the subjective quality test data set; the first calculation sub-model represents extracting feature values related to Color Fluctuation in Geometric Distance (CFGD) from the PC to be assessed, and the second calculation sub-model represents extracting feature values related to Color Block Mean Variance (CBMV) from the PC to be assessed”, as drafted, recite an abstract idea, i.e., a process that, under its broadest reasonable interpretation, covers performance of the limitations manually and in the mind of a person. That is, a user or person skilled in the art may manually perform the determining of a model parameter and the determining of a subjective quality measurement value of the PC to be assessed by performing a mathematical calculation.
This concept falls under the grouping of abstract ideas of mathematical concepts, i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations.
Step 2A, Prong Two:
The 2019 PEG defines the phrase “integration into a practical application” to require an additional step or a combination of additional steps in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional step of “decoding a bitstream to acquire a feature parameter of a Point Cloud (PC) to be assessed” is considered to be extra-solution activity of gathering information. In addition, with respect to the decoder and encoder of claims 19 and 20, the mere recitation of a generic processor, memory, or storage medium to perform/store programming instructions of the identified abstract idea does not integrate the identified abstract idea into a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application; and therefore, the independent claims are directed to an abstract idea.
Step 2B:
Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/limitations amount to no more than insignificant extra-solution activity. Mere instructions to apply an exception using a generic component cannot provide an inventive concept. Therefore, independent claims 1, 10, 19 and 20 are not patent eligible. In addition, claims 5-7, 9, 11, 15-16 and 18 of the instant application provide limitations that, individually and in combination, neither integrate the identified abstract idea into a practical application nor provide significantly more than the identified abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 10 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rodrigues (“Blind Quality Assessment of 3-D Synthesized Views Based on Hybrid Feature Classes”, provided with Applicant’s Information Disclosure Statement) in view of Chou (US 2017/0347120 A1) and further in view of Zhang et al. (US 2019/0080203 A1), Shanableh (US 2010/0316131 A1), Chechik et al. (US 8,787,454 B1) and Budagavi (US 2019/0318509 A1).
Regarding claim 1, Rodrigues teaches a method comprising acquiring a feature parameter (Rodrigues, pg. 1740, several features extracted from synthesized views are selected for training or testing the algorithm. Pg. 1741, texture and depth quality parameters (QPs));
determining a model parameter of a quality assessment model (Rodrigues, pg. 1747, β1, …, β5 are the regression model parameters); and
determining, according to the model parameter and the feature parameter of the PC to be assessed, a subjective quality measurement value of the PC to be assessed by using the quality assessment model (Rodrigues, pg. 1747, the values resulting from these metrics were mapped to the subjective scores using the logistic function expressed by: DMOSp = β1·(1/2 − 1/(1 + exp(β2·(S − β3)))) + β4·S + β5, wherein DMOSp is the mapped subjective score);
wherein the feature parameter of the PC to be assessed comprises a quantization parameter of the PC to be assessed (Rodrigues, pg. 1740, several features extracted from synthesized views are selected for training or testing the algorithm. Pg. 1741, texture and depth quality parameters (QPs));
wherein the determining a model parameter of a quality assessment model comprises: determining a first feature value of the PC to be assessed by performing feature extraction on the PC to be assessed (Rodrigues, pg. 1740, several features extracted from synthesized views are selected for training or testing the algorithm. Pg. 1741, the quantization parameter (QP) of the lateral texture views, i.e., the first feature value);
determining a second feature value of the PC to be assessed by performing feature extraction on the PC to be assessed (Rodrigues, pg. 1740, several features extracted from synthesized views are selected for training or testing the algorithm. Pg. 1741, the quantization parameter (QP) of the lateral depth maps, i.e., the second feature value).
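As an illustrative aside, the logistic regression form cited from Rodrigues can be sketched in code. The function below is a direct transcription of that mapping; any parameter values supplied to it are placeholders, not values taken from the reference:

```python
import math

def dmos_p(s, b1, b2, b3, b4, b5):
    """Map an objective metric score s to a predicted subjective score
    (DMOSp) using the cited logistic regression form, where b1..b5 are
    the regression model parameters."""
    return b1 * (0.5 - 1.0 / (1.0 + math.exp(b2 * (s - b3)))) + b4 * s + b5
```

With b2 = 0 the logistic term vanishes and the mapping reduces to the linear part b4·s + b5, which is a quick sanity check on the transcription.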
Although Rodrigues teaches acquiring feature parameters (Rodrigues, pg. 1740) and teaches texture and depth quality parameters (Rodrigues, pg. 1741), Rodrigues does not explicitly teach “decoding a bitstream” to acquire said feature parameters, “a Point Cloud (PC) to be assessed”, and “the quantization parameter comprises a geometric quantization parameter and a color quantization parameter of the PC to be assessed”. However, in an analogous field of endeavor, Chou teaches decisions for blocks of the current point cloud frame can be indicated in syntax elements (or other bitstream elements) of the bitstream, decoded by the decoder, and converted into appropriate control data (Chou, Para. [0086]). Chou further teaches minimizing both geometry and color residuals in embodiments of the disclosed compression schemes (Chou, Para. [0103]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues with the teachings of Chou by including performing the method on a point cloud, decoding a bitstream, and using the quantization parameters to minimize geometry and color residuals in embodiments of the compression schemes. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for improved compression methods while maintaining acceptable visual quality, as recognized by Chou.
Although Rodrigues in view of Chou teaches determining a first and second feature value of the point cloud (Rodrigues, pgs. 1740-1741), they do not explicitly teach performing the extraction “using a first calculation sub-model and a second calculation sub-model”. However, in an analogous field of endeavor, Zhang teaches a generation module that includes a first determining submodule and a second determining submodule (Zhang, Para. [0016]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues in view of Chou with the teachings of Zhang by including a first and second calculation sub-model. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for the use of two separate sub-modules when determining the feature values of the point clouds, as recognized by Zhang.
Although Rodrigues in view of Chou further in view of Zhang teaches determining the first and second feature values (Rodrigues, pgs. 1740-1741), they do not explicitly teach “determining the model parameter according to the first feature value, the second feature value and a preset vector matrix”. However, in an analogous field of endeavor, Shanableh teaches that once the features are extracted, the training phase uses them to estimate the model parameters (Shanableh, Para. [0042]). The feature vectors are expanded in a polynomial network using the feature vector x (i.e., feature values) and a weight vector w that determines the orientation of the linear decision hyperplane (i.e., preset vector matrix). To reduce the dimensionality involved in feature vector expansion and yet retain the classification power, multinomials are used for expansion and model estimation, where the weight parameters are estimated (Shanableh, Para. [0068]-[0076]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues in view of Chou further in view of Zhang with the teachings of Shanableh by including determining the model parameters based on the feature values and a preset vector matrix. One having ordinary skill in the art before the effective filing date would have been motivated to combine these references, because doing so would allow for determining the parameters of a model for quality estimation, as recognized by Shanableh.
Although Rodrigues in view of Chou further in view of Zhang and Shanableh teaches determining model parameters (Rodrigues, pg. 1747), they do not explicitly teach “wherein the preset vector matrix is determined based on: acquiring a subjective quality test data set, and determining a preset vector matrix by training the subjective quality test data set”. However, in an analogous field of endeavor, Chechik teaches that a prediction curve can have functions that can be trained for a large set of video content features such that each weight (i.e., preset vector matrix) is continually updated for the most accurate prediction. Each time the equation is compared with a data set corresponding to a video and a human rating of that video, the prediction curve can be further calibrated. In many cases, the subjective testing data (i.e., subjective quality test data set), will train each weight in the prediction curve to vary depending on its influence (Chechik, Col. 9, lines 36-55).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues in view of Chou further in view of Zhang and Shanableh with the teachings of Chechik by including determining the weights of Shanableh (i.e., preset vector matrix) by training the subjective testing data (i.e., subjective quality test data set). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for adjusting model parameters to best fit a dataset, as recognized by Chechik.
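For illustration only (the data set, learning rate, and update rule below are hypothetical and are not drawn from Shanableh or Chechik), training a weight vector against subjective scores in the manner discussed could be sketched as:

```python
def fit_weights(features, scores, lr=0.01, epochs=500):
    """Toy per-sample gradient-descent fit of a weight vector w so that
    the dot product w . x approximates the subjective score for each
    sample. Purely illustrative of 'determining a preset vector matrix
    by training the subjective quality test data set'."""
    n = len(features[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(features, scores):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y  # signed prediction error for this sample
            for i in range(n):
                w[i] -= lr * err * x[i]  # move w against the error gradient
    return w
```

On a small consistent data set the weights converge toward the exact linear solution, mirroring how repeated comparison with rated samples calibrates each weight.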
Although Rodrigues in view of Chou further in view of Zhang, Shanableh and Chechik teaches determining the first and second feature values (Rodrigues, pgs. 1740-1741), they do not explicitly teach “the first calculation sub-model represents extracting feature values related to Color Fluctuation in Geometric Distance (CFGD) from the PC to be assessed, and the second calculation sub-model represents extracting feature values related to Color Block Mean Variance (CBMV) from the PC to be assessed”. However, in an analogous field of endeavor, Budagavi teaches deriving the average value of each color and standard deviation values for each color component in the neighborhood of points (Budagavi, Para. [0093]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues in view of Chou further in view of Zhang, Shanableh and Chechik with the teachings of Budagavi by including deriving the average value of each color and standard deviation values for each color component in the neighborhood of points (i.e., feature values related to color fluctuation in geometric distance and color block mean variance). One having ordinary skill in the art would have been motivated to combine these references, because doing so would allow for enhancing the visual quality of reconstructed point clouds, as recognized by Budagavi. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 10 recites an encoding method with elements corresponding to the steps recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Rodrigues, Chou, Zhang, Shanableh, Chechik and Budagavi references, presented in rejection of Claim 1, apply to this claim.
Claims 19 and 20 recite a decoder and encoder with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claim. Additionally, the rationale and motivation to combine the Rodrigues, Chou, Zhang, Shanableh, Chechik and Budagavi references, presented in rejection of Claim 1, apply to these claims. Finally, the combination of the Rodrigues, Chou, Zhang, Shanableh, Chechik and Budagavi references discloses a processor and a memory (Chou, Para. [0033], the computer system includes one or more processing units and memory).
Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Rodrigues (“Blind Quality Assessment of 3-D Synthesized Views Based on Hybrid Feature Classes”, provided with Applicant’s Information Disclosure Statement) in view of Chou (US 2017/0347120 A1) further in view of Zhang (US 2019/0080203 A1), Shanableh (US 2010/0316131 A1), Chechik et al. (US 8,787,454 B1) and Budagavi (US 2019/0318509 A1), as applied to claims 1, 10 and 19-20 above, and further in view of Wang (US 8379972 B1).
Regarding claim 5, Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi teaches the method of claim 1, wherein the determining a first feature value of the PC to be assessed by performing feature extraction on the PC to be assessed using a first calculation sub-model comprises:
calculating first feature values corresponding to one or more points in the PC to be assessed (Rodrigues, pg. 1740, several features extracted from synthesized views are selected for training or testing the algorithm. Pg. 1741, the quantization parameter (QP) of the lateral texture views, i.e., the first feature value),
wherein the calculating the first feature values corresponding to one or more points in the PC to be assessed comprises:
for a current point in the PC to be assessed, determining a near-neighbor point set associated with the current point, wherein the near-neighbor point set comprises at least one near-neighbor point (Budagavi, Para. [0088], the boundary detection engine performs a K-d tree nearest neighbor search with respect to a query point of a 2D frame. For a query point, the boundary detection engine derives the distance between each neighboring point and the query point. If the distance between the query point and a neighboring point is larger than the first threshold, then the boundary detection engine discards the neighboring point);
for the near-neighbor point set, calculating a color intensity difference between the current point and the at least one near-neighbor point in a unit distance, to determine the color intensity difference in at least one unit distance (Budagavi, Para. [0090], the difference between the color values of a point along the boundary of the patch and the color values of the centroid).
The proposed combination as well as the motivation for combining the Rodrigues, Chou, Zhang, Shanableh, Chechik and Budagavi references, presented in the rejection of Claim 1, apply to Claim 5 and are incorporated herein by reference.
Although Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi teaches determining the difference between color values of neighbor points (Budagavi, Para. [0090]), they do not explicitly teach “performing weighted mean calculation on the first feature values corresponding to the one or more points and determining a weighted mean as the first feature value of the PC to be assessed” and “determining the first feature value corresponding to the current point by calculating a weighted mean of the color intensity difference in the at least one unit distance”. However, in an analogous field of endeavor, Wang teaches a weight is calculated for each neighbor pixel within the window that will be used to determine an estimated foreground color for the selected pixel. Weights are based on the difference between the colors of the neighbor pixel and the selected pixel in the input image (Wang, Col. 8, lines 38-46), and teaches the weighted average of neighboring pixels for the selected pixel i for which a foreground color is to be estimated (Wang, Col. 9, lines 34-40).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues, Chou, Zhang, Shanableh, Chechik and Budagavi with the teachings of Wang by including determining a weighted average of the color intensity difference of neighbor pixels. One having ordinary skill would have been motivated to combine these references because doing so would allow for a higher detail in a restored image, as recognized by Wang. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.
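As a purely hypothetical sketch of the kind of per-point calculation at issue in claim 5 (the inverse-distance weighting scheme and the point layout are illustrative assumptions, not taken from the claims or the cited references):

```python
import math

def point_feature(current, neighbors):
    """Distance-weighted mean of the color-intensity difference per unit
    geometric distance between a current point and its near-neighbor set.
    Each point is an (x, y, z, intensity) tuple; inverse-distance weights
    are an illustrative choice only."""
    num = den = 0.0
    for n in neighbors:
        d = math.dist(current[:3], n[:3])
        if d == 0.0:
            continue  # skip coincident points to avoid division by zero
        w = 1.0 / d
        num += w * abs(current[3] - n[3]) / d  # intensity difference per unit distance
        den += w
    return num / den if den else 0.0
```

Nearer neighbors thus contribute more to the feature, consistent with the general idea of weighting color differences over a near-neighbor set.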
Regarding claim 6, Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi teaches the method of claim 1, wherein the determining a second feature value of the PC to be assessed by performing feature extraction on the PC to be assessed using a second calculation sub-model comprises:
calculating second feature values corresponding to one or more non-empty voxel blocks in the PC to be assessed (Rodrigues, pg. 1740, several features extracted from synthesized views are selected for training or testing the algorithm. Pg. 1741, the quantization parameter (QP) of the lateral depth maps, i.e., the second feature value).
Although Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi teaches calculating a second feature value (Rodrigues, pgs. 1740-1741), they do not explicitly teach “performing weighted mean calculation on the second feature values corresponding to the one or more non-empty voxel blocks, and determining a weighted mean as the second feature value of the PC to be assessed”. However, in an analogous field of endeavor, Wang teaches a weighted average of neighboring pixels for the selected pixel i for which a foreground color is to be estimated (Wang, Col. 9, lines 34-40).
The proposed combination as well as the motivation for combining the Rodrigues, Chou, Zhang, Shanableh, Chechik, Budagavi, and Wang references presented in the rejection of Claim 5, apply to Claim 6 and are incorporated herein by reference. Thus, the claimed invention is met by Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik, Budagavi, and Wang.
Claims 15 and 16 recite encoding methods with elements corresponding to the steps recited in Claims 5 and 6, respectively. Therefore, the recited steps of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claims. Additionally, the rationale and motivation to combine the Rodrigues, Chou, Zhang, Shanableh, Chechik, Budagavi, and Wang references, presented in rejection of Claim 5, apply to these claims.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Rodrigues (“Blind Quality Assessment of 3-D Synthesized Views Based on Hybrid Feature Classes”, provided with Applicant’s Information Disclosure Statement) in view of Chou (US 2017/0347120 A1) further in view of Zhang (US 2019/0080203 A1), Shanableh (US 2010/0316131 A1), Chechik et al. (US 8,787,454 B1) and Budagavi (US 2019/0318509 A1), as applied to claims 1, 10 and 19-20 above, and further in view of Lin (US 2016/0360207 A1).
Regarding claim 11, Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi teaches the method of claim 10, as described above.
Although Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi teaches determining a first and second feature value of the point cloud (Rodrigues, pgs. 1740-1741), they do not explicitly teach “acquiring a pre-coding parameter of the PC to be assessed” and “determining the feature parameter of the PC to be assessed according to the pre-coding parameter and a preset lookup table, wherein the preset lookup table is used for reflecting a correspondence between a coding parameter and the feature parameter”. However, in an analogous field of endeavor, Lin teaches a quantization parameter (qp) of the coding unit level, and selecting from a look-up table a size corresponding to the quantization parameter of the coding unit level (Lin, Para. [0085]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rodrigues in view of Chou further in view of Zhang, Shanableh, Chechik and Budagavi with the teachings of Lin by including a coding quantization parameter and a look-up table for selecting a feature parameter corresponding to the coding quantization parameter. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for improving coding quality and efficiency of video content, as recognized by Lin. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703)756-5582. The examiner can normally be reached Monday - Friday 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Emma Rose Goebel/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662