DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s response to the Non-final Office Action dated 08/27/2025, filed with the office on 11/26/2025, has been entered and made of record.
Status of Claims
Claims 1, 3, 4, 6-15, 17 and 18 are pending. Claims 1, 14 and 15 are amended. Claims 2, 5 and 16 were previously cancelled. Claim 18 is new.
Response to Arguments
Applicant’s amendment of independent Claims 1, 14 and 15 has altered the scope of the claims of the instant application and has necessitated the new ground(s) of rejection presented in this Office action. Accordingly, in response to Applicant’s arguments, which are directed to the amended portions of the claims, new analyses have been presented below, which render Applicant’s arguments moot.
Consequently, THIS ACTION IS MADE FINAL.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 14 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, Claim 1 recites “setting the data resolutions at random”. There is insufficient antecedent basis for “the data resolutions” in the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3, 4, 6, 11 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Tariq et al. (US 2019/0392268 A1) in view of Kim et al. (US 2021/0073945 A1) and in further view of Chen et al. (US 2019/0005603 A1).
Regarding claim 1, Tariq teaches, A learning apparatus comprising a processor (Tariq, ¶0172: “A system comprising: one or more processors”) configured to: determine, (Tariq, ¶0110: “processor capable of executing instructions”) based on a first data resolution of subject data (Tariq, ¶0092: “based on an input image of size 900×900”) obtained at a subject device, (Tariq, ¶0047: “image as input to an ML model 114 of a perception engine”) a plurality of data resolutions that differ from one another, (Tariq, ¶0093: “scales may be determined for various ranges of input and output sizes. Images of various scales (greater than, equal to, or less than the original image size) may be input”) the first data resolution indicating a corresponding amount of information per unit, (Tariq, ¶0051: “each cell of the output grid 200 identifies a portion of the image”) and train a scalable network (Tariq, ¶0035: “train an ML model with a receptive field that is substantially similar to the size of the input image”) (Tariq, ¶0025: “The ML model may include a neural network”) adapted for a variation of a data resolution of input data based on the subject data, (Tariq, ¶0040: “each ML model is trained to respond best to a small range of sizes”; therefore, the model is adapted for a variation of data resolutions/sizes) wherein the processor determines, as a basic structure, a structure of the scalable network (Tariq, ¶0088: “one or more smaller networks may be employed”; e.g. a small network is interpreted as a basic structure) corresponding to the first data resolution of the subject data; and (Tariq, ¶0088: “select optimal networks for any one or more of sizes”; the optimal network is interpreted as the basic network structure suitable for the first range of data resolutions that is similar to the input data resolution) provide a trained model (Tariq, ¶0040: “each ML model is trained”) to the subject device, (Tariq, ¶0048: “the perception engine 116 may include one or more ML models”) the trained model being the basic structure of the scalable network (Tariq, ¶0082: “training the ML model using the selected subset of examples”) that corresponds to the first data resolution of the subject data, (Tariq, ¶0040: “train an ML model with a receptive field that is substantially similar to the size of the input image”; therefore, the model can be employed for data resolutions close to the resolution of the input image) wherein the processor is further configured to acquire a training data set containing at least the training samples used in training the scalable network, (Tariq, ¶0082: “training the ML model using the selected subset of examples”) each of the training samples contained in the training data set (Tariq, ¶0147: “a first batch of training images that include objects of different sizes”; each of the training samples is interpreted as each batch of training images) corresponding to a respective different one of the plurality of data resolutions (Tariq, ¶0086: “cropped images of size 240×240 may be used in the first batch when training in the first stage, whereas image crops having size 960×600 may be used in a third batch used to train the model in a third stage”).
However, Tariq does not explicitly teach the plurality of data resolutions including the first data resolution and a second data resolution having a resolution higher than the first data resolution, a structure of the scalable network and the training samples being determined by specifying one of the largest or the smallest data resolution from among the plurality of data resolutions and setting the data resolutions at random based on the specified one of the largest or smallest data resolution.
In an analogous field of endeavor, Kim teaches, the plurality of data resolutions including the first data resolution (Kim, ¶0008: “a low resolution image having a first size”; first size is interpreted as the first data resolution) and a second data resolution having a resolution higher than the first data resolution; (Kim, ¶0008: “a low resolution image having a second size larger than the first size”) and a structure of the scalable network. (Kim, ¶0131: “the structure of an artificial neural network may be determined by... number of hidden layers”; and ¶0134: “determining an optimal model parameter during the learning process of an artificial neural network”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq using the teachings of Kim to introduce a second data resolution that is higher than the first data resolution. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of increasing the scalability of a neural network for optimal image processing for varying input resolutions. Therefore, it would have been obvious to combine the analogous arts Tariq and Kim to obtain the above-described limitations in claim 1. However, the combination of Tariq and Kim does not explicitly teach, the training samples being determined by specifying one of the largest or the smallest data resolution from among the plurality of data resolutions and setting the data resolutions at random based on the specified one of the largest or smallest data resolution.
In another analogous field of endeavor, Chen teaches, the training samples being determined by specifying one of the largest or the smallest data resolution from among the plurality of data resolutions (Chen, ¶0030: “resized training input images are provided to the image operator circuit 110 and the CNN image operator approximation circuit 120 for many training iterations 306, over a relatively large number of training images”) and setting the data resolutions at random based on the specified one of the largest or smallest data resolution. (Chen, ¶0030: “the randomized resolution resize circuit 302 is configured and provided to perform a random resolution resize of the training input image 102 so that the training encompasses a broad range of image resolutions”; the largest and the smallest resolution images are interpreted as the lower and the upper limit of a resolution range).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim using the teachings of Chen to introduce training a neural network with input images with a randomized resolution. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of optimizing a neural network to process images with a broad range of input resolutions. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim and Chen to obtain the invention in claim 1.
Regarding claim 3, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1, wherein the processor determines the basic structure based on a specification of the subject device, (Tariq, ¶0088: “smaller networks may be employed (i.e., a network having a smaller memory footprint and/or processing requirements”) and determines the plurality of data resolutions according to changes in receptive field (Tariq, ¶0097: “first range of sizes may be based at least in part on a receptive field”) for convolutional processing. (Kim, ¶0191: “A neural network for object recognition can be formed using various models, such as a convolutional neural network”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim and in further view of Chen using the additional teachings of Kim to introduce convolutional processing. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of employing a convolutional neural network for automatic image processing. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim and Chen to obtain the invention of claim 3.
Regarding claim 4, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 3, wherein the specification is at least one of a capacity of a memory of the subject device, processing ability of a processor of the subject device, and power consumption of the subject device. (Kim, ¶0151: “computing system… have greater processing power and greater memory capacity than user terminal”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim and in further view of Chen using the additional teachings of Kim to introduce the processing power and memory capacity of the computing system. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of determining a neural network structure that would be operable with the memory and processing capabilities of the computing device. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim and Chen to obtain the invention of claim 4.
Regarding claim 6, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1, wherein the processor trains the scalable network upon (Kim, ¶0176: “Once the initial neural network structure is designed, the neural network may be trained”) changing at least one of a layer number, a channel number, and a kernel size for convolutional processing, (Kim, ¶0168: “the number of channels, the number of hidden layers, and the like”) in proportion to the plurality of data resolutions. (Kim, ¶0165: “when the image is enlarged four times, a neural network for image processing in which a hidden layer is formed of four layers may be used”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim and in further view of Chen using the additional teachings of Kim to introduce adjusting the neural network before training. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of structuring the neural network based on the input for better efficiency. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim and Chen to obtain the invention of claim 6.
Regarding claim 11, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1, wherein the processor subjects the scalable network to mini-batch training with a plurality of training samples (Tariq, ¶0147: “a first batch of training images that include objects of different sizes”) corresponding to the plurality of data resolutions (Tariq, ¶0095: “a first range of sizes”) assigned to one batch. (Tariq, ¶0031: “first batch of images (whether scaled or not) to the ML model”).
Regarding claim 13, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1, wherein the processor determines the plurality of data resolutions (Tariq, ¶0093: “Images of various scales (greater than, equal to, or less than the original image size”) in such a manner that the plurality of data resolutions include each of the data resolutions of the subject data (Tariq, ¶0147: “determining the first range of sizes for the ML model based at least in part on… a first batch of training images”) obtained at a plurality of subject devices, respectively. (Tariq, ¶0115: “the image discussed herein may be received at a sensor of the sensor(s) 1012 and provided to the perception engine 1026”).
Regarding claim 14, it recites a learning method with steps corresponding to the elements of the apparatus recited in claim 1. Therefore, the recited steps of method claim 14 are mapped to the proposed combination in the same manner as the corresponding elements of apparatus claim 1. Additionally, the rationale and motivation to combine Tariq, Kim and Chen presented in the rejection of claim 1 apply to this claim. Tariq additionally teaches, A learning method (Tariq, ¶0184: “the content of the example clauses can also be implemented via a method”).
Regarding claim 15, it recites a non-transitory computer readable medium including computer executable instructions corresponding to the elements of the learning apparatus recited in claim 1. Therefore, the recited instructions of the computer readable medium of claim 15 are mapped to the proposed combination in the same manner as the corresponding elements of apparatus claim 1. Additionally, the rationale and motivation to combine Tariq, Kim and Chen presented in the rejection of claim 1 apply to this claim. Tariq further teaches, A non-transitory computer readable storage medium storing computer executable instructions, (Tariq, ¶0141: “A non-transitory computer-readable medium having a set of instructions”) wherein the instructions, when executed by a processor, control the processor to perform processes comprising: (Tariq, ¶0141: “a set of instructions that, when executed, cause one or more processors to perform operations comprising”).
Claims 7, 8, 10, 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tariq et al. (US 2019/0392268 A1), in view of Kim et al. (US 2021/0073945 A1), in further view of Chen et al. (US 2019/0005603 A1) and still in further view of Bai et al. (US 2021/0383234 A1).
Regarding claim 7, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1, wherein the subject data is image data, (Tariq, ¶0045: “The sensor data may include an image”) and the plurality of data resolutions (Tariq, ¶0095: “a first range of sizes”) (Tariq, ¶0035: “large objects appear to be smaller in the scaled-down image”). However, the combination of Tariq, Kim and Chen does not explicitly teach, the plurality of data resolutions are mutually different multiple image sizes, and wherein the processor determines the mutually different multiple image sizes.
In an analogous field of endeavor, Bai teaches, the plurality of data resolutions are mutually different multiple image sizes, and wherein the processor determines the mutually different multiple image sizes (Bai, ¶0006: “processor programmed to receive the input data at the neural network, wherein the input includes a plurality of resolution inputs of varying resolutions”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim and in further view of Chen using the teachings of Bai to introduce training a neural network with a plurality of resolutions. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of a neural network capable of processing images with different resolutions. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim, Chen and Bai to obtain the invention of claim 7.
Regarding claim 8, Tariq in view of Kim, in further view of Chen and still in further view of Bai teaches, The apparatus according to claim 7, wherein the processor determines the size of the object in the image data, (Tariq, ¶0089: “graph 704 may indicate a size of the object in the image”) based on a label in target data (Tariq, ¶0074: “labeling… an area of the image that represent an object in the image”) or a bounding box for object detection. (Tariq, ¶0022: “a box indicative of pixels identified as being associated with the detected object”).
Regarding claim 10, Tariq in view of Kim, in further view of Chen and still in further view of Bai teaches, The apparatus according to claim 7, wherein the processor determines the size of the object in the image data, using a classification result (Tariq, ¶0059: “ML model may have determined for the ROI 400′ for a specific classification”) and a saliency map obtained (Tariq, ¶0023: “ML model actually identified a salient object in the image”) by inputting the image data to another trained model. (Tariq, ¶0038: “images may be provided, as input, to a second ML model”).
Regarding claim 12, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1. However, the combination of Tariq, Kim and Chen does not explicitly teach, wherein the processor uses individual normalization layers in network structures of the scalable network corresponding to the plurality of data resolutions, respectively.
In an analogous field of endeavor, Bai teaches, individual normalization layers in network structures of the scalable network (Bai, ¶0054: “the MDEQ may utilize group normalization”) corresponding to the plurality of data resolutions, respectively (Bai, ¶0054: “performs normalization within each group (e.g., each resolution)”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim and in further view of Chen using the teachings of Bai to introduce normalization layers for different resolutions. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of improving the generalization ability of the neural network. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim, Chen and Bai to obtain the invention of claim 12.
Regarding claim 17, Tariq in view of Kim and in further view of Chen teaches, The apparatus according to claim 1, wherein the processor is configured to train the scalable network with the training samples corresponding to each of the plurality of data resolutions. However, the combination of Tariq, Kim and Chen does not explicitly teach, while keeping a network structure and a layer number of the scalable network unchanged.
In an analogous field of endeavor, Bai teaches, while keeping a network structure and a layer number of the scalable network unchanged. (Bai, ¶0020: “deep neural network with hidden layers z… finding an optimal number of layers L”; therefore, as the optimized number of layers L is used, the structure and layer number of the scalable neural network is unchanged).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim and in further view of Chen using the teachings of Bai to introduce optimized number of layers. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of determining the optimized structure of a scalable neural network adapted for variable data resolution. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim, Chen and Bai to obtain the invention of claim 17.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Tariq et al. (US 2019/0392268 A1), in view of Kim et al. (US 2021/0073945 A1), in further view of Chen et al. (US 2019/0005603 A1), still in further view of Bai et al. (US 2021/0383234 A1), and still in further view of Andrea Pisoni (US 10,019,654 B1).
Regarding claim 9, Tariq in view of Kim, in further view of Chen and still in further view of Bai teaches, The apparatus according to claim 7. However, the combination of Tariq, Kim, Chen and Bai does not explicitly teach, wherein the processor determines the size of the object in the image data, based on a spatial relationship between the object and the subject device.
In an analogous field of endeavor, Pisoni teaches, determines the size of the object in the image data, based on a spatial relationship between the object and the subject device. (Pisoni, col. 18, line. 13-15: “The system may determine an actual size… camera distance from the object when a corresponding image was captured”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Kim, in further view of Chen, and still in further view of Bai using the teachings of Pisoni to introduce a spatial relationship between an object and an image capturing device. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of calculating the actual size of an object based on the capturing distance. Therefore, it would have been obvious to combine the analogous arts Tariq, Kim, Chen, Bai, and Pisoni to obtain the invention of claim 9.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Tariq et al. (US 2019/0392268 A1) in view of Li et al. (US 2019/0065884 A1).
Regarding claim 18, Tariq teaches, A learning apparatus comprising a processor (Tariq, ¶0172: “A system comprising: one or more processors”) configured to: determine, (Tariq, ¶0110: “processor capable of executing instructions”) based on a first data resolution of subject data (Tariq, ¶0092: “based on an input image of size 900×900”) obtained at a subject device, (Tariq, ¶0047: “image as input to an ML model 114 of a perception engine”) a plurality of data resolutions that differ from one another, (Tariq, ¶0093: “scales may be determined for various ranges of input and output sizes. Images of various scales (greater than, equal to, or less than the original image size) may be input”) the first data resolution indicating a corresponding amount of information per unit, (Tariq, ¶0051: “each cell of the output grid 200 identifies a portion of the image”) and train a scalable network (Tariq, ¶0035: “train an ML model with a receptive field that is substantially similar to the size of the input image”) (Tariq, ¶0025: “The ML model may include a neural network”) adapted for a variation of a data resolution of input data based on the subject data, (Tariq, ¶0040: “each ML model is trained to respond best to a small range of sizes”; therefore, the model is adapted for a variation of data resolutions/sizes) wherein the processor determines, as a basic structure, a structure of the scalable network (Tariq, ¶0088: “one or more smaller networks may be employed”; e.g. a small network is interpreted as a basic structure) corresponding to the first data resolution of the subject data; and (Tariq, ¶0088: “select optimal networks for any one or more of sizes”; the optimal network is interpreted as the basic network structure suitable for the first range of data resolutions that is similar to the input data resolution) provide a trained model (Tariq, ¶0040: “each ML model is trained”) to the subject device, (Tariq, ¶0048: “the perception engine 116 may include one or more ML models”) the trained model being the basic structure of the scalable network (Tariq, ¶0088: “one or more smaller networks may be employed”; e.g. a small network is interpreted as a basic structure) that corresponds to the first data resolution of the subject data, (Tariq, ¶0088: “select optimal networks for any one or more of sizes”; the optimal network is interpreted as the basic network structure suitable for the first range of data resolutions that is similar to the input data resolution) wherein the processor is further configured to: acquire a training data set containing at least the training samples used in training the scalable network, (Tariq, ¶0082: “training the ML model using the selected subset of examples”) each of the training samples contained in the training data set (Tariq, ¶0147: “a first batch of training images that include objects of different sizes”; each of the training samples is interpreted as each batch of training images) corresponding to a respective different one of the plurality of data resolutions (Tariq, ¶0086: “cropped images of size 240×240 may be used in the first batch when training in the first stage, whereas image crops having size 960×600 may be used in a third batch used to train the model in a third stage”).
However, Tariq does not explicitly teach, the plurality of data resolutions including the first data resolution and a second data resolution having a resolution higher than the first data resolution; and train a structure of the scalable network that is larger than the basic structure based on training data having a second data resolution higher than the first data resolution, and train a structure of the scalable network that is smaller than the basic structure based on training data having a third data resolution lower than the first data resolution.
In an analogous field of endeavor, Li teaches, the plurality of data resolutions including the first data resolution and a second data resolution having a resolution higher than the first data resolution; (Li, ¶0014: “at least one image with first resolution and at least one image with second resolution, and the second resolution being higher than the first resolution”) and train a structure of the scalable network (Li, ¶0064: “a multi-scale neural network such as convolutional neural network may be established through training by using the acquired images as training samples”) that is (Li, ¶0094: “the training part includes three input terminals, that is, a first input terminal through which an image with first resolution P is input into the neural network, a second input terminal through which a part-cropping image from an image with resolution A is input into the neural network, and a third input terminal through which a part-cropping image from a image with resolution B is input into the neural network. Each of the resolution A and the resolution B is higher than the first resolution P and the resolution A is different from the resolution B”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq using the teachings of Li to introduce training images with three different resolutions. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of increasing the scalability of a neural network for optimal image processing for varying input resolutions. Therefore, it would have been obvious to combine the analogous arts Tariq and Li to obtain the above-described limitations in claim 18. However, the combination of Tariq and Li does not explicitly teach, a structure of the neural network larger than the basic structure and smaller than the basic structure.
In another analogous field of endeavor, Kim teaches, a structure of the neural network larger than the basic structure and smaller than the basic structure. (Kim, ¶0131: “the structure of an artificial neural network may be determined by a number of factors, including the number of hidden layers”, and ¶0182: “set the structure of the neural network in consideration of the number of pixels input”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Tariq in view of Li using the teachings of Kim to introduce changing a structure of a neural network. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of increasing the scalability of a neural network for optimal image processing for varying input resolutions. Therefore, it would have been obvious to combine the analogous arts Tariq, Li and Kim to obtain the above-described limitations in claim 18.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAZUL ISLAM/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662