DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This Office action is in response to the application filed on December 07, 2025.
Claims 1-16 are pending and have been examined.
This action is made FINAL.
Information Disclosure Statement
The information disclosure statement (IDS) submitted December 07, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
A new claim objection was entered due to the applicant’s amendments.
The drawings were found to be acceptable.
Due to applicant’s amendments, the 35 U.S.C. 112(f) interpretation and the 112(a) and 112(b) rejections have been withdrawn.
With respect to the applicant’s arguments on page 7, that the cited art, alone or in combination, does not teach “the training data is used for generating a trained model for estimating respective picked positions of the plurality of workpieces by machine learning based on the at least one image” as in claim 1, the examiner respectfully disagrees. The limitation is taught by Ohya: “one piece of image data. The predetermined information indicates, for example, whether a defect is present. Examples of the case of indicating whether a defect is present include a case of indicating whether metal is peeled when the workpiece is a part with a metallic surface, and a case of indicating whether the workpiece is colored with a color different from a predetermined color when the workpiece is a colored part. A trained model 430 is obtained by inputting the teacher data to the learning model 420, changing the algorithm by a backpropagation method or the like, and training the learning model to output highly accurate information on whether the predetermined information is present. This learning phase processing is performed by at least one of the CPU 41 and the GPU 42. The learning phase processing may be desirably performed on a cloud. In a case where the learning phase processing is performed in the processing apparatus 40, the processing apparatus 40 is required to have a performance of a certain level or higher. On the other hand, in a case where the learning phase processing is performed on a cloud, the workpiece defect determination can be performed regardless of the performance of the processing apparatus 40.” See at least paragraph 0046. In addition, the Examiner notes that the claims are replete with intended use language, for example, “the training data is used for generating a trained model for ….” Nevertheless, in the interest of compact prosecution, the Examiner has applied art. Therefore, the combination of art does teach the limitation above.
For the reasons explained above, applicant’s arguments are not persuasive.
With respect to the applicant’s arguments on page 8, that the cited art, alone or in combination, does not teach “a movement device configured to move at least one workpiece” as in claim 1, the examiner respectfully disagrees. The limitation is taught by Ohya: “The processing apparatus 40 determines whether a defect is present in an area of image data based on the image data acquired by the image capturing apparatus 30 (FIG. 2: S5). As a result of determination performed by the processing apparatus 40, if the final determination result to be output to a programmable logic controller (PLC) 50 indicates that a defect is present in the workpiece, the PLC 50 inputs a signal for operation control to a robot 60. The robot 60 switches a workpiece movement operation and causes the workpiece determined to be defective to move from the production line.” See at least paragraph 0043. In addition, Ohya teaches Fig. 2 (S6). The applicant argues that the robot moving a workpiece after the workpiece is determined to be defective differs from the intended use and does not disclose “a movement device configured to move at least one workpiece of the plurality of workpieces in a visual field of the vision sensor.” However, moving a defective piece does move the workpiece in a visual field of the vision sensor. For the reasons explained above, applicant’s arguments are not persuasive.
On page 8, the Applicant argues that a prima facie case of obviousness has not been established. In response, the Examiner respectfully submits that obviousness is determined on the basis of the evidence as a whole and the relative persuasiveness of the arguments. See In re Oetiker, 977 F.2d 1443, 1445, 24 USPQ2d 1443, 1444 (Fed. Cir. 1992); In re Hedges, 783 F.2d 1038, 1039, 228 USPQ 685, 686 (Fed. Cir. 1986); In re Piasecki, 745 F.2d 1468, 1472, 223 USPQ 785, 788 (Fed. Cir. 1984); and In re Rinehart, 531 F.2d 1048, 1052, 189 USPQ 143, 147 (CCPA 1976). Using this standard, the Examiner respectfully submits that the burden of presenting a prima facie case of obviousness has been satisfied, since evidence of corresponding claim elements in the prior art has been presented, and since the Examiner has expressly articulated the combinations and the motivations for the combinations that fairly suggest Applicant's claimed invention.
In response to Applicant's argument that there is no suggestion to combine the references, the Examiner recognizes that obviousness can only be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988) and In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992).
To this end, the Examiner recognizes that references cannot be arbitrarily altered or modified and that there must be some reason why one skilled in the art would be motivated to make the proposed modifications. Although the motivation or suggestion to make modifications must be articulated, it is respectfully submitted that there is no requirement that the motivation to make modifications must be expressly articulated within the references themselves. References are evaluated by what they suggest to one versed in the art, rather than by their specific disclosures, In re Bozek, 163 USPQ 545 (CCPA 1969).
In the instant case, the Examiner respectfully notes that each and every motivation to combine the applied references is accompanied by select portions of the respective references which specifically support that particular motivation. As such, it is NOT seen that the Examiner's combination of references is unsupported by the applied prior art of record. Rather, it is respectfully submitted that explanation based on the logic and scientific reasoning of one ordinarily skilled in the art at the time of the invention that supports a holding of obviousness has been adequately provided by the motivations and reasons indicated by the Examiner, Ex parte Levengood, 28 USPQ2d 1300 (Bd. Pat. App. & Inter., 4/22/93).
The Applicant's arguments stating that the combination of the prior art of record does not fully disclose nor fairly suggest the claimed invention fail to persuade the Examiner because, as shown in the rejections below, the prior art of record is clearly analogous and relevant. In addition, Applicant's arguments regarding the teachings of the prior art of record fall short because, when combined, the prior art of record fully discloses the claimed invention.
Claim Objections
Claim 15 is objected to because of the following informalities: the limitation starting with "moving" ends with "; and changes" while the next limitation begins with "changing". A suggestion to overcome this objection is to remove the words “and changes”. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 8-11, and 14-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by OHYA (US 20220284567 A1).
Regarding claims 1, 14, 15, and 16:
OHYA teaches:
a vision sensor configured to image an arrangement region of a plurality of workpieces and acquire at least one image selected from a group of a two-dimensional image and a three-dimensional image; (“The image capturing apparatus 30 includes a lens unit, an image sensor” [0036]; Fig. 1-2; “a sensor 10 is used to detect whether a workpiece (object) is present within a predetermined range (FIG. 2: S1). The sensor 10 is, for example, a sensor for detecting a workpiece moving at a high speed on a production line. For example, an infrared sensor is used. When the sensor 10 detects a workpiece within the predetermined range, the sensor 10 outputs a signal to a trigger generation circuit 20. The trigger generation circuit 20 generates an image capturing trigger signal based on the signal from the sensor 10 (FIG. 2: S2). The trigger generation circuit 20 is composed of a logic circuit such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC). The trigger generation circuit 20 performs hardware processing on the signal received from the sensor 10, and transmits the image capturing trigger signal that has undergone hardware processing to an image capturing apparatus 30. Then, the image capturing apparatus 30 captures an image of a workpiece (FIG. 2: S3).” [0033-34]; “the captured image is rotated such that linear portions corresponding to outer edges of the captured image” [0074] therefore the images are 2-dimensional images.)
a movement device configured to move at least one workpiece of the plurality of workpieces in a visual field of the vision sensor; (“The processing apparatus 40 determines whether a defect is present in an area of image data based on the image data acquired by the image capturing apparatus 30 (FIG. 2: S5). As a result of determination performed by the processing apparatus 40, if the final determination result to be output to a programmable logic controller (PLC) 50 indicates that a defect is present in the workpiece, the PLC 50 inputs a signal for operation control to a robot 60. The robot 60 switches a workpiece movement operation and causes the workpiece determined to be defective to move from the production line” [0043]; Fig. 2 (S6))
a processor configured to control an operation of the movement device; (“CPU 41” [0042]; “processing apparatus 40, if the final determination result to be output to a programmable logic controller (PLC) 50 indicates that a defect is present in the workpiece, the PLC 50 inputs a signal for operation control to a robot 60.” [0043])
generate training data including the at least one image acquired by the vision sensor and pick position information for picking out the at least one workpiece, ([0045-0046])
wherein the training data is used for generating a trained model for estimating respective pick positions of the plurality of workpieces by machine learning, based on the at least one image, and (“one piece of image data. The predetermined information indicates, for example, whether a defect is present. Examples of the case of indicating whether a defect is present include a case of indicating whether metal is peeled when the workpiece is a part with a metallic surface, and a case of indicating whether the workpiece is colored with a color different from a predetermined color when the workpiece is a colored part. A trained model 430 is obtained by inputting the teacher data to the learning model 420, changing the algorithm by a backpropagation method or the like, and training the learning model to output highly accurate information on whether the predetermined information is present. This learning phase processing is performed by at least one of the CPU 41 and the GPU 42. The learning phase processing may be desirably performed on a cloud. In a case where the learning phase processing is performed in the processing apparatus 40, the processing apparatus 40 is required to have a performance of a certain level or higher. On the other hand, in a case where the learning phase processing is performed on a cloud, the workpiece defect determination can be performed regardless of the performance of the processing apparatus 40.” [0046])
generate a plurality of training data by repeating control of moving the at least one workpiece by the movement device so as to change an arrangement pattern of the plurality of workpieces, imaging of the arrangement region of the plurality of workpieces by the vision sensor, and generation of the training data. (“each of the areas illustrated in FIG. 4 is separated into a plurality of components and information about the separated components is used as teacher data. In other words, each area is separated into a plurality of components and each of the plurality of components is input to a learning model to thereby generate a trained model. The plurality of components includes a color component, hue (H), saturation (S), and value (V). The values of these components are obtained and the trained model performs estimation processing based on the values. This leads to an improvement in the determination accuracy of the trained model. First, each of the defective areas R1 and non-defective areas R2 illustrated in FIG. 5 is separated into color image data, H image data, S image data, and V image data. Specifically, the color image data includes red (R) image data, green (G) image data, and blue (B) image data. The learning model is trained with data on the areas R1 and data on the areas R2. In this case, the data is learned by marking defective or non-defective information on each image group. In other words, classification learning for two classes, i.e., defective or non-defective, is performed on each image data. As a result, a trained model is generated by learning estimated values indicating defective or non-defective of R, G, B, H, S, and V images. In the estimation phase, the obtained estimated values are compared with the data on the image of the workpiece, thereby performing defective or non-defective determination” [0055-0057]; Fig. 5)
Regarding claim 2:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA further teaches:
wherein the processor is configured to generate the arrangement pattern of the plurality of workpieces based on a predetermined training data generation condition. (Fig. 7-11; “arranged in a column direction” [0076]; “each model has an adverse effect on the trained model, and thus there is a need to generate different trained models for the respective models. Like in the present exemplary embodiment, it is determined whether a defect is present based on a difference image in a single workpiece including a predetermined repetitive pattern” [0070])
Regarding claim 3:
OHYA, as shown in the rejection above, discloses the limitations of claim 2.
OHYA further teaches:
wherein the processor is configured to acquire the predetermined training data generation condition including at least one selected from a group of a target value of a number of training data, a range of a number of types of workpieces, a range of sizes of workpieces, and a condition of the arrangement pattern of workpieces. (“a small difference in design or characters on each model has an adverse effect on the trained model, and thus there is a need to generate different trained models for the respective models. Like in the present exemplary embodiment, it is determined whether a defect is present based on a difference image in a single workpiece including a predetermined repetitive pattern, thereby eliminating the adverse effect due to the difference in design or characters. Matters other than the matters to be described below are similar to those of the second exemplary embodiment.” [0070])
Regarding claim 5:
OHYA, as shown in the rejection above, discloses the limitations of claim 2.
OHYA further teaches:
wherein the processor is configured to generate an operation plan for the movement device using, as target values, the arrangement pattern including a position and orientation of the at least one workpiece, and generate an operation command for operating the movement device based on the generated operation plan, and the movement device is configured to, in response to a generated operation command, move the at least one workpiece. (“adjusting the orientation in the rotational direction” [0074]; “arranged” [0038, 0076]; Fig. 2 (S6); “the PLC 50 inputs a signal for operation control to a robot 60. The robot 60 switches a workpiece movement operation and causes the workpiece determined to be defective to move from the production line” [0043])
Regarding claim 8:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA further teaches:
wherein the processor is configured to perform image processing on the at least one image acquired by the vision sensor, (“image data obtained in the image segmentation step P15, the learning model is trained with data marking information indicating whether the workpiece is defective or non-defective (annotation step P16)” [0084])
output, as a processing result, pick position information for picking out the at least one workpiece by a robot device including a robot and a hand, and generate the training data including the outputted pick position information. (“annotation step P16, an image may be generated by rotating or translating the image data obtained in the image segmentation step P15, to thereby increase the number of pieces of learning image data. Note that annotation processing may be performed using the processed difference image without performing the image segmentation. The learning model is trained with these pieces of data” [0084]; “if the final determination result to be output to a programmable logic controller (PLC) 50 indicates that a defect is present in the workpiece, the PLC 50 inputs a signal for operation control to a robot 60. The robot 60 switches a workpiece movement operation and causes the workpiece determined to be defective to move from the production line” [0043])
Regarding claim 9:
OHYA, as shown in the rejection above, discloses the limitations of claim 5.
OHYA further teaches:
wherein the processor is configured to perform image processing on the at least one image acquired by the vision sensor, and generate the operation plan based on a result of the image processing. (“adjusting the orientation in the rotational direction” [0074]; Fig. 2 (S6); “the PLC 50 inputs a signal for operation control to a robot 60. The robot 60 switches a workpiece movement operation and causes the workpiece determined to be defective to move from the production line” [0043])
Regarding claim 10:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA further teaches:
wherein the processor is configured to determine acceptance of the training data based on whether or not a number of the training data has reached a predetermined target value. (“multiple training cycles. The resulting defect vectors may be analyzed for each supplied training dataset. Various defect metrics are conceivable for the analysis, these being able to be optimized during the learning steps, such as for example the mean error over all pixels of the defect map (usually cross-entropy) or perceptual defect metrics (for instance adversarial losses)…the stopping point may also be earlier, if for instance a previously defined training time budget is consumed or if the model error during training, evaluated on a separate validation dataset, does not drop any further. The statistical learning model may then be sufficiently trained.” [0072]; “target result that the statistical learning model 112′ is intended to deliver after training is complete if it is supplied with the training images 120a, 120b again as input data. In an iterative optimization process, the filter masks 92a, 92b for the convolution operations of the statistical learning model 112′ (cf. FIG. 4) are modified, for instance with stochastic gradient descent with or without Momentum, Adam, RMSProp and the like, until the statistical learning model 112′ delivers the annotated versions 122a, 122b as part of a defined defect criterion and/or stop criterion.” [0083] It is noted the target value could be one (once).)
Regarding claim 11:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA further teaches:
wherein the processor is configured to perform image processing on the at least one image acquired by the vision sensor, and determine whether to save or discard the training data based on a result of the image processing and a predetermined determination criterion for acceptance of the training data. (“multiple training cycles. The resulting defect vectors may be analyzed for each supplied training dataset. Various defect metrics are conceivable for the analysis, these being able to be optimized during the learning steps, such as for example the mean error over all pixels of the defect map (usually cross-entropy) or perceptual defect metrics (for instance adversarial losses)…the stopping point may also be earlier, if for instance a previously defined training time budget is consumed or if the model error during training, evaluated on a separate validation dataset, does not drop any further. The statistical learning model may then be sufficiently trained.” [0072]; “target result that the statistical learning model 112′ is intended to deliver after training is complete if it is supplied with the training images 120a, 120b again as input data. In an iterative optimization process, the filter masks 92a, 92b for the convolution operations of the statistical learning model 112′ (cf. FIG. 4) are modified, for instance with stochastic gradient descent with or without Momentum, Adam, RMSProp and the like, until the statistical learning model 112′ delivers the annotated versions 122a, 122b as part of a defined defect criterion and/or stop criterion.” [0083])
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over OHYA (US 20220284567 A1) in view of Freytag (US 20230294173 A1).
Regarding claim 4:
OHYA, as shown in the rejection above, discloses the limitations of claim 3.
OHYA does not teach, however, Freytag teaches:
wherein the condition of the arrangement pattern of workpieces includes at least one selected from a group of a range of a number of layers on which workpieces are stacked and a condition of a gap between workpieces. (“intended to deliver after training is complete if it is supplied with the training images 120a, 120b again as input data… After the statistical learning model 112′ has been trained sufficiently, it is supplied, in preferred example embodiments of the methods for the additive manufacture of a workpiece, with the respectively current images 92 and preferably also historical images of previous material layers … each case at least three and in particular four current and preferably normalized images of the new material layer… statistical learning model may be retrained using these training images and be optimized with regard to the manufactured workpiece.” [0083])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified OHYA to include the teachings of Freytag because “individual workpiece layers are often produced from the bottom upward on a production platform that is lowered following each workpiece layer by the corresponding layer height.” (Freytag, see at least paragraph 0004).
Regarding claim 6:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA does not teach, however, Freytag teaches:
wherein the movement device is configured so as to perform an operation of changing a three-dimensional position and orientation of the at least one workpiece. (“operations in three dimensions is applied. By way of example, the first two dimensions may be the spatial pixel information along the X axis and Y axis of the elevation map, and the third dimension of the convolution operations may be time, wherein the current elevation map and one or more historical elevation maps are used. The input dataset may for example be a tensor the dimensions of which correspond to the width and height of the elevation maps and to the number of historical and current elevation maps.” [0043])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified OHYA to include the teachings of Freytag because it “allows a highly advantageous implementation of the novel method and of the corresponding device with monitoring of the manufacturing process for a plurality of different workpieces and process sequences.” (Freytag, see at least paragraph 0041).
Regarding claim 7:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA does not teach, however, Freytag teaches:
wherein the vision sensor is configured to perform the three-dimensional image of the arrangement regions and acquire three-dimensional point group data, and the processor is configured to generate the training data including three-dimensional pick position information of the at least one workpiece. (“operations in three dimensions is applied. By way of example, the first two dimensions may be the spatial pixel information along the X axis and Y axis of the elevation map, and the third dimension of the convolution operations may be time, wherein the current elevation map and one or more historical elevation maps are used. The input dataset may for example be a tensor the dimensions of which correspond to the width and height of the elevation maps and to the number of historical and current elevation maps.” [0043]; “learning model 112″… determine the spatial dimensions of detected layer defects.” [0085])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified OHYA to include the teachings of Freytag because it “allows a highly advantageous implementation of the novel method and of the corresponding device with monitoring of the manufacturing process for a plurality of different workpieces and process sequences.” (Freytag, see at least paragraph 0041).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over OHYA (US 20220284567 A1) in view of YANG (CN 103567677 A).
Regarding claim 12:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA does not teach, however, YANG teaches:
wherein the movement device includes a conveyance cart configured to move on a floor surface with the at least one workpiece placed on the conveyance cart. (“clamping fixing device is a pneumatic self-centring three-claw chuck; the movable bracket located under the workpiece 2, comprising a sliding base, a lifting driving device and a roller; the sliding base sliding installed on the middle slide rail 5, the lifting drive device is installed on the sliding base, the rolling wheel is installed on the lifting drive device, and tangent to the pipe work piece,... and conveying mechanical arm 18 through two flange, tail shaft 13 of the cutting robot 4 output end respectively connected with cutting gun and a conveying mechanical hand 1” [0019]; Fig. 1)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified OHYA to include the teachings of YANG because the system “is able to complete more than one circumferential position on the axial workpiece processing.” (YANG, see at least paragraph 0002).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over OHYA (US 20220284567 A1) in view of OOTA (US 10596698 B2).
Regarding claim 13:
OHYA, as shown in the rejection above, discloses the limitations of claim 1.
OHYA teaches a robot but does not explicitly teach a hand or an arm; however, OOTA teaches:
wherein the movement device includes a robot device that includes a robot and a hand and the robot device is configured to move the at least one workpiece while holding the at least one workpiece. (“has a robot hand to hold a workpiece or camera. The state information includes a flaw detection position of the workpiece, a movement route of the robot hand” Abstract)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified OHYA to include the teachings of OOTA because the “number of imaging pieces and imaging positions are optimized, and a cycle time of the inspection is reduced.” (OOTA, see at least col. 2, lines 10-15).
CONCLUSION
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W ANDERSON whose telephone number is (571)270-0508. The examiner can normally be reached Monday - Thursday 9am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Debbie Reynolds can be reached at (571) 272-0734. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Mike Anderson
Supervisor Patent Examiner
Art Unit 3693
/Mike Anderson/Supervisory Patent Examiner, Art Unit 3693