Prosecution Insights
Last updated: April 19, 2026
Application No. 18/711,108

TEACHING DEVICE
Status: Final Rejection (§102, §103)

Filed: May 17, 2024
Examiner: NAVAS JR, EDEMIO
Art Unit: 2483
Tech Center: 2400 (Computer Networks)
Assignee: Fanuc Corporation
OA Round: 2 (Final)

Grant Probability: 71% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 9m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 71% (above average; 384 granted / 540 resolved; +13.1% vs TC avg)
Interview Lift: +24.7% (strong), measured over resolved cases with an interview
Avg Prosecution: 2y 9m (typical timeline); 31 applications currently pending
Total Applications: 571 (career history, across all art units)

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)

Tech Center averages are estimates; based on career data from 540 resolved cases.
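The headline examiner figures can be reproduced from the raw counts above. A minimal sketch; the Tech Center average (58.0%) is an assumption back-computed from the stated +13.1% delta, not a value given directly on the page:

```python
# Reproduce the examiner's career allowance statistics from the raw counts.
granted, resolved = 384, 540

career_allow_rate = granted / resolved   # share of resolved cases that granted
tc_average = 0.580                       # assumed: implied by 71.1% - 13.1 points

print(f"Career allow rate: {career_allow_rate:.1%}")              # -> 71.1%
print(f"Delta vs TC avg: {career_allow_rate - tc_average:+.1%}")  # -> +13.1%
```

The interview-lift and grant-probability figures are model outputs and cannot be recomputed from the counts shown here.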

Office Action (§102, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

In light of the changes made to the specification, the objection pertaining to a non-descriptive title is withdrawn. In light of the changes made to the claims, the interpretation under 35 U.S.C. 112(f) is withdrawn. Applicant's arguments with respect to claims 1-15 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 13-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Takeshi (JP 2017-040600).

The 102(a)(2) rejections below are based on Japanese Patent Application Publication No. 2017-040600 and rely on machine translations of the Japanese prior art. These English translations are deemed to fully comply with the translation requirement of MPEP § 1207.02. See the USPTO memorandum "Machine Translation of a Non-English Document Being Relied Upon by the Examiner in Support of a Rejection in an Examiner's Answer," located at http://www.uspto.gov/patents/law/exam/20091117_mach_trans_memo.pdf. The English translation of the foreign patent document is attached.
In regards to claim 1, Takeshi teaches a teaching device comprising:

a processor configured to execute detection of a position of a target object from a captured image acquired by capturing an image of the target object by a visual sensor (“The workpiece W is installed at an arbitrary position where the robot apparatus 110 can acquire the workpiece W by installation means (not shown). The work W may be installed by an articulated robot, may be performed by a human hand, or may be performed by other means. The robot apparatus 110 includes an articulated robot 111 and a robot controller 112 that controls the operation of the robot 111. The robot 111 has a vertically articulated robot arm and a robot hand attached to the robot arm. The robot controller 112 changes the position and posture of the workpiece W relative to the camera 105 by causing the robot 111 to grip the workpiece W and operating the robot 111 according to the trajectory data,” – pg. 2, also see FIG. 1; finally, “Conversely, any structure may be used as long as there is no structure that causes optical interference as described above and the workpiece W can be placed at a predetermined position,” – pg. 3; as also described in pg. 4, there is a base posture of the robot and camera for the image coordinate system, thus establishing the posture relationship between the object under inspection and the rest of the system);

set a plurality of image capture conditions related to image capture of the target object (“The workpiece W is installed at an arbitrary position where the robot apparatus 110 can acquire the workpiece W by installation means (not shown). The work W may be installed by an articulated robot, may be performed by a human hand, or may be performed by other means. The robot apparatus 110 includes an articulated robot 111 and a robot controller 112 that controls the operation of the robot 111. The robot 111 has a vertically articulated robot arm and a robot hand attached to the robot arm. The robot controller 112 changes the position and posture of the workpiece W relative to the camera 105 by causing the robot 111 to grip the workpiece W and operating the robot 111 according to the trajectory data,” – pg. 1; “Further, the relative position between the workpiece W and the camera 105 changes. In the first embodiment, the relative imaging position of the camera 105 with respect to the workpiece W is changed by moving the workpiece W by the robot apparatus 110. In addition, although the case where the workpiece | work W is moved by the robot apparatus 110 is demonstrated in 1st Embodiment, it is not limited to this, A moving apparatus may be apparatuses other than a robot apparatus. Moreover, although the case where the workpiece | work W is moved with respect to the camera 105 is demonstrated, you may move the camera 105 with respect to the workpiece | work W. FIG. In either case, the relative imaging position of the camera 105 with respect to the workpiece W can be changed.” – pg. 3, wherein the plurality of image capture conditions relate to the relative positioning of the object to the camera; additionally see FIG. 1);

execute image-capture-and-detection of the position of the target object under each of the plurality of image capture conditions (See the citations from pgs. 2-3 above, which describe changing the relative position of the object to the camera for imaging the object [under various angles caused by the change in relative positioning]; also “The CPU 201 causes the camera 105 to capture a plurality of images while changing the relative position between the work W and the camera 105 in order to acquire an image for inspection of the work W. As a method for driving the workpiece W with respect to the camera 105, the workpiece W may be driven step-and-repeat, and the workpiece W may be imaged when stopped, or the workpiece W may be imaged at any timing while being driven,” – pgs. 3-4, with pg. 4 further describing that the position and orientation of the object with respect to the camera are required for each captured position and measured within an image coordinate system); and

determine, as a formally employed detection result, a formal detection position of the target object, based on an index indicating a statistical property of a plurality of detection results relating to a plurality of detected positions of the target object acquired by executing the image-capture-and-detection of the position of the target object under the plurality of image capture conditions (See FIGS. 5A-5D in view of “5A to 5C show defect candidates 301 corresponding to defects when the workpiece W is moved relative to the camera 105 in the inspection method according to the first embodiment of the present invention. It is a conceptual diagram which shows the defect candidate 302 corresponding to the reflection of the light source 102. When rotating moving the workpiece W, the image I1, I2, I3 of the three obtained by the imaging of the camera 105, the inspection region 300 is-through in the image coordinate system sigma .sub.I corresponding to the inspected surface WA of the work W It is crowded. These among three images I1, I2, I3, the position of the defect candidate 301 does not change the work coordinate system sigma .sub.W, the position of the defect candidate 302 will change the work coordinate system sigma .sub.W. Incidentally, in FIG. 5 (a) ~FIG 5 (c), was a reflection of the light source 102, except the light source 102, for example, be a glare due to reflection and the like of the workpiece W itself, in the work coordinate system sigma .sub.W Change,” – pg. 6, wherein the given example shows three images taken at three differing positions, as well as a correlated frequency map. From here, an index indicating a statistical property of a plurality of detection results relating to a plurality of detected positions of the object is shown in the form of an appearance frequency value, described as follows: “In step S405 of the first embodiment, the determination based on the frequency threshold is performed with reference to the frequency map. When the number of feature points in a certain cell area in the frequency map is greater than or equal to the frequency threshold recorded in the data storage memory 253, it is determined that the corresponding cell area is defective, and the workpiece W is processed as a defective product. Conversely, when there is no cell area equal to or higher than the frequency threshold, it is determined that the workpiece W is not defective, and the workpiece W is processed as a non-defective product. The workpiece W can be inspected through the above-described process. In the example of FIG. 5D, when threshold processing is performed on the frequency map 303 with the frequency threshold value being “2”, the region 304 having a high appearance frequency has a count number of “3”, so that there is a defect. On the other hand, since the count number is “1” in the region 305 having a low appearance frequency, it can be seen that there is no defect. Here, since the frequency threshold with respect to the count number of the frequency map changes depending on the number of captured images and the like, it is necessary for the user to set an optimal value,” – pg. 6, wherein the formal detection position corresponds to the positions that determine a defect according to the frequency map, frequency threshold, and corresponding appearance frequency described above).
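The frequency-map determination quoted from Takeshi (defect candidates pooled over several images, counted per cell in the work coordinate system, then thresholded) can be sketched in a few lines. This is a reading of the cited passage, not code from the reference; the cell size, threshold, and sample coordinates are illustrative assumptions:

```python
# Sketch of a frequency-map defect check: a true defect stays in the same
# work-coordinate cell across images (high count), while glare/reflections
# move between images (count 1 per cell) and fall below the threshold.
from collections import Counter

def find_defects(candidates, cell=1.0, freq_threshold=2):
    """candidates: (x, y) defect-candidate positions in work coordinates,
    pooled over all captured images. Returns the cells judged defective."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y in candidates)
    return {c for c, n in counts.items() if n >= freq_threshold}

# Three images, as in FIG. 5: one candidate recurs near (4.2, 7.5);
# the light-source reflection lands in a different cell each time.
detections = [(4.2, 7.5), (4.3, 7.4), (4.1, 7.6),   # recurring defect, count 3
              (1.0, 2.0), (6.0, 3.0), (9.0, 8.0)]   # moving glare, count 1 each
print(find_defects(detections))  # -> {(4, 7)}
```

With the threshold at 2, only the cell the defect occupies in all three images survives, mirroring the count-of-"3" vs count-of-"1" example in FIG. 5D.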
In regards to claim 13, Takeshi teaches the teaching device according to claim 11, wherein the plurality of image capture conditions includes an image capture condition being an image capture position of the visual sensor (“The workpiece W is installed at an arbitrary position where the robot apparatus 110 can acquire the workpiece W by installation means (not shown). The work W may be installed by an articulated robot, may be performed by a human hand, or may be performed by other means. The robot apparatus 110 includes an articulated robot 111 and a robot controller 112 that controls the operation of the robot 111. The robot 111 has a vertically articulated robot arm and a robot hand attached to the robot arm. The robot controller 112 changes the position and posture of the workpiece W relative to the camera 105 by causing the robot 111 to grip the workpiece W and operating the robot 111 according to the trajectory data,” – pg. 2, also see FIG. 1; finally, “Conversely, any structure may be used as long as there is no structure that causes optical interference as described above and the workpiece W can be placed at a predetermined position,” – pg. 3; as also described in pg. 4, there is a base posture of the robot and camera for the image coordinate system, thus establishing the posture relationship between the object under inspection and the rest of the system);

and the processor is configured to determine one or more image capture positions around one image capture position being the standard in such a way that one or more image capture areas based on the one or more image capture positions on an image capture target surface partially overlap an image capture area based on the one image capture position being the standard on the image capture target surface (See wherein the system indicates that the object is rotationally moved to the front and left-right [see FIG. 5], wherein it is understood to one of ordinary skill that the front image would be an imaging condition that serves as a reference and would naturally overlap portions of image capture areas that have been moved left and right; it is noted by the examiner that numbered paragraph translations were not found for this reference and apologies are made therefor).

In regards to claim 14, Takeshi teaches the teaching device according to claim 13, wherein the visual sensor is provided on a robot (See FIG. 1), and the processor is configured to move the robot in such a way that image capture of the target object is performed at a plurality of image capture positions (“The workpiece W is installed at an arbitrary position where the robot apparatus 110 can acquire the workpiece W by installation means (not shown). The work W may be installed by an articulated robot, may be performed by a human hand, or may be performed by other means. The robot apparatus 110 includes an articulated robot 111 and a robot controller 112 that controls the operation of the robot 111. The robot 111 has a vertically articulated robot arm and a robot hand attached to the robot arm. The robot controller 112 changes the position and posture of the workpiece W relative to the camera 105 by causing the robot 111 to grip the workpiece W and operating the robot 111 according to the trajectory data,” – pg. 2, also see FIG. 1; finally, “Conversely, any structure may be used as long as there is no structure that causes optical interference as described above and the workpiece W can be placed at a predetermined position,” – pg. 3; as also described in pg. 4, there is a base posture of the robot and camera for the image coordinate system, thus establishing the posture relationship between the object under inspection and the rest of the system).
In regards to claim 15, Takeshi teaches the teaching device according to claim 13, wherein the visual sensor is fixed to a workspace in which a robot provided with a hand is installed (“The workpiece W is installed at an arbitrary position where the robot apparatus 110 can acquire the workpiece W by installation means (not shown). The work W may be installed by an articulated robot, may be performed by a human hand, or may be performed by other means. The robot apparatus 110 includes an articulated robot 111 and a robot controller 112 that controls the operation of the robot 111. The robot 111 has a vertically articulated robot arm and a robot hand attached to the robot arm. The robot controller 112 changes the position and posture of the workpiece W relative to the camera 105 by causing the robot 111 to grip the workpiece W and operating the robot 111 according to the trajectory data,” – pg. 2, also see FIG. 1; finally, “Conversely, any structure may be used as long as there is no structure that causes optical interference as described above and the workpiece W can be placed at a predetermined position,” – pg. 3; as also described in pg. 4, there is a base posture of the robot and camera for the image coordinate system, thus establishing the posture relationship between the object under inspection and the rest of the system), the robot is configured to grip the target object with the hand (See FIG. 1),

and the processor is configured to move the robot in such a way that image capture of the target object is performed at a plurality of image capture positions (“The workpiece W is installed at an arbitrary position where the robot apparatus 110 can acquire the workpiece W by installation means (not shown). The work W may be installed by an articulated robot, may be performed by a human hand, or may be performed by other means. The robot apparatus 110 includes an articulated robot 111 and a robot controller 112 that controls the operation of the robot 111. The robot 111 has a vertically articulated robot arm and a robot hand attached to the robot arm. The robot controller 112 changes the position and posture of the workpiece W relative to the camera 105 by causing the robot 111 to grip the workpiece W and operating the robot 111 according to the trajectory data,” – pg. 1; “Further, the relative position between the workpiece W and the camera 105 changes. In the first embodiment, the relative imaging position of the camera 105 with respect to the workpiece W is changed by moving the workpiece W by the robot apparatus 110. In addition, although the case where the workpiece | work W is moved by the robot apparatus 110 is demonstrated in 1st Embodiment, it is not limited to this, A moving apparatus may be apparatuses other than a robot apparatus. Moreover, although the case where the workpiece | work W is moved with respect to the camera 105 is demonstrated, you may move the camera 105 with respect to the workpiece | work W. FIG. In either case, the relative imaging position of the camera 105 with respect to the workpiece W can be changed.” – pg. 3, wherein the plurality of image capture conditions relate to the relative positioning of the object to the camera; additionally see FIG. 1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 3, 7, 11 and 12 are rejected under 35 U.S.C.
103 as being unpatentable over Takeshi (JP 2017-040600) in view of Hayashi (U.S. PG Publication No. 2019/0268522).

In regards to claim 2, Takeshi fails to teach the teaching device according to claim 1, wherein the processor is configured to determine the formally employed detection result, based on a mode value as the index, the mode value being related to the plurality of detection results.

In a similar endeavor, Hayashi teaches wherein the processor is configured to determine the formally employed detection result, based on a mode value as the index, the mode value being related to the plurality of detection results (See ¶0051-0053, wherein the score is related to the plurality of detection results; an average luminance value may also be taught as the index/mode value, which is likewise related to the plurality of detection results).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hayashi into Takeshi, because it allows for setting imaging conditions which connote the best score as optimum illumination conditions, as described in at least ¶0051.

In regards to claim 3, Takeshi fails to teach the teaching device according to claim 2, wherein the processor is configured to average one or more detection results determined as the formally employed detection results out of the plurality of detection results and use an averaged result as the formally employed detection result.

In a similar endeavor, Hayashi teaches wherein the processor is configured to average one or more detection results determined as the formally employed detection results out of the plurality of detection results and use an averaged result as the formally employed detection result (See ¶0051-0053, wherein the average luminance value may be set as the initial setting).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hayashi into Takeshi, because it allows for setting imaging conditions which connote the best score as optimum illumination conditions, as described in at least ¶0051.

In regards to claim 7, Takeshi fails to teach the teaching device according to claim 1, wherein the processor is configured to determine the formally employed detection result by using a value acquired by averaging the plurality of detection results as the index.

In a similar endeavor, Hayashi teaches wherein the processor is configured to determine the formally employed detection result by using a value acquired by averaging the plurality of detection results as the index (See ¶0024-0026 and 0051-0053 in view of FIGS. 4, 6 and 7).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hayashi into Takeshi, because it allows for setting imaging conditions which connote the best score as optimum illumination conditions, as described in at least ¶0051.

In regards to claim 11, Takeshi fails to teach the teaching device according to claim 1, wherein the processor is configured to set the plurality of image capture conditions by generating one or more image capture conditions, based on one image capture condition being a standard.

In a similar endeavor, Hayashi teaches wherein the processor is configured to set the plurality of image capture conditions by generating one or more image capture conditions, based on one image capture condition being a standard (See ¶0024-0026 and 0051-0053 in view of FIGS. 4, 6 and 7, wherein a standard may, for example, be set as the middle value or the initial setting).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hayashi into Takeshi, because it allows for setting imaging conditions which connote the best score as optimum illumination conditions, as described in at least ¶0051.

In regards to claim 12, Takeshi fails to teach the teaching device according to claim 11, wherein the one image capture condition being the standard is a previously taught image capture condition.

In a similar endeavor, Hayashi teaches wherein the one image capture condition being the standard is a previously taught image capture condition (See ¶0024-0026 and 0051-0053 in view of FIGS. 4, 6 and 7, wherein it is understood that the middle or initial setting value may be one which was taught to the target object detection unit).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hayashi into Takeshi, because it allows for setting imaging conditions which connote the best score as optimum illumination conditions, as described in at least ¶0051.

Claims 4-6 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Takeshi (JP 2017-040600) in view of Hayashi (U.S. PG Publication No. 2019/0268522) and Nishimura et al. (“Nishi”) (U.S. PG Publication No. 2021/0092280).
In regards to claim 4, Takeshi fails to teach the teaching device according to claim 2, wherein the processor is configured to extract an image capture condition positioned as being unsuitable for use in the image-capture-and-detection or an image capture condition positioned as being suitable for use in the image-capture-and-detection from the plurality of image capture conditions by using at least one of two decision criteria being: (1) an image capture condition producing a greater number of detection results matching each other is an image capture condition with a higher probability of successful detection; and (2) an image capture condition producing a result with a higher predetermined evaluation value related to a detection result is an image capture condition with a higher probability of successful detection, and adjust a previously taught image capture condition with the extracted image capture condition positioned as being unsuitable for use in the image-capture-and-detection or the extracted image capture condition positioned as being suitable for use in the image-capture-and-detection.

In a similar endeavor, Nishi teaches extracting an image capture condition positioned as being unsuitable for use in the image-capture-and-detection or an image capture condition positioned as being suitable for use in the image-capture-and-detection from the plurality of image capture conditions by using at least one of two decision criteria being: (1) an image capture condition producing a greater number of detection results matching each other is an image capture condition with a higher probability of successful detection; and (2) an image capture condition producing a result with a higher predetermined evaluation value related to a detection result is an image capture condition with a higher probability of successful detection (It is noted by the examiner that the claim language merely requires one of the two decision criteria to be fulfilled; see ¶0003, 0014 and 0018, wherein the system may adjust various parameters [thus changing imaging conditions] in order to get better detection results [more successful, i.e., a higher predetermined evaluation value related to a detection result]), and adjusting a previously taught image capture condition with the extracted image capture condition positioned as being unsuitable for use in the image-capture-and-detection or the extracted image capture condition positioned as being suitable for use in the image-capture-and-detection (See ¶0003, 0014 and 0018).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Nishi into Takeshi, because it allows for an improved detection system through the use of a neural network to find optimal imaging conditions, especially ones which are better than initial imaging conditions and those created by a user, thus improving the efficiency of such a system.
In regards to claim 5, Takeshi fails to teach the teaching device according to claim 4, wherein the processor is configured to extract the image capture condition positioned as being unsuitable for use in the image-capture-and-detection and make an adjustment in such a way that the image capture condition positioned as being unsuitable for use in the image-capture-and-detection is not used as an image capture condition.

In a similar endeavor, Nishi teaches wherein the processor is configured to extract the image capture condition positioned as being unsuitable for use in the image-capture-and-detection and make an adjustment in such a way that the image capture condition positioned as being unsuitable for use in the image-capture-and-detection is not used as an image capture condition (See, for example, ¶0003, 0014 and 0018, wherein the system may use iterative processes of imaging conditions until a confidence level has exceeded a threshold; various attempts are therefore made by the neural network model which would be considered unsuitable for image capture and detection and would thus not be used under such conditions).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Nishi into Takeshi, because it allows for an improved detection system through the use of a neural network to find optimal imaging conditions, especially ones which are better than initial imaging conditions and those created by a user, thus improving the efficiency of such a system.
In regards to claim 6, Takeshi fails to teach the teaching device according to claim 4, wherein the processor is configured to extract the image capture condition positioned as being suitable for use in the image-capture-and-detection and update an image capture condition previously taught to the target object detection unit with the image capture condition positioned as being suitable for use in the image-capture-and-detection.

In a similar endeavor, Nishi teaches wherein the processor is configured to extract the image capture condition positioned as being suitable for use in the image-capture-and-detection and update an image capture condition previously taught to the target object detection unit with the image capture condition positioned as being suitable for use in the image-capture-and-detection (See ¶0003, 0014 and 0018; this is taken in view of Takeshi’s teachings).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Nishi into Takeshi, because it allows for an improved detection system through the use of a neural network to find optimal imaging conditions, especially ones which are better than initial imaging conditions and those created by a user, thus improving the efficiency of such a system.
In regards to claim 8, Takeshi fails to teach the teaching device according to claim 7, wherein the processor is configured to extract image capture condition adjustment unit extracts an image capture condition positioned as being unsuitable for use in the image-capture-and-detection or an image capture condition positioned as being suitable for use in the image-capture-and-detection from the plurality of image capture conditions by using a decision criterion that an image capture condition producing a result with a higher predetermined evaluation value related to a detection result is an image capture condition with a higher probability of successful detection, and adjust a previously taught image capture condition with the extracted image capture condition positioned as being unsuitable for use in the image-capture-and-detection or the extracted image capture condition positioned as being suitable for use in the image-capture-and-detection. In a similar endeavor Nishi teaches wherein the processor is configured to extract image capture condition adjustment unit extracts an image capture condition positioned as being unsuitable for use in the image-capture-and-detection or an image capture condition positioned as being suitable for use in the image-capture-and-detection from the plurality of image capture conditions by using a decision criterion that an image capture condition producing a result with a higher predetermined evaluation value related to a detection result is an image capture condition with a higher probability of successful detection (See ¶0003, 0014 and 0018 wherein the system may adjust various parameters [thus changing imaging conditions] in order to get better detection results [more successful, in a higher predetermined evaluation value related to other conditions]), and adjust a previously taught image capture condition with the extracted image capture condition positioned as being unsuitable for use in the image-capture-and-detection or the 
extracted image capture condition positioned as being suitable for use in the image-capture-and-detection (See for example ¶0003, 0014 and 0018 wherein the system may use iterative processes of imaging conditions, until a confidence level has exceeded a threshold, therefore various attempts are made by the neural network model which would be considered unsuitable for image capture and detection and would thus not be used under such conditions). It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Nishi into Takeshi because it allows for an improved detection system through the use of a neural network in order to find optimal imaging conditions, especially ones which are better than initial imaging conditions and those created by a user, thus improving efficiency of such a system. In regards to claim 9, Takeshi fails to teach the teaching device according to claim 8, wherein the processor is configured to extract the image capture condition positioned as being unsuitable for use in the image-capture-and-detection and make an adjustment in such a way that the image capture condition positioned as being unsuitable for use in the image-capture-and-detection is not used as an image capture condition. 
In a similar endeavor, Nishi teaches wherein the processor is configured to extract the image capture condition positioned as being unsuitable for use in the image-capture-and-detection and make an adjustment in such a way that the image capture condition positioned as being unsuitable for use in the image-capture-and-detection is not used as an image capture condition (See, for example, ¶0003, 0014, and 0018, wherein the system may use iterative processes of imaging conditions until a confidence level has exceeded a threshold; therefore, various attempts are made by the neural network model which would be considered unsuitable for image capture and detection and would thus not be used under such conditions).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Nishi into Takeshi because it allows for an improved detection system through the use of a neural network to find optimal imaging conditions, especially ones which are better than the initial imaging conditions and those created by a user, thus improving the efficiency of such a system.

In regards to claim 10, Takeshi fails to teach the teaching device according to claim 8, wherein the processor is configured to set an image capture condition positioned as being suitable for use in the image-capture-and-detection and update the previously taught image capture condition with the image capture condition positioned as being suitable for use in the image-capture-and-detection.
In a similar endeavor, Nishi teaches wherein the processor is configured to set an image capture condition positioned as being suitable for use in the image-capture-and-detection and update the previously taught image capture condition with the image capture condition positioned as being suitable for use in the image-capture-and-detection (See ¶0003, 0014, and 0018, wherein the system may adjust various parameters [thus changing the imaging conditions] in order to get better detection results [i.e., more successful, with a higher predetermined evaluation value relative to other conditions]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Nishi into Takeshi because it allows for an improved detection system through the use of a neural network to find optimal imaging conditions, especially ones which are better than the initial imaging conditions and those created by a user, thus improving the efficiency of such a system.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR, whose telephone number is (571) 270-1067. The examiner can normally be reached Monday through Friday, approximately 9:00 AM to 6:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

EDEMIO NAVAS JR
Primary Examiner
Art Unit 2483

/EDEMIO NAVAS JR/
Primary Examiner, Art Unit 2483

Prosecution Timeline

May 17, 2024
Application Filed
Sep 15, 2025
Non-Final Rejection — §102, §103
Dec 16, 2025
Response Filed
Jan 14, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598398
Terminal Detection Platform
2y 5m to grant Granted Apr 07, 2026
Patent 12598283
METHOD AND DISPLAY APPARATUS FOR CORRECTING DISTORTION CAUSED BY LENTICULAR LENS
2y 5m to grant Granted Apr 07, 2026
Patent 12593141
INFORMATION MANAGEMENT DEVICE, INFORMATION MANAGEMENT METHOD, AND STORAGE MEDIUM FOR MANAGING INFORMATION PROVIDED TO A MOBILE OBJECT AND DEVICE USED BY A USER IN LOCATION DIFFERENT FROM THE MOBILE OBJECT
2y 5m to grant Granted Mar 31, 2026
Patent 12587686
SIGNALING FOR GENERAL CONSTRAINT INFORMATION IN VIDEO CODING
2y 5m to grant Granted Mar 24, 2026
Patent 12587643
IMAGE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED FOR BLOCK DIVISION AT PICTURE BOUNDARY
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
96%
With Interview (+24.7%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
