Prosecution Insights
Last updated: April 19, 2026
Application No. 18/821,693

IMAGING SYSTEM, IMAGING METHOD, AND STORAGE MEDIUM

Status: Non-Final OA (§102, §103)
Filed: Aug 30, 2024
Examiner: KASPER, BYRON XAVIER
Art Unit: 3657
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Casio Computer Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 70% (72 granted / 103 resolved; +17.9% vs TC avg, above average)
Interview Lift: +18.4% among resolved cases with interview (strong)
Avg Prosecution: 3y 0m (typical timeline)
Currently Pending: 36
Total Applications: 139 (across all art units)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center averages are estimates. Figures based on career data from 103 resolved cases.
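For readers cross-checking the dashboard, the headline figures above follow directly from the raw counts it reports. A minimal sketch (the variable names are illustrative, not any tool's API):

```python
# Career allow rate from the reported raw counts: "72 granted / 103 resolved".
granted, resolved = 72, 103
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 69.9%, shown rounded as 70%

# "+17.9% vs TC avg" implies the Tech Center average allow rate.
tc_avg_allow = career_allow_rate - 0.179
print(f"Implied TC average: {tc_avg_allow:.1%}")  # 52.0%

# Difference between the with-interview and baseline grant probabilities.
# Note this is an approximation: the dashboard's "+18.4% Interview Lift" is
# presumably computed on the examiner's own with/without-interview case split,
# not on these two rounded headline probabilities.
base, with_interview = 0.70, 0.88
print(f"Approx. interview lift: {with_interview - base:+.1%}")  # +18.0%
```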

Office Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This communication is responsive to Application No. 18/821,693 and the claims filed on 8/30/2024.

3. Claims 1-15 are presented for examination.

Information Disclosure Statement

4. The information disclosure statements (IDS) submitted on 8/30/2024 and 9/4/2025 have been fully considered by the Examiner.

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

7. Claim(s) 1, 2, 3, 8, 9, 14, and 15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ino et al. (US 20220182533 A1 hereinafter Ino).
Regarding Claim 1, Ino teaches an imaging system comprising: a camera ([0066] via “The first camera robot 1 includes a camera 11, ….”); and at least one processor ([0241] via “Each component may be implemented by reading and executing, by a program execution unit such as a CPU or a processor, ….”); wherein the at least one processor in a case in which a gesture, that a robot is to be caused to execute at a time of video capturing, is selected from among a plurality of gestures registered in advance ([0059] via “One of the first camera robot 1, the second camera robot 2, and the third camera robot 3 selects an operation pattern for the first camera robot 1, the second camera robot 2, and the third camera robot 3 from among a plurality of operation patterns based on a detected attribute of one or more subjects. … The one camera robot operates according to the selected operation pattern. The other camera robot operates according to the notified operation pattern.”), ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), (Note: The Examiner interprets the operation pattern of Ino as the gesture) and, also, video capturing by the camera is to be started with the robot as a subject ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0084] via “The communicator 19 transmits, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation start instruction information for causing the other camera robots (the second camera robot 2 and the third camera robot 3) to operate in accordance with an operation pattern selected by the controller 13. 
The communicator 19 also transmits the captured image output from the camera 11 to the server 4.”), controls the video capturing by the camera so that the video capturing ends at a timing corresponding to a timing at which the robot ends the gesture ([0082] via “The shutter 17 causes the captured image to be output every one second, in a case where one or more subjects are no longer detected by the image recognizer 12, the shutter 17 causes the outputting of the captured image by the camera 11 to end. Furthermore, in a case where one or more subjects are no longer detected by the image recognizer 12, the controller 13 causes the operation of the first camera robot 1 according to the operation pattern to end.”), ([0086] via “When the image recognizer 12 no longer detects one or more subjects, the controller 13 causes the communicator 19 to transmit, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation end instruction information for ending operation of the other camera robots (the second camera robot 7 and the third camera robot 3). The communicator 19 transmits the operation end instruction information to the other camera robots (the second camera robot 2 and the third camera robot 3).”). 
Regarding Claim 2, Ino teaches the imaging system according to claim 1, wherein the at least one processor upon the gesture that the robot is to be caused to execute being selected from among the plurality of gestures registered in advance, sends a signal instructing an execution start of the gesture to the robot in conjunction with a start of the video imaging ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0084] via “The communicator 19 transmits, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation start instruction information for causing the other camera robots (the second camera robot 2 and the third camera robot 3) to operate in accordance with an operation pattern selected by the controller 13. The communicator 19 also transmits the captured image output from the camera 11 to the server 4.”), (Note: See paragraph [0112] of Ino as well.).

Regarding Claim 3, Ino teaches the imaging system according to claim 2, comprising: the robot ([0057] via “The imaging system illustrated in FIG. 1 includes a first camera robot 1, a second camera robot 2, a third camera robot 3, and a server 4.”), wherein the robot starts execution of the gesture upon receiving of the signal instructing the execution start of the gesture ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0084] via “The communicator 19 transmits, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation start instruction information for causing the other camera robots (the second camera robot 2 and the third camera robot 3) to operate in accordance with an operation pattern selected by the controller 13.
The communicator 19 also transmits the captured image output from the camera 11 to the server 4.”).

Regarding Claim 8, Ino teaches the imaging system according to claim 1, wherein the at least one processor upon the gesture that the robot is to be caused to execute being selected from among the plurality of gestures registered in advance, controls the video imaging by the camera by sending a signal instructing a start of the video imaging to the camera in conjunction with a start of the gesture by the robot ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0101] via “Next, in Step S8, the controller 13 starts operation of the selected operation pattern. When selecting the first operation pattern, the controller 13 outputs a control signal corresponding to a drive pattern associated with the first operation pattern to the driver 18, outputs an image associated with the first operation pattern to the display 16, end outputs sound associated with the first operation pattern to the speaker 15.”), ([0112] via “Returning to FIG. 7, next, in Step S9, the shutter 17 starts outputting of the captured image by the camera 11. Note that the shutter 17 causes the camera 11 to output a captured image to the communicator 19, for example, every one second. ... Note that the shutter 17 causes the captured image to be output while the first camera robot 1 is operating according to the selected operation pattern, i.e., during a period from the start of imaging until one or more subjects are no longer detected in the captured image.”).
Regarding Claim 9, Ino teaches the imaging system according to claim 8, wherein the camera starts the video imaging upon receiving of the signal instructing the start of the video imaging ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0112] via “Returning to FIG. 7, next, in Step S9, the shutter 17 starts outputting of the captured image by the camera 11. Note that the shutter 17 causes the camera 11 to output a captured image to the communicator 19, for example, every one second. ... Note that the shutter 17 causes the captured image to be output while the first camera robot 1 is operating according to the selected operation pattern, i.e., during a period from the start of imaging until one or more subjects are no longer detected in the captured image.”).

Regarding Claim 14, Ino teaches an imaging method, comprising: selecting, from among a plurality of gestures registered in advance, a gesture that a robot is to be caused to execute at a time of video imaging ([0059] via “One of the first camera robot 1, the second camera robot 2, and the third camera robot 3 selects an operation pattern for the first camera robot 1, the second camera robot 2, and the third camera robot 3 from among a plurality of operation patterns based on a detected attribute of one or more subjects. … The one camera robot operates according to the selected operation pattern.
The other camera robot operates according to the notified operation pattern.”), ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), (Note: The Examiner interprets the operation pattern of Ino as the gesture); starting the video imaging by a camera with the robot as a subject, in conjunction with a start of execution by the robot of the selected gesture ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0084] via “The communicator 19 transmits, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation start instruction information for causing the other camera robots (the second camera robot 2 and the third camera robot 3) to operate in accordance with an operation pattern selected by the controller 13. The communicator 19 also transmits the captured image output from the camera 11 to the server 4.”); and ending the video imaging at a timing corresponding to a timing at which the robot ends the gesture ([0082] via “The shutter 17 causes the captured image to be output every one second, in a case where one or more subjects are no longer detected by the image recognizer 12, the shutter 17 causes the outputting of the captured image by the camera 11 to end. 
Furthermore, in a case where one or more subjects are no longer detected by the image recognizer 12, the controller 13 causes the operation of the first camera robot 1 according to the operation pattern to end.”), ([0086] via “When the image recognizer 12 no longer detects one or more subjects, the controller 13 causes the communicator 19 to transmit, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation end instruction information for ending operation of the other camera robots (the second camera robot 7 and the third camera robot 3). The communicator 19 transmits the operation end instruction information to the other camera robots (the second camera robot 2 and the third camera robot 3).”).

Regarding Claim 15, Ino teaches a non-transitory storage medium storing a program readable by a computer of an imaging system ([0052] via “A non-transitory computer readable recording medium storing a control processing program according to a still another aspect of the present disclosure is a control processing program for causing a robot to operate together with another robot to capture an image of one or more subjects ….”), the program causing the computer to realize: a function of selecting, from among a plurality of gestures registered in advance, a gesture that a robot is to be caused to execute at a time of video imaging ([0059] via “One of the first camera robot 1, the second camera robot 2, and the third camera robot 3 selects an operation pattern for the first camera robot 1, the second camera robot 2, and the third camera robot 3 from among a plurality of operation patterns based on a detected attribute of one or more subjects. … The one camera robot operates according to the selected operation pattern.
The other camera robot operates according to the notified operation pattern.”), ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), (Note: The Examiner interprets the operation pattern of Ino as the gesture); a function of starting the video imaging by a camera with the robot as a subject, in conjunction with a start of execution by the robot of the selected gesture ([0082] via “When operation according to an operation pattern selected by the controller 13 is started, the shutter 17 causes the camera 11 to start outputting a captured image.”), ([0084] via “The communicator 19 transmits, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation start instruction information for causing the other camera robots (the second camera robot 2 and the third camera robot 3) to operate in accordance with an operation pattern selected by the controller 13. The communicator 19 also transmits the captured image output from the camera 11 to the server 4.”); and a function of ending the video imaging at a timing corresponding to a timing at which the robot ends the gesture ([0082] via “The shutter 17 causes the captured image to be output every one second, in a case where one or more subjects are no longer detected by the image recognizer 12, the shutter 17 causes the outputting of the captured image by the camera 11 to end. 
Furthermore, in a case where one or more subjects are no longer detected by the image recognizer 12, the controller 13 causes the operation of the first camera robot 1 according to the operation pattern to end.”), ([0086] via “When the image recognizer 12 no longer detects one or more subjects, the controller 13 causes the communicator 19 to transmit, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation end instruction information for ending operation of the other camera robots (the second camera robot 7 and the third camera robot 3). The communicator 19 transmits the operation end instruction information to the other camera robots (the second camera robot 2 and the third camera robot 3).”).

Claim Rejections - 35 USC § 103

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ino et al.
(US 20220182533 A1 hereinafter Ino) in view of Fountain (US 20240185492 A1 hereinafter Fountain).

Regarding Claim 4, Ino teaches the imaging system according to claim 2, but is silent on wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by ending, based on an execution time length registered in advance in association with the gesture, the video imaging by the camera. However, Fountain teaches wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by ending, based on an execution time length registered in advance in association with the gesture, the video imaging by the camera ([0046] via “For example, in the case where operation to move the position of the robot 3 is performed based on the movement operation data obtained by the operation data acquisition unit 131, the image data acquisition unit 133 initializes the capture image data stored in the memory unit 12. For example, the image data acquisition unit 133 deletes the capture image data stored in the memory unit 12 to initialize the capture image data.”), ([0047] via “Further, even if the robot 3 is present at the same position, after elapse of a long period of time, the environment around the robot 3 may change. … So, in the case where the operator U finishes operation of the robot 3, or after elapse of predetermined time, the image data acquisition unit 133 may initialize the capture image data stored in the memory unit 12.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Fountain wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by ending, based on an execution time length registered in advance in association with the gesture, the video imaging by the camera. Doing so prevents the infiltration of unnecessary or irrelevant data from being captured after the robot already completes the gesture, as stated by Fountain ([0047] via “Further, even if the robot 3 is present at the same position, after elapse of a long period of time, the environment around the robot 3 may change. If a plurality of pieces of the past capture image data are combined even though the environment around the robot 3 has changed, mismatch occurs at the borders between the plurality of pieces of combined capture image data. So, in the case where the operator U finishes operation of the robot 3, or after elapse of predetermined time, the image data acquisition unit 133 may initialize the capture image data stored in the memory unit 12.”).

11. Claim(s) 5, 6, 10, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ino et al. (US 20220182533 A1 hereinafter Ino) in view of Kalouche et al. (US 11794349 B2 hereinafter Kalouche).

Regarding Claim 5, Ino teaches the imaging system according to claim 2, but is silent on wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by ending the video imaging by the camera upon receiving, from the robot, a notification indicating that the gesture is ended.
However, Kalouche teaches wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by ending the video imaging by the camera upon receiving, from the robot, a notification indicating that the gesture is ended (Col. 8 lines 14-19, where “As shown in block 401, instructions may be captured by an imaging device, such as imaging device 254. The imaging device, such as camera 90, may be positioned in front of display device 16 of picking station 10 to capture the visual instructions sent by the WS, executed on the warehouse system 203 and displayed on the display device 16.”), (Col. 10 lines 10-26, where “By performing the pick and place functions generated from the visual instructions, the instructions in the visual instructions may be completed and completion of the instructions may be confirmed, as shown in block 409. Referring to FIG. 1, the picking station 10 may include physical completion buttons 62 and 64 that may be pressed when a visual instruction is completed. As described, one or more completion buttons may also be presented on display device 252, such as display 16, of warehouse system 203. By pressing a completion button the WS may be notified that the current visual instruction has been completed and the next visual instruction may be sent to the display 252. Alternatively, the “complete” or similar button on the display 252 may be selected to provide notification to the WS that the current instruction has been completed.”), (Note: See Figure 4 of Kalouche as well. The Examiner interprets that since the workflow has been determined to be complete, this also includes ending the capturing of the visual instructions of step 401.). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Kalouche wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by ending the video imaging by the camera upon receiving, from the robot, a notification indicating that the gesture is ended. Doing so ends the process of the gesture performed by the robot such that the robot may either perform a next task or be done its tasks entirely, as stated above by Kalouche in Col. 10 lines 10-26.

Regarding Claim 6, modified reference Ino teaches the imaging system according to claim 5, comprising: the robot ([0057] via “The imaging system illustrated in FIG. 1 includes a first camera robot 1, a second camera robot 2, a third camera robot 3, and a server 4.”). Ino is silent on wherein upon ending of the gesture, the robot sends the notification indicating that the gesture is ended. However, Kalouche teaches wherein upon ending of the gesture, the robot sends the notification indicating that the gesture is ended (Col. 10 lines 10-26, where “By performing the pick and place functions generated from the visual instructions, the instructions in the visual instructions may be completed and completion of the instructions may be confirmed, as shown in block 409. Referring to FIG. 1, the picking station 10 may include physical completion buttons 62 and 64 that may be pressed when a visual instruction is completed. As described, one or more completion buttons may also be presented on display device 252, such as display 16, of warehouse system 203. By pressing a completion button the WS may be notified that the current visual instruction has been completed and the next visual instruction may be sent to the display 252.
Alternatively, the “complete” or similar button on the display 252 may be selected to provide notification to the WS that the current instruction has been completed.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Kalouche wherein upon ending of the gesture, the robot sends the notification indicating that the gesture is ended. Doing so ends the process of the gesture performed by the robot such that the robot may either perform a next task or be done its tasks entirely, as stated above by Kalouche.

Regarding Claim 10, Ino teaches the imaging system according to claim 8, but is silent on wherein the at least one processor upon the robot ending the gesture, sends, to the camera, a notification indicating that the gesture has ended. However, Kalouche teaches wherein the at least one processor upon the robot ending the gesture, sends, to the camera, a notification indicating that the gesture has ended (Col. 8 lines 14-19, where “As shown in block 401, instructions may be captured by an imaging device, such as imaging device 254. The imaging device, such as camera 90, may be positioned in front of display device 16 of picking station 10 to capture the visual instructions sent by the WS, executed on the warehouse system 203 and displayed on the display device 16.”), (Col. 10 lines 10-26, where “By performing the pick and place functions generated from the visual instructions, the instructions in the visual instructions may be completed and completion of the instructions may be confirmed, as shown in block 409. Referring to FIG. 1, the picking station 10 may include physical completion buttons 62 and 64 that may be pressed when a visual instruction is completed. As described, one or more completion buttons may also be presented on display device 252, such as display 16, of warehouse system 203.
By pressing a completion button the WS may be notified that the current visual instruction has been completed and the next visual instruction may be sent to the display 252. Alternatively, the “complete” or similar button on the display 252 may be selected to provide notification to the WS that the current instruction has been completed.”), (Note: See Figure 4 of Kalouche as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Kalouche wherein the at least one processor upon the robot ending the gesture, sends, to the camera, a notification indicating that the gesture has ended. Doing so ends the process of the gesture performed by the robot such that the robot may either perform a next task or be done its tasks entirely, as stated above by Kalouche in Col. 10 lines 10-26.

Regarding Claim 11, modified reference Ino teaches the imaging system according to claim 10, but is silent on wherein the camera ends the video imaging upon receiving of the notification indicating that the gesture has ended. However, Kalouche teaches wherein the camera ends the video imaging upon receiving of the notification indicating that the gesture has ended (Col. 8 lines 14-19, where “As shown in block 401, instructions may be captured by an imaging device, such as imaging device 254. The imaging device, such as camera 90, may be positioned in front of display device 16 of picking station 10 to capture the visual instructions sent by the WS, executed on the warehouse system 203 and displayed on the display device 16.”), (Col. 10 lines 10-26, where “By performing the pick and place functions generated from the visual instructions, the instructions in the visual instructions may be completed and completion of the instructions may be confirmed, as shown in block 409. Referring to FIG.
1, the picking station 10 may include physical completion buttons 62 and 64 that may be pressed when a visual instruction is completed. As described, one or more completion buttons may also be presented on display device 252, such as display 16, of warehouse system 203. By pressing a completion button the WS may be notified that the current visual instruction has been completed and the next visual instruction may be sent to the display 252. Alternatively, the “complete” or similar button on the display 252 may be selected to provide notification to the WS that the current instruction has been completed.”), (Note: See Figure 4 of Kalouche as well. The Examiner interprets that since the workflow has been determined to be complete, this also includes ending the capturing of the visual instructions of step 401.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Kalouche wherein the camera ends the video imaging upon receiving of the notification indicating that the gesture has ended. Doing so ends the process of the gesture performed by the robot such that the robot may either perform a next task or be done its tasks entirely, as stated above by Kalouche in Col. 10 lines 10-26.

12. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ino et al. (US 20220182533 A1 hereinafter Ino) in view of Sohmshetty et al. (US 20250085694 A1 hereinafter Sohmshetty).
Regarding Claim 7, Ino teaches the imaging system according to claim 2, wherein the at least one processor ends the video imaging at the timing corresponding to the timing at which the robot ends the gesture by, upon making a determination that the robot has ended the gesture, ending the video imaging ([0082] via “The shutter 17 causes the captured image to be output every one second, in a case where one or more subjects are no longer detected by the image recognizer 12, the shutter 17 causes the outputting of the captured image by the camera 11 to end. Furthermore, in a case where one or more subjects are no longer detected by the image recognizer 12, the controller 13 causes the operation of the first camera robot 1 according to the operation pattern to end.”), ([0086] via “When the image recognizer 12 no longer detects one or more subjects, the controller 13 causes the communicator 19 to transmit, to the other camera robots (the second camera robot 2 and the third camera robot 3), operation end instruction information for ending operation of the other camera robots (the second camera robot 7 and the third camera robot 3). The communicator 19 transmits the operation end instruction information to the other camera robots (the second camera robot 2 and the third camera robot 3).”). Ino is silent on wherein the at least one processor determines, based on a video obtained by the video imaging, whether the robot has ended the gesture. 
However, Sohmshetty teaches wherein the at least one processor determines, based on a video obtained by the video imaging, whether the robot has ended the gesture ([0025] via “The vision system 28 includes one or more cameras, such as a two-dimensional (2D) camera, a three-dimensional (3D) camera, a stereo vision camera, ….”), ([0044] via “During and/or after the finishing operation, the vision system 28 generates a finishing map based on updated scanned data and the sensing system 42 acquires real-time updated measurements of the die surface relating to the surface roughness of the target area and the geometry/contour of the die surface in step 96. The finishing map and surface roughness are compared with the previous image of the die surface or the CAD model to determine whether the finishing operation is complete or whether further finishing operation is required in step 98.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Sohmshetty wherein the at least one processor determines, based on a video obtained by the video imaging, whether the robot has ended the gesture. Doing so determines whether there are still additional gestures for the robot to perform in order to fully complete a task so as to not prematurely end the process, as stated by Sohmshetty ([0044] via “If the surface roughness of the target area is within the predetermined range of the target surface roughness, the method goes to step 102 to determine whether more target area(s) need to be finished in step 102. If no more target area needs to be finished, the method ends in step 104. If more target area(s) need to be finished, the method goes back to step 90 to select a desired finishing operation for another target area.”).

13. Claim(s) 12 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ino et al. (US 20220182533 A1 hereinafter Ino) in view of Sakamoto et al.
(US 6505098 B1 hereinafter Sakamoto).

Regarding Claim 12, Ino teaches the imaging system according to claim 1, but is silent on wherein the robot has set at least one of a personality parameter expressing a pseudo-personality or a growth parameter expressing pseudo-growth, and the gesture that the robot is to be caused to execute changes in accordance with at least one of the personality parameter or the growth parameter. However, Sakamoto teaches wherein the robot has set at least one of a personality parameter expressing a pseudo-personality or a growth parameter expressing pseudo-growth (Col. 24 lines 23-31, where “Concretely speaking, prepared for the pet robot 121 in this pet robot system 120 are four "growth steps" of "baby period," "child period," "young period" and "adult period." Preliminarily stored in a memory 122A (FIG. 19) of a controller 122 (FIG. 10) are action and motion models consisting of various kinds of control parameters and control programs to be used as bases of actions and motions related to four items of "walking condition," "motion," "action" and "sound" for each "growth step."”), and the gesture that the robot is to be caused to execute changes in accordance with at least one of the personality parameter or the growth parameter (Col. 24 lines 51-58, where “When a total value of accumulative frequencies of the growth factors (hereinafter referred to as a total experience value of the growth factors) exceeds a predetermined threshold value, the controller 122 modifies the action and motion models for "baby period" into action and motion models for "child period" at a higher growth level (at which actions and motions are harder and more complicated) on the basis of the accumulative frequencies of the growth factors.”), (Col.
25 lines 15-23, where “As a result, the pet robot 121 changes stepwise "walking condition" from "tottering walk" to "firm walking," changes "motion" from "simple" to "upgraded and complicated," changes "action" from "monotonous" to "action with a purpose" and changes "sound" from "low and short" to "long and loud" as the pet robot 121 has ascended "growth step" (that is, "growth step" changes from "baby period" to "child period," from "child period" to "young period" and from "young period" to "adult period").”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Sakamoto wherein the robot has set at least one of a personality parameter expressing a pseudo-personality or a growth parameter expressing pseudo-growth, and the gesture that the robot is to be caused to execute changes in accordance with at least one of the personality parameter or the growth parameter. Doing so provides a more realistic robot that changes over time with respect to its surroundings, as stated by Sakamoto (Col. 24 lines 16-22, where “The pet robot system 120 has the same configuration as the pet robot system 50 (FIG. 8) except that a pet robot 121 has a function of changing motions and actions as if the real animal "grew", in accordance with a history of operation inputs such as spurring and orders given with a sound commander from a user and histories of own actions and motions.”).

Regarding Claim 13, Ino teaches the imaging system according to claim 1, but is silent on wherein the robot includes a housing in which a head is coupled to a torso by a coupler, and an exterior covering the torso. However, Sakamoto teaches wherein the robot includes a housing in which a head is coupled to a torso by a coupler (Col.
5 lines 14-19, where “The pet robot 2 is formed by coupling leg member units 11A through 11D with front right, front left, rear right, and rear left portions of a body member unit 10 and connecting a head member unit 12 and a tail member unit 13 to a front end and a rear end of the body member unit 10, as apparent from FIG. 1.”), and an exterior covering the torso (Col. 6 lines 29-34, where “Accordingly, the robot system 1 is configured to allow the cover unit 3 to be fitted over the pet robot 2 in a fixed condition by fitting the cover unit 3 over the body member unit 10 of the pet robot 2 and tightening screws 4 into the tapped holes 10B of the body member unit 10 of the pet robot 10 through the screw holes 3B of the cover unit 3.”), (Note: The Examiner interprets the cover unit of Sakamoto as the exterior.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Sakamoto wherein the robot includes a housing in which a head is coupled to a torso by a coupler, and an exterior covering the torso. Doing so allows for changeability and customization with the robot, improving user amusement and satisfaction, as stated by Sakamoto (Col. 1 lines 41-46, where “Furthermore, considering that a pet robot can wear a cover, not only its appearance can be changeable but also if it can perform different actions depending on the appearance, it is considered that such a pet robot will be capable of giving higher emotions of intimacy and satisfaction to users, which improve an amusement property in the pet robot.”).

Examiner’s Note

14. The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well.
It is respectfully requested that the Applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123.

Conclusion

15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYRON X KASPER whose telephone number is (571) 272-3895. The examiner can normally be reached Monday - Friday, 8 am - 5 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached on (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BYRON XAVIER KASPER/
Examiner, Art Unit 3657

/ADAM R MOTT/
Supervisory Patent Examiner, Art Unit 3657

Prosecution Timeline

Aug 30, 2024: Application Filed
Feb 12, 2026: Non-Final Rejection (§102, §103)
Apr 14, 2026: Applicant Interview (Telephonic)
Apr 14, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594964
METHOD OF AND SYSTEM FOR GENERATING REFERENCE PATH OF SELF DRIVING CAR (SDC)
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12594137
HARD STOP PROTECTION SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12583101
METHOD FOR OPERATING A MODULAR ROBOT, MODULAR ROBOT, COLLISION AVOIDANCE SYSTEM, AND COMPUTER PROGRAM PRODUCT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12576529
ROBOT SIMULATION DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12564962
ROBOT REMOTE OPERATION CONTROL DEVICE, ROBOT REMOTE OPERATION CONTROL SYSTEM, ROBOT REMOTE OPERATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
Grant Probability With Interview: 88% (+18.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.
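The projections above can be reproduced from the examiner statistics cited on this page (72 grants out of 103 resolved cases, +18.4-point interview lift). A minimal sketch, assuming the interview lift is applied as additive percentage points; the page does not state its exact formula, and the function name and rounding here are our own:

```python
def grant_probability(base_rate: float, interview_lift: float,
                      with_interview: bool) -> float:
    """Grant probability as a percentage, assuming the interview
    lift is additive in percentage points (capped at 100)."""
    p = base_rate + (interview_lift if with_interview else 0.0)
    return min(p, 100.0)

# Career allow rate: 72 granted of 103 resolved cases.
base = 72 / 103 * 100        # ~69.9, displayed as 70%
lift = 18.4                  # reported interview lift, percentage points

print(round(base))                                  # 70
print(round(grant_probability(base, lift, True)))   # 88
```

Under this additive-lift assumption the displayed 88% figure falls out directly: 69.9 + 18.4 = 88.3, rounded to 88%.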
