Prosecution Insights
Last updated: April 19, 2026
Application No. 18/034,452

ROBOT SYSTEM, ROBOT ARM, END EFFECTOR, AND ADAPTER

Non-Final OA: §102, §103, §112

Filed: Apr 28, 2023
Examiner: SHARIFF, MICHAEL ADAM
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Nikon Corporation
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 94 granted / 115 resolved; +19.7% vs TC avg)
Interview Lift: +22.3% (allowance rate of resolved cases with an interview vs. without)
Typical Timeline: 2y 10m avg prosecution; 16 currently pending
Career History: 131 total applications across all art units

Statute-Specific Performance

§101: 17.9% (-22.1% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 115 resolved cases.
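The summary figures above follow from simple arithmetic on the reported counts. A minimal sketch in Python; note the Tech Center average is not reported directly and is back-calculated here from the stated +19.7% delta, so treat that value as an assumption:

```python
# Career allowance rate from the reported counts: 94 granted of 115 resolved.
granted = 94
resolved = 115
allow_rate = 100 * granted / resolved   # 81.74...%, displayed as 82%

# The TC average is implied by the stated "+19.7% vs TC avg" delta,
# not reported directly (assumed, back-calculated):
tc_avg = allow_rate - 19.7              # ~62.0%

print(f"career allow rate: {allow_rate:.1f}%")
print(f"implied TC average: {tc_avg:.1f}%")
```

The rounding explains the apparent mismatch between the displayed 82% and the +19.7% delta: the unrounded rate is 81.7%.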

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 78 is objected to because of the following informalities: the claim term “changethe” should recite “change the” for proper spacing, grammar, and spelling. Appropriate correction is required.

Claim 89 is objected to because of the following informalities: the claim term “the predetermined initial position” should recite “a predetermined initial position” for proper antecedent basis. Appropriate correction is required.

Claim Interpretation

Claims 73-74, 77-81, 83-84, and 86-87 each recite some form of the recitation “at least one of A, B, and C,” which triggers a claim interpretation under SuperGuide Corp. v. DirecTV Enters., Inc., 358 F.3d 870 (Fed. Cir. 2004), which holds that “at least one of A, B, and C” is to be interpreted as “at least one of A, at least one of B, and at least one of C.” It does not appear from Applicant’s specification (para. [0130] of the present application, discussing the first and second imaging devices operating independently in terms of movement, taking images, etc.) that Applicant intended to recite these limitations in the conjunctive. Examiner requests that Applicant change the language throughout these claims to the format “at least one of A, B, or C” to make clear that these limitations were intended to be disjunctive rather than conjunctive. Proper corrections are requested.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 73-74, 77-81, 83-84, and 86-87 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Regarding claim 78, it recites a “third time” and a “fourth time” without any introduction of a “first time” or a “second time,” so the times are indefinite. It appears that Applicant intended claim 78 to depend from claim 77, where the first and second times are introduced, rather than from claim 76. For purposes of examination, the Examiner will treat claim 78 as depending from claim 77. Proper corrections are requested.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 72-78, 80, 82, and 87 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Japanese Patent Application Publication No. JP 2005297168 A (Ogasawara).

Regarding claim 1, Ogasawara teaches a robot system including a robot arm with a movable portion, the robot system comprising: (Ogasawara, page 4, para. 1-4; FIG. 3: “FIG. 3 shows the overall configuration of the remote-control robot, FIG. 4 shows the configuration of the remote-control area, and FIG. 5 shows an enlarged view of the robot hand shown in FIG. 3. As shown in FIG. 3, the robot includes a transport unit 10 having wheels, a first body unit 11A connected to the transport unit 10 via a rotation (shaft) unit, and a rotation unit on the first body unit 11A. A second body part 11B connected via a rotating part, a head 12 connected to the second body part 11B via a rotating part, and left and right arms connected to the transport part 10 via rotating parts Zd and Ze (Arm) 14R, 14L.”; [Ogasawara FIG. 3 reproduced])

a first imaging device and a second imaging device attached to the robot arm (Ogasawara, page 3, para. 7; page 4, para. 2; FIG. 1; FIG. 2: “FIG. 1 shows the configuration of the camera unit of the remote-control robot according to the embodiment … The head 12 is provided with a pair of main camera units 13 corresponding to the left and right eyes (not shown, but the camera unit is covered with a transparent cover) … a stereoscopic image (moving image) of the direction in which the robot advances and the tip of the hand is taken.”; “As shown in FIGS. 1 and 2, in the camera unit 13 of the main body, a right camera 25R and a left camera 25L are arranged to obtain a stereoscopic image, and these left and right cameras 25R and 25L are independent of each other.”; [Ogasawara FIGS. 1-2 reproduced]; the robot arms 11B and 11A control movement of the head 12 of the robot, which houses the main camera units 13 having right and left side cameras with stereo vision); and

processing circuitry configured to control the robot system; acquire information on a distance to a target object (Ogasawara, page 5, para. 4-5: “The robot may be provided with various distance measuring devices that are generally used for measuring a predetermined part of the main body or the distance (imaging distance) from the cameras 25R and 25L to the object.”; “The embodiment is configured as described above, and the robot is moved and controlled by remote operation of the personal computer 71 in FIG. 4 … At this time, the left and right cameras 25R and 25L perform, for example, passive autofocus, and distance information to the object (subject) obtained by this autofocus control (information obtained by other distance measuring means”);

change a baseline length, the baseline length being a distance between the first imaging device and the second imaging device; and acquire the information on the distance to the target object based on the baseline length (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7: “FIG. 6A shows the relationship between the distance to the object (shooting distance) and the camera interval. As shown in the figure, the distance d between the left and right cameras 25R and 25L is far from the object. It is set to become larger as it goes (in other words, to become smaller as the distance gets closer). According to this, as shown in FIG. 7, for example in front of the object B with respect to the robot (camera 25R, 25L) even in X2 if there is a distance X1, right and left cameras 25R, 25L is constant The object can be photographed at the angle α, and a constant and good stereoscopic effect (perspective) can be maintained. The camera interval d can be freely changed by the operator by operating the personal computer 71 or the like, and can be reset to a state in which each operator can easily view stereoscopically. Further, the left and right cameras 25R and 25L can be rotated by the control of the motors 16a and 26b as described above, and also by the pan rotation of the cameras themselves, for example, as indicated by a chain line 100 in FIG. 7.
In addition, the three-dimensional effect (perspective) can be changed by setting the main body so as to face each other from the straight direction of the main body. FIG. 6B shows the relationship between the zoom magnification and the camera interval. In the embodiment, even when a zoom operation is performed, the camera interval d is variably controlled according to the magnification. That is, the cameras 25R and 25L are provided with a zoom mechanism for optically enlarging an image by a zoom operation, and when this zoom operation is performed, as shown in FIG. 6B, the distance d between the left and right cameras 25R and 25L is adjusted so as to increase. Therefore, even when the zoom magnification is changed, it is possible to obtain a stereoscopic image having a good stereoscopic effect (perspective) … In the above embodiment, the camera interval d is changed based on the distance information obtained by the distance measuring means and the operation of the remote operator. For example, an arm (or manipulator) having a different length is attached depending on the work. In this case, the camera interval d may be changed according to the arm length information or according to the arm bending motion information (according to the distance between the left and right cameras and the work object”; [Ogasawara FIGS. 6A-6B and 7 reproduced]; as shown in FIG. 7 above, the distance x1 from the first imaging camera 25R and the second imaging camera 25L to the work object B is found (using known triangulation methods based on disparity and baseline length), and based on this distance, the baseline length d is changed to be smaller or larger depending on the need for proper stereoscopic imaging perspective; in the case of FIG. 7, once the cameras are moved to the new position using the pan rotation indicated by chain line 100, the baseline length d is smaller, and the distance to the target object is x2, which is less than x1; the robot uses this method to properly image objects with stereo vision; the baseline distance and object distance are continuously changing and affecting one another as the robot moves).

Regarding claim 72, Ogasawara teaches the robot system of claim 1, wherein the processing circuitry is configured to change the baseline length depending on work content of the robot system (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B: see rejection of claim 1 above; if the robot must move closer to the target object to pick up the object with the grabber/effector (work content), then the baseline length must be shorter; the appropriate baseline depends on the object distance, in that a larger baseline allows for greater depth perception over a longer range, while a smaller baseline is better for accuracy at closer distances; another example of “work content” would be monitoring an object that the robot is far away from, which requires zooming/magnification to observe the object on a display by a user of the computer in communication with the robot; the baseline length is increased if more zoom is needed for farther objects and decreased if less zoom is needed for closer objects; FIG. 6A shows object distance vs. baseline length and FIG. 6B shows zoom/magnification vs. baseline length).

Regarding claim 73, Ogasawara teaches the system of claim 72, wherein the work content is a work including a moving-away movement where at least one of the first imaging device and the second imaging device and the target object are moved to be away from each other, and the processing circuitry is configured to change the baseline length to be larger according to the moving-away movement (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7; see rejection of claim 1 above; although the example shown in FIG. 7 is when the baseline length is shortened to move closer to the target object with a smaller distance to the object, the opposite is possible, where the robot follows the same movement shown in FIG. 7 but backwards, and the baseline length increases as the distance to the target object from the imaging devices increases; FIG. 6A shows the relationship between baseline length and object distance, and this works both ways during a moving-toward movement or a moving-away movement by the robot; Ogasawara, page 2, para. 3: “the stereoscopic effect of the video varies depending on the eyes and preferences of the observer, and if this stereoscopic effect can be changed corresponding to each operator, the usability is improved”; the robot moves toward or away from an object based on the user preferences of the stereoscopic image displayed to them on a computer communicating with the robot).

Regarding claim 74, Ogasawara teaches the robot system of claim 72, wherein the work content is a work including an approaching movement where at least one of the first imaging device and the second imaging device and the target object are approached with each other, and the processing circuitry is configured to change the baseline length to be smaller according to the approaching movement (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7; see rejection of claim 1 above; FIG. 7 shows the example of moving the robot closer to the object, where the baseline length is shortened in response to a smaller target object distance; the robot has an arm to grab things, so it has an approach movement to pick things up).

Regarding claim 75, Ogasawara teaches the robot system of claim 72, wherein the work content is a work to search for the target object (page 6, para. 4; FIG. 4: “Furthermore, the robot of the embodiment is provided with the finger protector 20 as described above, and as shown by the right hand 16R in FIG. 3, the protector 20 is aligned with the finger 18 (i, c, o). If it is arranged, the strength of the entire hand 16R can be increased while protecting these fingers 18, and for example, it is possible to easily perform a burdensome operation such as raising a person or moving a heavy object.”; [Ogasawara FIG. 4 reproduced]; as shown in FIG. 4 above, a person sits at a computer and sees the stereo vision that the robot sees and controls the robot to do operations such as moving a heavy object; the user is able to control the zoom (magnification) of the imaging devices and the movement of the robot (target object distance), and the baseline length changes based on this movement; if there is an object in the path of the robot but it is blurry, unclear, or not fully in the stereo-vision frame seen by the user on the computer display, then they can control the robot to zoom in/out and/or move farther/closer as needed, so the baseline length changes and they can “search” for the object in the frame to bring the object into clear view; Ogasawara, page 3, para. 6: “According to the remote operation robot of the present invention, even when the distance to the object in front of the movement or the work object changes, it is possible to obtain an image with a good stereoscopic effect, and the remote operator can observe with a sense of perspective. There is an advantage that an object can be recognized by a video that is easy to perform”; this meets the broadest reasonable interpretation of the term “search”).

Regarding claim 76, Ogasawara teaches the robot system of claim 1, wherein the processing circuitry is configured to change the baseline length based on a distance between the first imaging device and/or the second imaging device, and the target object (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7; see rejection of claim 1 above; “the camera interval d is changed based on the distance information obtained by the distance measuring means”).

Regarding claim 77, Ogasawara teaches the robot system of claim 76, wherein when the distance between the first imaging device and/or the second imaging device, and the target object at a first time is larger than a distance between at least one of the first imaging device and the second imaging device and the target object at a second time after the first time, the processing circuitry is configured to change the baseline length such that the baseline length at the second time is smaller than the baseline length at the first time (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7; see rejection of claim 1 above; FIG. 7 shows the distance x1 between the imaging devices 25R, 25L and the target object B with a baseline length of d, and then the robot is moved closer to the target object, so x1 is larger than x2 and the baseline length is decreased compared to the first baseline length d; time passing is implicitly taught, since it takes time for the robot to move from one place to another).

Regarding claim 78, Ogasawara teaches the robot system of claim 76, wherein when the distance between the first imaging device and/or the second imaging device, and the target object at a third time is smaller than a distance between at least one of the first imaging device and the second imaging device and the target object at a fourth time after the third time, the processing circuitry is configured to change the baseline length such that the baseline length at the fourth time is larger than the baseline length at the third time (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7; see rejection of claim 1 above; FIG. 7 shows the distance x1 between the imaging devices 25R, 25L and the target object B with a baseline length of d, and then the robot is moved closer to the target object, so x1 is larger than x2 and the baseline length is decreased compared to the first baseline length d; time passing is implicitly taught, since it takes time for the robot to move from one place to another; the example from FIG. 7 is reversed if the robot starts at a distance x2 away from target object B and the distance x2 is smaller than a new distance x1 when the robot moves backwards away from the target object B, and the baseline increases to d in accordance with the new distance; nothing taught in Ogasawara precludes the robot moving both toward and away from an object; the relationship between object distance and baseline length shown in FIG. 6A is maintained and recalculated either way).

Regarding claim 80, Ogasawara teaches the robot system of claim 1, wherein the processing circuitry is configured to move at least one of the first imaging device and the second imaging device with respect to the robot arm based on a capturing result of the target object captured by at least one of the first imaging device and the second imaging device (Ogasawara, page 6, para. 1-3; page 6, para. 6; FIG. 6A-6B; FIG. 7; FIG. 2; see rejection of claim 1 above; the imaging devices 25R, 25L are moved with respect to the robot arm when the baseline length is changed after the robot moves closer to or away from the target object; FIG. 7 shows both movements of the robot and the imaging devices, and FIG. 2 shows the type of movement of just the imaging devices relative to the robot arm; Ogasawara, page 5, para. 3: “Motors 26A and 26B are provided for rotating in the left / right (pan) direction. Further, a pinion 28 that meshes with both the rack 27A disposed on the right camera 25R side and the rack 27B disposed on the left camera 25L, and an interval variable motor 29 that drives the pinion 28 are attached.
By controlling the rotation of the motor 29, the distance d between the left and right cameras 25A and 25B can be variably adjusted at the same time. In addition, an independent pinion 28 is provided for each of the racks 27A and 27B, and two interval variable motors 29 for driving the two pinions 28 are provided. The two interval variable motors 29 use the racks 27A and 27B. The distance between the left and right cameras 25R and 25L may be changed by individually moving 27B.”).

Regarding claim 82, Ogasawara teaches the robot system of claim 1, further comprising: a structure configured to connect the first imaging device to the second imaging device, wherein the structure is configured to hold the first imaging device and the second imaging device in a state in which relative postures of the first imaging device and the second imaging device are maintained at predetermined postures (Ogasawara, page 4, para. 1-4; FIG. 3; FIG. 1; FIG. 2: see rejection of claim 1 above; “The head 12 is provided with a pair of main camera units 13 corresponding to the left and right eyes (not shown, but the camera unit is covered with a transparent cover)”; the head shown in FIG. 3 above in the rejection of claim 1 shows both imaging devices 13 in the head (structure), which holds them in a predetermined state with specific postures of being parallel to one another and looking forward outward as the robot’s “eyes,” with parallel optical axes (necessary for stereo vision); further structures holding the cameras are the racks 27A, 27B shown in FIG. 2; the racks attach the cameras 25R, 25L to the head 12).

Regarding claim 87, Ogasawara teaches the robot system of claim 1, wherein at least one of the first imaging device and the second imaging device is movable such that long sides of an image sensor of the first imaging device and an image sensor of the second imaging device are parallel to each other (Ogasawara, FIG. 1; FIG. 2; the long sides of the cameras 25R, 25L are parallel to one another, and they are movable with respect to one another while the long sides stay parallel; Ogasawara, page 5, para. 3: “Motors 26A and 26B are provided for rotating in the left / right (pan) direction. Further, a pinion 28 that meshes with both the rack 27A disposed on the right camera 25R side and the rack 27B disposed on the left camera 25L, and an interval variable motor 29 that drives the pinion 28 are attached. By controlling the rotation of the motor 29, the distance d between the left and right cameras 25A and 25B can be variably adjusted at the same time. In addition, an independent pinion 28 is provided for each of the racks 27A and 27B, and two interval variable motors 29 for driving the two pinions 28 are provided. The two interval variable motors 29 use the racks 27A and 27B. The distance between the left and right cameras 25R and 25L may be changed by individually moving 27B.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 79, 81, and 90 are rejected under 35 U.S.C. 103 as being unpatentable over Ogasawara, in view of well-known art (Official Notice).

Regarding claim 79, Ogasawara teaches the robot system of claim 1. Ogasawara fails to expressly teach wherein when other objects different from the target object are positioned around the robot arm and, when at least one of the first imaging device and the second imaging device is moved with respect to the target object by controlling the robot arm, the processing circuitry is configured to move at least one of the first imaging device and the second imaging device with respect to the robot arm such that at least one of the first imaging device and the second imaging device do not come into contact with the other objects. The Examiner takes Official Notice that it was well known in the art before the effective filing date of the claimed invention to move an imaging device such that it does not come in contact with other objects near the target object as the robot arm moves toward the target object. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the processing circuitry, as taught by Ogasawara, to be configured to move at least one of the first imaging device and the second imaging device with respect to the robot arm such that at least one of the first imaging device and the second imaging device do not come into contact with the other objects, when other objects different from the target object are positioned around the robot arm. The suggestion/motivation for doing so would have been to allow the robot arm to properly function to move and grab a target object accurately without distorting the stereo vision the robot requires to pick up target objects. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ogasawara with well-known art to obtain the invention as specified in claim 79.

Regarding claim 81, Ogasawara teaches the robot system of claim 80. Ogasawara fails to expressly teach wherein the processing circuitry is configured to move the first imaging device and/or the second imaging device with respect to the robot arm in a case where at least one of the first imaging device and the second imaging device cannot capture the target object with a movement of the robot arm. The Examiner takes Official Notice that it was well known in the art before the effective filing date of the claimed invention to move an imaging device on a robot arm if movement of the arm was not sufficient to capture a target object. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the processing circuitry, as taught by Ogasawara, to be configured to move the first imaging device and/or the second imaging device with respect to the robot arm in a case where at least one of the first imaging device and the second imaging device cannot capture the target object with a movement of the robot arm. The suggestion/motivation for doing so would have been to allow the robot arm to grab a target object accurately. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ogasawara with well-known art to obtain the invention as specified in claim 81.
Regarding claim 90, Ogasawara teaches the robot system of claim 1. Ogasawara fails to expressly teach wherein at least the first imaging device moves to a predetermined end position before the robot system is powered off. The Examiner takes Official Notice that it was well known in the art before the effective filing date to move a lens/imaging device back to an ending position while/before powering off devices (U.S. Patent Application Publication No. 2018/0091737 (Kadambala et al.), para. [0020]: “In many implementations, when the camera is powered down, a 3A algorithm (for performing auto-focus, auto-exposure, auto-white point) moves the lens to a default position, for example, the infinity position, using the actuator to physically move the lens”; this well-known step of having an ending position for imaging devices when powering a device off is applied to a robotic system with cameras that rely on proper calibration). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the first imaging device, as taught by Ogasawara, to move to a predetermined end position before the robot system is powered off. The suggestion/motivation for doing so would have been to maintain proper calibration settings for a camera attached to a robot arm of a robot so the start-up phase is the same each time the robot is put into use to image, identify, and pick up objects. Therefore, it would have been obvious to combine Ogasawara with well-known art to obtain the invention as specified in claim 90.

Claim 83 is rejected under 35 U.S.C. 103 as being unpatentable over Ogasawara, in view of U.S. Patent Application Publication No. 2006/0103734 (Kim et al.) (hereinafter Kim).

Regarding claim 83, Ogasawara teaches the robot system of claim 1.
Ogasawara fails to teach wherein the processing circuitry is configured to rotate at least one of a first image acquired by the first imaging device and a second image acquired by the second imaging device to adjust a direction of the acquired image. Kim teaches wherein the processing circuitry is configured to rotate at least one of a first image acquired by the first imaging device and a second image acquired by the second imaging device to adjust a direction of the acquired image (Kim, para. [0006]: “FIG. 2 is a flowchart illustrating a conventional method for rotating an image stored in a conventional digital camera. An image to rotate is selected (operation 200) and a rotate menu is selected (operation 202). Whether to rotate the selected image is judged (operation 204) and a rotational direction is set (operation 206). If the rotational direction is set, the selected image is rotated (operation 208) and the rotated image is stored (operation 210).”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the processing circuitry, as taught by Ogasawara, to be configured to rotate at least one of a first image acquired by the first imaging device and a second image acquired by the second imaging device to adjust a direction of the acquired image, as taught by Kim. The suggestion/motivation for doing so would have been to allow the robot to properly image target objects that are in a side orientation or upside down, so that the user controlling the robot remotely from a computer has the ability to properly identify the object and properly grab the object, if necessary. Therefore, it would have been obvious to combine Ogasawara with Kim to obtain the invention as specified in claim 83.

Claim 84 is rejected under 35 U.S.C. 103 as being unpatentable over Ogasawara, in view of U.S. Patent Application Publication No. 2020/0358999 (Zhou et al.) (hereinafter Zhou).

Regarding claim 84, Ogasawara teaches the robot system of claim 1. Ogasawara fails to teach wherein the processing circuitry is configured to adjust a direction of an image to be acquired, by rotating at least one of an image sensor of the first imaging device and an image sensor of the second imaging device. Zhou teaches wherein the processing circuitry is configured to adjust a direction of an image to be acquired, by rotating at least one of an image sensor of the first imaging device and an image sensor of the second imaging device (Zhou, para. [0197]: “Turning now to FIG. 14, another exemplary imaging system 100 is shown as a UAV 200 having a left imaging device 110 a and right imaging device 110 b affixed to an adjustable frame 250 of the UAV 200. Stated somewhat differently, the baseline adjustment mechanism 170 is, or can be a portion of, the adjustable frame 250. The adjustable frame 250 can include one or more adjustable members 251 adjustably attached to a fuselage 220 of the UAV 200. As non-limiting example, each of the adjustable members 251 can be configured to pivot with respect to an attachment point 252. In one embodiment, the attachment point 252 is arranged at the fuselage 220. The left imaging device 110 a and right imaging device 110 b can be affixed to distal ends of the adjustable members 251, respectively. FIG. 14 shows the adjustable members 251 in a compact configuration. Adjusting the position of the adjustable members 251 (for example, by pivoting about one or more attachment points 252) results in increasing the baseline b between the imaging devices 110a and 110b so as to reach an extended configuration shown in FIG. 15. Similarly, by folding the adjustable members 251 into the compact configuration shown in FIG. 14, the baseline b can be decreased, as desired.”; [Zhou FIGS. 14-15 reproduced]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the processing circuitry, as taught by Ogasawara, to be configured to adjust a direction of an image to be acquired, by rotating at least one of an image sensor of the first imaging device and an image sensor of the second imaging device, as taught by Zhou. The suggestion/motivation for doing so would have been that “the baseline b [baseline length] can be increased when the flight mode is a takeoff mode since the UAV 200 may benefit from a better view of the surroundings of the UAV 200 during takeoff; alternatively, and/or additionally, the baseline b can be increased when the flight mode is an aerial image acquisition mode to see farther during aerial imaging” (Zhou, para. [0181]); by rotating the cameras on the robot (UAV), the UAV gains an additional ability to change the baseline length, helping it image objects more accurately at different distances and scales. Therefore, it would have been obvious to combine Ogasawara with Zhou to obtain the invention as specified in claim 84. Claims 85-86 are rejected under 35 U.S.C. 103 as being unpatentable over Ogasawara, in view of U.S. Patent Application Publication No. 2020/0086493 (Lecuyer et al.) (hereinafter Lecuyer). Regarding claim 85, Ogasawara teaches the robot system of claim 1. Ogasawara fails to teach wherein the first imaging device and the second imaging device are disposed around the robot arm. Lecuyer teaches wherein the first imaging device and the second imaging device are disposed around the robot arm (Lecuyer, para. [0125]-[0128]; para. [0131]-[0134]; FIG. 9; FIG. 3; FIG. 7: “As illustrated in FIG. 9, the robot arm 100 comprises a plurality of arm segments 110, 112 and 114 rotatably connected together.
The arm segments comprise a proximal segment 110 which may be securable to a mobile platform, a distal arm segment 112 and five arm segments 114 connected between the proximal arm segment 110 and the distal arm segment 112 … The vision guiding system 102 is connected at the distal end of the distal arm segment 112 and the gripper 104 (not shown in FIG. 9) is connected to the vision guiding system 102 so that the vision guiding system 102 is positioned between the distal arm segment 112 and the gripper 104 … In one embodiment, the vision guiding system 102 is rotatably secured to the arm segment 112 and/or to the gripper 104. The vision guiding system 102 may be motorized for rotating the vision guiding system 102 relative to the arm segment 112 and/or the gripper 104.”; “As illustrated in FIGS. 6 and 7, the top portion 122 comprises an image sensor device. In the illustrated embodiment, the image sensor device comprises a 2D image sensor 126 and a 3D image sensor 128 which includes two cameras 130 and 132 and an IR light source/projector 134. The cameras 130 and 132 are located on opposite sides of the IR light source 134 and the 2D image sensor is positioned below the IR light source 134. The 2D and 3D image sensors 126 and 128 are positioned so as to face the gripper 104 when the vision guiding system 102 is secured to the gripper 104.”; [FIGS. 3, 7, and 9 of Lecuyer reproduced]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the first and second imaging devices, as taught by Ogasawara, to be disposed around the robot arm, as taught by Lecuyer.
The suggestion/motivation for doing so would have been that “some robotic arms are provided with a vision system installed on the robotic arm to help the user teleoperating the robotic arm such as to help the user grasping objects; however, for such usual vision guided robotic arms, the clamp's final movement to grasp an object to be grasped is usually performed blindly; when he is a handicapped person having mobility or motricity limitations, the user might not have the physical ability to always see what is happening when the robotic arm tries to grasp an object or when the object is being handled by the robotic arm, for example; therefore there is a need for improved vision guided robotic arm and method for operating vision guided robotic arms” (Lecuyer, para. [0006]); a circumferentially movable vision system attached to a robot arm allows for accurate identification of the target objects the robot arm is to pick up. Therefore, it would have been obvious to combine Ogasawara with Lecuyer to obtain the invention as specified in claim 85. Regarding claim 86, Ogasawara, in view of Lecuyer, teaches the robot system of claim 85, wherein at least one of the first imaging device and the second imaging device is movable in a predetermined circumferential direction around the robot arm (Lecuyer, para. [0125]-[0128]; FIG. 9; FIG. 3; see rejection of claim 85 above). Claim 88 is rejected under 35 U.S.C. 103 as being unpatentable over Ogasawara, in view of Korean Patent Publication No. KR 101888310 B1 (Won et al.) (hereinafter Won). Regarding claim 88, Ogasawara teaches the robot system of claim 1. Ogasawara fails to teach a sensor configured to acquire at least position information of the first imaging device, wherein the processing circuitry is configured to acquire the baseline length based on the position information of the first imaging device acquired by the sensor.
Won teaches a sensor configured to acquire at least position information of the first imaging device, wherein the processing circuitry is configured to acquire the baseline length based on the position information of the first imaging device acquired by the sensor (Won, page 3, para. 5-7; page 5, para. 2; FIG. 2; FIG. 3: “a detection apparatus 200 using a stereo camera according to an embodiment of the present invention includes n (n is a natural number of 2 or more) cameras 201, a sensor 203, a processor 205, A database 207 may be included. The sensor 203 can sense the acceleration of the flying object every set cycle.”; “As another example of adjusting the length of the baseline between the cameras, the processor 205 determines, by the sensor 203, the length of the baseline between the n cameras 201 in proportion to the increase in sensed acceleration The n cameras 201 can be moved.”; [FIGS. 2 and 3 of Won reproduced]; sensing acceleration implicitly entails sensing position, since acceleration is the second derivative, i.e., the second-order rate of change, of position). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the robot system, as taught by Ogasawara, to include a sensor configured to acquire at least position information of the first imaging device, wherein the processing circuitry is configured to acquire the baseline length based on the position information of the first imaging device acquired by the sensor, as taught by Won. The suggestion/motivation for doing so would have been that “by flexibly varying the [base length] distance, it is possible to easily detect an obstacle regardless of the change of the operating environment (for example, the flying body moves between indoor and outdoor)” (Won, page 2, para. 9).
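The parenthetical about Won, that sensing acceleration implicitly senses position, amounts to double integration: integrating acceleration once yields velocity, and integrating again yields position, up to the initial velocity and position, which must be known. A minimal discrete sketch using Euler integration; the function name and sample values are illustrative only, not drawn from Won:

```python
def positions_from_acceleration(accels, dt, v0=0.0, x0=0.0):
    """Recover positions from sampled accelerations by double (Euler)
    integration. The initial velocity v0 and position x0 must be known:
    acceleration alone fixes position only up to those two constants."""
    v, x = v0, x0
    positions = []
    for a in accels:
        v += a * dt   # first integration: acceleration -> velocity
        x += v * dt   # second integration: velocity -> position
        positions.append(x)
    return positions

# Constant 1 m/s^2 from rest, sampled at 1 s intervals:
print(positions_from_acceleration([1.0, 1.0, 1.0], dt=1.0))  # [1.0, 3.0, 6.0]
```

In practice an inertial sensor of this kind accumulates drift from the repeated integration, which is one reason Won's apparatus re-evaluates the baseline every set cycle rather than once.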
Therefore, it would have been obvious to combine Ogasawara with Won to obtain the invention as specified in claim 88. Claim 89 is rejected under 35 U.S.C. 103 as being unpatentable over Ogasawara, in view of U.S. Patent Application Publication No. 2019/0184582 (Namiki). Regarding claim 89, Ogasawara teaches the robot system of claim 1. Ogasawara fails to teach wherein at least the first imaging device is configured to move to the predetermined initial position after the robot system is powered on. Namiki teaches wherein at least the first imaging device is configured to move to the predetermined initial position after the robot system is powered on (Namiki, para. [0036]-[0037]; FIG. 3: “In the example illustrated in FIG. 3, the camera 6 captures the image of the workpiece 38 at the position P6a … Note that the camera 6 may capture a plurality of first images without predetermining the plurality of the positions and orientations of the robot 1. For example, the initial position of the camera 6 may be set to be the position P6a, and the position of the camera 6 after movement may be set to be the position P6b. The images of the workpiece 38 may be captured at constant time intervals during the time when the camera 6 is being moved from the position P6a to the position P6b with the robot 1 being driven.”; [FIG. 3 of Namiki reproduced]; Examiner interprets Namiki as implicitly teaching that the initial position of the robot arm/camera is its starting position when the robot is turned on, i.e., when the robot begins its process of evaluating a target object). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the first imaging device, as taught by Ogasawara, to be configured to move to the predetermined initial position after the robot system is powered on, as taught by Namiki.
The suggestion/motivation for doing so would have been that a camera with a known starting position provides benefits such as accurate initial calibration and object recognition, which enables the robot to correctly perceive its environment and begin tasks like grasping or navigating; this initial positioning is crucial for the robot to understand its starting orientation relative to a target and for correcting any drift from previous operations, ensuring smooth and precise execution. Therefore, it would have been obvious to combine Ogasawara with Namiki to obtain the invention as specified in claim 89. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ADAM SHARIFF whose telephone number is 571-272-9741. The examiner can normally be reached M-F 8:30 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL ADAM SHARIFF/ Examiner, Art Unit 2672 /SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Apr 28, 2023: Application Filed
Apr 28, 2023: Response after Non-Final Action
Sep 12, 2023: Response after Non-Final Action
Mar 01, 2024: Response after Non-Final Action
Nov 26, 2025: Non-Final Rejection — §102, §103, §112
Apr 06, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602903
Method for Analyzing Image Information Using Assigned Scalar Values
2y 5m to grant Granted Apr 14, 2026
Patent 12579776
DISPLAY DEVICE, DISPLAY METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12561959
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR TARGET IMAGE PROCESSING
2y 5m to grant Granted Feb 24, 2026
Patent 12548293
IMAGE DETECTION METHOD AND APPARATUS
2y 5m to grant Granted Feb 10, 2026
Patent 12541976
RELATIONSHIP MODELING AND ANOMALY DETECTION BASED ON VIDEO DATA
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+22.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
