Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The pending claims stand rejected for the following reasons:
Claims 1-20 are rejected under 35 U.S.C. § 103.
Claims 6 and 13 are additionally rejected under 35 U.S.C. § 112(b) and, for purposes of the § 103 rejections, have been interpreted as best understood.
Priority
Acknowledgment is made of applicant’s claim for the benefit of priority under 35 U.S.C. 119(e) to provisional Application No. 63/418,380, filed 10/21/2022.
Information Disclosure Statement
The information disclosure statements (IDS) filed on 12/22/2023 and 4/11/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 6 recites the limitation "the pruning" in relation to the area of interest. There is insufficient antecedent basis for this limitation in the claim: claim 6 depends only on claim 1, which recites no pruning step.
Claim 13 recites the limitation "the visualization" in relation to the output of the system. There is insufficient antecedent basis for this limitation in the claim: claim 13 depends only on claim 1, which makes no recitation of outputting any information to the user.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 7-10, 12, and 15-18 are rejected under 35 U.S.C. § 103 as being unpatentable over Poelman (US 20220147026 A1) in view of Washington (US 20220040863 A1).
Regarding claim 1 Poelman teaches determine a region of interest associated with the camera and a robot with which the camera is associated; (Poelman [0073] reads “The localization process 274 localizes one or more cameras relative to other features of the cell, such as a robotic arm and/or other elements such as fiducials, fixtures, pallets, parts, feeders, trays, visual features, points of interest, etc. within the robotic cell. In one embodiment, the system uses a virtual origin frame to localize the cameras in space, relative to the robotic cell”);
select a set of sample points within the region of interest; (Poelman [0027] reads “In one embodiment, one or more fiducials which are observable from various vantage points in the work area, are affixed to the robotic cell structure. In one embodiment, the fiducials are distributed through the work area (or volumetric distribution), so that each camera always sees at least one fiducial affixed to the work area, in addition to any fiducials on the robotic arm and/or a calibration board or other assembled element held by the robotic arm.”);
cause the robot to move through a set of trajectories to position the robot, successively with respect to each of at least a subset of the sample points, in a predetermined pose at a location associated with the sample point (Poelman [0037] reads “In one embodiment, the process creates paths for the robotic arm that are observable from the camera(s). … In other words, a path is selected that facilitates movements of one or more of the various movable components (robot, tools, trays, parts, and pallets, and/or robot mounted cameras) such that data can be captured (camera images, or data from other sensors).”);
and, at each location cause the camera to generate a corresponding image that includes at least a fiducial marker located on the robot; and use the respective predetermined poses and corresponding images to perform a set of calibration computations with respect to the camera. (Poelman [0108] reads “At block 640, each fiducial is identified, and its pose is identified. In one embodiment, a frame/camera/timestamp association is used at block 645. The sensor and image data is processed at block 650, to perform camera and lens calibration 655. The lens calibration corrects for lens distortion. By observing the movement of an object around the cell, the distortion can be corrected for differences in various regions of the robotic cell, and differences between cameras.”);
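For illustration only, and not as a characterization of Poelman's actual implementation, the limitations mapped above correspond to a data-collection and calibration loop of the following general form. The robot and camera interfaces (robot.move_to, camera.capture), the ArUco marker scheme, and the marker size are assumptions of this sketch; Poelman does not specify a marker library or API.

```python
import cv2
import numpy as np

# Hypothetical sketch: move the robot to a predetermined pose at each sample
# point in the region of interest, capture an image of a fiducial on the
# robot, then calibrate the camera over the collected observations.
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_50)
DETECTOR = cv2.aruco.ArucoDetector(DICTIONARY, cv2.aruco.DetectorParameters())

def calibrate_camera(camera, robot, sample_points, marker_length_m=0.05):
    half = marker_length_m / 2.0
    # Canonical 3D corners of the square fiducial, in the marker's own frame.
    marker_corners_3d = np.array(
        [[-half, half, 0], [half, half, 0],
         [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    object_points, image_points, image_size = [], [], None
    for point in sample_points:          # at least a subset of sample points
        robot.move_to(point)             # hypothetical API: predetermined pose
        image = camera.capture()         # hypothetical API: corresponding image
        corners, ids, _ = DETECTOR.detectMarkers(image)
        if ids is None:
            continue                     # fiducial not visible from this pose
        object_points.append(marker_corners_3d)
        image_points.append(corners[0].reshape(4, 2).astype(np.float32))
        image_size = image.shape[1::-1]  # (width, height)
    # Solve for intrinsics and per-view extrinsics from all observations.
    return cv2.calibrateCamera(object_points, image_points,
                               image_size, None, None)
```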
Poelman does not teach A system, comprising: a user interface configured to receive a selection of a camera to be calibrated; and a processor coupled to the user interface and configured to:
Washington in analogous art, teaches A system, comprising: a user interface configured to receive a selection of a camera to be calibrated; and a processor coupled to the user interface and configured to: (Washington [0058] reads “The calibration module 350 calibrates cameras of the camera systems 160 connected to the lab automation system 100. The calibration module 350 receives requests to calibrate one or more cameras from the graphic user interface module 355.”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman with those of Washington to include a method that would allow the user to pick which system or sensor they would like to have calibrated. This would make it easier for the user to keep their robotic systems functioning with a high degree of accuracy. (Washington [0003] reads “Though automating protocols may streamline the necessary processes, automation in lab environments poses unique challenges. For one, the language used by operators or for robots and equipment in labs is not standardized, so communications about protocols for a lab system to perform may be difficult for the lab system to correctly parse. Secondly, operators in labs may not be versed in how to use a lab system for automation given their specific scientific backgrounds. Further, although some of robots and equipment may be capable of easy integration into the lab system for automation, not all robots or equipment may be configured for automation and may lack the appropriate interfaces for the lab system to communicate with. Lastly, each of a range of robots and equipment connected to the lab system may have its own interface for communicating, and the lab system may need to determine how to communicate with each different interface, which may increase latency.”);
Regarding claim 2 Poelman/Washington teaches The system of claim 1, wherein the region of interest is determined (Poelman [0073] reads “The localization process 274 localizes one or more cameras relative to other features of the cell, such as a robotic arm and/or other elements such as fiducials, fixtures, pallets, parts, feeders, trays, visual features, points of interest, etc. within the robotic cell. In one embodiment, the system uses a virtual origin frame to localize the cameras in space, relative to the robotic cell”);
at least in part by retrieving and reading data from a configuration file. (Poelman [0160] reads “It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 1220 or read only memory 1250 and executed by processor 1210. This control logic or software may also be resident on an article of manufacture comprising a computer readable medium having computer readable program code embodied therein and being readable by the mass storage device 1230 and for causing the processor 1210 to operate in accordance with the methods and teachings herein.”);
Regarding claim 3 Poelman/Washington teaches The system of claim 2, wherein the configuration file (Poelman [0160] reads “It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 1220 or read only memory 1250 and executed by processor 1210. This control logic or software may also be resident on an article of manufacture comprising a computer readable medium having computer readable program code embodied therein and being readable by the mass storage device 1230 and for causing the processor 1210 to operate in accordance with the methods and teachings herein.”);
includes a CAD file that represents a workspace in which the robot is located. (Poelman [0055] reads “The translated control commands may be applied to a virtualized robotic cell A75, or virtual representation of the robotic cell system. The virtualized robotic cell A75 is a software representation of the individual configured robotic cell, and may be used for testing, and verification, also referred to as “digital twin” because in one embodiment it is configured to simulate the actual robotic cell.”);
Regarding claim 4 Poelman/Washington teaches The system of claim 1, wherein the region of interest is determined (Poelman [0073] reads “The localization process 274 localizes one or more cameras relative to other features of the cell, such as a robotic arm and/or other elements such as fiducials, fixtures, pallets, parts, feeders, trays, visual features, points of interest, etc. within the robotic cell. In one embodiment, the system uses a virtual origin frame to localize the cameras in space, relative to the robotic cell”);
based at least in part on a robotic application associated with the robot. (Poelman [0105] reads “At block 625, the calibration process is identified for the cell. In one embodiment, the calibration process depends on the current configuration and/or use case for the robotic cell, the positioning of the cameras and other sensors, and the particular kind of robotic arm. For example, a robotic cell such as an A01 machine may need a less involved calibration process than an assembly cell utilizing a 6-axis robotic arm.”);
Regarding claim 7 Poelman/Washington teaches The system of claim 1, wherein the processor is further configured to compute the set of trajectories. (Poelman [0072] reads “In one embodiment, the calibration procedure design 270 includes a path design 272. The path design 272 designs movement patterns for the robotic arm for various actions, and calibrates the movement inaccuracies or discrepancies between the predicted positions in the movement pattern and the measured positions of the robotic arm.”);
Regarding claim 8 Poelman/Washington teaches The system of claim 1, wherein the user interface is configured to send to the processor one or more of an identification of the selected camera, an identification of the robot, and an indication of the robotic application the robot is configured to perform. (Washington [0064] reads “The graphic user interface module 355 generates graphic user interfaces for display on one or more client devices 120 connected to the lab automation system 100. … The graphic user interface module 355 receives, from a client device 120, a request to view a virtual representation of a lab 140. … The graphic user interface module 355 renders the virtual representation in the graphic user interface. The virtual elements of the virtual representation may be interactive elements that a user may interact with. For instance, upon receiving a mouseover of a virtual element via the graphic user interface, the graphic user interface module 355 may cause the virtual element to become highlighted within the virtual representation or mimic an operation being performed. Further, upon selection of a virtual element, the graphic user interface module 355 may present a tag element connected to the virtual element. A user may enter text to the tag element to label the virtual element and its associated component with. The graphic user interface module 355 sends the text to the rending module 320 for addition to the virtual representation in the representation database 360.”);
Regarding claim 9 Poelman/Washington teaches The system of claim 1, wherein the processor comprises a calibration service running at a node that is remote from the user interface. (Poelman [0051] reads “In one embodiment, in addition to the local UI A55 there may be a remote UI A50, coupled to the robotic cell A10 via a network A05. The remote user interface A50 may be a portable user interface, such as a tablet. The remote user interface A50 may be linked to the robotic cell A10 via a local area network (LAN), personal area network (PAN), or another type of network. In one embodiment, some remote Uls A50 may require proximity to a robotic cell A10, while other remote Uls A50 may be operable from anywhere.”);
Regarding claim 10 Poelman/Washington teaches The system of claim 1, wherein the processor is further configured to generate a calibration result based at least in part on the calibration computations. (Poelman abstract reads “The method includes identifying a discrepancy in robotic arm position between a predicted position and the determined position in real time, and computing, by an auto-calibrator, a compensation for the identified discrepancy, the auto-calibrator solving for the elements in the robotic cell system as a system.”);
Regarding claim 12 Poelman/Washington teaches The system of claim 1, wherein the processor is further configured to generate based at least in part on the calibration computations and return to the user interface data usable to display at the user interface a visualization of a calibration result based on the calibration computations. (Poelman [0042] reads “In one embodiment, a user interface provides a way to display the calibration output, and statistics. Over time, this may lead to improved cell design, as issues which cause the cell to become unstable or require recalibration are identified and eliminated.”);
Regarding claim 15 Poelman/Washington teaches The system of claim 1, wherein the processor is configured to use a robotic application specific component to do one or more of determine the region of interest, select the set of sample points, and cause the robot to move through the set of trajectories, (Poelman [0097] reads “At block 540, tool tip calibration is performed. Tool tip calibration determines the position of the tool tip—the end of arm tool of the robotic arm. Tool tip calibration is necessary to validate the current position of the robotic arm, and for all commands, since that is the portion of the robotic arm that interacts with work pieces. In one embodiment, each of the portions of the robotic arm may also be calibrated.”);
and to use a general component not specific to the robot application to perform the set of calibration computations. (Poelman [0029] reads “In addition to fiducials, or identified natural features, on the frame structure, the system may include a plurality of other fiducials, or identified natural features, including at the end of arm. In one embodiment, for calibration one or more calibration boards or pieces moved by the robotic arm may also be used. … The fiducials may be positioned on each link, on the tool relative to the end of arm tool contact, near each joint. In one embodiment, the fiducials are used by identifying their center points, and using the center points for calibration and measurement. … In general, any consistently identified point, whether directly identified or identified in relation to another point, whether on a fiducial or as a natural feature, may be used.”);
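For illustration only (again assuming OpenCV-style ArUco fiducials, which Poelman does not specify), the "center points" observation quoted above could be computed as follows; fiducial_centers is a hypothetical helper, not a function named in the cited art.

```python
import cv2

# Hypothetical helper: the pixel center of each detected fiducial, per the
# "identifying their center points" language of Poelman [0029].
DETECTOR = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_50),
    cv2.aruco.DetectorParameters())

def fiducial_centers(image):
    corners, ids, _ = DETECTOR.detectMarkers(image)
    if ids is None:
        return {}
    # Each marker is a 1x4x2 array of corner pixels; its center is their mean.
    return {int(marker_id): c.reshape(4, 2).mean(axis=0)
            for marker_id, c in zip(ids.flatten(), corners)}
```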
Regarding claim 16 Poelman teaches determining a region of interest associated with the camera and a robot with which the camera is associated; (Poelman [0073] reads “The localization process 274 localizes one or more cameras relative to other features of the cell, such as a robotic arm and/or other elements such as fiducials, fixtures, pallets, parts, feeders, trays, visual features, points of interest, etc. within the robotic cell. In one embodiment, the system uses a virtual origin frame to localize the cameras in space, relative to the robotic cell”);
selecting a set of sample points within the region of interest; (Poelman [0027] reads “In one embodiment, one or more fiducials which are observable from various vantage points in the work area, are affixed to the robotic cell structure. In one embodiment, the fiducials are distributed through the work area (or volumetric distribution), so that each camera always sees at least one fiducial affixed to the work area, in addition to any fiducials on the robotic arm and/or a calibration board or other assembled element held by the robotic arm.”);
causing the robot to move through a set of trajectories to position the robot, successively with respect to each of at least a subset of the sample points, in a predetermined pose at a location associated with the sample point (Poelman [0037] reads “In one embodiment, the process creates paths for the robotic arm that are observable from the camera(s). … In other words, a path is selected that facilitates movements of one or more of the various movable components (robot, tools, trays, parts, and pallets, and/or robot mounted cameras) such that data can be captured (camera images, or data from other sensors).”);
and, at each location cause the camera to generate a corresponding image that includes at least a fiducial marker located on the robot; and using the respective predetermined poses and corresponding images to perform a set of calibration computations with respect to the camera. (Poelman [0108] reads “At block 640, each fiducial is identified, and its pose is identified. In one embodiment, a frame/camera/timestamp association is used at block 645. The sensor and image data is processed at block 650, to perform camera and lens calibration 655. The lens calibration corrects for lens distortion. By observing the movement of an object around the cell, the distortion can be corrected for differences in various regions of the robotic cell, and differences between cameras.”);
Poelman does not teach A method, comprising: receiving, via a user interface, a selection of a camera to be calibrated;
Washington in analogous art, teaches A method, comprising: receiving, via a user interface, a selection of a camera to be calibrated; (Washington [0058] reads “The calibration module 350 calibrates cameras of the camera systems 160 connected to the lab automation system 100. The calibration module 350 receives requests to calibrate one or more cameras from the graphic user interface module 355.”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman with those of Washington to include a method that would allow the user to pick which system or sensor they would like to have calibrated. This would make it easier for the user to keep their robotic systems functioning with a high degree of accuracy. (Washington [0003] reads “Though automating protocols may streamline the necessary processes, automation in lab environments poses unique challenges. For one, the language used by operators or for robots and equipment in labs is not standardized, so communications about protocols for a lab system to perform may be difficult for the lab system to correctly parse. Secondly, operators in labs may not be versed in how to use a lab system for automation given their specific scientific backgrounds. Further, although some of robots and equipment may be capable of easy integration into the lab system for automation, not all robots or equipment may be configured for automation and may lack the appropriate interfaces for the lab system to communicate with. Lastly, each of a range of robots and equipment connected to the lab system may have its own interface for communicating, and the lab system may need to determine how to communicate with each different interface, which may increase latency.”);
Regarding claim 17 Poelman/Washington teaches The method of claim 16, wherein the region of interest is determined (Poelman [0073] reads “The localization process 274 localizes one or more cameras relative to other features of the cell, such as a robotic arm and/or other elements such as fiducials, fixtures, pallets, parts, feeders, trays, visual features, points of interest, etc. within the robotic cell. In one embodiment, the system uses a virtual origin frame to localize the cameras in space, relative to the robotic cell”);
based at least in part on a robotic application associated with the robot. (Poelman [0105] reads “At block 625, the calibration process is identified for the cell. In one embodiment, the calibration process depends on the current configuration and/or use case for the robotic cell, the positioning of the cameras and other sensors, and the particular kind of robotic arm. For example, a robotic cell such as an A01 machine may need a less involved calibration process than an assembly cell utilizing a 6-axis robotic arm.”);
Regarding claim 18 Poelman/Washington teaches The method of claim 16, further comprising generating a calibration result based at least in part on the calibration computations. (Poelman abstract reads “The method includes identifying a discrepancy in robotic arm position between a predicted position and the determined position in real time, and computing, by an auto-calibrator, a compensation for the identified discrepancy, the auto-calibrator solving for the elements in the robotic cell system as a system.”);
Regarding claim 20 Poelman teaches determining a region of interest associated with the camera and a robot with which the camera is associated; (Poelman [0073] reads “The localization process 274 localizes one or more cameras relative to other features of the cell, such as a robotic arm and/or other elements such as fiducials, fixtures, pallets, parts, feeders, trays, visual features, points of interest, etc. within the robotic cell. In one embodiment, the system uses a virtual origin frame to localize the cameras in space, relative to the robotic cell”);
selecting a set of sample points within the region of interest; (Poelman [0027] reads “In one embodiment, one or more fiducials which are observable from various vantage points in the work area, are affixed to the robotic cell structure. In one embodiment, the fiducials are distributed through the work area (or volumetric distribution), so that each camera always sees at least one fiducial affixed to the work area, in addition to any fiducials on the robotic arm and/or a calibration board or other assembled element held by the robotic arm.”);
causing the robot to move through a set of trajectories to position the robot, successively with respect to each of at least a subset of the sample points, in a predetermined pose at a location associated with the sample point (Poelman [0037] reads “In one embodiment, the process creates paths for the robotic arm that are observable from the camera(s). … In other words, a path is selected that facilitates movements of one or more of the various movable components (robot, tools, trays, parts, and pallets, and/or robot mounted cameras) such that data can be captured (camera images, or data from other sensors).”);
and, at each location cause the camera to generate a corresponding image that includes at least a fiducial marker located on the robot; and using the respective predetermined poses and corresponding images to perform a set of calibration computations with respect to the camera. (Poelman [0108] reads “At block 640, each fiducial is identified, and its pose is identified. In one embodiment, a frame/camera/timestamp association is used at block 645. The sensor and image data is processed at block 650, to perform camera and lens calibration 655. The lens calibration corrects for lens distortion. By observing the movement of an object around the cell, the distortion can be corrected for differences in various regions of the robotic cell, and differences between cameras.”);
Poelman does not teach A computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for: receiving, via a user interface, a selection of a camera to be calibrated;
Washington in analogous art, teaches A computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for: receiving, via a user interface, a selection of a camera to be calibrated; (Washington [0058] reads “The calibration module 350 calibrates cameras of the camera systems 160 connected to the lab automation system 100. The calibration module 350 receives requests to calibrate one or more cameras from the graphic user interface module 355.”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman with those of Washington to include a method that would allow the user to pick which system or sensor they would like to have calibrated. This would make it easier for the user to keep their robotic systems functioning with a high degree of accuracy. (Washington [0003] reads “Though automating protocols may streamline the necessary processes, automation in lab environments poses unique challenges. For one, the language used by operators or for robots and equipment in labs is not standardized, so communications about protocols for a lab system to perform may be difficult for the lab system to correctly parse. Secondly, operators in labs may not be versed in how to use a lab system for automation given their specific scientific backgrounds. Further, although some of robots and equipment may be capable of easy integration into the lab system for automation, not all robots or equipment may be configured for automation and may lack the appropriate interfaces for the lab system to communicate with. Lastly, each of a range of robots and equipment connected to the lab system may have its own interface for communicating, and the lab system may need to determine how to communicate with each different interface, which may increase latency.”);
Claims 5 and 6 are rejected under 35 U.S.C. § 103 as being unpatentable over Poelman in view of Washington, and further in view of Onose (US 20190351554 A1).
Regarding claim 5 Poelman/Washington teaches The system of claim 1,
Poelman/Washington does not teach wherein determining the region of interest includes determining an initial region of interest and pruning the initial region of interest.
Onose, in analogous art, teaches wherein determining the region of interest includes determining an initial region of interest and pruning the initial region of interest. (Onose [0003] reads “Conventionally, there is a known robot system in which an operation limitation area of a robot is set at an surrounding area of an operator, who is a safety monitoring target, in such a case where the operator has a possibility of entering an operation range of the robot, and when the robot enters the operation limitation area, the known robot system performs safety operation control, emergency stop control, and the like of the robot.” and [0076] reads “Accordingly, it is possible to reduce the size of the operation limitation area LA in response to the situations, and it is possible to prevent the work efficiency of the robot 10 from being lowered,”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman/Washington with those of Onose to include a method for decreasing the size of a robot's target area. This would lead to increased safety in situations in which robots and humans interact. (Onose [0003] reads “Conventionally, there is a known robot system in which an operation limitation area of a robot is set at an surrounding area of an operator, who is a safety monitoring target, in such a case where the operator has a possibility of entering an operation range of the robot, and when the robot enters the operation limitation area, the known robot system performs safety operation control, emergency stop control, and the like of the robot.”);
Regarding claim 6 Poelman/Washington teaches The system of claim 1 (claim 6 being interpreted, as best understood in view of the § 112(b) rejection above, as though it depended from claim 5).
Poelman/Washington does not teach wherein the pruning is performed at least in part to avoid a collision.
Onose, in analogous art, teaches wherein the pruning is performed at least in part to avoid a collision. (Onose [0074] reads “Further, the control unit 21 resets the operation limitation area LA by using at least one of the information about the structure with which the object person O may be collided, and the information about the structure over which the object person O may be stumbled. In such a case where the structure with which the object person O may be collided, or the structure over which the object person O may be stumbled exists in the work area AR, there is the higher possibility that the object person O tumbles over or loses the balance. In this embodiment, the operation limitation area LA is changed to the direction which improves the safety of the object person O in response to the information about the above described structures existing in the work area AR.”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman/Washington with those of Onose to include a method for decreasing the size of a robot's target area. This would lead to increased safety in situations in which robots and humans interact. (Onose [0003] reads “Conventionally, there is a known robot system in which an operation limitation area of a robot is set at an surrounding area of an operator, who is a safety monitoring target, in such a case where the operator has a possibility of entering an operation range of the robot, and when the robot enters the operation limitation area, the known robot system performs safety operation control, emergency stop control, and the like of the robot.”);
Claims 11, 13, 14, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Poelman in view of Washington, and further in view of Wang ("Visualization Error Analysis for Augmented Reality Stereo Video See-Through Head-Mounted Displays in Industry 4.0 Applications," non-patent literature, 2022).
Regarding claim 11 Poelman/Washington teaches The system of claim 10.
Poelman/Washington does not teach wherein the calibration result includes one or more transformation matrixes to transform locations determined based on image data from the camera from a frame of reference associated with the camera to a frame of reference associated with control of the robot.
Wang, in analogous art, teaches wherein the calibration result includes one or more transformation matrixes to transform locations determined based on image data from the camera from a frame of reference associated with the camera to a frame of reference associated with control of the robot. (Wang, page 6, fourth paragraph reads “Where the ${}^{C}_{W}R$ is the 3×3 rotation matrix and the ${}^{C}_{W}T$ is the translation matrix, they are defined as the extrinsic parameters of the camera. Both of them are augmented into the transformation matrix, ${}^{C}_{W}M$, mapping a point from the world coordinate system to the camera coordinate system.”);
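For illustration only, the extrinsic parameters described in the quoted passage compose, under the standard homogeneous-coordinates convention (a reconstruction, not a verbatim equation from Wang), as:

```latex
% Rotation R and translation T augmented into a single homogeneous
% transformation matrix mapping world-frame points to the camera frame;
% the claimed camera-to-robot transform would be the analogous matrix
% expressed relative to the robot's control frame.
P_c = {}^{C}_{W}M \, P_w,
\qquad
{}^{C}_{W}M =
\begin{bmatrix}
  {}^{C}_{W}R & {}^{C}_{W}T \\
  0_{1 \times 3} & 1
\end{bmatrix}
```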
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman/Washington with those of Wang to include a method that would allow the operator of a robotic system to better understand the errors arising in the robotic system. This would allow for more accurate and precise operation of the robotic system. (Wang abstract reads “For example, in AR-based human-robot interaction, the inaccurate rendering of 3D virtual objects with respect to the real environment, will lead to users’ mistaking operations, and therefore, causes an invalid tool path planning result. In spite of many works related to system calibration and error reduction for optical see-through STHMDs, there are few efforts at figuring out the nature and factors of those errors in video see-through STHMDs. In this paper, taking consumer-available AR video see-through STHMDs as an example, we identify error sources of registration and build a mathematical model of the display progress to describe the error propagation in the stereo video see-through systems.”);
Regarding claim 13 Poelman/Washington teaches The system of claim 1.
Poelman/Washington does not teach wherein the visualization includes a visual representation of a fiducial marker and a visual representation of the difference between an actual location of a feature of the fiducial marker and a perceived location of the feature as perceived by the camera.
Wang, in analogous art, teaches wherein the visualization includes a visual representation of a fiducial marker and a visual representation of the difference between an actual location of a feature of the fiducial marker and a perceived location of the feature as perceived by the camera. (Wang figure 1a clearly shows that the current location of the real robot and a virtual representation of the robot could be displayed on top of each other in an augmented reality setting.);
[Wang Figure 1a, reproduced here in greyscale]
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman/Washington with those of Wang to include a method that would allow the operator of a robotic system to better understand the errors arising in the robotic system. This would allow for more accurate and precise operation of the robotic system. (Wang abstract reads “For example, in AR-based human-robot interaction, the inaccurate rendering of 3D virtual objects with respect to the real environment, will lead to users’ mistaking operations, and therefore, causes an invalid tool path planning result. In spite of many works related to system calibration and error reduction for optical see-through STHMDs, there are few efforts at figuring out the nature and factors of those errors in video see-through STHMDs. In this paper, taking consumer-available AR video see-through STHMDs as an example, we identify error sources of registration and build a mathematical model of the display progress to describe the error propagation in the stereo video see-through systems.”);
Regarding claim 14 Poelman/Washington teaches The system of claim 1.
Poelman/Washington does not teach wherein the calibration computations include performing ray-based calibration based on the predetermined poses and corresponding images.
Wang, in analogous art, teaches wherein the calibration computations include performing ray-based calibration based on the predetermined poses and corresponding images. (Wang, page 6, third paragraph reads “The origin of the camera coordinates system, $O_c$, is the center of projection, with viewport pointing towards to +z axis. The image plane is located at a distance of $f$, which is the focal length of the camera. A ray is targeting a point $P_c$ and leaving an intersection, $P_i$, on the image plane. With homogeneous coordinates system, the camera coordinate point $P_c = [x_c, y_c, z_c, 1]^T$ can be mapped to the image point $P_d = [u, v, 1]^T$ through the similar triangles,”);
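For illustration only, the similar-triangles projection referenced in the quoted passage takes the standard pinhole form (a reconstruction with the principal-point offset omitted, not Wang's exact equation):

```latex
% A ray through the center of projection intersects the image plane at
% distance f, mapping the camera-frame point P_c = [x_c, y_c, z_c, 1]^T
% to the image point P_d = [u, v, 1]^T:
u = f \, \frac{x_c}{z_c},
\qquad
v = f \, \frac{y_c}{z_c}
```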
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman/Washington with those of Wang to include a method that would allow the operator of a robotic system to better understand the errors arising in the robotic system. This would allow for more accurate and precise operation of the robotic system. (Wang abstract reads “For example, in AR-based human-robot interaction, the inaccurate rendering of 3D virtual objects with respect to the real environment, will lead to users’ mistaking operations, and therefore, causes an invalid tool path planning result. In spite of many works related to system calibration and error reduction for optical see-through STHMDs, there are few efforts at figuring out the nature and factors of those errors in video see-through STHMDs. In this paper, taking consumer-available AR video see-through STHMDs as an example, we identify error sources of registration and build a mathematical model of the display progress to describe the error propagation in the stereo video see-through systems.”);
Regarding claim 19 Poelman/Washington teaches The method of claim 16.
Poelman/Washington does not teach wherein the calibration computations comprise ray-based calibration computations.
Wang, in analogous art, teaches wherein the calibration computations comprise ray-based calibration computations. (Wang, page 6, third paragraph reads “The origin of the camera coordinates system, $O_c$, is the center of projection, with viewport pointing towards to +z axis. The image plane is located at a distance of $f$, which is the focal length of the camera. A ray is targeting a point $P_c$ and leaving an intersection, $P_i$, on the image plane. With homogeneous coordinates system, the camera coordinate point $P_c = [x_c, y_c, z_c, 1]^T$ can be mapped to the image point $P_d = [u, v, 1]^T$ through the similar triangles,”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Poelman/Washington with those of Wang to include a method that would allow the operator of a robotic system to better understand the errors arising in the robotic system. This would allow for more accurate and precise operation of the robotic system. (Wang abstract reads “For example, in AR-based human-robot interaction, the inaccurate rendering of 3D virtual objects with respect to the real environment, will lead to users’ mistaking operations, and therefore, causes an invalid tool path planning result. In spite of many works related to system calibration and error reduction for optical see-through STHMDs, there are few efforts at figuring out the nature and factors of those errors in video see-through STHMDs. In this paper, taking consumer-available AR video see-through STHMDs as an example, we identify error sources of registration and build a mathematical model of the display progress to describe the error propagation in the stereo video see-through systems.”);
Other References Not Relied Upon
The following additional references were identified during the search but are not relied upon in this Office action; they are pertinent to the disclosed subject matter and may be applied in future examination: Dinauer (US 20110080476 A1); Moore (US 20190094866 A1); and Paverman (US 20220335726 A1).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN MARTIN O'MALLEY, whose telephone number is (571) 272-6228. The examiner can normally be reached Mon-Fri, 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado, can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN MARTIN O'MALLEY/Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658