DETAILED ACTION
This action is in response to Application No. 17/011,993, originally filed 09/03/2020. The Request for Continued Examination and amendment presented on 06/13/2025, in which claims 14 and 21 are currently amended and claims 15 - 17 were previously canceled, is hereby acknowledged. Claims 1 - 14 and 18 - 22 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed June 23, 2025 has been entered.
Double Patenting
2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
3. A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
4. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
5. Claims 1 - 12, 15, 18 and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 - 2, 6 - 15 and 21 - 22 of U.S. Patent No. 10,768,708. Although the claims at issue are not identical, the claims are not patentably distinct from each other because the patent and the current application claim common subject matter, as follows:
6. A comparison of the current application with Patent No. 10,768,708 is provided below:
Current Application No. 17/011,993
Patent No. 10,768,708
Claim 1, A method comprising:
detecting, from a capture of an object in a three-dimensional (3D) space, a motion of the object;
determining data based, at least in part, on at least one of a representation of the object or a representation of a target object, the data representing a manipulation of the target object based, at least in part, on an emulation of the motion of the object; and
translating the data into a command causing a manipulation of the target object by a robotic tool,
wherein the manipulation of the target object emulates the data based, at least in part, on at least one of the representation of the object or the representation of the target object, and
wherein the translating includes determining at least one of a path of motion or a magnitude of a force that emulates the data representing the manipulation of the target object.
Claim 1, A method of using free-form gestures to manipulate a workpiece by a robotic tool, the method including:
capturing free-form gestures of a hand in a three-dimensional (3D) sensory space utilizing two cameras to capture sequential images of the hand while empty making motions directed to commanding a robotic tool, analyzing the images as captured and translating the free-form gestures of the hand into robotic tool commands that produce smoothed emulating motions by a robotic tool, including:
recognizing in the captured images of the hand while empty moving, a gesture segment within the free-form gestures of the hand that represents physical contact between the robotic tool and a workpiece; wherein there is no physical contact between the hand and any object during the gesture segment that represents physical contact;
determining a command to the robotic tool to apply a force to the workpiece without actual physical contact between the robotic tool and the hand, wherein a magnitude of the force to be applied to the workpiece is determined based on a human hand motion during the gesture segment in which there is no physical contact with the hand as captured by the two cameras, without actual physical contact between the robotic tool and the hand, during the gesture segment associated with a detected path of the hand and a grasping motion of the hand; and
issuing the command to the robotic tool to apply the force to the workpiece.
Claim 2, The method of claim 1, wherein the determining of the data comprises capturing edge information of at least a portion of the object and computing an attribute of a 3D model of the object.
Claim 2, The method of claim 1, further including capturing edge information for fingers of a hand that performs the free-form gestures and computing finger positions of a 3D solid hand model for the hand during the free-form gestures.
Claim 3, The method of claim 1, wherein the translating of the data comprises: interpreting the at least a portion of the motion of the object as a parameter of a robotic tool translation based, at least in part, on at least one of the capture of the object in the 3D space or a 3D model of the object.
Claim 6, The method of claim 2, further including: using the 3D hand model to capture a curling of the hand during the free-form gestures; and
interpreting the curling as a parameter of robotic tool translation.
Claim 4, The method of claim 1, wherein the translating of the data comprises: detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and
interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on the degree of motion of the object.
Claim 7, The method of claim 6, further including: using the 3D solid hand model to detect the curling as an extreme degree of motion of the hand during the free-form gestures; and
responsive to detecting the curling as an extreme degree of motion of the hand, interpreting a maximum value of a parameter of robotic tool actuation.
Claim 5, The method of claim 1, wherein the translating of the data comprises: detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on an amplification function of the degree of motion.
Claim 6, The method of claim 1, wherein the translating of the data comprises: detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a polynomial function of the degree of motion.
Claim 7, The method of claim 1, wherein the translating of the data comprises: detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a transcendental function of the degree of motion.
Claim 8, The method of claim 7,
wherein the maximum value of the parameter is an amplification function of the extreme degree of motion.
Claim 9, The method of claim 7,
wherein the maximum value of the parameter is a polynomial function of the extreme degree of motion.
Claim 10, The method of claim 7,
wherein the maximum value of the parameter is a transcendental function of the extreme degree of motion.
Claim 8, The method of claim 1, wherein the translating of the data comprises: detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a step function of the degree of motion.
Claim 11, The method of claim 7,
wherein the maximum value of the parameter is a step function of the extreme degree of motion.
Claim 9, The method of claim 1, wherein the translating of the data comprises:
using a 3D model of the object to determine a torsion of the motion of the object; and
interpreting the torsion as a parameter of an actuation of the robotic tool.
Claim 12, The method of claim 2, further including:
using the 3D hand model to capture a torsion of the hand during the free-form gestures; and
interpreting the torsion as a parameter of robotic tool actuation.
Claim 10, The method of claim 1, wherein the translating of the data comprises:
using a 3D model of the object to detect a degree of motion of the object based,
at least in part, on a torsion of the motion of the object; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on the degree of motion of the object.
Claim 13, The method of claim 12, further including:
using the 3D hand model to detect the torsion as an extreme degree of motion of the hand during the free-form gestures; and
responsive to the detecting the torsion as extreme degree of motion, interpreting a maximum value of a parameter of robotic tool actuation.
Claim 11, The method of claim 1, wherein the translating of the data comprises: using a 3D model of the object to detect a degree of motion of the object based, at least in part, on a torsion of the object; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on an amplification function of the degree of motion of the object.
Claim 14, The method of claim 13, wherein the maximum value of the parameter is an amplification function of the extreme degree of motion.
Claim 12, The method of claim 1, wherein the translating of the data comprises: using a 3D model of the object to detect a degree of motion of the object based, at least in part, on a torsion of the object; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a polynomial function of the degree of motion of the object.
Claim 15, The method of claim 13,
wherein the maximum value of the parameter is a polynomial function of the extreme degree of motion.
Claim 13, A system comprising:
a sensor;
a robotic tool; and
a processor coupled to memory, the memory being loaded with computer instructions that, upon execution on the processor, cause the processor to implement operations comprising:
detecting, from a capture of an object in a three-dimensional (3D) space, a motion of the object;
determining data based, at least in part, on at least one of a representation of the object or a representation of a target object, the data representing a manipulation of the target object based, at least in part, on an emulation of the motion of the object; and
translating the data into a command causing a manipulation of the target object by a robotic tool, wherein the manipulation of the target object emulates the data based, at least in part, on at least one of the representation of the object or the representation of the target object, and wherein the translating includes determining at least one of a path of motion or a magnitude of a force that emulates the data representing the manipulation of the target object.
Claim 14, A non-transitory computer readable storage medium impressed with computer instructions that, upon execution by a processor, implement operations comprising: detecting, from a capture of an object in a three-dimensional (3D) space, a motion of the object;
determining data based, at least in part, on at least one of a representation of the object or a representation of a target object, the data representing a manipulation of the target object based, at least in part, on an emulation of the motion of the object; and
translating the data into a command causing a manipulation of the target object by a robotic tool, wherein the manipulation of the target object emulates the data based, at least in part, on at least one of the representation of the object or the representation of the target object, and
wherein the translating includes determining at least one of a path of motion or a magnitude of a force that emulates the data representing the manipulation of the target object.
Claim 22, A method of using free-form gestures to manipulate a workpiece by
a robotic tool, the method including:
capturing free-form gestures of a hand in a three-dimensional (3D) sensory space while interacting with a manipulable object and translating the free-form gestures made by the hand interacting with the manipulable object into robotic tool commands that produce smoothed emulating motions by a robotic tool, including:
recognizing a gesture segment within the free-form gestures of the hand that represents physical contact between the robotic tool and a workpiece; wherein there is neither physical nor electrical connection facilitating passage of a signal between the manipulable object and the robotic tool and sensor during the gesture segment that represents physical contact;
determining a command to the robotic tool to apply a force to the workpiece without actual physical contact between the robotic tool and the hand, wherein a magnitude of the force to be applied to the workpiece is determined based on a human hand motion captured, without actual physical contact between the robotic tool and the hand and without physical nor electrical connection between the manipulable object and the robotic tool and sensor, during the gesture segment associated with a detected path of the hand and a grasping motion of the hand; and
issuing the command to the robotic tool to apply the force to the workpiece.
Claim 21, A non-transitory computer readable medium storing a plurality of instructions for programming one or more processors to use free-form gestures for manipulating a workpiece by a robotic tool, the instructions, when executed on the processors, implementing actions including:
capturing free-form gestures of a hand in a three-dimensional (3D) sensory space utilizing two cameras to capture sequential images of the hand while empty making motions directed to commanding a robotic tool, analyzing the images as captured and translating the free-form gestures of the hand into robotic tool commands that produce smoothed emulating motions by a robotic tool, including:
recognizing in the captured images of the hand while empty moving, a gesture segment within the free-form gestures of the hand that represents physical contact between the robotic tool and a workpiece; wherein there is no physical contact between the empty hand and any object during the gesture segment that represents physical contact;
determining a command to the robotic tool to apply a force to the workpiece without actual physical contact between the robotic tool and the hand, wherein a magnitude of the force to be applied to the workpiece is determined based on a human hand motion during the gesture segment in which there is no physical contact with the hand as captured by the two cameras, without actual physical contact between the robotic tool and the hand, during the gesture segment associated with a detected path of the hand and a grasping motion of the hand; and issuing the command to the robotic tool to apply the force to the workpiece.
Claim 18, A method comprising: receiving, by a robotic tool, a command that causes a manipulation of a target object, by the
robotic tool, that emulates data representing a manipulation of a target object,
wherein the command is based, at least in part, on a translation of data determined based, at least in part, on a motion of an object, and the translation is based, at least in part, on a path of motion or a magnitude of force that emulates the manipulation of the target object.
Claim 15, The method of claim 13, wherein the maximum value of the parameter is a polynomial function of the extreme degree of motion.
Claim 19, A system comprising:
a sensor;
a robotic tool; and
a processor coupled to memory, the memory being loaded with computer instructions that, upon execution on the processor, cause the processor to implement operations comprising:
receiving, by a robotic tool, a command that causes a manipulation of a target object, by the robotic tool, that emulates data representing a manipulation of the target object,
wherein the command is based, at least in part, on a translation of data determined based, at least in part, on a motion of an object, and the translation is based, at least in part, on a path of motion or a magnitude of force that emulates the manipulation of the target object.
Claim 20, A non-transitory computer readable storage medium impressed with computer instructions that, upon execution by a processor, implement operations comprising: receiving, by a robotic tool, a command that causes a manipulation of a target object, by the robotic tool, that emulates data representing a manipulation of the target object,
wherein the command is based, at least in part, on a translation of data determined based, at least in part, on a motion of an object, and the translation is based, at least in part, on a path of motion or a magnitude of force that emulates the manipulation of the target object.
Claim 21, The method of claim 1, comprising issuing the command to the robotic tool to apply the force to a workpiece.
Claim 22, The method of claim 18, comprising executing the command by at least one of an actuation of the path of motion of an actuation of the magnitude of force.
Claim 22, A method of using free-form gestures to manipulate a workpiece by a robotic tool, the method including:
capturing free-form gestures of a hand in a three-dimensional (3D) sensory space while interacting with a manipulable object and translating the free-form gestures made by the hand interacting with the manipulable object into robotic tool commands that produce smoothed emulating motions by a robotic tool, including:
recognizing a gesture segment within the free-form gestures of the hand that represents physical contact between the robotic tool and a workpiece; wherein there is neither physical nor electrical connection facilitating passage of a signal between the manipulable object and the robotic tool and sensor during the gesture segment that represents physical contact;
determining a command to the robotic tool to apply a force to the workpiece without actual physical contact between the robotic tool and the hand, wherein a magnitude of the force to be applied to the workpiece is determined based on a human hand motion captured, without actual physical contact between the robotic tool and the hand and without physical nor electrical connection between the manipulable object and the robotic tool and sensor, during the gesture segment associated with a detected path of the hand and a grasping motion of the hand; and
issuing the command to the robotic tool to apply the force to the workpiece.
Claim 21, A non-transitory computer readable medium storing a plurality of instructions for programming one or more processors to use free-form gestures for manipulating a workpiece by a robotic tool, the instructions, when executed on the processors, implementing actions including:
capturing free-form gestures of a hand in a three-dimensional (3D) sensory space utilizing two cameras to capture sequential images of the hand while empty making motions directed to commanding a robotic tool, analyzing the images as captured and translating the free-form gestures of the hand into robotic tool commands that produce smoothed emulating motions by a robotic tool, including:
recognizing in the captured images of the hand while empty moving, a gesture segment within the free-form gestures of the hand that represents physical contact between the robotic tool and a workpiece; wherein there is no physical contact between the empty hand and any object during the gesture segment that represents physical contact;
determining a command to the robotic tool to apply a force to the workpiece without actual physical contact between the robotic tool and the hand, wherein a magnitude of the force to be applied to the workpiece is determined based on a human hand motion during the gesture segment in which there is no physical contact with the hand as captured by the two cameras, without actual physical contact between the robotic tool and the hand, during the gesture segment associated with a detected path of the hand and a grasping motion of the hand; and issuing the command to the robotic tool to apply the force to the workpiece.
Claim Rejections - 35 USC § 102
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
9. Claims 1, 3 - 5, 9 - 11, 13 - 14 and 18 - 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Itkowitz “US 2012/0071891”.
Regarding claim 1, Itkowitz teaches a method comprising:
detecting, from a capture of an object in a three-dimensional (3D) space, a motion of the object; (pars. [0113] The three-dimensional locations of the fiducial markers are computed by triangulation of multiple cameras having a common field of view. The three-dimensional locations of the fiducial markers are used to infer the three-dimensional pose (translation and orientation) of the hand and also the grip size. [0114] The marker locations need to be calibrated before use. For example, the surgeon can show the hand with markers in different poses to the camera. The different poses are then used in the calibration. And [0115] - [0117])
determining data based, at least in part, on at least one of a representation of the object or a representation of a target object, the data representing a manipulation of the target object based, at least in part, on an emulation of the motion of the object; (pars. [0028] The hand gesture can be any one of a hand gesture pose, a hand gesture trajectory, or a combination of a hand gesture pose and a hand gesture trajectory. When the hand gesture is a hand gesture pose and the plurality of known hand gestures includes a plurality of known hand gesture poses, a user interface of the minimally invasive surgical system is controlled based on the hand gesture pose. [0029] Further, in one aspect, when the hand gesture is hand gesture pose, the hand gesture selection includes generating an observed feature set from the tracked plurality of locations. The observed feature set is compared with feature sets of the plurality of known hand gesture poses. One of the known hand gesture is selected as the hand gesture pose. The selected known hand gesture pose is mapped to a system command, and the system command is triggered in the minimally invasive surgical system.)
(pars. [0032] In still another aspect, a sensor element mounted on part of a human is tracked to obtain a location of the part of the human hand. Based on the location, the method determines whether a position of the part of the human hand is within a threshold distance from a position of a master tool grip of a minimally invasive surgical system. Operation of the minimally invasive surgical system is controlled based on a result of the determining. In one aspect, teleoperation of a teleoperated slave surgical instrument coupled to the master tool grip is controlled based on a result of the determination. In another aspect, display of a user interface, or display of a proxy visual is controlled based on the result of the determination. [0033] In one aspect, the position of the part of the human hand is specified by a control point position. In another aspect, the position of the part of the human hand is an index finger position.)
translating the data into a command causing a manipulation of the target object by a robotic tool, wherein the manipulation of the target object emulates the data based, at least in part, on at least one of the representation of the object or the representation of the target object, (pars. [0058] A stereoscopic endoscope 112 mounted on manipulator 113 provides an image of surgical site 103 within patient 111 that is displayed on display 187 and on the display in surgeon's console 185. The image includes images of any of the slave surgical devices in the field of view of stereoscopic endoscope 112. The interactions between the master tool manipulators on surgeon's console 185, the slave surgical devices and stereoscopic endoscope 112 are the same as in a known system and so are known to those knowledgeable in the field. [0059] In one aspect, surgeon 181 moves at least one digit of the surgeon's hand, which in turn causes a sensor in master finger tracking grip 170 to change location. Hand tracking transmitter 175 provides a field so that the new position and orientation of the digit is sensed by master finger tracking grip 170. The new sensed position and orientation are provided to hand tracking controller 130. And [0060]) and
wherein the translating includes determining at least one of a path of motion or a magnitude of a force that emulates the data representing the manipulation of the target object. (pars. [0061] In another aspect, hand tracking of at least a part of the hand of surgeon 181 or of the hand of surgeon 180 is used by hand tracking controller 130 to determine whether a hand gesture pose is made by the surgeon, or a combination of a hand gesture pose and a hand gesture trajectory is made by the surgeon. Each hand gesture pose and each trajectory combined with a hand gesture pose is mapped to a different system command. The system commands control, for example, system mode changes and control other aspects of minimally invasive surgical system 100. [0062] For example in place of using foot pedals and switches as in a known minimally invasive surgical system, a hand gesture, either a hand gesture pose or a hand gesture trajectory, is used (i) to initiate following between motions of the master tool grip and the associated teleoperated slave surgical instrument, (ii) for master clutch activation (which decouples master control of the slave instrument), (iii) for endoscopic camera control (which allows the master to control endoscope movement or features, such as focus or electronic zoom), (iv) for robotic arm swap (which swaps a particular master control between two slave instruments), and pars. [0063] - [0064])
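For illustration only, the translation recited in claim 1 amounts to mapping a detected hand motion onto a tool path and a force magnitude. A minimal hypothetical sketch (the names, types, and values below are assumptions and are not taken from the claims or from Itkowitz) is:

```python
# Hypothetical illustration: translating a captured hand motion into a robot command
# that carries an emulated path of motion and a force magnitude.
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class RobotCommand:
    path: List[Point3D]      # emulated path of motion for the tool tip
    force_newtons: float     # magnitude of force to apply to the target object

def translate_motion(hand_path: List[Point3D], grip_closure: float,
                     max_force: float = 10.0) -> RobotCommand:
    """Translate a captured hand path and grip closure (0..1) into a tool command."""
    # The tool path simply emulates the detected hand path (it could also be smoothed or scaled).
    # The force magnitude is derived from how far the grasping motion has closed.
    return RobotCommand(path=list(hand_path), force_newtons=grip_closure * max_force)

if __name__ == "__main__":
    cmd = translate_motion([(0.0, 0.0, 0.0), (0.1, 0.0, 0.05)], grip_closure=0.4)
    print(cmd)  # RobotCommand(path=[...], force_newtons=4.0)
```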
Regarding claim 3, Itkowitz teaches wherein the translating of the data comprises: interpreting the at least a portion of the motion of the object as a parameter of a robotic tool translation based, at least in part, on at least one of the capture of the object in the 3D space or a 3D model of the object. (pars. [0080] & [0088])
Regarding claim 4, Itkowitz teaches wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on the degree of motion of the object. (par. [0132])
Regarding claim 5, Itkowitz teaches wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on an amplification function of the degree of motion. (par. [0055])
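For illustration only, the limitations of claims 4 and 5 describe gating on a degree of motion relative to a range of motion and then applying an amplification function to set an actuation value. A hypothetical sketch (the function names, threshold, and gain are assumptions) is:

```python
# Hypothetical illustration: gate on normalized degree of motion, then amplify.
def actuation_value(degree_of_motion: float, range_of_motion: float,
                    threshold: float = 0.1, gain: float = 2.5) -> float:
    """Return an actuation parameter once the normalized motion exceeds a threshold."""
    normalized = degree_of_motion / range_of_motion   # fraction of the range of motion
    if normalized < threshold:                        # below threshold: no actuation
        return 0.0
    return min(1.0, gain * normalized)                # amplified, clamped to full scale

print(actuation_value(0.05, 1.0))  # 0.0  (below threshold)
print(actuation_value(0.30, 1.0))  # 0.75 (amplified value)
```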
Regarding claim 9, Itkowitz teaches wherein the translating of the data comprises: using a 3D model of the object to determine a torsion of the motion of the object; and interpreting the torsion as a parameter of an actuation of the robotic tool. (par. [0080] Once an object type is determined, the 3-D model can be refined using constraints based on characteristics of the object type. For instance, a human hand would characteristically have five fingers (not six), and the fingers would be constrained in their positions and angles relative to each other and to a palm portion of the hand. Any ellipses in the model that are inconsistent with these constraints can be discarded. In some embodiments, block 622 can include recomputing all or portions of the per-slice analysis (block 604) and/or cross-slice correlation analysis (block 620) subject to the type-based constraints. In some instances, applying type-based constraints may cause deterioration in accuracy of reconstruction if the object is misidentified. (Whether this is a concern depends on implementation, and type-based constraints can be omitted if desired. [0088])
Regarding claim 10, Itkowitz teaches wherein the translating of the data comprises:
using a 3D model of the object to detect a degree of motion of the object based, (par. [0052] As used herein, a location includes a position and an orientation) at least in part, on a torsion of the motion of the object; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on the degree of motion of the object. (par. [0061] In another aspect, hand tracking of at least a part of the hand of surgeon 181 or of the hand of surgeon 180 is used by hand tracking controller 130 to determine whether a hand gesture pose is made by the surgeon, or a combination of a hand gesture pose and a hand gesture trajectory is made by the surgeon. Each hand gesture pose and each trajectory combined with a hand gesture pose is mapped to a different system command. The system commands control, for example, system mode changes and control other aspects of minimally invasive surgical system 100.)
Regarding claim 11, Itkowitz teaches wherein the translating of the data comprises:
using a 3D model of the object to detect a degree of motion of the object based, at least in part, on a torsion of the object; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on an amplification function of the degree of motion of the object. (pars. [0064] and [0075])
Regarding claim 13, the rejection of claim 1 is incorporated into the rejection of claim 13, and only the further limitations will be addressed below.
Itkowitz teaches a system comprising:
a sensor; (fig. 1; hand tracking controller 130 detects a hand gesture pose, or a hand gesture pose and a hand gesture trajectory)
a robotic tool; (fig. 1; 113) and
a processor (fig. 1; 151) coupled to memory, (fig. 1; 132 and par. [0183]) the memory being loaded with computer instructions that, upon execution on the processor, cause the processor to implement operations (par. [0183] In one aspect, processes 810 to 850 are performed by hand tracking controller 130 (FIG. 1). Controller 130 executes finger tracking module 135 on a processor 131 to perform processes 810 to 850. In this aspect, finger tracking module 135 is stored in memory 132. Process 850 sends a system event to system controller 140 that in turn performs process 860. par. [0184] It is to be appreciated that hand tracking controller 130 and system controller 140 may be implemented in practice by any combination of hardware, software that is executed on a processor, and firmware. Also, functions of these controllers, as described herein, may be performed by one unit, or divided up among different components, each of which may be implemented in turn by any combination of hardware, software that is executed on a processor, and firmware. When divided up among different components, the components may be centralized in one location or distributed across system 100 for distributed processing purposes.) (see the claim 1 rejection above for the rest of the rejection).
Regarding claim 14, the rejection of claim 1 is incorporated into the rejection of claim 14, and only the further limitations will be addressed below.
Itkowitz teaches a non-transitory computer readable storage medium impressed with computer instructions that, upon execution by a processor, (fig. 1; 151) implement operations (par. [0280] Herein, a computer program product includes a medium configured to store computer readable code needed for any one or any combination of the processes described with respect to hand tracking or in which computer readable code for any one or any combination of processes described with respect to hand tracking is stored. Some examples of computer program products are CD-ROM discs, DVD discs, flash memory, ROM cards, floppy discs, magnetic tapes, computer hard drives, servers on a network and signals transmitted over a network representing computer readable program code. A non-transitory tangible computer program product includes a non-transitory tangible medium configured to store computer readable instructions for any one of, or any combination of processes described with respect to various controllers or in which computer readable instructions for any one of, or any combination of processes described with respect to the various controllers are stored. Non-transitory tangible computer program products are CD-ROM discs, DVD discs, flash memory, ROM cards, floppy discs, magnetic tapes, computer hard drives and other non-transitory physical storage mediums.) (see the claim 1 rejection above for the remaining limitations).
Regarding claim 18, the claim is rejected as a method as applied to claim 1 above because the scope and contents of the recited limitations are substantially the same.
Regarding claim 19, the claim is rejected as a system as applied to claim 13 above because the scope and contents of the recited limitations are substantially the same.
Regarding claim 20, the claim is rejected as applied to claim 14 above because the scope and contents of the recited limitations are substantially the same.
Regarding claim 21, Itkowitz teaches issuing the determined command to the robotic tool to apply the force to a workpiece. (par. [0055])
Regarding claim 22, Itkowitz teaches executing the command by at least one of an actuation of the path of motion or an actuation of the magnitude of force. (pars. [0061] - [0064])
Claim Rejections - 35 USC § 103
10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
11. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Itkowitz et al. “US 2012/0071891” in view of Holz “US 2013/0182079”.
Regarding claim 2, Itkowitz teaches all the limitations of claim 1 but does not explicitly teach wherein the determining of the data comprises capturing edge information of at least a portion of the object and computing an attribute of a 3D model of the object.
However, Holz teaches wherein the determining of the data comprises capturing edge information of at least a portion of the object and computing an attribute of a 3D model of the object. (pars. [0032] FIG. 1 is a simplified illustration of a motion capture system 100 according to an embodiment of the present invention. System 100 includes two cameras 102, 104 arranged such that their fields of view (indicated by broken lines) overlap in region 110. Cameras 102 and 104 are coupled to provide image data to a computer 106. Computer 106 analyzes the image data to determine the 3-D position and motion of an object, e.g., a hand 108, that moves in the field of view of cameras 102, 104. [0044])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Itkowitz with the teachings of Holz by using cameras to capture images of the object and analyzing the images to detect object edges. (par. [0031])
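For illustration only, the edge-information capture attributed to Holz can be pictured as computing an edge map from a camera image; a hypothetical NumPy sketch (the synthetic image, function name, and threshold are assumptions, not code from the reference) is:

```python
# Hypothetical illustration: extract edge information from an image with a simple
# gradient magnitude, as one way edge data could feed a 3D model of a hand.
import numpy as np

def edge_map(image: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Return a boolean edge map from horizontal/vertical intensity gradients."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()

# Synthetic test image: a bright square (stand-in for a hand silhouette) on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = edge_map(img)
print(int(edges.sum()), "edge pixels found")  # edge pixels lie along the square's border
```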
12. Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Itkowitz “US 2012/0071891” in view of Fleischmann et al. “US 2014/0232631”.
Regarding claim 6, Itkowitz teaches all the limitations of claim 1, but Itkowitz does not explicitly teach wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and
interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a polynomial function of the degree of motion.
However, Fleischmann teaches wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a polynomial function of the degree of motion. (par. [0076])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Itkowitz with the teachings of Fleischmann to track the user's hands and fingers and then analyze this tracked data to identify gestures performed by the user. (par. [0026])
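For illustration only, a polynomial function of the degree of motion, as recited in claim 6, could take a form such as the following hypothetical sketch (the cubic curve and threshold are assumptions):

```python
# Hypothetical illustration: a polynomial (cubic) mapping from degree of motion
# to an actuation parameter, giving fine control for small motions.
def polynomial_actuation(degree_of_motion: float, threshold: float = 0.1) -> float:
    """Cubic response: small motions give fine control, larger motions ramp up quickly."""
    if degree_of_motion < threshold:
        return 0.0
    return min(1.0, degree_of_motion ** 3)

for d in (0.05, 0.5, 0.9):
    print(d, "->", round(polynomial_actuation(d), 3))  # 0.0, 0.125, 0.729
```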
Regarding claim 12, Itkowitz teaches all the limitations of claim 1, but Itkowitz does not explicitly teach wherein the translating of the data comprises:
using a 3D model of the object to detect a degree of motion of the object based, at least in part, on a torsion of the object; and
interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a polynomial function of the degree of motion of the object.
However, Fleischmann teaches wherein the translating of the data comprises:
using a 3D model of the object to detect a degree of motion of the object based, at least in part, on a torsion of the object; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a polynomial function of the degree of motion of the object. (par. [0076])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Itkowitz with the teachings of Fleischmann to track the user's hands and fingers and then analyze this tracked data to identify gestures performed by the user. (par. [0026])
13. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Itkowitz “US 2012/0071891” in view of Moskalev “US 2016/0224202”.
Regarding claim 7, Itkowitz teaches all the limitations of claim 1, but Itkowitz does not explicitly teach wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and
interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a transcendental function of the degree of motion.
However, Moskalev teaches wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a transcendental function of the degree of motion. (pars. [0009] and [0028])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Itkowitz with the teachings of Moskalev so that processing translates the user's hand and/or finger movements into gestures, which are used to control the large screen 440 in front of the user. (par. [0033])
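For illustration only, a transcendental function of the degree of motion, as recited in claim 7, could be a smoothly saturating curve such as tanh; a hypothetical sketch (the scale factor and threshold are assumptions):

```python
# Hypothetical illustration: a transcendental (tanh) mapping from degree of motion
# to an actuation parameter that saturates smoothly toward full scale.
import math

def transcendental_actuation(degree_of_motion: float, threshold: float = 0.1) -> float:
    return 0.0 if degree_of_motion < threshold else math.tanh(3.0 * degree_of_motion)

print(round(transcendental_actuation(0.5), 3))  # ~0.905
```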
14. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Itkowitz “US 2012/0071891” in view of Feinstein “US 2015/029093”.
Regarding claim 8, Itkowitz teaches all the limitations of claim 1, but Itkowitz does not explicitly teach wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and
interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a step function of the degree of motion.
However, Feinstein teaches wherein the translating of the data comprises:
detecting that the motion of the object includes a degree of motion of the object that satisfies a threshold that is based, at least in part, on at least one of a range of motion of the object or a range of motion of the robotic tool; and interpreting a value of a parameter of an actuation of the robotic tool based, at least in part, on a step function of the degree of motion. (par. [0069])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention of Itkowitz with the teachings of Feinstein to provide filtering for the sensor data to reduce the jumpy behavior of the rotation measurements. (par. [0063])
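For illustration only, a step function of the degree of motion, as recited in claim 8, maps the motion to one of a few discrete actuation levels once a threshold is met; a hypothetical sketch (the levels and threshold are assumptions):

```python
# Hypothetical illustration: a step function from degree of motion to discrete actuation levels.
def step_actuation(degree_of_motion: float, threshold: float = 0.1) -> float:
    if degree_of_motion < threshold:
        return 0.0    # no actuation below the threshold
    if degree_of_motion < 0.5:
        return 0.5    # low actuation level
    return 1.0        # full actuation level

print([step_actuation(d) for d in (0.05, 0.3, 0.8)])  # [0.0, 0.5, 1.0]
```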
Contact Information
15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sosina Abebe, whose telephone number is (571) 270-7929. The examiner can normally be reached Monday - Friday from 9:00-5:30. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. The fax phone number for the organization where this application or proceeding is assigned is 703-872-9306. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/S.A/Examiner, Art Unit 2626
/TEMESGHEN GHEBRETINSAE/Supervisory Patent Examiner, Art Unit 2626, 3/24/26