Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Amendments to the claims have been recorded.
Response to Amendment
Applicant’s arguments with respect to the claims have been considered but are moot because the arguments do not apply to the new references being used in the current rejection.
Applicant argues that the double patenting rejection should be withdrawn because the current claims recite switching operations. The Examiner notes that if a different mode or function has been selected via the touch screen, the switching operation must be performed based on that selection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 9 is rejected under 35 U.S.C. 101 because the independent claim recites a computer program comprising a computer-readable non-transitory storage medium and program instructions that cause the processor to perform a method. Although the program may be program instructions stored on the non-transitory storage medium, there is no indication that the program itself is executable by a processor to perform the method. The Examiner recommends amending the claim to recite “a non-transitory computer readable medium storing a program.”
Subject Matter Eligibility of Computer Readable Media
The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2.
The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. § 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation "non-transitory" to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation "non-human" to a claim covering a multi-cellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).
Claim Interpretation
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:
control system configured to determine
reception unit configured to receive
switching unit configured to switch
notification unit configured to notify
central processing unit configured to…
in the claims.
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Nonstatutory Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
The claims of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the claims of the co-pending applications identified below. Although the conflicting claims are not identical, they are not patentably distinct from each other.
This is a provisional obviousness-type double patenting rejection because the conflicting claims have not in fact been patented.
The double patenting rejection will not be revisited and will be held in abeyance until allowable subject matter is found.
Instant Application:
1. A control system configured to determine an operation to be performed by a robot based on handwritten input information input to an interface and control the operation performed by the robot, the control system comprising: a handwritten input information reception unit configured to receive an input of the handwritten input information; and a switching unit configured to switch a plurality of input modes of the handwritten input information for causing the robot to perform different types of operations.
2. The control system according to claim 1, further comprising a notification unit configured to notify a user who inputs the handwritten input information about a current input mode.
3. The control system according to claim 1, wherein the handwritten input information includes trajectory information of the input performed using a finger, a stylus pen, a pointing device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, or a Mixed Reality (MR) device.
4. The control system according to claim 1, wherein the handwritten input information is input to a captured image obtained by capturing an environment in which the robot is located, and the plurality of input modes include a first mode for changing a direction from which the captured image is captured, a second mode for moving the robot, and a third mode for causing an end effector to perform a grasping operation.
5. The control system according to claim 4, wherein when the captured image does not include an area in which the robot is movable, the switching unit switches to a mode other than the second mode, and when the end effector is grasping an object to be grasped, the switching unit switches to a mode other than the third mode.
6. The control system according to claim 1, wherein the switching unit switches the plurality of input modes in response to a selection operation.
7. The control system according to claim 1, wherein the handwritten input information is input to a captured image obtained by capturing an environment in which the robot is located, and the switching unit inputs the captured image to a learning model and switches to a mode indicated by information output from the learning model.
8. A control method for determining an operation to be performed by a robot based on handwritten input information input to an interface and controlling the operation performed by the robot, the control method comprising: receiving an input of the handwritten input information to a captured image; and switching a plurality of input modes of the handwritten input information for causing the robot to perform different types of operations.
9. A non-transitory computer readable medium storing a program for causing a computer to perform a control method for determining an operation to be performed by a robot based on handwritten input information input to an interface and controlling the operation performed by the robot, the control method comprising: receiving an input of the handwritten input information to a captured image; and switching a plurality of input modes of the handwritten input information for causing the robot to perform different types of operations.
Co-pending Application US 20210178598:
1. A remote control system configured to remotely control a device to be operated comprising an end effector, the remote control system comprising: an imaging unit configured to shoot an environment in which the device to be operated is located; a recognition unit configured to recognize objects that can be grasped by the end effector based on a shot image of the environment shot by the imaging unit; an operation terminal configured to display the shot image and receive handwritten input information input to the displayed shot image; and an estimation unit configured to, based on the objects that can be grasped which the recognition unit has recognized and the handwritten input information input to the shot image, estimate an object to be grasped which has been requested to be grasped by the end effector from among the objects that can be grasped and estimate a way of performing a grasping motion by the end effector, the grasping motion having been requested to be performed with regard to the object to be grasped.
2. The remote control system according to claim 1, wherein the handwritten input information includes a first image that simulates the way of performing the grasping motion with regard to the object to be grasped.
3. The remote control system according to claim 1, wherein the estimation unit further estimates a level of the grasping motion which has been requested to be performed with regard to the object to be grasped based on the objects that can be grasped which the recognition unit has recognized and the handwritten input information input to the shot image.
4. The remote control system according to claim 3, wherein the handwritten input information includes the first image that simulates the way of performing the grasping motion with regard to the object to be grasped, and an image that shows the level of the grasping motion.
5. The remote control system according to claim 1, wherein when a plurality of the handwritten input information pieces are input to the shot image, the estimation unit estimates, for each of the plurality of the handwritten input information pieces, the object to be grasped and the way of performing the grasping motion which has been requested to be performed with regard to the object to be grasped based on the objects that can be grasped which the recognition unit has recognized and the plurality of the handwritten input information pieces input to the shot image.
6. The remote control system according to claim 5, wherein the estimation unit further estimates an order of performing a plurality of the grasping motions which have been requested by the respective plurality of the handwritten input information pieces based on the objects that can be grasped which the recognition unit has recognized and the plurality of the handwritten input information pieces input to the shot image.
7. The remote control system according to claim 6, wherein the handwritten input information includes the first image that simulates the way of performing the grasping motion with regard to the object to be grasped, and an image that shows the order of performing the grasping motions.
8. The remote control system according to claim 5, wherein the estimation unit further estimates the order of performing the plurality of the grasping motions which have been requested by the respective plurality of the handwritten input information pieces based on an order in which the plurality of the handwritten input information pieces are input.
9. The remote control system according to claim 2, wherein the estimation unit estimates the way of performing the grasping motion which has been requested to be performed with regard to the object to be grasped by using a learned model that uses the first image of the handwritten input information as an input image.
10. A remote control method performed by a remote control system configured to remotely control a device to be operated comprising an end effector, the remote control method comprising: shooting an environment in which the device to be operated is located; receiving, by an operation terminal displaying a shot image of the environment, handwritten input information input to the displayed shot image; recognizing objects that can be grasped by the end effector based on the shot image; and based on the objects that can be grasped and the handwritten input information input to the shot image, estimating an object to be grasped which has been requested to be grasped by the end effector from among the objects that can be grasped and estimating a way of performing a grasping motion by the end effector, the grasping motion having been requested to be performed with regard to the object to be grasped.
11. A non-transitory computer readable medium storing a program for causing a computer to: recognize, based on a shot image of an environment in which a device to be operated comprising an end effector is located, objects that can be grasped by the end effector; and based on handwritten input information input to the shot image displayed on an operation terminal and the objects that can be grasped, estimate an object to be grasped which has been requested to be grasped by the end effector from among the objects that can be grasped and estimate a way of performing a grasping motion by the end effector, the grasping motion having been requested to be performed with regard to the object to be grasped.
See also the following co-pending applications:
US 20210178581, claims 1-5 (shot images and grasping); and
US 20240075628, claims 1-13 (movable area and shot images).
A patentee or applicant may disclaim or dedicate to the public the entire term, or any terminal part of the term, of a patent. 35 U.S.C. 253. The statute does not provide for a terminal disclaimer of only a specified claim or claims. The terminal disclaimer must operate with respect to all claims in the patent. MPEP 804.02.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The claims are generally narrative and indefinite, failing to conform to current U.S. practice. They appear to be a literal translation into English from a foreign document and are replete with grammatical and idiomatic errors.
Claim 5 recites the limitation “when the end effector is grasping an object to be grasped.” This limitation is indefinite because the object cannot logically remain “to be grasped” while the end effector is already grasping it; the recited action has already occurred.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Buehler US 20130346348 in view of Pinter US 20110288682, and further in view of Hoffman US 9,283,674.
1. A control system configured to determine an operation to be performed by a robot based on handwritten input information input to an interface and control the operation performed by the robot, the control system comprising:
a central processing unit configured to:
receive an input of the handwritten input information; and (Buehler, para. 20: "the robot receives a visual outline drawn by the user on a drawing interface")
switch a plurality of input modes of the handwritten input information for causing the robot to perform different types of control operations, (Buehler, para. 20: recognizing the object [recognizing types of operation switched from handwritten input] as a member of one of multiple object classes defined in memory)
the handwritten input information is input to a captured image obtained by capturing an environment in which the robot is located, (Buehler, para. 20: the object within or into the field of view of the camera)
the plurality of input modes include a first mode for changing a direction from which the captured image is captured, (Buehler, para. 36: "It has one camera 109 in each of its two wrists so that the robot 100 can 'see' objects it is about to pick up and adjust its grippers 106 accordingly." See also para. 35: "The robot 100 also has a head with a screen 108 [with camera 111] and status lights, which serve to inform the user of the robot's state. The head and screen 108 [with camera 111] can rotate about a vertical axis, and nod about a horizontal axis running parallel to the screen 108.")
a second mode for moving the robot, and (Buehler, para. 36: the robot picks up the object)
a third mode for causing an end effector to perform a grasping operation. (Buehler, para. 36: "and adjust its grippers 106 accordingly")
Buehler teaches all of the limitations of claim 1 but does not teach
the handwritten input information includes
a rotation of a camera,
a traveling route of the robot, and
However, Pinter teaches
the handwritten input information includes (Pinter, Fig. 9: touchscreen)
a rotation of a camera, (Pinter, Fig. 13 and Fig. 9, element 248)
a traveling route of the robot, and (Pinter, Fig. 9, element 250; para. 64)
Buehler and Pinter teach all of the limitations of claim 1 but do not teach
a direction from which a hand grasps an object,
However, Hoffman teaches
a direction from which a hand grasps an object. (Hoffman, Fig. 3B and Fig. 4B, element 274B [the direction can also be adjusted based on drive or tong direction])
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Buehler, Pinter, and Hoffman, with a reasonable expectation of success, in order to provide telepresence and remote operation of a mobile robot, such that the claimed invention as a whole would have been obvious.
2. The control system according to claim 1, wherein the central processing unit is further configured to notify a user who inputs the handwritten input information about a current input mode. (Buehler, para. 20: "the robot establishes the visual outline of the object by displaying [notify a user of current input mode, i.e., camera view] an outline in the camera view")
3. The control system according to claim 1, wherein the handwritten input information includes trajectory information of the input performed using
a finger, (Buehler, para. 57 [also included in, or in addition to, the handwritten input]: "The user may also indicate the object of interest simply by pointing at it with his finger" [finger and pointing device]. See also para. 58: "(e.g., by touching [finger, pointing device, or stylus pen] it on a touch screen)")
a stylus pen, a pointing device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, or a Mixed Reality (MR) device.
4. Canceled
5. The control system according to claim 1, wherein when the captured image does not include an area in which the robot is movable, (Buehler, para. 36: "one camera 109 in each of its two wrists so that the robot 100 can 'see' objects it is about to pick up" [movement of the robot is not captured in the wrist cameras, only movement of the surroundings])
the central processing unit is configured to switch to a mode other than the second mode, and (Buehler, para. 36: "adjust its grippers" [other than pick up])
when the end effector is grasping the object, (Buehler, para. 36; also para. 35: allows the robot to grasp. See 112)
the central processing unit is configured to switch to a mode other than the third mode. (Buehler, para. 35: lift [other than adjust gripper])
6. The control system according to claim 1, wherein the central processing unit is configured to switch the plurality of input modes in response to a selection operation. (Buehler, para. 35: "The robot 100 also has a head with a screen 108 and status lights, which serve to inform the user of the robot's state. The head and screen 108 can rotate [switching directions of a plurality of inputs, i.e., camera 111 and touch screen 108] about a vertical axis, and nod about a horizontal axis running parallel to the screen 108.")
7. The control system according to claim 1,
wherein the handwritten input information is input to the captured image obtained by capturing the environment in which the robot is located, and (Buehler, para. 20: "receiving user feedback related to the outline (which the user may provide, e.g., by repositioning the camera that provides the camera view relative to the object)"; "the robot receives a visual outline drawn by the user on a drawing interface")
the central processing unit is configured to input the captured image to a learning model and (Buehler, para. 19: "a robot-implemented method of learning to visually recognize objects")
switch to a mode indicated by information output from the learning model. (Buehler, para. 19: "selecting a vision model from among a plurality of computer-implemented vision models and generating a representation of the object in accordance with the selected model")
8. Claim 8 is rejected using the same rejections as made to claim 1.
9. Claim 9 is rejected using the same rejections as made to claim 1.
10. (New) The control system according to claim 1: Pinter teaches wherein at least one of the different types of control operations includes moving a carriage portion of the robot, (Pinter, Fig. 3)
the carriage portion supporting two driving wheels and (Pinter: the two rear wheels; see also Fig. 1 for all three wheels)
a caster, each of which is in contact with a traveling surface, (Pinter, Fig. 1: front wheel)
inside a cylindrical housing. (Pinter: inside 112)
See also Ebrahimi, cited below.
Citation of Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hoffman US 20150190925
Rosenstein US 20180281201
Ebrahimi US 20220187841 (see Fig. 87B with regard to claim 10)
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIHAR A KARWAN whose telephone number is (571) 272-2747. The examiner can normally be reached Monday through Friday, 11 a.m. to 7 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached on 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIHAR A KARWAN/Examiner, Art Unit 3664