Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of US Patent No. 12124295 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are broader versions of the patented claims: all limitations of the instant claims are included in the patented claims, and any product infringing the patented claims would also infringe the instant claims.
The table below compares claims 1-20 of the instant application with claims 1-20 of US Patent No. 12124295 B2. The limitations recited in the US Patent that correspond to the limitations recited in the instant application have been bolded for Applicant's convenience.
Claim limitation of the instant App.
Claim limitation of US Patent No. 12124295 B2
1. An apparatus, comprising: an artificial body shaped to represent at least a portion of a being; and a mounting interface coupled to the artificial body, the mounting interface configured to couple a computing device to the artificial body.
1. An apparatus, comprising: an artificial body shaped to represent at least a portion of a being and comprising at least one robotic appendage; a mounting interface coupled to a face portion of the artificial body, the mounting interface configured to removably couple a computing device to the artificial body such that the computing device is selectively removable from the artificial body, the mounting interface compatible with various types of computing devices that are selectively removable from the mounting interface; and a communication module configured to: receive a live image or video of a remote user from a remote device communicatively coupled to the computing device; display the live image or video of the remote user on the computing device that is removably coupled to the mounting interface on the face portion of the artificial body; receive input from the remote device via a remote interface separate from the mounting interface and displaying a plurality of actions and an identification of a robotic appendage of the at least one robotic appendage configured to perform each action of the plurality of actions, the input comprising a selection of an action to be performed by the robotic appendage; and actuate the robotic appendage to perform the action received from the remote device.
2. The apparatus of claim 1, wherein the computing device comprises a mobile device.
2. The apparatus of claim 1, wherein the computing device comprises a mobile device.
3. The apparatus of claim 1, wherein the being represented by the artificial body comprises one or more of a humanoid, an animal, and a fictional being.
3. The apparatus of claim 1, wherein the being represented by the artificial body comprises one or more of a humanoid, an animal, and a fictional being.
4. The apparatus of claim 1, wherein the artificial body comprises at least a portion of an inanimate mannequin.
4. The apparatus of claim 1, wherein the artificial body comprises at least a portion of an inanimate mannequin.
5. The apparatus of claim 1, wherein the artificial body is at least partially robotic and comprises one or more at least partially robotic appendages configured to perform scripted actions defined for robotic movement of the one or more at least partially robotic appendages.
5. The apparatus of claim 1, wherein the artificial body is at least partially robotic and the plurality of actions comprise a plurality of scripted actions defined for robotic movement of the at least one robotic appendage.
6. The apparatus of claim 5, wherein the at least partially robotic artificial body is configured to receive control commands over a data network, the control commands indicating which of the scripted actions the one or more at least partially robotic appendages perform.
6. The apparatus of claim 5, wherein the at least partially robotic artificial body is configured to receive control commands over a data network, the control commands indicating which of the plurality of scripted actions the at least one robotic appendage performs.
7. The apparatus of claim 5, wherein the at least partially robotic artificial body comprises one or more sensors, the at least partially robotic artificial body configured to select and perform one of the scripted actions using the one or more at least partially robotic appendages based on data from the one or more sensors.
7. The apparatus of claim 5, wherein the at least partially robotic artificial body comprises one or more sensors, the at least partially robotic artificial body configured to select and perform one of the scripted actions using the at least one robotic appendage based on data from the one or more sensors.
8. The apparatus of claim 5, wherein the scripted actions comprise one or more of: the one or more at least partially robotic appendages opening a gate to enter a user's property; the at least partially robotic artificial body using the one or more at least partially robotic appendages to climb one or more stair steps to a user's residence; the one or more at least partially robotic appendages ringing a doorbell associated with a door of a user; a head movement of the at least partially robotic artificial body in response to an interaction with a user sensed by the one or more sensors; and shaking a hand of a user.
8. The apparatus of claim 1, wherein the action comprises one or more of: the at least one robotic appendage opening a gate to enter a user's property; the artificial body using the at least one robotic appendage to climb one or more stair steps to a user's residence; the at least one robotic appendage ringing a doorbell associated with a door of a user; and shaking a hand of a user via the at least one robotic appendage.
9. The apparatus of claim 1, wherein the mounting interface is coupled to one or more of a face portion, a hand portion, and a chest portion of the artificial body.
9. The apparatus of claim 1, wherein the mounting interface is coupled to one or more of a hand portion and a chest portion of the artificial body.
10. The apparatus of claim 1, wherein the mounting interface is configured to removably couple the computing device to the artificial body, the mounting interface comprising one or more of a magnetic interface, a mechanical clamp shaped to releasably receive the computing device, and a case for the computing device coupled to the artificial body.
10. The apparatus of claim 1, wherein the mounting interface comprises one or more of a magnetic interface, a mechanical clamp shaped to releasably receive the computing device, and a case for the computing device coupled to the artificial body.
11. The apparatus of claim 1, wherein the computing device comprises one or more of a flexible, curved, electronic display screen and an at least semi-transparent surface with a projector positioned to project an image onto a rear of the semi-transparent surface.
11. The apparatus of claim 1, wherein the computing device comprises one or more of a flexible, curved, electronic display screen and an at least semi-transparent surface with a projector positioned to project an image onto a rear of the semi-transparent surface.
12. The apparatus of claim 1, wherein the computing device comprises a touchscreen, the touchscreen facing away from the artificial body in response to the mounting interface coupling the computing device to the artificial body, the computing device providing a user interface on the touchscreen.
12. The apparatus of claim 1, wherein the computing device comprises a touchscreen, the touchscreen facing away from the artificial body in response to the mounting interface coupling the computing device to the artificial body, the computing device providing a user interface on the touchscreen.
13. The apparatus of claim 12, wherein the user interface comprises a document signature interface allowing a user to digitally sign an electronic document displayed on the touchscreen of the computing device.
13. The apparatus of claim 12, wherein the user interface comprises a document signature interface allowing a user to digitally sign an electronic document displayed on the touchscreen of the computing device.
14. The apparatus of claim 1, wherein the mounting interface comprises one or more electrical connections between the artificial body and the computing device, the one or more electrical connections comprising one or more of an electrical charging interface in electrical communication with a battery of the artificial body and an audio connection with one or more of an audio speaker and a microphone of the artificial body.
14. The apparatus of claim 1, wherein the mounting interface comprises one or more electrical connections between the artificial body and the computing device, the one or more electrical connections comprising one or more of an electrical charging interface in electrical communication with a battery of the artificial body and an audio connection with one or more of an audio speaker and a microphone of the artificial body.
15. A system, comprising: a computing device; an artificial body shaped to represent at least a portion of a being; and a mounting interface coupled to the artificial body, the mounting interface coupling the computing device to the artificial body.
15. A system, comprising: a computing device; an artificial body shaped to represent at least a portion of a being and comprising at least one robotic appendage; a mounting interface coupled to a face portion of the artificial body, the mounting interface removably coupling the computing device to the artificial body such that the computing device is selectively removable from the artificial body, the mounting interface compatible with various types of computing devices that are selectively removable from the mounting interface; and a communication module configured to: receive a live image or video of a remote user from a remote device communicatively coupled to the computing device; display the live image or video of the remote user on the computing device that is removably coupled to the mounting interface on the face portion of the artificial body; receive input from the remote device via a remote interface separate from the mounting interface and displaying a plurality of actions and an identification of a robotic appendage of the at least one robotic appendage configured to perform each action of the plurality of actions, the input comprising a selection of an action to be performed by the robotic appendage; and actuate the robotic appendage to perform the action received from the remote device.
16. The system of claim 15, wherein the computing device comprises an electronic display configured to display a remote user from a different location than the artificial body to a local user within viewing proximity of the electronic display.
16. The system of claim 15, wherein the computing device comprises an electronic display configured to display a remote user from a different location than the artificial body to a local user within viewing proximity of the electronic display and wherein receiving the input from the remote device via the remote interface comprises receiving the input from the remote user.
17. The system of claim 16, further comprising a plurality of additional artificial bodies and mounting interfaces each coupling an additional computing device to the additional artificial bodies.
17. The system of claim 16, further comprising a plurality of additional artificial bodies and mounting interfaces each removably coupling an additional computing device to the additional artificial bodies.
18. The system of claim 17, wherein the single remote user is selectively displayable on screens of each of the additional computing devices.
18. The system of claim 17, wherein the single remote user is selectively displayable on screens of each of the additional computing devices.
19. The system of claim 17, wherein screens of the additional computing devices each display different remote users.
19. The system of claim 17, wherein screens of the additional computing devices each display different remote users.
20. An apparatus, comprising: means for artificially representing at least a portion of a being; and means for coupling a computing device to the means for artificially representing at least a portion of a being.
20. An apparatus, comprising: means for artificially representing at least a portion of a being, the being comprising at least one robotic appendage; means for removably coupling a computing device to a face portion of the means for artificially representing at least a portion of a being such that the computing device is selectively removable from the means for artificially representing at least a portion of a being, the mounting interface compatible with various types of computing devices that are selectively removable from the mounting interface; and means for receiving a live image or video of a remote user from a remote device communicatively coupled to the computing device; and means for displaying the live image or video of the remote user on the computing device that is removably coupled to the mounting interface on the face portion of the artificial body; means for receiving input from the remote device via a remote interface separate from the mounting interface and displaying a plurality of actions and an identification of a robotic appendage of the at least one robotic appendage configured to perform each action of the plurality of actions, the input comprising a selection of an action to be performed by the robotic appendage; and means for actuating the robotic appendage to perform the action received from the remote device.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “means for artificially representing at least a portion of a being” and “means for coupling a computing device to the means for artificially representing at least a portion of a being” in claim 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
According to the specification, the "means for artificially representing at least a portion of a being" can be a humanoid (e.g., a mannequin, a human, a humanoid alien, a sasquatch, or the like) ([0036]), and the "means for coupling" can be adhesives, glues, welds, integrated housings, hooks, clips, clamps, notches, hook-and-loop materials, or fasteners ([0076]).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-12, 14-16 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mahoor et al. (US Pub. No. 2020/0114521 A1, hereinafter Mahoor).
Regarding Claim 1, Mahoor discloses (figs. 1-5) an apparatus, comprising: an artificial body (100) shaped to represent at least a portion of a being; and a mounting interface (user interface, [0029]) coupled to the artificial body, the mounting interface configured to couple a computing device (125) to the artificial body.
Regarding Claim 2, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the computing device comprises a mobile device ([0024]).
Regarding Claim 3, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the being represented by the artificial body comprises one or more of a humanoid, an animal, and a fictional being ([0029]).
Regarding Claim 4, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the artificial body comprises at least a portion of an inanimate mannequin ([0029, companion robot]).
Regarding Claim 5, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the artificial body is at least partially robotic and comprises one or more at least partially robotic appendages configured to perform scripted actions (715, 720, 730, 740 and 750) defined for robotic movement of the one or more at least partially robotic appendages (fig. 7 and [0107]-[0115]).
Regarding Claim 6, Mahoor discloses (figs. 1-5) the apparatus of claim 5, wherein the at least partially robotic artificial body is configured to receive control commands over a data network (185), the control commands indicating which of the scripted actions the one or more at least partially robotic appendages perform (fig.7 and [0107]-[0115]).
Regarding Claim 7, Mahoor discloses (figs. 1-5) the apparatus of claim 5, wherein the at least partially robotic artificial body comprises one or more sensors, the at least partially robotic artificial body configured to select and perform one of the scripted actions using the one or more at least partially robotic appendages based on data from the one or more sensors ([0053]).
Regarding Claim 8, Mahoor discloses (figs. 1-5) the apparatus of claim 5, wherein the scripted actions comprise one or more of:
the one or more at least partially robotic appendages opening a gate to enter a user's property;
the at least partially robotic artificial body using the one or more at least partially robotic appendages to climb one or more stair steps to a user's residence;
the one or more at least partially robotic appendages ringing a doorbell associated with a door of a user;
a head movement of the at least partially robotic artificial body in response to an interaction with a user sensed by the one or more sensors (fig. 7 and [0053]); and shaking a hand of a user.
Regarding Claim 9, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the mounting interface is coupled to one or more of a face portion, a hand portion, and a chest portion of the artificial body (fig.1).
Regarding Claim 10, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the mounting interface is configured to removably couple the computing device to the artificial body, the mounting interface comprising one or more of a magnetic interface, a mechanical clamp shaped to releasably receive the computing device, and a case for the computing device coupled to the artificial body ([0042], case of iPad).
Regarding Claim 11, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the computing device comprises one or more of a flexible, curved, electronic display screen ([0042]) and an at least semi-transparent surface with a projector positioned to project an image onto a rear of the semi-transparent surface.
Regarding Claim 12, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the computing device comprises a touchscreen, the touchscreen facing away from the artificial body in response to the mounting interface coupling the computing device to the artificial body, the computing device providing a user interface on the touchscreen ([0042]).
Regarding Claim 14, Mahoor discloses (figs. 1-5) the apparatus of claim 1, wherein the mounting interface comprises one or more electrical connections between the artificial body and the computing device, the one or more electrical connections comprising one or more of an electrical charging interface in electrical communication with a battery of the artificial body ([0029]) and an audio connection with one or more of an audio speaker (140) and a microphone (135) of the artificial body.
Regarding Claim 15, Mahoor discloses (figs. 1-5) a system, comprising: a computing device; an artificial body (100) shaped to represent at least a portion of a being; and a mounting interface (user interface, [0029]) coupled to the artificial body, the mounting interface coupling the computing device (125) to the artificial body.
Regarding Claim 16, Mahoor discloses (figs. 1-5) the system of claim 15, wherein the computing device comprises an electronic display configured to display a remote user from a different location than the artificial body to a local user within viewing proximity of the electronic display ([0042]).
Regarding Claim 20, Mahoor discloses (figs. 1-5) an apparatus, comprising: means for artificially representing at least a portion of a being (100); and means for coupling (interface, [0029]) a computing device (125) to the means for artificially representing at least a portion of a being.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Mahoor in view of Westen et al. (US Patent No. 10032325 B1, hereinafter Westen).
Regarding Claim 13, Mahoor discloses (figs. 1-5) the apparatus of claim 12. Mahoor does not explicitly disclose wherein the user interface comprises a document signature interface allowing a user to digitally sign an electronic document displayed on the touchscreen of the computing device. However, Westen teaches (figs. 1-4) wherein the user interface comprises a document signature interface allowing a user to digitally sign an electronic document displayed on the touchscreen of the computing device (col. 3, lines 53-60). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the document signature interface of Westen with the apparatus of Mahoor in order to provide a touch device that is communicatively coupled to the network and can receive an input of a signature on a touch screen via a finger, stylus, or the like (Westen, col. 5, lines 18-25).
Claims 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Mahoor in view of Schluntz et al. (US Pub. No. 2021/0342479 A1, hereinafter Schluntz).
Regarding Claim 17, Mahoor discloses (figs. 1-5) the system of claim 16. Mahoor does not explicitly disclose a plurality of additional artificial bodies and mounting interfaces each coupling an additional computing device to the additional artificial bodies. However, Schluntz teaches (figs. 1-2) a plurality of additional artificial bodies (a group of robots, or a robotic unit having a group of robots) and mounting interfaces (each robot having a display) coupling an additional computing device to the additional artificial bodies. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the plurality of additional artificial bodies of Schluntz with the system of Mahoor in order to provide robots that are equipped with cameras, microphones, and sensors to gather information about their surrounding environment and perform tasks based on the information (Schluntz, [0167]).
Regarding Claim 18, the combination of Mahoor and Schluntz discloses the system of claim 17. Schluntz further teaches (figs. 1-2) wherein the single remote user is selectively displayable on screens of each of the additional computing devices ([0024]-[0031]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine this teaching of Schluntz with the system of Mahoor in order to provide robots that are equipped with cameras, microphones, and sensors to gather information about their surrounding environment and perform tasks based on the information (Schluntz, [0167]).
Regarding Claim 19, the combination of Mahoor and Schluntz discloses the system of claim 17. Schluntz further teaches (figs. 1-2) wherein screens of the additional computing devices each display different remote users ([0024]-[0031]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine this teaching of Schluntz with the device of Mahoor in order to provide robots that are equipped with cameras, microphones, and sensors to gather information about their surrounding environment and perform tasks based on the information (Schluntz, [0167]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROCKSHANA D CHOWDHURY whose telephone number is (571)272-1602. The examiner can normally be reached M-F: 8 AM - 4:30 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JINHEE LEE can be reached on 571-272-1977. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROCKSHANA D CHOWDHURY/Primary Examiner, Art Unit 2841