DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“Output unit” in claims 12 & 14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 - 10 and 12 - 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “detecting and tracking of the person by at least one sensor,” “determining a walking direction of the person,” “positioning the service robot ahead of the person in the walking direction with an angle of capture around 0°,” and “reposition of the service robot at an angle of capture greater than 30° relative to the determined walking direction of the person.” However, the relationship between the recited method steps is unclear. For instance, it is unclear whether the determination of the walking direction of the person depends on the detecting and tracking of the person by the at least one sensor. Additionally, although the claim does not make clear whether the at least one sensor is located on the service robot, the use of “angle of capture” in relation to positioning and repositioning the service robot, which was confirmed to describe the angle between the sensor and the path on page 9 of the Remarks filed 06/17/2025, implies that the at least one sensor is on the service robot. As written, however, the at least one sensor could also encompass a sensor set up at the location of recording or attached to the person being tracked, making the positioning of the robot relative to “an angle of capture” unclear.
Claim 1 is also unclear as to how the method steps of “positioning the service robot” and “repositioning the service robot” are carried out. While the claim includes the limitation “by triggering a motor controller,” this recites only a processing step and does not include structural details, such as a motor or other movable element, that would be required to position the service robot. Further detail is required to define the metes and bounds of the claim.
Claims 2 - 10 & 12 - 14 are rejected by virtue of their dependence on claim 1.
In regard to claim 3, the positively recited material includes that “the repositioning of the service robot enables an essentially lateral detection of the person.” Because detection is not a positively claimed step of claim 3 due to the use of “enables,” the claim is not further limiting: “an angle of capture of greater than 30°” is not substantially different from an angle of capture at which an essentially lateral detection of the person could possibly occur.
In regard to claim 4, the recitation in lines 1 - 3, “The computer-implemented method according to claim 1, predicting of a walking direction to be covered by the person based on the walking direction of the person,” is grammatically incorrect. The phrase “predicting of a walking direction” should be rewritten as -- comprising predicting a walking direction… -- to improve the clarity of the claim. Further, the “walking direction” is already defined in claim 1, line 6. The use of “a walking direction” in claim 4, line 2, which recites “predicting of a walking direction to be covered by the person,” is unclear because the walking direction has already been determined in the method of claim 1. Examiner suggests changing “a walking direction to be covered by the person” to a different phrase, such as -- a predicted path to be covered by the person --, to increase the clarity of the claim. Claims 5 - 7 are rejected by virtue of their dependence on claim 4.
In regard to claim 5, it is unclear if the “walking direction of the person” in line 4 is the predicted walking direction described in claim 4 or the determined walking direction described in claim 1.
In regard to claim 12, there is no positively recited method step.
In regard to claim 13, it is unclear what movement sequence is being evaluated as claim 1 from which claim 13 depends merely includes detecting and tracking a person using at least one sensor, determining a walking direction of a person, and positioning and repositioning the service robot. There is no mention of recording movement sequences. Further clarification is required.
In regard to claim 14, it is unclear what “rules” were laid out in claim 1 and how certain components, such as the motor controller and sensor for capturing a person, carry out the associated function. For instance, there is no recited structure, such as a motor or movable element, with which the motor controller communicates. Additionally, it is unclear how the sensor is “capturing a person.” As written, “capturing a person” implies that the person will be physically detained rather than, for example, having data associated with the movement of the person recorded. Further clarification is needed about the function of the sensor in relationship to the person.
In regard to claim 15, “the tracking module” in line 5 lacks antecedent basis. Additionally, claim 15 does not clearly describe how each claimed element relates to the others or how the action of “positioning a service robot” will be carried out. For example, it is unclear what rules are being referred to in line 6 and how the robot is physically being positioned. Additionally, “whereby” in line 9 should be amended to -- wherein -- to improve the clarity of the claim. Claims 16 - 18 are rejected by virtue of their dependence from claim 15.
In regard to claim 16, the claim does not clearly describe how each claimed element relates to the others or how the action of “positioning a service robot” will be carried out. In claim 15, from which claim 16 depends, a motor controller is used for positioning the service robot, while in claim 16, “a positioning module comprising rules for moving the service robot” is used for positioning. It is unclear whether these two elements work together to position the service robot or each carries out an individual function.
Claim 17 recites “a tilting unit to which the at least one sensor is attached and wherein the tilting unit is tiltable to the at least one sensor unit…” However, it is unclear if the tilting unit is attached to the service robot. Additionally, it is unclear what is meant by “the tilting unit is tiltable to the at least one sensor...” Further clarification is needed. Examiner is interpreting this to mean that the tilting unit is configured to tilt the at least one sensor while the orientation of the service robot remains fixed.
In regard to claim 18, the claim is unclear in what the movement extraction module and movement assessment module actually do. It is also unclear what movement sequence is being evaluated as there is no mention in prior claims of recording movement sequences. Further clarification is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3 - 7, 9, 12 - 16, & 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kitahama (US 20100063627 A1).
In regard to claims 1 & 15, Kitahama discloses a computer-implemented method and system for positioning a service robot with respect to a walking person for capturing lateral recordings of the person, comprising:
detecting and tracking of the person by at least one sensor. Kitahama discloses an autonomous mobile apparatus or robot (FIG. 1) that moves autonomously near a person. The robot detects and tracks the person, designated as the master, using a camera sensor (FIG. 1, component 10), whereby the robot recognizes the location of the person, the area and angle of visibility of the person, the surrounding environment, and the direction, speed, and acceleration of the movement of the person (paragraph [0048]).
determining a walking direction of the person. Kitahama discloses that the robot recognizes the location of the person in addition to the direction, speed, and acceleration of the movement of the person (paragraph [0048]).
and, by triggering a motor controller, positioning the service robot ahead of the person in the walking direction with an angle of capture around 0°. Kitahama discloses that the robot can have a home location in one of four zones including A1, which is the leading area in front of the person (FIG. 8; paragraph [0079]). Additionally, the robot is programmed to predict a danger location based on other objects near the person such as mobile vehicles, where the robot changes its relative position with respect to the person and the potential danger. As such, the robot may change locations relative to the person including being ahead of the person in the walking direction at an angle of capture of around 0° (paragraph [0080], FIG. 10A).
repositioning the service robot at an angle of capture greater than 30° relative to the determined walking direction of the person. Kitahama discloses that the robot is programmed to predict a danger location based on other objects near the person, such as mobile vehicles, where the robot changes its relative position with respect to the person and the potential danger. As such, the robot may change locations relative to the person, including being to the side of the person at an angle of capture of greater than 30° (FIG. 8; FIG. 10D; paragraph [0080]).
In regard to claim 3, Kitahama discloses the invention according to claim 1, wherein the repositioning of the service robot enables an essentially lateral detection of the person. Kitahama discloses that the robot can be positioned in different zones around the person, allowing for essentially lateral detection of the person (FIG. 8, 9A-D).
In regard to claim 4, Kitahama discloses the invention according to claim 1, comprising predicting of a walking direction to be covered by the person based on the walking direction of the person. Kitahama discloses that the robot includes danger prediction including predicting a danger location where there is a possibility of an object and the person coming into contact with one another and impeding the path of the person (FIGs. 7A & 7B; paragraph [0052]). One of ordinary skill would recognize that this action would require predicting the path of the person walking to predict if the person were likely to run into a hazard.
In regard to claim 5, Kitahama discloses the invention according to claim 4, wherein for repositioning of the service robot the angle of capture is the angle between at least one sensor of the service robot and the walking direction of the person. Kitahama discloses that the robot uses the camera (FIG. 1, component 10) to recognize the person and determine the direction, speed, and acceleration of movement of the person (paragraph [0048]). The angle of capture varies as the robot moves between zones around the person (FIG. 8, 9A-D).
In regard to claim 6, Kitahama discloses the invention according to claim 4, wherein for repositioning of the service robot the angle of capture is the angle between at least one sensor of the service robot and an object. Kitahama discloses that the robot uses the camera (FIG. 1, component 10) to detect objects in a person’s environment and determine the estimated time to contact between the person and the object and attributes related to the object (paragraph [0048]). The angle of capture varies as the robot moves to block an object in the path of a person (FIGs. 6A - 6C).
In regard to claim 7, Kitahama discloses the invention according to claim 4, wherein the angle of capture results from a mid-centered axis of the sensor on the one hand and the walking direction of the person on the other hand, with the mid-centered axis of the sensor, the walking direction of the person, and/or the predicted walking direction each projected onto a horizontal plane. Kitahama discloses that the robot uses the camera (FIG. 1, component 10) to recognize the person and determine the direction, speed, and acceleration of movement of the person (paragraph [0048]). The angle of capture varies as the robot moves between zones around the person (FIG. 8).
In regard to claim 9, Kitahama discloses the invention according to claim 1, comprising continuously calculating the distance between the service robot and the person; and positioning of the service robot by maintaining a minimum value for the distance between the service robot and the person while repositioning the service robot. Kitahama specifically discloses that the robot calculates a following distance based on the predicted danger levels in an area (paragraph [0078]) where a minimum distance (FIG. 5, “dmin”) is maintained.
In regard to claim 12, Kitahama discloses the invention according to claim 1, further comprising, in the course of the detection and tracking of the person by at least one sensor, an output via an output unit with an indication of the direction of movement of the person and/or that of the service robot. Kitahama discloses that the robot utilizes a camera (FIG. 1, component 10) to detect and track a person and their surrounding environment. Kitahama additionally discloses that when a hazard is determined to be in the vicinity of the person, the robot repositions itself in between the person and the hazard (FIGs. 6A - 6C). The robot additionally provides an output such as a light, warning sound, or voiced information to indicate that the direction of movement of the robot contains a hazard (paragraph [0058]).
In regard to claim 13, Kitahama discloses a device for performing the computer-implemented method according to claim 1, further comprising evaluating a movement sequence of the detected and tracked person. Kitahama discloses that the robot uses the camera (FIG. 1, component 10) to monitor the person and the environment around the person where the direction, speed, and acceleration of the movement of the person is evaluated (paragraph [0048]).
In regard to claim 14, Kitahama discloses a device for performing the computer-implemented method according to claim 1 as best understood in light of the rejection under 112(b). Kitahama explicitly discloses a mobile robotic system (FIG. 1) capable of tracking a person with memory (paragraph [0050]), a processor (FIG. 1, component 20; paragraph [0050]), output unit to generate outputs (FIG. 1, components 30, 32, & 44), a motor controller (FIG. 1, component 26) to control movements of the service robot, and a sensor for capturing a person (FIG. 1, component 10) configured for performing the computer-implemented method according to claim 1.
In regard to claim 16, Kitahama discloses the invention according to claim 15, further comprising a movement planner in the memory for creating a prediction of the path to be covered by the person, with a positioning module comprising rules for moving the service robot adjacent to the person, for maintaining an approximately constant distance between the service robot and person, for taking a defined angle of capture, and/or rotating the service robot. Kitahama discloses that the robot includes danger prediction including predicting a danger location where there is a possibility of an object and the person coming into contact with one another and impeding the path of the person (FIGs. 7A & 7B; paragraph [0052]). One of ordinary skill would recognize that this action would require predicting the path of the person walking to predict if the person were likely to run into a hazard. The robot moves itself based on where the danger may occur or positions itself in a home position, such as that adjacent to the user (FIG. 8, components A2 & A4). The robot also calculates and maintains an acceptable distance between the robot and person based on the danger level (paragraph [0071], FIG. 5).
In regard to claim 18, Kitahama discloses the invention according to claim 15, further comprising a movement extraction module located in the memory and being adapted to be executed by the processor, wherein the movement extraction module is configured to extract features from a movement sequence of a tracked person and a movement assessment module located in the memory and being adapted to be executed by the processor for assessing the movements of that person. Kitahama discloses a system that recognizes a person using a camera (FIG. 1, component 10) and extracts features such as the location of the person, visual field height of the person, area of visibility of the person, and direction, speed, and acceleration of movement of the person (paragraph [0048]).
Claims 2 & 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kitahama (US 20100063627 A1) as applied to claims 1 & 15 above, and further in view of Ishara (K. Ishara, I. Lee and R. Brinkworth, "Mobile robotic active view planning for physiotherapy and physical exercise guidance," 2015 IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2015; cited by applicant).
In regard to claim 2, Kitahama discloses the invention according to claim 1, comprising outputting an instruction to the person. Kitahama specifically discloses that the robot includes a speaker (FIG. 1, component 30) that is able to draw the attention of the person and others in the vicinity by issuing voiced information in response to a control signal output (paragraph [0058]). Kitahama does not specify that the voiced information can include instructions to walk essentially straight ahead.
However, Ishara teaches a method that includes a mobile robot able to track the movement of a person and includes the ability to transmit instructive guidance on movements, such as walking (page 131, column 2, paragraph 1, lines 2 - 3), in audio form (FIG. 2; page 131, column 1, paragraph 4, lines 9 - 10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the computer-implemented method for positioning a service robot disclosed by Kitahama with the ability to provide instructions to a person to walk straight ahead as taught by Ishara because it would be considered use of a known technique to improve similar devices in the same way and would yield the obvious result of increasing the ability to provide voiced information to a person.
In regard to claim 17, Kitahama discloses the system according to claim 15. While Kitahama discloses the use of a camera to monitor a person and detect different potentially hazardous objects in the person’s environment (paragraph [0027]), they do not specify that the robot includes a tilting unit to monitor the person and environment.
However, Ishara teaches a system that includes a mobile robot able to track the movement of a person and includes a tilting unit on which the camera is mounted to keep the camera’s view on the person while the robot is in motion (FIG. 4; page 131, column 1, paragraph 4, lines 11 - 14).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the mobile robotic system disclosed by Kitahama with the tilting unit taught by Ishara because it would be considered use of a known technique, in this case the addition of a tilting unit, to improve similar devices, in this case mobile robots equipped with camera sensors, in the same way.
Claims 8 & 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kitahama (US 20100063627 A1) as applied to claim 1 above, and further in view of Kobilarov (NPL - M. Kobilarov, G. Sukhatme, J. Hyams and P. Batavia, "People tracking and following with mobile robot using an omnidirectional camera and a laser," Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006; cited by applicant).
In regard to claim 8, Kitahama discloses the invention according to claim 1. While Kitahama discloses continuously recalculating distance between the robot and person to maintain a proper distance between the two while the robot monitors the person and their environment, they do not specify continuously recalculating the angle of capture while repositioning the service robot; and positioning the service robot while keeping the angle of capture approximately constant.
However, Kobilarov teaches a robot for tracking and following a person that includes a controller that works to minimize the absolute relative angle of the device to the person, interpreted by the examiner as the angle of capture (page 561, column 1, paragraph 2, lines 1 - 5). Because the robot disclosed by Kobilarov continuously follows the person in motion, this angle of capture is continuously being calculated. The method taught by Kobilarov further includes the robot positioning itself such that the angle of capture is minimized and a desired following range is achieved (page 561, column 1, paragraph 2, lines 1 - 5). FIG. 7 shows that this method keeps the bearing approximately constant.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the mobile robotic system disclosed by Kitahama with the continuous calculation of angle of capture and repositioning to keep the angle of capture constant because it would be considered use of a known technique to improve similar devices in the same way where minimizing the absolute relative bearing to the person is combined with keeping a desired following range of the person.
In regard to claim 10, Kitahama discloses the invention according to claim 1, further comprising the repositioning of the service robot. While Kitahama discloses that the robot is repositioned in the event that a hazardous stimulus is detected or predicted danger levels change, they do not specify that the repositioning of the robot occurs after a predetermined time and/or distance with the angle of capture between the sensor of the service robot and the walking direction of the person being essentially smaller than 30°.
However, Kobilarov teaches a method that includes a robot designed to track a moving person and follow the path of the person. To accomplish this, the system includes two controllers, based on direct following and path following, that work to minimize both the angle of capture and changes to the desired following distance (page 561, column 1, paragraph 2, lines 1 - 4). The robot also tracks error based on changes to the distance between the person and the robot, the following distance, and speed (page 561, column 1, paragraphs 3 - 4). When the distance between the robot and the tracked person becomes too great, the robot will stop and reinitiate tracking of the person, causing repositioning of the device (page 561, column 1, paragraph 1, lines 15 - 18). The angle of capture is minimized by one of the two controllers, and FIG. 7 shows that the angle of capture remains consistently smaller than 30°.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of tracking a person and repositioning a robot disclosed by Kitahama with the teaching that repositioning of the robot occurs after a predetermined time and/or distance with the angle of capture between the sensor of the service robot and the walking direction of the person being essentially smaller than 30°, as taught by Kobilarov, because it would be considered use of a known technique to improve similar devices in the same way, where calculated error causes the system to reposition itself.
Response to Arguments
Applicant’s arguments and amended language, see Remarks, filed 06/17/2025, with respect to the interpretation of “a detection and/or evaluation unit” in claims 1, 3, 4, 6, 8 - 12, 15, & 16, “processing unit” in claim 15, “tilting unit” in claim 17, and “movement sequence extraction module” in claim 18 under 35 U.S.C. 112(f) have been fully considered and are persuasive.
Applicant’s arguments, see Remarks, filed 06/17/2025, with respect to the rejections of claims 1 - 18 under 35 U.S.C. 112(b) have been fully considered but are not fully persuasive. In regard to claim 1, while the amendment addresses some aspects of the rejection under 35 U.S.C. 112(b), see Non-Final Rejection, filed 01/17/2025, the amended claim fails to address how the method would reposition the service robot without a claimed structure such as a motor or movable element. Applicant additionally argues that it was made clear that the positioning of the robot is performed a) “for a walking person” and b) “for the purpose of capturing lateral recordings of a person,” in order to distinguish the invention from the prior art, but those details are only included in the preamble of claim 1 and are not given patentable weight. In regard to claim 3, the arguments have been considered but are not persuasive. An essentially lateral detection of the person is not a claimed limitation in the claim as written; thus, an angle of capture greater than 30° would make it possible for, or “enable,” essentially lateral detection of the person to occur. The arguments for claims 8 - 11 have been considered and are persuasive, and the 112(b) rejections for claims 8 - 11 have been withdrawn. However, Applicant has included new language that lacks clarity, as discussed above in Claim Rejections - 35 USC § 112, in regard to independent claims 1 & 15 and dependent claims 2 - 10, 12 - 14, & 16 - 18.
Applicant’s arguments, see Remarks, filed 06/17/2025, with respect to the rejections of claims 1, 4, 5, 7 - 16, & 18 under 35 U.S.C. 102 and claims 2, 3, 6, & 17 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Kitahama for claims 1, 3 - 7, 9, 12 - 16, & 18 under 35 U.S.C. 102; in view of Kitahama and further in view of Ishara for claims 2 & 17; and in view of Kitahama and further in view of Kobilarov for claims 8 & 10. Kitahama discloses a robot that is able to position itself in front of or to the sides of a person and further detects and tracks a person, determining the movement direction, speed, and acceleration of the person.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIENNA CHRISTINE PYLE whose telephone number is (703)756-5798. The examiner can normally be reached 8 am - 5:30 pm M - T; Off first Fridays; 8 am - 4 pm second Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Marmor, II can be reached on (571) 272-4730. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC F WINAKUR/Primary Examiner, Art Unit 3791
/S.C.P./Examiner, Art Unit 3791