DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the application filed on 10/18/2024. Claims 1-6 are presently pending and are presented for examination.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Certified copies have been filed in parent Application Nos. JP2024-022679 and JP2023-181617, filed on 02/19/2024 and 10/23/2023, respectively.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/18/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: An obstacle detection unit and a control unit in claim 1.
The obstacle detection unit will be interpreted as the object detection sensor recited in paragraph [0022] of the specification.
The control unit will be interpreted as the ECU recited in paragraph [0019] of the specification.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over US20130325244A1 (hereinafter, “Wang”) in view of US20200206916A1 (hereinafter, “Jung”).
10. Regarding claim 1, Wang teaches an autonomous mobile body comprising (Fig. 1): Figure 1 shows an autonomous mobile body in the form of a mobile robot.
a first unit including a drive wheel and a chassis and configured to move straight and turn and move left and right ([0156] – [0159] Fig. 2); A base (120) is recited which constitutes a chassis. This base (120) includes drive wheels (210a, 210b, 210c) [0156] – [0157]. Each wheel has a respective drive motor (220a, 220b, 220c) that can independently drive each wheel (210a, 210b, 210c) in a forward or backward direction, allowing the robot (100) itself to move straight, turn, move left, and move right [0159]. The drive system (200) allows for omni-directional movement [0158].
11. Wang teaches a second unit disposed at an upper portion of the first unit and including…and an oscillating mechanism for performing oscillating motion of moving around a vertical axis with reference to the first unit ([0170] Fig. 1); Figure 1 shows an upper portion above the base (120). Portions such as the leg (130), the torso (140), the neck (150), and the head (160) all qualify as upper portions. The neck (150) includes a rotator (152) which allows for a continuous rotation (oscillating motion) of 360 degrees.
Wang does not explicitly teach …a top plate…
However, Jung teaches …a top plate… ([0026], [0028] – [0032] Fig. 3). Jung teaches a top plate (230). Based on Figure 1, items can be placed on this top plate [0041].
Wang and Jung are analogous art because Wang teaches a robot that has both a lower and an upper portion that can rotate independently from each other, while Jung teaches a robot that has a top plate on which items can be placed. One of ordinary skill in the art would have been motivated to combine Wang and Jung because Jung’s top plate provides a predictable mechanism for supporting items on a mobile robot, while Wang’s independently rotatable upper portion provides improved maneuverability and orientation control. Mounting Jung’s item-supporting top plate on Wang’s rotating upper portion would predictably enhance the robot’s payload handling and flexibility without altering the fundamental operation of either system.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wang to include the teachings of Jung to improve transport capabilities and navigation efficiency.
12. Wang teaches an obstacle detection unit configured to detect a surrounding obstacle; and [0176] – [0177] The robot (100) may have a sensor system (400) that includes several different types of sensors to monitor the environment of the robot (100). This sensor system (400) may include obstacle detection/obstacle avoidance (ODOA) sensors.
a control unit configured to [0155], when the obstacle detection unit detects an obstacle in a traveling direction of the autonomous mobile body during movement of the autonomous mobile body, upon controlling the first unit to change the traveling direction of the autonomous mobile body to avoid the obstacle, control the second unit to perform oscillating motion before changing the traveling direction [0095] – [0096], [0342] – [0343]. The robot (100) has a controller (500) that coordinates operation and movement of the robot (100) [0155]. This robot (100) also has a sensor system (400) which is used to detect objects and obstacles that may be in the way of the robot (100). The ODOA sensors are used to detect and avoid objects or obstacles in the way of the robot (100) [0187], [0190], [0198]. The robot (100) can also rotate the upper portion (second unit) towards a second heading (direction). Once the robot (100) has rotated the upper portion to a panning limit, the robot (100) will then begin rotating the lower portion towards the second heading, such that both the upper and lower portions face the same heading to travel in [0095] – [0096], [0342] – [0343]. Therefore, the robot (100) has a controller (500) which will operate the movement based on a detection of the sensor system (400). Once the robot (100) has detected an object or obstacle in its travel path, the robot (100) will move the upper portion to the second heading first, followed by the lower portion, and the robot (100) will continue traveling in that heading direction.
13. Regarding claim 3, Wang teaches the autonomous mobile body according to claim 1, wherein the control unit causes the second unit to perform a different recognition operation according to the obstacle when causing the second unit to perform a predetermined recognition operation [0180], [0182] – [0185], [0187]. The sensor system (400) may contain a sonar proximity sensor (410) that provides the ability to see objects in the horizontal plane [0180]. The sensor system (400) may also include cliff proximity sensors (420) to detect a cliff [0182] – [0183]. The sensor system (400) may also include contact sensors (430) so that the robot (100) can physically detect a bump into an obstacle [0185]. The sensor system (400) may also include a laser scanner (440) to scan an area in front of the robot (100) [0187]. All of these sensors, located on the upper portion of the robot (100), perform distinct recognition routines for different types of obstacles. The controller uses these different recognition results to issue appropriate drive commands and orientation adjustments, demonstrating that the robot’s sensor system (400) inherently performs different recognition operations based on the type of detected object. Therefore, the robot’s (100) upper portion, which houses these sensors and executes these different recognition routines, can perform varied recognition operations according to the object or obstacle detected.
14. Regarding claim 4, Wang teaches the autonomous mobile body according to claim 1, wherein
when causing the second unit to perform a predetermined recognition operation, the control unit causes the second unit to perform a different recognition operation according to a distance between the autonomous mobile body and the obstacle or a relative speed between the autonomous mobile body and the obstacle [0176] – [0177], [0195]. Wang does disclose an imaging and sensing system which is mounted on the upper portion of the robot (100), the three-dimensional image sensors (450a, 450b) being shown in Figure 1. These three-dimensional image sensors capture detailed environmental information that varies with object proximity and scene context. The imaging system and mapping module can be used to generate a three-dimensional representation of objects and features in the robot’s path, and the controller (500) uses those representations to adjust navigation and obstacle avoidance. Because the upper portion’s sensors provide depth and visual data about nearby and distant objects, the controller (500) will selectively apply different imaging parameters when processing sensor data for navigation and avoidance. One of ordinary skill in the art would understand that the upper portion inherently engages in different recognition operations based on the detected context of the objects as part of its perception processing. Therefore, the upper portion’s recognition behavior varies according to the nature and proximity of detected objects and obstacles.
15. Regarding claim 6, Wang teaches the autonomous mobile body according to claim 1, wherein when controlling the first unit to change the traveling direction of the autonomous mobile body to avoid the obstacle, upon controlling the second unit to perform the oscillating motion before changing the traveling direction, the control unit controls an angle of the oscillating motion of the second unit such that the second unit faces a direction of the destination until the autonomous mobile body arrives at the destination [0095] – [0096], [0342] – [0343]. The robot (100) has a controller (500) that coordinates operation and movement of the robot (100) [0155]. This robot (100) also has a sensor system (400) which is used to detect objects and obstacles that may be in the way of the robot (100). The ODOA sensors are used to detect and avoid objects or obstacles in the way of the robot (100) [0187], [0190], [0198]. The robot (100) can also rotate the upper portion (second unit) towards a second heading (direction). Once the robot (100) has rotated the upper portion to a panning limit, the robot (100) will then begin rotating the lower portion towards the second heading, such that both the upper and lower portions face the same heading to travel in [0095] – [0096], [0342] – [0343]. Therefore, the robot (100) has a controller (500) which will operate the movement based on a detection of the sensor system (400). Once the robot (100) has detected an object or obstacle in its travel path, the robot (100) will move the upper portion to the second heading first, followed by the lower portion, and the robot (100) will continue traveling in that heading direction, which means that the upper portion will continue to face the heading direction until arrival at its destination.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over US20130325244A1 (hereinafter, “Wang”) in view of US20200206916A1 (hereinafter, “Jung”), further in view of US20210331315A1 (hereinafter, “Park”), and further in view of NPL – Videre: Journal of Computer Vision Research (hereinafter, “Camus”).
17. Regarding claim 2, Wang teaches the autonomous mobile body according to claim 1, wherein
…causes the second unit to perform oscillating motion in the same direction as a direction in which the traveling direction is changed before changing the traveling direction,… This robot (100) also has a sensor system (400) which is used to detect objects and obstacles that may be in the way of the robot (100). The ODOA sensors are used to detect and avoid objects or obstacles in the way of the robot (100) [0187], [0190], [0198]. The robot (100) can also rotate the upper portion towards a second heading (direction). Once the robot (100) has rotated the upper portion to a panning limit, the robot (100) will then begin rotating the lower portion towards the second heading, such that both the upper and lower portions face the same heading to travel in [0095] – [0096], [0342] – [0343]. Therefore, the robot (100) has a controller (500) which will operate the movement based on a detection of the sensor system (400). Once the robot (100) has detected an object or obstacle in its travel path, the robot (100) will move the upper portion to the second heading first, followed by the lower portion, and the robot (100) will continue traveling in that heading direction.
Wang as modified by Jung does not explicitly teach when the obstacle detection unit detects an obstacle in the traveling direction of the autonomous mobile body during the movement of the autonomous mobile body, the control unit calculates a travel route for avoiding the obstacle based on a predicted traveling direction of the obstacle and a distance to the obstacle,…and controls an angle of the oscillating motion of the second unit such that the angle is larger than a maximum angle change amount at which the traveling direction of the first unit changes most greatly in the travel route.
However, Park teaches when the obstacle detection unit detects an obstacle in the traveling direction of the autonomous mobile body during the movement of the autonomous mobile body, the control unit calculates a travel route for avoiding the obstacle based on a predicted traveling direction of the obstacle and a distance to the obstacle,… [0095], [0107] – [0111] LiDAR may be used to sense objects or obstacles in the travel direction of the robot (1) and the distance of that object or obstacle in relation to the robot (1). Based on the detected objects and the distance in relation to the robot (1), the controller (740) will generate a navigation route via waypoints based on position information of the waypoints and the sensed objects. The generation of this navigation route is the same as calculating a travel route for avoiding obstacles or objects.
Wang as modified by Jung and Park does not explicitly teach …and controls an angle of the oscillating motion of the second unit such that the angle is larger than a maximum angle change amount at which the traveling direction of the first unit changes most greatly in the travel route.
However, Camus teaches …and controls an angle of the oscillating motion of the second unit such that the angle is larger than a maximum angle change amount at which the traveling direction of the first unit changes most greatly in the travel route [Pg. 46 - 7 Gaze Control]. Camus teaches gaze control where a camera can be rotated past its limits (maximum angle). Both require a controller of some sort to command an angular change, whether the boundary is implemented as a software limit or a physical stop. The operative concept here is controlling an angular change that exceeds a defined maximum constraint. The difference is only linguistic, not functional.
Wang, Park, and Camus are analogous art because Wang teaches having the upper portion of the robot rotate to the direction it is heading first before having the lower portion follow suit, Park teaches LiDAR that can detect objects and obstacles and generate a navigation route avoiding them, and Camus teaches a rotation implementation that can rotate a portion of a robot past a maximum limit.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang and Jung to include the teachings of Park and Camus because doing so would increase the range of directions the robot can face, allowing for more flexible movement.
18. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over US20130325244A1 (hereinafter, “Wang”) in view of US20200206916A1 (hereinafter, “Jung”), and further in view of US20210346557A1 (hereinafter, “Brooks”).
19. Regarding claim 5, Wang as modified by Jung does not explicitly teach the autonomous mobile body according to claim 1, wherein
when the obstacle detection unit detects an obstacle in the traveling direction of the autonomous mobile body during the movement of the autonomous mobile body, the control unit determines whether the obstacle recognizes presence of the autonomous mobile body, and if the control unit determines that the obstacle recognizes the presence of the autonomous mobile body, the control unit causes the second unit to perform a predetermined recognition operation.
However, Brooks teaches the autonomous mobile body according to claim 1, wherein
when the obstacle detection unit detects an obstacle in the traveling direction of the autonomous mobile body during the movement of the autonomous mobile body, the control unit determines whether the obstacle recognizes presence of the autonomous mobile body, and if the control unit determines that the obstacle recognizes the presence of the autonomous mobile body, the control unit causes the second unit to perform a predetermined recognition operation ([0027], [0054]). The robot of Brooks can detect the presence of a human and communicatively interact with the human [0054]. When a human (object) comes into the traveling direction of the robot, the robot will detect the human and determine whether the human recognizes the presence of the robot, allowing the robot to predict where the human will be based on interaction cues [0027]. The robot will alert the human in various ways to signal that the robot is aware of the human’s presence, likewise alerting the human to the robot’s presence, and will adjust its action (predetermined recognition operation) accordingly based on the human’s interaction with the robot. For example, the robot may stop cleaning if the human needs to pass by, and may return to cleaning if the human moves away. Therefore, Brooks infers human acknowledgement of the robot and adjusts its behavior based on the human’s interaction.
One of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, would have been motivated to modify the disclosure of Wang as modified by Jung with the teachings of Brooks to enhance the robot’s adjustment to human interaction, adapting its motion to further minimize risk and improve safety.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID MESQUITI OVALLE JR. whose telephone number is (571)272-6229. The examiner can normally be reached Monday - Friday 7:30am - 5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski can be reached on (571) 270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID MESQUITI OVALLE/Examiner, Art Unit 3669
/Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669