DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-18 have been examined.
P = paragraph, e.g., P[0001] = paragraph [0001].
Claim Objections
Claim 1 is objected to because of the following informalities: lines 18-20 recite “to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower”. An “update” cannot be “as a real-time operation state”; this limitation is nonsensical because an update is not “as” anything, but is instead a process of updating, not a state of a mower. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: lines 18-20 recite “to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower”. The limitation “to match the real-time of the self-moving mower” is nonsensical, as an update cannot match a “real-time” of a mower; “the real-time of the self-moving mower” has no discernible meaning. A mower may perform a function in “real-time”, but to claim that the mower includes or somehow possesses a “real-time”, as the claim implies, is nonsensical. Appropriate correction is required.
Claim 7 is objected to because of the following informalities: lines 1-2 recite “wherein moving path is generated”. This is improper grammar, as no article precedes “moving path”. Appropriate correction is required.
Claim 15 is objected to because of the following informalities: lines 1-2 recite “wherein moving path is generated”. This is improper grammar, as no article precedes “moving path”. Appropriate correction is required.
Claim 17 is objected to because of the following informalities: lines 11-13 recite “and controlling the simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower”. An “update” cannot be “as a real-time operation state”; this limitation is nonsensical because an update is not “as” anything, but is instead a process of updating, not a state of a mower. Appropriate correction is required.
Claim 17 is objected to because of the following informalities: lines 11-13 recite “to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower”. The limitation “to match the real-time of the self-moving mower” is nonsensical, as an update cannot match a “real-time” of a mower; “the real-time of the self-moving mower” has no discernible meaning. A mower may perform a function in “real-time”, but to claim that the mower includes or somehow possesses a “real-time”, as the claim implies, is nonsensical. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 7 and 15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As per Claim 7, the subject matter is the claimed “wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber”.
There is no disclosure of an algorithm that describes exactly how the “preset path scrubber” generates the “moving path”, and there is no disclosure of an algorithm that describes exactly how any of “a rectangular-ambulatory-plane path scrubber”, “a bow-shaped path scrubber” and “a linear path scrubber” generates the “moving path”.
Furthermore, there is no disclosure of exactly what each of a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” and a “linear path scrubber” is and how each is generated.
Furthermore, there is no disclosure of the meaning of the term “scrubber”, and there is no disclosure that defines exactly what would or would not be equivalent to a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” or a “linear path scrubber”.
Furthermore, there is no disclosure of what type of path corresponds to a “rectangular-ambulatory-plane path”.
P[0070] of the specification recites “In another implementation, the path generation module 900c generates a preset path scrubber such as a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber for a user to select. The path generation module 900c forms a selectable path scrubber on an interactive interface 520c, and the user selects a corresponding path scrubber and scrubs an area expected to be operated by an actuating mechanism 100c in the real-time image 530c or the simulated scene image, thereby generating a rectangular-ambulatory-plane path, a bow-shaped path and a linear path in the corresponding area so as to generate the corresponding moving path 910c in the real-time image 530c or the simulated scene image”, which does not provide any of the disclosure indicated above as missing.
As such, there is no indication in the specification that the inventors had possession of the control method of the self-moving mowing system of claim 6, wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber.
As per Claim 15, the subject matter is the claimed “wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber”.
There is no disclosure of an algorithm that describes exactly how the “preset path scrubber” generates the “moving path”, and there is no disclosure of an algorithm that describes exactly how any of “a rectangular-ambulatory-plane path scrubber”, “a bow-shaped path scrubber” and “a linear path scrubber” generates the “moving path”.
Furthermore, there is no disclosure of exactly what each of a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” and a “linear path scrubber” is and how each is generated.
Furthermore, there is no disclosure of the meaning of the term “scrubber”, and there is no disclosure that defines exactly what would or would not be equivalent to a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” or a “linear path scrubber”.
Furthermore, there is no disclosure of what type of path corresponds to a “rectangular-ambulatory-plane path”.
P[0070] of the specification recites “In another implementation, the path generation module 900c generates a preset path scrubber such as a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber for a user to select. The path generation module 900c forms a selectable path scrubber on an interactive interface 520c, and the user selects a corresponding path scrubber and scrubs an area expected to be operated by an actuating mechanism 100c in the real-time image 530c or the simulated scene image, thereby generating a rectangular-ambulatory-plane path, a bow-shaped path and a linear path in the corresponding area so as to generate the corresponding moving path 910c in the real-time image 530c or the simulated scene image”, which does not provide any of the disclosure indicated above as missing.
As such, there is no indication in the specification that the inventors had possession of the control method of the self-moving mowing system of claim 14, wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-8 and 17-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per Claim 1, the limitations “to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower” are unclear.
Specifically, these limitations include two intended use limitations that each start with “to”, making it unclear what is actually required, as each is written as an intended use or intended result rather than reciting the specific process used to achieve that result.
Furthermore, it is unclear what is meant by “as a real-time operation state of the self-moving mower”, and it is unclear how “a display content” is updated to achieve updating “as a real-time operation state of the self-moving mower”. Furthermore, to update content “as a real-time operation state of the self-moving mower” is nonsensical, as an update is a process, not a state of a mower.
Furthermore, the limitation “to match the real-time of the self-moving mower” is unclear, as it is unclear what is meant by “the real-time”.
Therefore, the claim is unclear.
As per Claim 7, the claim recites “wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber”.
It is unclear how the “preset path scrubber” generates the “moving path”, and it is unclear exactly how any of “a rectangular-ambulatory-plane path scrubber”, “a bow-shaped path scrubber” and “a linear path scrubber” generates the “moving path”.
It is also unclear exactly what each of a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” and a “linear path scrubber” is and how each is generated.
Also, the meaning of the term “scrubber” is unclear, and it is unclear exactly what would or would not be equivalent to a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” or a “linear path scrubber”.
Furthermore, it is unclear what type of path corresponds to a “rectangular-ambulatory-plane path”.
P[0070] of the specification recites “In another implementation, the path generation module 900c generates a preset path scrubber such as a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber for a user to select. The path generation module 900c forms a selectable path scrubber on an interactive interface 520c, and the user selects a corresponding path scrubber and scrubs an area expected to be operated by an actuating mechanism 100c in the real-time image 530c or the simulated scene image, thereby generating a rectangular-ambulatory-plane path, a bow-shaped path and a linear path in the corresponding area so as to generate the corresponding moving path 910c in the real-time image 530c or the simulated scene image”, which provides no clarification of the issues indicated above.
Therefore, the claim is unclear.
As per Claim 15, the claim recites “wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber”.
It is unclear how the “preset path scrubber” generates the “moving path”, and it is unclear exactly how any of “a rectangular-ambulatory-plane path scrubber”, “a bow-shaped path scrubber” and “a linear path scrubber” generates the “moving path”.
It is also unclear exactly what each of a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” and a “linear path scrubber” is and how each is generated.
Also, the meaning of the term “scrubber” is unclear, and it is unclear exactly what would or would not be equivalent to a “preset path scrubber”, a “rectangular-ambulatory-plane path scrubber”, a “bow-shaped path scrubber” or a “linear path scrubber”.
Furthermore, it is unclear what type of path corresponds to a “rectangular-ambulatory-plane path”.
P[0070] of the specification recites “In another implementation, the path generation module 900c generates a preset path scrubber such as a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber for a user to select. The path generation module 900c forms a selectable path scrubber on an interactive interface 520c, and the user selects a corresponding path scrubber and scrubs an area expected to be operated by an actuating mechanism 100c in the real-time image 530c or the simulated scene image, thereby generating a rectangular-ambulatory-plane path, a bow-shaped path and a linear path in the corresponding area so as to generate the corresponding moving path 910c in the real-time image 530c or the simulated scene image”, which provides no clarification of the issues indicated above.
Therefore, the claim is unclear.
As per Claim 17, the limitations “and controlling the simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower” are unclear.
Specifically, these limitations include two intended use limitations that each start with “to”, making it unclear what is actually required, as each is written as an intended use or intended result rather than reciting the specific process used to achieve that result.
Furthermore, it is unclear what is meant by “as a real-time operation state of the self-moving mower”, and it is unclear how “a display content” is updated to achieve updating “as a real-time operation state of the self-moving mower”. Furthermore, to update content “as a real-time operation state of the self-moving mower” is nonsensical, as an update is a process, not a state of a mower.
Furthermore, the limitation “to match the real-time of the self-moving mower” is unclear, as it is unclear what is meant by “the real-time”.
Therefore, the claim is unclear.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 9 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Einecke et al. (2014/0032033).
Regarding Claim 9, Einecke et al. teaches the claimed control method of a self-moving mowing system comprising a self-moving mower, comprising:
acquiring a real-time image comprising at least part of a mowing area and at least one obstacle located within the mowing area (“…cameras 2…”, see P[0057] and FIG. 3 and “…the robotic lawn mower is configured to send a message containing a snapshot image taken by the at least one camera to a remote smart device preferably using a wireless channel, and receive information about an annotated snapshot image from the smart device as user input. The snapshot image is a still image obtained from the input image of the at least one camera. The user can annotate the image on the remote smart device, in order to provide information about an obstacle”, see P[0024]);
displaying, by a display device, the real-time image or a simulated scene image generated according to the real-time image (“…the robotic lawn mower is configured to send a message containing a snapshot image taken by the at least one camera to a remote smart device preferably using a wireless channel, and receive information about an annotated snapshot image from the smart device as user input. The snapshot image is a still image obtained from the input image of the at least one camera. The user can annotate the image on the remote smart device, in order to provide information about an obstacle”, see P[0024]);
generating, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the at least one obstacle in the real-time image or the simulated scene image to form a first fusion image (“…calculate the position and/or height of one or more detected obstacles”, see P[0023]);
controlling the self-moving mowing system to avoid the at least one obstacle corresponding the first virtual obstacle identifier in the first fusion image (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]).
Regarding Claim 13, Einecke et al. teaches the claimed control method of the self-moving mowing system of claim 9, further comprising:
generating, by calculating characteristic parameters of the real-time image, a first virtual boundary corresponding to a mowing boundary in the real-time image or the simulated scene image to form the first fusion image (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]); and
controlling the self-moving mower to operate within the first virtual boundary (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Balutis et al. (2016/0165795).
Regarding Claim 1, Einecke et al. teaches the claimed control method of a self-moving mowing system comprising a self-moving mower, comprising:
acquiring a real-time image comprising at least part of a mowing area and at least one obstacle located within the mowing area (“…cameras 2…”, see P[0057] and FIG. 3 and “…the robotic lawn mower is configured to send a message containing a snapshot image taken by the at least one camera to a remote smart device preferably using a wireless channel, and receive information about an annotated snapshot image from the smart device as user input. The snapshot image is a still image obtained from the input image of the at least one camera. The user can annotate the image on the remote smart device, in order to provide information about an obstacle”, see P[0024]);
displaying, by a display device, the real-time image or a simulated scene image generated according to the real-time image (“…the robotic lawn mower is configured to send a message containing a snapshot image taken by the at least one camera to a remote smart device preferably using a wireless channel, and receive information about an annotated snapshot image from the smart device as user input. The snapshot image is a still image obtained from the input image of the at least one camera. The user can annotate the image on the remote smart device, in order to provide information about an obstacle”, see P[0024]);
generating, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the at least one obstacle in the real-time image or the simulated scene image to form a first fusion image (“…calculate the position and/or height of one or more detected obstacles”, see P[0023]);
receiving an information input by a user of whether the first virtual obstacle identifier in the first fusion image needs to be corrected (“Preferably, the robotic lawn mower is configured to receive a user notification as user input, and to determine from the user notification, whether the snapshot image contains an obstacle or not”, see P[0026] and “Preferably, the robotic lawn mower is configured to determine from the fact that a user has added a contour around an object in the snapshot image that said object is an obstacle”, see P[0027]);
receiving a user instruction to correct the first virtual obstacle identifier to generate a second virtual obstacle identifier in the real-time image or the simulated scene image to form a second fusion image when the user inputs the information that the first virtual obstacle identifier needs to be corrected (“Preferably, the robotic lawn mower is configured to receive a user notification as user input, and to determine from the user notification, whether the snapshot image contains an obstacle or not”, see P[0026] and “Preferably, the robotic lawn mower is configured to determine from the fact that a user has added a contour around an object in the snapshot image that said object is an obstacle” (emphasis added), see P[0027]); and
controlling the self-moving mowing system to avoid the at least one obstacle corresponding the first virtual obstacle identifier in the first fusion image or the second virtual obstacle identifier in the second fusion image (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]).
Einecke et al. does not expressly recite the claimed
and controlling the simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower.
However, Balutis et al. (2016/0165795) teaches controlling a simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower (Balutis et al.; “FIG. 4G depicts an example screenshot of the displayed map image 460 with a graphic overlay 468 showing a graphic overlay 466 showing the progress of the robot lawnmower 10 and a projected remaining path of the robot lawnmower 10 as it mows the lawn”, see P[0062] and FIG. 4G).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al., and controlling the simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower, as rendered obvious by Balutis et al., in order to allow “the user to visualize both the portion of the yard that has been mowed and a path the robot lawnmower…will follow to complete mowing of the yard” (Balutis et al.; see P[0062]).
Regarding Claim 5, Einecke et al. teaches the claimed control method of the self-moving mowing system of claim 1, further comprising:
generating, by calculating characteristic parameters of the real-time image, a first virtual boundary corresponding to a mowing boundary in the real-time image or the simulated scene image to form the first fusion image (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]); and
controlling the self-moving mower to operate within the first virtual boundary (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]).
Regarding Claim 17, Einecke et al. teaches the claimed control method of a self-moving mowing system comprising a self-moving mower, comprising:
acquiring a real-time image comprising at least part of a mowing area and at least one obstacle located within the mowing area (“…cameras 2…”, see P[0057] and FIG. 3 and “…the robotic lawn mower is configured to send a message containing a snapshot image taken by the at least one camera to a remote smart device preferably using a wireless channel, and receive information about an annotated snapshot image from the smart device as user input. The snapshot image is a still image obtained from the input image of the at least one camera. The user can annotate the image on the remote smart device, in order to provide information about an obstacle”, see P[0024]);
displaying, by a display device, the real-time image or a simulated scene image generated according to the real-time image (“…cameras 2…”, see P[0057] and FIG. 3 and “…the robotic lawn mower is configured to send a message containing a snapshot image taken by the at least one camera to a remote smart device preferably using a wireless channel, and receive information about an annotated snapshot image from the smart device as user input. The snapshot image is a still image obtained from the input image of the at least one camera. The user can annotate the image on the remote smart device, in order to provide information about an obstacle”, see P[0024]);
generating, according to an instruction input by a user, a virtual obstacle identifier corresponding to the at least one obstacle in the real-time image or the simulated scene image to form a first fusion image (“Preferably, the robotic lawn mower is configured to receive a user notification as user input, and to determine from the user notification, whether the snapshot image contains an obstacle or not”, see P[0026] and “Preferably, the robotic lawn mower is configured to determine from the fact that a user has added a contour around an object in the snapshot image that said object is an obstacle”, see P[0027]); and
controlling the self-moving mowing system to avoid the at least one obstacle corresponding the virtual obstacle identifier in the first fusion image (“The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051]).
Einecke et al. does not expressly recite the claimed
and controlling the simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower.
However, Balutis et al. (2016/0165795) teaches controlling a simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower (Balutis et al.; “FIG. 4G depicts an example screenshot of the displayed map image 460 with a graphic overlay 468 showing a graphic overlay 466 showing the progress of the robot lawnmower 10 and a projected remaining path of the robot lawnmower 10 as it mows the lawn”, see P[0062] and FIG. 4G).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al., and controlling the simulated scene image to synchronously update a display content as a real-time operation state of the self-moving mower to match the real-time of the self-moving mower, as rendered obvious by Balutis et al., in order to allow “the user to visualize both the portion of the yard that has been mowed and a path the robot lawnmower…will follow to complete mowing of the yard” (Balutis et al.; see P[0062]).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Balutis et al. (2016/0165795) further in view of Ebrahimi Afrouzi et al. (11,241,791).
Regarding Claim 2, Einecke et al. teaches the claimed control method of the self-moving mowing system of claim 1, further comprising:
…convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier to position information of the at least one obstacle (“…the classifier module is configured to estimate the position of the obstacle in the field of view of the camera by using a segmentation algorithm that separates foreground pixels from background pixels”, see P[0022] and “…calculate the position and/or height of one or more detected obstacles”, see P[0023] and “The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051] and P[0055]).
Einecke et al. does not expressly recite the bolded portions of the claimed
establishing a pixel coordinate system to convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier to position information of the at least one obstacle.
However, Ebrahimi Afrouzi et al. (11,241,791) teaches the use of a pixel coordinate system for an image, where the pixel coordinates are used when analyzing distances to objects (Ebrahimi Afrouzi et al.; see col. 14, particularly lines 8-67).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Ebrahimi Afrouzi et al., and establishing a pixel coordinate system to convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier to position information of the at least one obstacle, as rendered obvious by Ebrahimi Afrouzi et al., in order to determine “a most feasible position of the robotic device” (Ebrahimi Afrouzi et al.; see Abstract).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Balutis et al. (2016/0165795) further in view of Madsen et al. (2014/0324272).
Regarding Claim 3, Einecke et al. teaches the claimed control method of the self-moving mowing system of claim 1, further comprising:
when the self-moving mower is operating, analyzing operation data and environmental data of the self-moving mower…(“…calculate the position and/or height of one or more detected obstacles”, see P[0023]).
Einecke et al. does not expressly recite the bolded portions of the claimed
when the self-moving mower is operating, analyzing operation data and environmental data of the self-moving mower, modeling and generating corresponding simulated scene image information according to the operation data and the environmental data, and generating the simulated scene image according the simulated scene image information.
However, Madsen et al. (2014/0324272) teaches modeling and generating corresponding simulated scene image information according to the operation data and the environmental data, and generating the simulated scene image according the simulated scene image information (Madsen et al.; see P[0062]-P[0063] and FIG. 2).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Madsen et al., and when the self-moving mower is operating, analyzing operation data and environmental data of the self-moving mower, modeling and generating corresponding simulated scene image information according to the operation data and the environmental data, and generating the simulated scene image according the simulated scene image information, as rendered obvious by Madsen et al., in order to provide for “operating an automatic guidance system of an agricultural vehicle” (Madsen et al.; see Abstract).
Claims 4, 6-8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Balutis et al. (2016/0165795) further in view of Balutis et al. (2018/0116105).
Examiner’s Note:
Balutis et al. (2018/0116105) will also be referred to as “Balutis et al. ‘105” for the remainder of this Office Action.
Regarding Claim 4, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 1, further comprising:
previewing a mowing operation state and a mowing operation effect of the self-moving mower avoiding the first virtual obstacle identifier or the second virtual obstacle identifier.
However, Balutis et al. (2018/0116105) teaches previewing a mowing operation state and a mowing operation effect of the self-moving mower avoiding the first virtual obstacle identifier or the second virtual obstacle identifier (Balutis et al. ‘105; “When the robot is scheduled to mow lawn area 102b, the robot will follow the routes as described in FIG. 7A. When the robot is scheduled to mow lawn area 102c, the robot will bypass the lawn areas 102a-b to mow the lawn area 102c, as shown in FIG. 7B”, see P[0088] and FIG. 7A and “FIG. 8 shows a screenshot of the user interface of the exemplary lawn area shown in FIG. 2. The mobile device displays a menu portion 3000 and a map portion 3500. The menu portion 3000 shows options including “Show Routes”, “Schedule”, and “Default Settings”. Tapping “Show Routes” on the menu portion 3000 will reveal the points or cells stored as a result of training the traversal routes, bypass routes, and lawn routes”, see P[0084] and FIG. 8).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and previewing a mowing operation state and a mowing operation effect of the self-moving mower avoiding the first virtual obstacle identifier or the second virtual obstacle identifier, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 6, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 1, further comprising:
generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image; and
controlling the self-moving mower to move along the moving path in the first fusion image.
However, Balutis et al. (2018/0116105) teaches generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form a first fusion image, and controlling the self-moving mower to move along the moving path in the first fusion image (Balutis et al. ‘105; “Thus, to navigate between lawn area 102b and lawn area 102c, the robot must traverse region 104b while avoiding the fire pit 131”, see P[0045] and FIG. 2, and see “The user can use the touchscreen to draw geometric shapes that function as virtual regions that can assist the robot in training. A user can set a virtual region as a lawn area, which can give the controller a general location for the boundary of the lawn areas. The user can further set virtual regions for the traversal regions and for obstacles”, see P[0085] and “At step S1005A, after the robot reaches the traversal launch point of the traversal route, the robot disables the cutting system and begins traversing the traversal region via the traversal route (including the traversal start point, the intermediate traversal points, and the traversal end point) determined in steps S910 and S920. The robot completes the traversal route at the traversal end point determined in step S920”, see P[0099] and “At step S1010A, after the robot traverses the traversal region, the robot enables the cutting system and begins to mow the second lawn via the second lawn route determined in step S925”, see P[0100]).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image, controlling the self-moving mower to move along the moving path in the first fusion image, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 7, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 6, wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber.
However, Balutis et al. (2018/0116105) teaches wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber (Balutis et al. ‘105; “FIGS. 7A-B depict automatic generation of the lawn route data 325a-c as well as the different path behaviors 330 associated with the route data 315, 320 generated above. The lawn route data 325a-c includes a start point, an end point, and a movement pattern. The movement pattern can be, for example, a spiral pattern, a corn row pattern, zig-zag pattern, etc. A user can select a desired movement pattern for each of the lawn areas and different patterns may be selected for different areas of the same lawn”, see P[0079] and FIGS. 7A-7B).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 8, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 1, further comprising:
receiving a virtual guide channel between a first virtual sub-mowing area and a second virtual sub-mowing area set by the user; and
guiding the self-moving mower in a moving path between a first sub-mowing area corresponding to the first virtual sub-mowing area and a second sub-mowing area corresponding to the second virtual sub-mowing area.
However, Balutis et al. (2018/0116105) teaches receiving a virtual guide channel between a first virtual sub-mowing area and a second virtual sub-mowing area set by the user, and guiding the self-moving mower in a moving path between a first sub-mowing area corresponding to the first virtual sub-mowing area and a second sub-mowing area corresponding to the second virtual sub-mowing area (Balutis et al. ‘105; “Thus, to navigate between lawn area 102b and lawn area 102c, the robot must traverse region 104b while avoiding the fire pit 131”, see P[0045] and FIG. 2, and see “The user can use the touchscreen to draw geometric shapes that function as virtual regions that can assist the robot in training. A user can set a virtual region as a lawn area, which can give the controller a general location for the boundary of the lawn areas. The user can further set virtual regions for the traversal regions and for obstacles”, see P[0085] and “At step S1005A, after the robot reaches the traversal launch point of the traversal route, the robot disables the cutting system and begins traversing the traversal region via the traversal route (including the traversal start point, the intermediate traversal points, and the traversal end point) determined in steps S910 and S920. The robot completes the traversal route at the traversal end point determined in step S920”, see P[0099] and “At step S1010A, after the robot traverses the traversal region, the robot enables the cutting system and begins to mow the second lawn via the second lawn route determined in step S925”, see P[0100]).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and receiving a virtual guide channel between a first virtual sub-mowing area and a second virtual sub-mowing area set by the user; and guiding the self-moving mower in a moving path between a first sub-mowing area corresponding to the first virtual sub-mowing area and a second sub-mowing area corresponding to the second virtual sub-mowing area, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 18, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 17, further comprising:
generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image.
However, Balutis et al. (2018/0116105) teaches generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image (Balutis et al. ‘105; “Thus, to navigate between lawn area 102b and lawn area 102c, the robot must traverse region 104b while avoiding the fire pit 131”, see P[0045] and FIG. 2, and see “The user can use the touchscreen to draw geometric shapes that function as virtual regions that can assist the robot in training. A user can set a virtual region as a lawn area, which can give the controller a general location for the boundary of the lawn areas. The user can further set virtual regions for the traversal regions and for obstacles”, see P[0085] and “At step S1005A, after the robot reaches the traversal launch point of the traversal route, the robot disables the cutting system and begins traversing the traversal region via the traversal route (including the traversal start point, the intermediate traversal points, and the traversal end point) determined in steps S910 and S920. The robot completes the traversal route at the traversal end point determined in step S920”, see P[0099] and “At step S1010A, after the robot traverses the traversal region, the robot enables the cutting system and begins to mow the second lawn via the second lawn route determined in step S925”, see P[0100]).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Ebrahimi Afrouzi et al. (11,241,791).
Regarding Claim 10, Einecke et al. teaches the claimed control method of the self-moving mowing system of claim 9, further comprising:
…convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier to position information of the at least one obstacle (“…the classifier module is configured to estimate the position of the obstacle in the field of view of the camera by using a segmentation algorithm that separates foreground pixels from background pixels”, see P[0022] and “…calculate the position and/or height of one or more detected obstacles”, see P[0023] and “The classifier module 3 is functionally connected to the memory 10, in order to identify obstacles 6 based on the stored information and on the output signal 9 of the at least one camera 2. It can further send a scene classification signal 12 to a (movement) control unit 4 of the mower 1, which then uses the information about the scene and the obstacle 6 for a certain action, e.g. to avoid an obstacle 6”, see P[0051] and P[0055]).
Einecke et al. does not expressly recite the bolded portions of the claimed
establishing a pixel coordinate system to convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier to position information of the at least one obstacle.
However, Ebrahimi Afrouzi et al. (11,241,791) teaches the use of a pixel coordinate system for an image, where the pixel coordinates are used when analyzing distances to objects (Ebrahimi Afrouzi et al.; see col. 14, particularly lines 8-67).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Ebrahimi Afrouzi et al., and establishing a pixel coordinate system to convert position information of the first virtual obstacle identifier or the second virtual obstacle identifier to position information of the at least one obstacle, as rendered obvious by Ebrahimi Afrouzi et al., in order to determine “a most feasible position of the robotic device” (Ebrahimi Afrouzi et al.; see Abstract).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Madsen et al. (2014/0324272).
Regarding Claim 11, Einecke et al. teaches the claimed control method of the self-moving mowing system of claim 9, further comprising:
when the self-moving mower is operating, analyzing operation data and environmental data of the self-moving mower…(“…calculate the position and/or height of one or more detected obstacles”, see P[0023]).
Einecke et al. does not expressly recite the bolded portions of the claimed
when the self-moving mower is operating, analyzing operation data and environmental data of the self-moving mower, modeling and generating corresponding simulated scene image information according to the operation data and the environmental data, and generating the simulated scene image according the simulated scene image information.
However, Madsen et al. (2014/0324272) teaches modeling and generating corresponding simulated scene image information according to the operation data and the environmental data, and generating the simulated scene image according the simulated scene image information (Madsen et al.; see P[0062]-P[0063] and FIG. 2).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Madsen et al., and when the self-moving mower is operating, analyzing operation data and environmental data of the self-moving mower, modeling and generating corresponding simulated scene image information according to the operation data and the environmental data, and generating the simulated scene image according the simulated scene image information, as rendered obvious by Madsen et al., in order to provide for “operating an automatic guidance system of an agricultural vehicle” (Madsen et al.; see Abstract).
Claims 12 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Einecke et al. (2014/0032033) in view of Balutis et al. (2018/0116105).
Regarding Claim 12, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 9, further comprising:
previewing a mowing operation state and a mowing operation effect of the self-moving mower avoiding the first virtual obstacle identifier.
However, Balutis et al. (2018/0116105) teaches previewing a mowing operation state and a mowing operation effect of the self-moving mower avoiding the first virtual obstacle identifier (Balutis et al. ‘105; “When the robot is scheduled to mow lawn area 102b, the robot will follow the routes as described in FIG. 7A. When the robot is scheduled to mow lawn area 102c, the robot will bypass the lawn areas 102a-b to mow the lawn area 102c, as shown in FIG. 7B”, see P[0088] and FIG. 7A and “FIG. 8 shows a screenshot of the user interface of the exemplary lawn area shown in FIG. 2. The mobile device displays a menu portion 3000 and a map portion 3500. The menu portion 3000 shows options including “Show Routes”, “Schedule”, and “Default Settings”. Tapping “Show Routes” on the menu portion 3000 will reveal the points or cells stored as a result of training the traversal routes, bypass routes, and lawn routes”, see P[0084] and FIG. 8).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and previewing a mowing operation state and a mowing operation effect of the self-moving mower avoiding the first virtual obstacle identifier, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 14, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 9, further comprising:
generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image; and
controlling the self-moving mower to move along the moving path in the first fusion image.
However, Balutis et al. (2018/0116105) teaches generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image, and controlling the self-moving mower to move along the moving path in the first fusion image (Balutis et al. ‘105; “Thus, to navigate between lawn area 102b and lawn area 102c, the robot must traverse region 104b while avoiding the fire pit 131”, see P[0045] and FIG. 2, and see “The user can use the touchscreen to draw geometric shapes that function as virtual regions that can assist the robot in training. A user can set a virtual region as a lawn area, which can give the controller a general location for the boundary of the lawn areas. The user can further set virtual regions for the traversal regions and for obstacles”, see P[0085] and “At step S1005A, after the robot reaches the traversal launch point of the traversal route, the robot disables the cutting system and begins traversing the traversal region via the traversal route (including the traversal start point, the intermediate traversal points, and the traversal end point) determined in steps S910 and S920. The robot completes the traversal route at the traversal end point determined in step S920”, see P[0099] and “At step S1010A, after the robot traverses the traversal region, the robot enables the cutting system and begins to mow the second lawn via the second lawn route determined in step S925”, see P[0100]).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and generating, according to an instruction input by the user, a moving path in the real-time image or the simulated scene image to form the first fusion image; and controlling the self-moving mower to move along the moving path in the first fusion image, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 15, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 14, wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber.
However, Balutis et al. (2018/0116105) teaches wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber (Balutis et al. ‘105; “FIGS. 7A-B depict automatic generation of the lawn route data 325a-c as well as the different path behaviors 330 associated with the route data 315, 320 generated above. The lawn route data 325a-c includes a start point, an end point, and a movement pattern. The movement pattern can be, for example, a spiral pattern, a corn row pattern, zig-zag pattern, etc. A user can select a desired movement pattern for each of the lawn areas and different patterns may be selected for different areas of the same lawn”, see P[0079] and FIGS. 7A-7B).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and wherein moving path is generated by a preset path scrubber, and the preset path scrubber comprises at least one of a rectangular-ambulatory-plane path scrubber, a bow-shaped path scrubber and a linear path scrubber, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Regarding Claim 16, Einecke et al. does not expressly recite the claimed control method of the self-moving mowing system of claim 9, further comprising:
receiving a virtual guide channel between a first virtual sub-mowing area and a second virtual sub-mowing area set by the user; and
guiding the self-moving mower in a moving path between a first sub-mowing area corresponding to the first virtual sub-mowing area and a second sub-mowing area corresponding to the second virtual sub-mowing area.
However, Balutis et al. (2018/0116105) teaches receiving a virtual guide channel between a first virtual sub-mowing area and a second virtual sub-mowing area set by the user, and guiding the self-moving mower in a moving path between a first sub-mowing area corresponding to the first virtual sub-mowing area and a second sub-mowing area corresponding to the second virtual sub-mowing area (Balutis et al. ‘105; “Thus, to navigate between lawn area 102b and lawn area 102c, the robot must traverse region 104b while avoiding the fire pit 131”, see P[0045] and FIG. 2, and see “The user can use the touchscreen to draw geometric shapes that function as virtual regions that can assist the robot in training. A user can set a virtual region as a lawn area, which can give the controller a general location for the boundary of the lawn areas. The user can further set virtual regions for the traversal regions and for obstacles”, see P[0085] and “At step S1005A, after the robot reaches the traversal launch point of the traversal route, the robot disables the cutting system and begins traversing the traversal region via the traversal route (including the traversal start point, the intermediate traversal points, and the traversal end point) determined in steps S910 and S920. The robot completes the traversal route at the traversal end point determined in step S920”, see P[0099] and “At step S1010A, after the robot traverses the traversal region, the robot enables the cutting system and begins to mow the second lawn via the second lawn route determined in step S925”, see P[0100]).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Einecke et al. with the teachings of Balutis et al. ‘105, and receiving a virtual guide channel between a first virtual sub-mowing area and a second virtual sub-mowing area set by the user; and guiding the self-moving mower in a moving path between a first sub-mowing area corresponding to the first virtual sub-mowing area and a second sub-mowing area corresponding to the second virtual sub-mowing area, as rendered obvious by Balutis et al. ‘105, so that a “robot can be scheduled to execute several mowing operations at different times so that the power system of the robot can be sustained through the entire mowing operation” and so that the “robot can be trained to include several bypass routes that allow the robot to bypass lawn areas that are not to be mowed in a mowing operation” (Balutis et al. ‘105; see P[0105]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISAAC G SMITH whose telephone number is (571)272-9593. The examiner can normally be reached Monday-Thursday, 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD can be reached at 571-270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISAAC G SMITH/ Primary Examiner, Art Unit 3662