Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to Applicant's Amendments and Remarks filed on 11/06/2025 for application number 18/245,598, filed on 03/16/2023, in which claims 1-5, 8-9, and 13-18 were previously presented for examination.
Claims 1 and 15-18 are amended.
Claims 19-21 are new.
Claims 1-5, 8-9, and 13-21 are currently pending in this application.
Response to Arguments
Applicant's Amendments and Remarks filed on 11/06/2025 in response to the Non-Final Office action mailed on 08/14/2025 have been fully considered and are addressed as follows:
Regarding the Claim Rejections under 35 USC § 112(b): The rejections of the claims as indefinite are withdrawn, as the amended claims properly address the rejections set forth in the Non-Final Office action.
Regarding the Claim Rejection under 35 USC § 101: The rejection of claim 16 is withdrawn, as the amended claim properly addresses the rejection set forth in the Non-Final Office action.
Regarding the Claim Rejections under 35 USC §§ 102 and 103: Applicant has amended the independent claims, and these amendments have changed the scope of the claims. New grounds of rejection necessitated by the amendments are set forth below in this Final Office action; accordingly, Applicant's prior arguments are moot.
FINAL OFFICE ACTION
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4 and 13-21 are rejected under 35 U.S.C. 103 as being unpatentable over Schwer et al. (US 2019/0299412 A1, hereinafter “Schwer”) in view of Chen et al. (US 2020/0039427 A1, cited in the IDS filed on 03/16/2023, hereinafter “Chen”) further in view of MESSINGHER LANG et al. (US 2023/0274756 A1, hereinafter “Lang”).
Regarding claim 1, Schwer discloses a method of indicating a condition of at least one robotic device to a user (Schwer at para. [0024]: “The inventive AR system 1 used by user A comprises a detection unit 1a, which detects the real environment with working unit 3 and monitoring unit 2”), comprising the steps of:
obtaining, via an augmented-reality (AR) interface associated with the user, at least one planned movement path of the robotic device from a server that schedules, controls, monitors, and/or coordinates movement of the at least one robotic device (Schwer at para. [0025]: “the AR system 1 comprises a communication unit 1b which receives the safety data from the monitoring unit 2 and the process data from the working unit 3, in particular from the control 3a of the working unit 3”; para. [0027]: “the control unit creates a future working area 2b of the monitoring unit 2 and a future position zP of the working unit 3 as a virtual mapping on the basis of the safety data and the process data at a future point in time”);
obtaining, via positioning equipment in the AR interface or a communication interface of the AR interface in communication with an external positioning service, a position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”); and
displaying in real-time, via the AR interface, a visualization of the at least one planned movement path relative to the user position (Schwer at FIG. 2 and para. [0028]: “The current mapping (solid lines) and the virtual mapping (dashed lines) are displayed in relation to each other to user A on the display unit 1e of the AR system 1”; para. [0029]: “on the display unit 1e of the AR device 1, the user A using the inventive AR system 1 can see the real environment, i.e. the image of himself, the monitoring unit 2 with its current working area 2a and the working unit 3 in its current position aP, and a future environment following at the point in time, i.e. the future working area 2b of the monitoring unit 2 and the future position zP of the working unit 3 at the future point in time”).
However, Schwer does not explicitly state:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
In the same field of endeavor, Chen teaches:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, (Chen at para. [0097]: “the mobile robot 10 moves along the east direction, and a pedestrian 43 moves along the west direction. When the mobile robot 10 obtains sensing data that includes a first location of the pedestrian 43, the mobile robot 10 determines path prompt information 44 according to the sensing data, and projects and displays the path prompt information 44 on the floor in a form of an animation guide arrow by using the width of the mobile robot 10 as a projection boundary, to prompt a planned path of the mobile robot 10 to the pedestrian 43”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen is to provide precise indication information to pedestrians (see Chen at para. [0020]).
However, Schwer in view of Chen does not explicitly state:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
Nevertheless, Chen at least suggests the idea of generating a sound signal (see Chen at para. [0163]).
In the same field of endeavor, Lang teaches:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization (Lang at para. [0038]: “The operating system can access the metadata which includes size of an object that is associated with the application. The operating system can monitor size of each object associated with each application and determine or modify audio parameters based on size of each object. Each object can have dedicated audio parameters that are paired to that object (and the underlying application). In other words, application A can have its own audio parameters that are determined based on size of object A. Independently, application B can have its own audio parameters that are determined based on size of object B”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer in view of Chen by adding the audio signal of Lang with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen further in view of Lang is to provide an extended reality environment so that a person can sense the size of an object by sound.
Regarding claim 2, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
Schwer further discloses further comprising obtaining said at least one quantity (Schwer at para. [0006]: “an augmented reality system for overlapping an augmented reality with a real environment, comprising a detection unit which detects the real environment and provides it as environmental data, wherein at least one monitoring unit and at least one working unit are arranged in the environment, a communication unit which receives safety data from the monitoring unit and process data from the working unit”; para. [0016]: “the safety data of the monitoring unit include geometric data of the working area of the monitoring unit and the process data of the working unit motion data of the working unit”).
Regarding claim 3, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
Schwer further discloses wherein the at least one planned movement path is a data structure which represents locations of the at least one robotic device at different points in time and which optionally includes specifics of the robotic device, such that said at least one quantity is derivable from the data structure (Schwer at para. [0022]: “The controlled processes of the working unit 3 are stored as process data in the control unit 3a. The process data include in particular transaction data of working unit 3”; para. [0030]: “the display unit 1e shows a transition of the working unit 3 from the current position aP to the future position zP on the basis of the process data as a visual sequence of a movement of the working unit 3 to the user A”).
Regarding claim 4, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
Schwer further discloses wherein the AR interface associated with the user is worn by the user (Schwer at para. [0037]: “The AR system 1 preferably comprises a pair of glasses, a helmet display or an electronic communication device”).
Regarding claim 13, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
Schwer further discloses wherein the visualization includes an indication of a risk of the robotic device colliding with the user (Schwer at para. [0034]: “Advantageously, potential collisions are indicated to user A by warning color in the AR system 1”; potential collision corresponds to an indication of a risk).
Regarding claim 14, Schwer in view of Chen further in view of Lang teaches the method of claim 13.
Chen further teaches wherein the risk of colliding with the user that exceeds a predetermined threshold is represented by any of:
a local deviation from the at least one planned movement path of particles of a particle flow,
a shift of animated pointing elements (Chen at para. [0096]: “The animation guide arrow is used for indicating a movement direction of the mobile robot 10”; para. [0097]: “the mobile robot 10 moves along the east direction, and a pedestrian 43 moves along the west direction. When the mobile robot 10 obtains sensing data that includes a first location of the pedestrian 43, the mobile robot 10 determines path prompt information 44 according to the sensing data, and projects and displays the path prompt information 44 on the floor in a form of an animation guide arrow by using the width of the mobile robot 10 as a projection boundary, to prompt a planned path of the mobile robot 10 to the pedestrian 43”; para. [0100]: “Because the path prompt information usually changes dynamically, the mobile robot 10 may project and display a dynamic video on the target projection plane. The dynamic video is used for indicating the planned path of the mobile robot 10”; the planned path dynamically changes based on the location of the pedestrian).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer in view of Chen further in view of Lang by adding the shift of animated pointing elements of Chen with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen further in view of Lang is to provide precise indication information to pedestrians (see Chen at para. [0020]).
Regarding claim 15, Schwer discloses an information system configured to indicate a condition of one or more robotic devices to a user (Schwer at para. [0024]: “The inventive AR system 1 used by user A comprises a detection unit 1a, which detects the real environment with working unit 3 and monitoring unit 2”), the information system comprising:
a communication interface (Schwer at para. [0025]: “the AR system 1 comprises a communication unit 1b”) configured to:
obtain at least one planned movement path of the robotic devices from a server that schedules, controls, monitors, and/or coordinates movement of the at least one robotic device (Schwer at para. [0025]: “the AR system 1 comprises a communication unit 1b which receives the safety data from the monitoring unit 2 and the process data from the working unit 3, in particular from the control 3a of the working unit 3”; para. [0027]: “the control unit creates a future working area 2b of the monitoring unit 2 and a future position zP of the working unit 3 as a virtual mapping on the basis of the safety data and the process data at a future point in time”), and
obtain a position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”);
an augmented-reality, AR, interface associated with the user, the AR interface having positioning equipment or being in communication with an external positioning service for obtaining the position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”); and
processing circuitry configured to display in real-time, by means of the AR interface, a visualization of the at least one planned movement path relative to the user position (Schwer at FIG. 2 and para. [0028]: “The current mapping (solid lines) and the virtual mapping (dashed lines) are displayed in relation to each other to user A on the display unit 1e of the AR system 1”; para. [0029]: “on the display unit 1e of the AR device 1, the user A using the inventive AR system 1 can see the real environment, i.e. the image of himself, the monitoring unit 2 with its current working area 2a and the working unit 3 in its current position aP, and a future environment following at the point in time, i.e. the future working area 2b of the monitoring unit 2 and the future position zP of the working unit 3 at the future point in time”).
However, Schwer does not explicitly state:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
In the same field of endeavor, Chen teaches:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, (Chen at para. [0097]: “the mobile robot 10 moves along the east direction, and a pedestrian 43 moves along the west direction. When the mobile robot 10 obtains sensing data that includes a first location of the pedestrian 43, the mobile robot 10 determines path prompt information 44 according to the sensing data, and projects and displays the path prompt information 44 on the floor in a form of an animation guide arrow by using the width of the mobile robot 10 as a projection boundary, to prompt a planned path of the mobile robot 10 to the pedestrian 43”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Schwer by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the system of Schwer in view of Chen is to provide precise indication information to pedestrians (see Chen at para. [0020]).
However, Schwer in view of Chen does not explicitly state:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
Nevertheless, Chen at least suggests the idea of generating a sound signal (see Chen at para. [0163]).
In the same field of endeavor, Lang teaches:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization (Lang at para. [0038]: “The operating system can access the metadata which includes size of an object that is associated with the application. The operating system can monitor size of each object associated with each application and determine or modify audio parameters based on size of each object. Each object can have dedicated audio parameters that are paired to that object (and the underlying application). In other words, application A can have its own audio parameters that are determined based on size of object A. Independently, application B can have its own audio parameters that are determined based on size of object B”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Schwer in view of Chen by adding the audio signal of Lang with a reasonable expectation of success. The motivation to modify the system of Schwer in view of Chen further in view of Lang is to provide an extended reality environment so that a person can sense the size of an object by sound.
Regarding claim 16, Schwer discloses a computer program product comprising a non-transitory computer readable medium having stored thereon instructions that cause an information system to indicate a condition of one or more robotic devices to a user, the information system having:
a communication interface (Schwer at para. [0025]: “the AR system 1 comprises a communication unit 1b”) configured to:
obtain at least one planned movement path of the robotic devices from a server that schedules, controls, monitors, and/or coordinates movement of the at least one robotic device (Schwer at para. [0025]: “the AR system 1 comprises a communication unit 1b which receives the safety data from the monitoring unit 2 and the process data from the working unit 3, in particular from the control 3a of the working unit 3”; para. [0027]: “the control unit creates a future working area 2b of the monitoring unit 2 and a future position zP of the working unit 3 as a virtual mapping on the basis of the safety data and the process data at a future point in time”), and
obtain a position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”);
an augmented-reality, AR, interface associated with the user, the AR interface having positioning equipment or being in communication with an external positioning service for obtaining the position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”); and
processing circuitry configured to display in real-time, by means of the AR interface, a visualization of the at least one planned movement path relative to the user position (Schwer at FIG. 2 and para. [0028]: “The current mapping (solid lines) and the virtual mapping (dashed lines) are displayed in relation to each other to user A on the display unit 1e of the AR system 1”; para. [0029]: “on the display unit 1e of the AR device 1, the user A using the inventive AR system 1 can see the real environment, i.e. the image of himself, the monitoring unit 2 with its current working area 2a and the working unit 3 in its current position aP, and a future environment following at the point in time, i.e. the future working area 2b of the monitoring unit 2 and the future position zP of the working unit 3 at the future point in time”).
However, Schwer does not explicitly state:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
In the same field of endeavor, Chen teaches:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, (Chen at para. [0097]: “the mobile robot 10 moves along the east direction, and a pedestrian 43 moves along the west direction. When the mobile robot 10 obtains sensing data that includes a first location of the pedestrian 43, the mobile robot 10 determines path prompt information 44 according to the sensing data, and projects and displays the path prompt information 44 on the floor in a form of an animation guide arrow by using the width of the mobile robot 10 as a projection boundary, to prompt a planned path of the mobile robot 10 to the pedestrian 43”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the product of Schwer by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the product of Schwer in view of Chen is to provide precise indication information to pedestrians (see Chen at para. [0020]).
However, Schwer in view of Chen does not explicitly state:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
Nevertheless, Chen at least suggests the idea of generating a sound signal (see Chen at para. [0163]).
In the same field of endeavor, Lang teaches:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization (Lang at para. [0038]: “The operating system can access the metadata which includes size of an object that is associated with the application. The operating system can monitor size of each object associated with each application and determine or modify audio parameters based on size of each object. Each object can have dedicated audio parameters that are paired to that object (and the underlying application). In other words, application A can have its own audio parameters that are determined based on size of object A. Independently, application B can have its own audio parameters that are determined based on size of object B”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the product of Schwer in view of Chen by adding the audio signal of Lang with a reasonable expectation of success. The motivation to modify the product of Schwer in view of Chen further in view of Lang is to provide an extended reality environment so that a person can sense the size of an object by sound.
Regarding claim 17, Schwer discloses a non-transitory data carrier, comprising:
a non-transitory computer readable medium having stored thereon a computer program comprising instructions which, when executed by an information system, cause the information system to indicate a condition of one or more robotic devices to a user (Schwer at para. [0008]: “It is advantageous to integrate the control unit into the glasses, the helmet display or the electronic communication device. Preferably, the control unit can be arranged as a separate computer unit that calculates the overlap of the current and virtual mapping and transmits it to the display unit. In other words, the control unit advantageously includes a CPU”), the information system having:
a communication interface configured to:
obtain at least one planned movement path of the robotic devices from a server that schedules, controls, monitors, and/or coordinates movement of the at least one robotic device (Schwer at para. [0025]: “the AR system 1 comprises a communication unit 1b which receives the safety data from the monitoring unit 2 and the process data from the working unit 3, in particular from the control 3a of the working unit 3”; para. [0027]: “the control unit creates a future working area 2b of the monitoring unit 2 and a future position zP of the working unit 3 as a virtual mapping on the basis of the safety data and the process data at a future point in time”), and
obtain a position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”);
an augmented-reality, AR, interface associated with the user, the AR interface having positioning equipment or being in communication with an external positioning service for obtaining the position of the user (Schwer at para. [0009]: “the user's position can be determined using geolocation data (GPS) or indoor GPS and transmitted to the control unit. For this purpose, the augmented reality system preferably comprises a corresponding GPS transmitter-receiver unit”); and
processing circuitry configured to display in real-time, by means of the AR interface, a visualization of the at least one planned movement path relative to the user position (Schwer at FIG. 2 and para. [0028]: “The current mapping (solid lines) and the virtual mapping (dashed lines) are displayed in relation to each other to user A on the display unit 1e of the AR system 1”; para. [0029]: “on the display unit 1e of the AR device 1, the user A using the inventive AR system 1 can see the real environment, i.e. the image of himself, the monitoring unit 2 with its current working area 2a and the working unit 3 in its current position aP, and a future environment following at the point in time, i.e. the future working area 2b of the monitoring unit 2 and the future position zP of the working unit 3 at the future point in time”).
However, Schwer does not explicitly state:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
In the same field of endeavor, Chen teaches:
wherein the visualization of the at least one planned movement path has an appearance that differs for different values of at least one quantity selected from:
an identity of the robotic device, and
a mass or physical dimensions of the robotic device, (Chen at para. [0097]: “the mobile robot 10 moves along the east direction, and a pedestrian 43 moves along the west direction. When the mobile robot 10 obtains sensing data that includes a first location of the pedestrian 43, the mobile robot 10 determines path prompt information 44 according to the sensing data, and projects and displays the path prompt information 44 on the floor in a form of an animation guide arrow by using the width of the mobile robot 10 as a projection boundary, to prompt a planned path of the mobile robot 10 to the pedestrian 43”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the non-transitory data carrier of Schwer by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the non-transitory data carrier of Schwer in view of Chen is to provide precise indication information to pedestrians (see Chen at para. [0020]).
However, Schwer in view of Chen does not explicitly state:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization.
Nevertheless, Chen at least suggests the idea of generating a sound signal (see Chen at para. [0163]).
In the same field of endeavor, Lang teaches:
the mass or physical dimensions being represented by a tune or an average pitch of an audio signal accompanying the visualization (Lang at para. [0038]: “The operating system can access the metadata which includes size of an object that is associated with the application. The operating system can monitor size of each object associated with each application and determine or modify audio parameters based on size of each object. Each object can have dedicated audio parameters that are paired to that object (and the underlying application). In other words, application A can have its own audio parameters that are determined based on size of object A. Independently, application B can have its own audio parameters that are determined based on size of object B”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the non-transitory data carrier of Schwer in view of Chen by adding the audio signal of Lang with a reasonable expectation of success. The motivation to modify the non-transitory data carrier of Schwer in view of Chen further in view of Lang is to provide an extended reality environment so that a person can directly sense the environment through various means, such as hearing and sight.
Regarding claim 18, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
Chen further teaches wherein the appearance of the visualization of the at least one planned movement path differs for different values of at least one additional quantity selected from:
an activity or task of the robotic device,
a velocity of the robotic device (Chen at para. [0150]: “The projection length is used for indicating the movement speed of the mobile robot 10. A longer projection length indicates a higher movement speed of the mobile robot 10”), and
a proximity of the robotic device to the user position.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer in view of Chen further in view of Lang by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen further in view of Lang is to provide precise indication information to pedestrians (see Chen at para. [0020]).
Regarding claim 19, Schwer in view of Chen further in view of Lang teaches the information system of claim 15.
Chen further teaches wherein the appearance of the visualization of the at least one planned movement path differs for different values of at least one additional quantity selected from:
an activity or task of the robotic device;
a velocity of the robotic device (Chen at para. [0150]: “The projection length is used for indicating the movement speed of the mobile robot 10. A longer projection length indicates a higher movement speed of the mobile robot 10”), and
a proximity of the robotic device to the user position.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Schwer in view of Chen further in view of Lang by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the system of Schwer in view of Chen further in view of Lang is to provide precise indication information to pedestrians (see Chen at para. [0020]).
Regarding claim 20, Schwer in view of Chen further in view of Lang teaches the computer program product of claim 16.
Chen further teaches wherein the appearance of the visualization of the at least one planned movement path differs for different values of at least one additional quantity selected from:
an activity or task of the robotic device;
a velocity of the robotic device (Chen at para. [0150]: “The projection length is used for indicating the movement speed of the mobile robot 10. A longer projection length indicates a higher movement speed of the mobile robot 10”), and
a proximity of the robotic device to the user position.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the product of Schwer in view of Chen further in view of Lang by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the product of Schwer in view of Chen further in view of Lang is to provide precise indication information to pedestrians (see Chen at para. [0020]).
Regarding claim 21, Schwer in view of Chen further in view of Lang teaches the non-transitory data carrier of claim 17.
Chen further teaches wherein the appearance of the visualization of the at least one planned movement path differs for different values of at least one additional quantity selected from:
an activity or task of the robotic device;
a velocity of the robotic device (Chen at para. [0150]: “The projection length is used for indicating the movement speed of the mobile robot 10. A longer projection length indicates a higher movement speed of the mobile robot 10”), and
a proximity of the robotic device to the user position.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the non-transitory data carrier of Schwer in view of Chen further in view of Lang by adding the visualization of Chen with a reasonable expectation of success. The motivation to modify the non-transitory data carrier of Schwer in view of Chen further in view of Lang is to provide precise indication information to pedestrians (see Chen at para. [0020]).
Claims 5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Schwer in view of Chen further in view of Lang and Kuffner (US 2016/0055677 A1).
Regarding claim 5, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
However, Schwer in view of Chen further in view of Lang does not explicitly state wherein the identity, the activity, or the task of the robotic device is represented by any of:
a hue of particles of a particle flow,
a hue of animated pointing elements.
In the same field of endeavor, Kuffner teaches wherein the identity, the activity, or the task of the robotic device is represented by any of:
a hue of particles of a particle flow,
a hue of animated pointing elements (Kuffner at para. [0102]: “the method 500 may optionally include animating the virtual representation on the augmented reality interface to illustrate the robotic device performing the task according to the planned trajectory”; para. [0128]: “a virtual representation of the action or the intent, and the virtual representation includes an indication of at least a portion of the planned trajectory of the robotic device or highlighting the object to be handled by the robotic device”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer in view of Chen further in view of Lang by adding the hue of animated pointing elements as taught by Kuffner with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen further in view of Lang and Kuffner is to inform humans nearby of an action or an intent of the robotic device (see Kuffner at para. [0003]).
Regarding claim 8, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
However, Schwer in view of Chen further in view of Lang does not explicitly state wherein the appearance of the visualization is based on a sense of the at least one planned movement path.
In the same field of endeavor, Kuffner teaches wherein the appearance of the visualization is based on a sense of the at least one planned movement path (Kuffner at para. [0117]: “virtual representation 1010 with annotations overlaid thereon including arrows 1012 indicating that the robotic device 1002 intends to move toward the pitcher 1004”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer in view of Chen further in view of Lang by adding the sense of the at least one planned movement path as taught by Kuffner with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen further in view of Lang and Kuffner is to inform humans nearby of an action or an intent of the robotic device (see Kuffner at para. [0003]).
Office Note: “sense” is interpreted as “one of two opposite directions especially of motion (as of a point, line, or surface)” (“sense,” Merriam-Webster.com Dictionary, https://www.merriam-webster.com/dictionary/sense. Accessed 4/22/2025).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Schwer in view of Chen further in view of Lang and Sarkar (US 2019/0373395 A1, which is found in the IDS submission on 03/16/2023).
Regarding claim 9, Schwer in view of Chen further in view of Lang teaches the method of claim 1.
Schwer further discloses wherein a direction of the robotic device relative to the user is represented by (Schwer at para. [0034]: “user A to receive both audible and visual warnings from AR System 1”; para. [0036]: “the avoidance direction 4 or, advantageously, the avoidance route indicates or leads to a position which is outside the working area 2a and 2b of the monitoring unit 2 and/or the future position zP of the working unit 3. In this way, the AR-System 1 provides user A with a safe avoidance to protect user A from damage and avoid switching off working unit 3”).
However, Schwer in view of Chen further in view of Lang does not explicitly state an imaginary point of origin of an audio signal.
In the same field of endeavor, Sarkar teaches an imaginary point of origin of an audio signal (Sarkar at para. [0014]: “the AR device may artificially reverberate, mix, filter, or attenuate the audio signal to synthesize characteristics of sound from a first location ( e.g., an audio source location) to a second location (e.g., a user location) so that a user perceives the audio signal as being ‘real’”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Schwer in view of Chen further in view of Lang by adding the imaginary point of origin of an audio signal of Sarkar with a reasonable expectation of success. The motivation to modify the method of Schwer in view of Chen further in view of Lang and Sarkar is to provide audio to simulate the environment so that the audio appears real to a user (see Sarkar at para. [0004]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JISUN CHOI whose telephone number is (571)270-0710. The examiner can normally be reached Mon-Fri, 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Browne, can be reached at (571)270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JISUN CHOI/Examiner, Art Unit 3666
/JESS WHITTINGTON/Primary Examiner, Art Unit 3666