DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner’s Note
The recited line numbers throughout the rejection refer to the line numbers of each claim, as opposed to page line numbers.
Claim Objections
Claims 1, 3, 6, 8, 13, 15, and 18 are objected to because of the following informalities:
“a user interface; one or more sensors” recited in claim 1, ln. 12-13 should likely read “a user interface; and one or more sensors”;
“the body” recited in claim 1, ln. 13 should likely read “the user’s body” to avoid claim ambiguity;
“and the user interface; the interactions” recited in claim 1, ln. 16-17 should likely read “and the user interface; wherein the interactions”;
“real time, wherein” recited in claim 1, ln. 18 should likely read “real time, and wherein”;
“The system of claim 1” recited in claim 3, ln. 1 should likely read “The system of claim [[1]]2”;
“the processor” recited in claim 6, ln. 1 should likely read “the at least one processor” for consistency purposes;
“one or more aspects of the images” recited in claim 8, ln. 3 should likely read “one or more aspects of the one or more images”;
“at least a portion of the environment within the field of view of the mixed reality device” recited in claim 13, ln. 2-3 should likely read “at least a portion of [[the]]an environment within [[the]]a field of view of the mixed reality device”;
“A method for a productivity facilitating including” recited in claim 15, ln. 1 should likely read “A method for [[a]]facilitating productivity including”;
“at least one datum of task execution to” recited in claim 15, ln. 3 should likely read “at least one datum of task execution instruction to”;
“device; capturing” recited in claim 15, ln. 5-6 should likely read “device; and capturing”;
“the rendering” recited in claim 15, ln. 7 should likely read “[[the]]a rendering”;
Claim 15 should end in a period (see MPEP 608.01(m)); and
“receives the at least one datum of task execution” recited in claim 18, ln. 1-2 should likely read “receives the at least one datum of task execution instruction”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 15 recites in part “wherein the sensed movement and audio of the user as well as the rendering of the captured environment is manipulated by one or more aspects of the user interface”. It is indefinite as to whether the sensed data, including the audio, is manipulated by one or more aspects of the user interface (e.g., using interactive controls), or rather, whether the claim is intended to read “wherein the sensed movement and audio of the user as well as the rendering of the captured environment is used to manipulate one or more aspects of the user interface”.
Claims 16-20 are rejected by virtue of their dependency on claim 15.
For examination purposes, the claim is interpreted to read “wherein the sensed movement and audio of the user as well as the rendering of the captured environment is used to manipulate one or more aspects of the user interface” (see Specification, [0009-0010]; [0019-0021]; [0039-0040]).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6 and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Recker et al. (U.S. Pub. 2020/0388177 A1) (hereinafter “Recker”).
Regarding claim 1, Recker discloses a productivity facilitating system (Figs. 1-2; [0002]; [0008]; [0048]) including:
a mixed reality device including at least one wearable technology device ([0010]; [0049]; [0090]; [0097], headset for displaying a simulated real world task that includes a mixed reality task);
a storage medium including at least one datum of task execution instruction data, wherein the at least one storage medium is communicatively coupled to the mixed reality device (Fig. 2; [0057]; [0107]);
the at least one datum of task execution instruction including one or more coded algorithms ([0055]; [0057]; [0107], wherein the memory components (e.g., storage medium) store instructions that include task execution data);
at least one processor (Figs. 1-2; [0049]; [0055]) configured to:
receive the at least one datum of task execution instruction data (Figs. 1-2; [0055]; [0057]; [0107]);
interpret the at least one datum of task execution instruction data ([0055]; [0057]);
present the at least one datum of task execution instruction data to a user via a user interface (Figs. 1-2 & 18; [0049]; [0055-0057]);
one or more sensors that detect movements of one or more portions of the body and one or more sounds emitted from the user, wherein the one or more sensors are communicatively coupled to the processor to facilitate interactions between the user and the user interface (Figs. 1, 6, 9A, & 18; [0049]; [0054-0055]; [0063]; [0068-0071]; [0097-0099], one or more sensors, such as a camera 902, motion sensor 904, eye tracker 920, body-worn motion sensors, and microphone, for measuring user responses and/or actions which are displayed in the simulated reality environment);
the interactions between the user and the user interface are displayed by the mixed reality device in substantially real time ([0049]; [0063]; [0066]; [0068-0071]; [0097-0099]), wherein the at least one datum of task execution instruction includes one or more types of training to assist the user with performing one or more tasks ([0008-0010]; [0090]; [0107]; [0118]; [0127], wherein the mixed reality task, configured by task execution data, may be a real world scenario that a user would encounter in the real world and may be specific to a profession (e.g., performing a surgery) and where feedback is provided based on the user interaction with the simulation, wherein such simulated reality systems are used for training).
Regarding claim 2, Recker further discloses a visual capture device that is configured to transmit a portion of an environment within a field of view, wherein the user interface is communicatively coupled to the visual capture device to incorporate the environment into the at least one datum of task execution instruction data (Figs. 1-2 & 18; [0049-0051]; [0056]; [0068]; [0097-0099], e.g., wherein the headset/head-mounted device may include a camera configured to capture an environment within a field of view (e.g., a workbench that includes real world objects, virtual objects, or a combination thereof, as well as the user’s hands in the field of view of the camera while the user is interacting with the objects on the workbench)).
Regarding claim 3, Recker further discloses wherein the at least one datum of task execution instruction data includes the environment captured by the visual capture device, wherein the interactions between the user and the user interface incorporate and adapt to the information received by the visual capture device (Figs. 1-3, 6-7, & 18; [0049-0051]; [0056]; [0066]; [0068]; [0097-0099]; [0107], e.g., wherein the camera may detect that the user’s actual hands are touching a real object or hovering over a virtual object on a workbench, and the user interface correspondingly displays the user’s hand touching or hovering over the region of interest).
Regarding claim 4, Recker further discloses wherein the at least one wearable technology device includes a headgear or a headset with at least one partially transparent eye panel, wherein the at least one partially transparent eye panel is configured to present the user interface within a field of view of the user (Fig. 18; [0049]; [0097-0099], wherein the head-mounted device 1810 of the mixed reality hardware platform may include a lens 1815 (partially transparent eye panel)).
Regarding claim 5, Recker further discloses wherein the user interface includes one or more two-dimensional or three-dimensional static, moving, or movable images (Fig. 18; [0097-0099], e.g., a display of the user’s hand touching or hovering over a region of interest associated with a displayed object on a workbench).
Regarding claim 6, Recker further discloses wherein data received by the processor from the one or more sensors is interpreted and converted to one or more instructions that facilitate one or more interactions between the user and the user interface (Figs. 1, 6, 9A, & 18; [0049-0050]; [0054-0055]; [0063]; [0068-0071]; [0097-0099]).
Regarding claim 8, Recker further discloses wherein the one or more sensors include one or more motion sensors, wherein the one or more motion sensors detect one or more hand or eye movements to manipulate one or more aspects of the images of the user interface (Fig. 18; [0068]; [0070-0071]; [0097-0099]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 7 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Recker in view of Kritzler et al. (U.S. 11,099,633 B2) (hereinafter “Kritzler”).
Regarding claim 7, Recker further discloses wherein the one or more sensors include a microphone ([0060]; [0068]; [0071]; [0120-0121], wherein the sensors measure user responses and/or actions during a task presented via the simulation, wherein a microphone picks up the user’s voice during the task). Recker may not, however, further explicitly disclose wherein the microphone sends and receives one or more verbal commands from the user to manipulate one or more aspects of the user interface. However, Kritzler, directed to augmented reality authoring using virtual reality, wherein augmented reality is noted to be useful to a human user when performing training, simulation, maintenance, repair, etc. (Col. 1, ln. 11-42; Col. 2, ln. 48-59), teaches wherein a microphone can be used to manipulate aspects of a user interface of a virtual reality environment (e.g., “Show me a wrench” to enable or disable objects) (Col. 6, ln. 21-33; Col. 7, ln. 10-27). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the sensors of the system, including the microphone, to manipulate aspects of a user interface of the simulated reality environment in Recker, as taught by Kritzler, to allow the user to control aspects of the simulated reality environment verbally in addition to physically (e.g., using their hands, wherein the motion data is gathered from additional sensors, such as a camera 902, motion sensor 904, and/or body-worn motion sensors).
Regarding claim 15, Recker discloses a method for a productivity facilitating (Fig. 3; [0002]; [0008]; [0048]) including:
displaying a user interface on a mixed reality device (Figs. 2 & 18; [0010]; [0049]; [0055-0057]; [0090]; [0097-0099], displaying a simulated real world task that includes a mixed reality task via a head-mounted device);
sending at least one datum of task execution to the user, wherein the at least one datum of task execution instruction includes one or more types of training ([0008-0010]; [0055]; [0057]; [0090]; [0107]; [0118]; [0127], wherein the mixed reality task, configured by task execution data, may be a real world scenario that a user would encounter in the real world and may be specific to a profession (e.g., performing a surgery) and where feedback is provided based on the user interaction with the simulation, wherein such simulated reality systems are used for training);
sensing movement and audio of a user interacting with the mixed reality device (Figs. 1, 6, 9A, & 18; [0049]; [0054-0055]; [0063]; [0068-0071]; [0097-0099], measuring user responses and/or actions, which are displayed in the simulated reality environment, via one or more sensors, such as a camera 902, motion sensor 904, eye tracker 920, body-worn motion sensors, and microphone);
capturing the environment of the user through a video capture device, wherein the sensed movement of the user as well as the rendering of the captured environment is manipulated by one or more aspects of the user interface (Figs. 1-2 & 18; [0049-0051]; [0056]; [0060]; [0068]; [0070-0071]; [0097-0099]; [0120-0121], wherein the sensed user responses and/or actions by the one or more sensors are measured during a task presented via the simulation and displayed in the simulated reality environment, and where the headset/head-mounted device may include a camera configured to capture an environment within a field of view (e.g., a workbench that includes real world objects, virtual objects, or a combination thereof, as well as the user’s hands in the field of view of the camera while the user is interacting with the objects on the workbench)).
While Recker discloses wherein the one or more sensors include a microphone ([0060]; [0068]; [0071]; [0120-0121], wherein the sensors measure user responses and/or actions during a task presented via the simulation, wherein a microphone picks up the user’s voice during the task), Recker may not further explicitly disclose wherein the sensed audio of the user is used to manipulate one or more aspects of the user interface. However, Kritzler, directed to augmented reality authoring using virtual reality, wherein augmented reality is noted to be useful to a human user when performing training, simulation, maintenance, repair, etc. (Col. 1, ln. 11-42; Col. 2, ln. 48-59), teaches wherein a microphone can be used to manipulate aspects of a user interface of a virtual reality environment (e.g., “Show me a wrench” to enable or disable objects) (Col. 6, ln. 21-33; Col. 7, ln. 10-27). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the sensors of the system, including the microphone, to manipulate aspects of a user interface of the simulated reality environment in Recker, as taught by Kritzler, to allow the user to control aspects of the simulated reality environment verbally in addition to physically (e.g., using their hands, wherein the motion data is gathered from additional sensors, such as a camera 902, motion sensor 904, and/or body-worn motion sensors).
Regarding claim 16, Recker further discloses wherein the mixed reality device provides a three-dimensional rendering of a tool that is manipulated through one or more motions, wherein the one or more motions provides information as to whether the tool is correct for a predefined operation (Fig. 5; [0003]; [0050]; [0066]; [0090]; [0097-0099]; [0111]; [0118]; [0139-0140], wherein the simulated reality experience provided by the mixed reality head-mounted display includes objects that correspond to real world items (e.g., tools), and wherein one or more motions by the user with respect to the objects are analyzed to determine correct selection of a tool for a predefined operation/task (e.g., wherein a task may require use of two or more objects in a particular order, such as tool A to be used before tool B)).
Regarding claim 17, Recker further discloses wherein the mixed reality task may be a real world scenario that a user would encounter in the real world and may be specific to a profession (e.g., performing a surgery) and where feedback is provided based on the user interaction with the simulation, wherein such simulated reality systems are used for training ([0008-0010]; [0090]; [0107]; [0118]; [0127]). While Recker may not further explicitly disclose wherein the one or more types of training is to assist a sterile processing technician, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to configure a mixed reality task system and method as taught by Recker for any profession that may require training, such as a sterile processing technician, to achieve the claimed invention.
Regarding claim 18, Recker further discloses wherein the user interface receives the at least one datum of task execution, wherein the at least one datum of task execution includes audio and visual data (Fig. 18; [0008-0010]; [0055]; [0057]; [0068]; [0070-0071]; [0090]; [0097-0099]; [0107]; [0118]; [0127], wherein the mixed reality task, configured by task execution data, may be a real world scenario that a user would encounter in the real world and may be specific to a profession, and wherein the sensed user responses and/or actions by the one or more sensors are measured during a task presented via the simulation and displayed in the simulated reality environment).
Regarding claim 19, Recker further discloses wherein the audio and visual data is displayed on at least one wearable technology device with at least one partially transparent eye panel, wherein the at least one partially transparent eye panel is configured to present the user interface within a field of view of the user (Fig. 18; [0049]; [0097-0099], wherein the head-mounted device 1810 of the mixed reality hardware platform may include a lens 1815 (partially transparent eye panel)).
Regarding claim 20, Recker further discloses wherein the user interface includes one or more two-dimensional or three-dimensional static, moving, or movable images (Fig. 18; [0097-0099], e.g., a display of the user’s hand touching or hovering over a region of interest associated with a displayed object on a workbench).
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Recker.
Regarding claim 9, Recker further discloses wherein the mixed reality task may be a real world scenario that a user would encounter in the real world and may be specific to a profession (e.g., performing a surgery) and where feedback is provided based on the user interaction with the simulation, wherein such simulated reality systems are used for training ([0008-0010]; [0090]; [0107]; [0118]; [0127]). While Recker may not further explicitly disclose wherein the user, of a specific profession, is a sterile processing technician, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to configure a mixed reality task as taught by Recker for any profession that may require training, such as a sterile processing technician, to achieve the claimed invention.
Regarding claim 10, Recker further discloses wherein the mixed reality device provides a three-dimensional rendering of a tool that is manipulated through one or more motions, wherein the one or more motions provides information as to whether the tool is correct for a predefined operation (Fig. 5; [0003]; [0050]; [0066]; [0090]; [0097-0099]; [0111]; [0118]; [0139-0140], wherein the simulated reality experience provided by the mixed reality head-mounted display includes objects that correspond to real world items (e.g., tools), and wherein one or more motions by the user with respect to the objects are analyzed to determine correct selection of a tool for a predefined operation/task (e.g., wherein a task may require use of two or more objects in a particular order, such as tool A to be used before tool B)).
Claims 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Recker in view of Nguyen et al. (U.S. Pub. 2023/0092938 A1) (hereinafter “Nguyen”).
Regarding claim 11, Recker may not further explicitly disclose wherein the mixed reality device includes instructional videos via the user interface for a variety of role-specific tasks. However, Nguyen, directed to a system and method for presenting data to a user to assist the user in performing a desired task, wherein the user may utilize a device, such as an extended reality device, which may use mixed reality ([0026]), teaches wherein a video tutorial for performing one or more specific tasks (e.g., performing a type of repair) may be output to the user ([0174], wherein the tutorial may be role-specific). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to output tutorial videos for a variety of role-specific tasks, as taught by Nguyen, in the invention of Recker as feedback and/or to further aid in training the user (e.g., when performing new tasks or tasks not recently performed) (Nguyen, [0174]).
Regarding claim 12, Recker may not further explicitly disclose wherein the productivity facilitating system includes a plurality of users. However, Nguyen teaches that limitation ([0046]; [0093]; [0100-0102], wherein the extended reality device and/or processing system enables interactions between various users, such as between multiple remote users that may each be utilizing a respective extended reality device such that the users can interact with one another to complete a task). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to allow a plurality of users to utilize a simulated reality system such as that disclosed in Recker, as taught by Nguyen, in order to allow the users to interact to complete a task (Nguyen, [0046]; [0093]; [0100-0102]).
Regarding claim 13, Recker may not further explicitly disclose the claimed limitations. However, Nguyen teaches wherein a first user interacts with a second user, wherein the first user transmits at least a portion of the environment within the field of view of the mixed reality device of the first user to the mixed reality device of the second user such that the second user views and interacts with the user interface of the first user ([0093]; [0100-0102], where the users interact with one another to complete a task, wherein first output data is presented to a first (local) user via the extended reality device of the first user and second output data is presented to a second user via the extended reality device of the second user). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to allow a plurality of users of a simulated reality system to interact with each other, wherein at least a portion of the environment within the field of view of a first user is presented to a second user such that the second user interacts with the user interface of the first user, as taught by Nguyen, in order to allow the users to interact to complete a task (Nguyen, [0046]; [0093]; [0100-0102]).
Regarding claim 14, Recker may not further explicitly disclose the claimed limitations. However, Nguyen teaches wherein a first user with a mixed reality device interacts with a second user, wherein the second user utilizes a computing device to view one or more images captured by the mixed reality device of the first user ([0030]; [0046]; [0093]; [0100-0102], where the users interact with one another to complete a task, wherein first output data is presented to a first (local) user via the extended reality device of the first user and second output data is presented to a second user via the extended reality device of the second user, wherein the extended reality device (e.g., of the second user) may include a tablet, phone, or other suitable device (computing device)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to allow a plurality of users of a simulated reality system to interact with each other, wherein a computing device of a second user is used to view one or more images captured by a mixed reality device of a first user, as taught by Nguyen, in order to allow the users to interact to complete a task (Nguyen, [0046]; [0093]; [0100-0102]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Pub. 2021/0338366 A1 – This reference teaches a mixed reality system for training medical and non-medical personnel.
U.S. Pub. 2021/0177519 A1 – This reference teaches medical training in a mixed reality environment.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSSA N BRANDLEY whose telephone number is (571)272-4280. The examiner can normally be reached M-F: 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol, can be reached at (571)272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALYSSA N BRANDLEY/Examiner, Art Unit 3715