Prosecution Insights
Last updated: April 19, 2026
Application No. 18/140,743

SIMULATOR FOR SKILL-ORIENTED TRAINING OF A HEALTHCARE PRACTITIONER

Non-Final OA (§101, §102)
Filed: Apr 28, 2023
Examiner: BULLINGTON, ROBERT P
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Vrsim Inc.
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 44% (243 granted / 557 resolved; -26.4% vs TC avg)
Interview Lift: +30.8% (strong; based on resolved cases with interview)
Typical Timeline: 3y 1m avg prosecution; 65 currently pending
Career History: 622 total applications across all art units
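The headline examiner metrics above are simple ratios over the examiner's resolved cases. As a rough illustration, the arithmetic can be sketched in Python, assuming (hypothetically) that "Interview Lift" is the with-interview allow rate minus the overall career allow rate, which is consistent with the 74% and 44% figures shown; the `with_interview` value below is an assumption, not taken from underlying data:

```python
# Sketch of how the dashboard's headline metrics could be derived.
# Only 243 granted / 557 resolved comes from the report; the
# with-interview allow rate (74.4%) is a hypothetical value chosen
# to match the displayed 74% / +30.8% figures.

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(243, 557)      # career allow rate, ~43.6% (shown as 44%)
with_interview = 74.4              # assumed with-interview allow rate
lift = with_interview - career     # interview lift, ~+30.8%

print(f"career allow rate: {career:.1f}%")
print(f"interview lift: {lift:+.1f}%")
```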

Statute-Specific Performance

§101: 35.6% (-4.4% vs TC avg)
§103: 20.0% (-20.0% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 28.6% (-11.4% vs TC avg)
TC averages are estimates • Based on career data from 557 resolved cases

Office Action

§101 §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1 – “Statutory Category Identification”

Claim 1 is directed to “a simulator” (i.e., “a machine”); hence the claims are directed to one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). In other words, Step 1 of the subject-matter eligibility analysis is “Yes.”

Step 2A, Prong 1 – “Abstract Idea Identification”

However, the claims are drawn to the abstract idea of “skill-oriented training of a healthcare task,” either in the form of “certain methods of organizing human activity,” in terms of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or reasonably in the form of “mental processes,” in terms of processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
Regardless, the claims are reasonably understood as either “certain methods of organizing human activity;” and/or “mental processes;” which require the following limitations: Per claim 1: “determine coordinates of a position, an orientation, and a speed and a direction of movement of the one or more controllers in relation to the patient as the operator takes actions to perform the healthcare task …; model the actions taken by the operator to perform the healthcare tasks to determine use of healthcare equipment and supplies and changes in condition of the patient, reaction of the patient, and the used healthcare equipment and supplies in relation to the actions taken; render the patient, the used healthcare equipment and supplies, the condition of the patient, the reaction of the patient, changes to the condition of the patient, changes to the used healthcare equipment and supplies, and sensory guidance as to the performance of the healthcare tasks from the actions taken by the operator in a three-dimensional virtual training environment; and simulate in real-time the three-dimensional virtual training environment depicting the rendered patient, the rendered reaction of the patient, the rendered used healthcare equipment and supplies, the rendered changes to the condition of the patient, the rendered changes to the used healthcare equipment and supplies, and the rendered sensory guidance as the operator performs the healthcare task in the training environment; wherein the rendered patient, the rendered reaction of the patient, the rendered used healthcare equipment and supplies, the rendered changes to the condition of the patient, the rendered changes to the used healthcare equipment and supplies, and the rendered sensory guidance are exhibited in near real-time to the operator within the training environment … to provide in-process correction and reinforcement of preferred performance characteristics as the operator performs the healthcare task; and wherein the 
rendered sensory guidance includes a plurality of visual, audio and tactile indications of performance by the operator as compared to optimal values for performance.” These limitations simply describe a process of data gathering and manipulation, which is partially analogous to “collecting information, analyzing it, and displaying certain results of the collection and analysis” (i.e., Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 U.S.P.Q.2d 1739 (Fed. Cir. 2016)). Hence, these limitations are akin to an abstract idea identified among the non-limiting examples of abstract ideas. In other words, Step 2A, Prong 1 of the subject-matter eligibility analysis is “Yes.”

Step 2A, Prong 2 – “Practical Application”

Furthermore, the Applicant’s claimed elements of “a head-mounted display unit (HMDU),” “one or more controllers,” and “a data processing system” are merely claimed to generally link the use of a judicial exception (e.g., pre-solution activity of data gathering and post-solution activity of presenting data) to (1) a particular technological environment or (2) a field of use, per MPEP § 2106.05(h); and amount to applying the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, per MPEP § 2106.05(f). In other words, the claimed “skill-oriented training of a healthcare task” does not provide a practical application; thus Step 2A, Prong 2 of the subject-matter eligibility analysis is “No.”

Step 2B – “Significantly More”

Likewise, the claims do not include additional elements that, either alone or in combination, are sufficient to amount to significantly more than the judicial exception because, to the extent that, e.g., “a head-mounted display unit (HMDU),” “one or more controllers,” and “a data processing system” are claimed, these are generic, well-known, and conventional data gathering computing elements.
As evidence that these are generic, well-known, and conventional data gathering computing elements that are commercially available, the Applicant’s specification discloses them in a manner indicating that the additional elements are sufficiently well-known that the specification does not need to describe their particulars to satisfy 35 U.S.C. § 112(a), per MPEP § 2106.07(a)(III)(a). As such, this satisfies the Examiner’s evidentiary burden requirement per the Berkheimer memo.

Specifically, the Applicant’s claimed “a head-mounted display unit (HMDU),” as described in para. [0038] of the Applicant’s written description as originally filed, provides the following: “[0038] Referring to FIGS. 1, 2A, 2B, and 3, one or more video cameras 42 and sensors 44 (e.g., tracking sensors), and one or more display devices 46 provided on, for example, a head-mounted display unit (HMDU) 40 worn by the operator 10, cooperates with the one or more controllers 60 and sensors 62 (e.g., tracking sensors) thereof, to provide data and information to a processing system 50.” As such, the Applicant’s “a head-mounted display unit (HMDU)” is reasonably interpreted as a generic, well-known, and conventional data gathering computing element.

Likewise, the Applicant’s claimed “one or more controllers,” as described in para. [0036] of the Applicant’s written description as originally filed, provides the following: “[0036] In one embodiment, the one or more handheld controllers 60 include a Pico Neo 3 controller of Qingdao Pico Technology Co., Ltd. dba Pico Immersive Pte. Ltd (Qingdao, China) (Pico Neo is a registered trademark of Qingdao Pico Technology Co., Ltd.).
In one embodiment, the one or more handheld controllers 60 include an Oculus Quest 2 and/or an Oculus Rift controller of Facebook Technologies, LLC (Menlo Park, California) (Oculus Quest and Oculus Rift are registered trademarks of Facebook Technologies, LLC). In another embodiment, the one or more handheld controllers 60 include a Vive Pro Series controller of HTC Corporation (Taoyuan City Taiwan) (Vive is a registered trademark of HTC Corporation). In still another embodiment, it is within the scope of the present invention for the simulator 20 to be implemented in a controller-free embodiment, for example, where a user's hands and gestures made therewith (e.g., grasping, picking up and moving objects, pinching, swiping, and the like) are identified and tracked (e.g., with cameras and sensors within the virtual healthcare environment 100) rather than actions and movement initiated by the user with a handheld controller in the environment 100.” As such, the Applicant’s “one or more controllers,” is reasonably interpreted as a generic, well-known, and conventional data gathering computing element that is commercially available today. Finally, the Applicant’s claimed “a data processing system,” as described in para. [0045] of the Applicant’s written description as originally filed, provides the following: “[0045] In one embodiment, as illustrated in FIG. 3, a simplified block diagram view of the healthcare training simulator 20, the processing system 50 is a standalone or networked computing device 52 having or operatively coupled to one or more microprocessors (CPU), memory (e.g., internal memory 130 including hard drives, ROM, RAM, and the like), and/or data storage devices 150 (e.g., hard drives, optical storage devices, and the like) as is known in the art. 
The computing device 52 includes one or more input devices 53 such as, for example, a keyboard, mouse or like pointing device, touch screen portions of a display device, ports 58 for receiving data such as, for example, a plug or terminal receiving the wired communication connections 43 and 63 from the sensors 44 and 62 directly or from the tracking system 110, and one or more output devices 54. The output devices 54 include, for example, one or more display devices operative coupled to the computing device 52 to exhibit visual output, such as, for example, the one or more display devices 46 of the HMDU 40 and/or a monitor 56 coupled directly to the computing device 52 or a portable computing processing system (e.g., processing systems 93, described below) such as, for example, a personal digital assistant (PDA), IPAD, tablet, mobile radio telephone, smartphone (e.g., Apple™ iPhone™ device, Google™ Android™ device, etc.), or the like. The one or more output devices 54 also include, for example, one or more speakers 55 operative coupled to the computing device 52 to produce sound for auditory perception by the operator 10 and others. In one embodiment, the output devices 54 exhibit one or more graphical user interfaces (GUIs) 200 (as described below) that may be visually perceived by the operator 10 operating the coating simulator the instructor or certification agent 12, and/or other interested persons such as, for example, other medical trainees, observing and evaluating the operator's 10 performance.” As such, the Applicant’s “a data processing system,” is also reasonably interpreted as a generic, well-known, and conventional data gathering computing element that is commercially available today. 
Therefore, the Applicant’s claimed “a head-mounted display unit (HMDU),” “one or more controllers,” and “a data processing system” are reasonably interpreted as generic, well-known, and conventional data gathering computing elements that provide no details of anything beyond ubiquitous, standard, off-the-shelf equipment and software within modern computing, and they do not provide anything significantly more. Therefore, Step 2B of the subject-matter eligibility analysis is “No.”

In addition, dependent claims 2-20 do not provide a practical application and are insufficient to amount to significantly more than the judicial exception. This is further supported by para. [0094], which provides the following: “[0094] Machine learning techniques, whether deep learning networks or other experiential/observational learning system, can be used to model information in the digital twin 130 and/or leverage the digital twin 130 to analyze and/or predict an outcome of a procedure, such as a surgical operation and/or other protocol execution, for example. Deep learning is a subset of machine learning that uses a set of algorithms to model high-level abstractions in data using a deep graph with multiple processing layers including linear and non-linear transformations. While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify “good” features for analysis. Using a multilayered architecture, machines employing deep learning techniques can process raw data better than machines using conventional machine learning techniques. Examining data for groups of highly correlated values or distinctive themes is facilitated using different layers of evaluation or abstraction.” As such, dependent claims 2-20 are also rejected under 35 U.S.C. § 101, based on their respective dependencies to claim 1.
Therefore, claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject-matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Peterson (US 2019/0087544).

Regarding claim 1, Peterson discloses a simulator for skill-oriented training of a healthcare task, the simulator comprising: a head-mounted display unit (HMDU) wearable by an operator operating the simulator, the HMDU having at least one camera, at least one speaker, at least one display device, and at least one HMDU sensor, the at least one camera, the at least one speaker, and the at least one display device providing visual and audio output to the operator (see para. [0047] In certain examples, a device, such as an optical head-mounted display (e.g., Google Glass, etc.) can be used with augmented reality to identify and quantify items (e.g., instruments, products, etc.) in the surgical field, operating room, etc. For example, such a device can be used to validate items selected for inclusion (e.g., on the cart, with respect to the patient, etc.), items used, items tracked, etc., automatically by sight recognition and recording. The device can be used to pull in scanner details from all participants in a surgery, for example, modeled via the digital twin 130 and verified according to equipment list, surgical protocol, personnel preferences, etc. see para. [0051] In certain examples, an optical head-mounted display (e.g., Google™ Glass, etc.)
can be used to scan and record item such as instruments, instrument trays, disposables, etc., in an operating room, surgical suite, surgical field, etc. As shown in the example of FIG. 3, an optical head-mounted display 300 can include a scanner or other sensor 310 that scans items in its field of view (e.g., scans barcodes, radiofrequency identifiers (RFIDs), visual profile/characteristics, etc.). Item identification, photograph, video feed, etc., can be provided by the scanner 310 to the digital twin 130, for example. The scanner 310 and/or the digital twin 130 can identify and track items within range of the scanner 310, for example. The digital twin 130 can then model the viewed environment and/or objects in the viewed environment based at least in part on input from the scanner 310, for example.); one or more controllers operable by the operator, the one or more controllers each having at least one controller sensor, the at least one controller sensor and the at least one HMDU sensor each cooperating to measure and to output one or more signals representing spatial positioning, angular orientation, speed and direction of movement data of the one or more controllers relative to a patient as the operator performs a healthcare task (see para. [0061] FIG. 7 illustrates an example operating room monitor 700 including a processor 710, a memory 720, an input 730, an output 740, and a surgical materials digital twin 130. The example input 730 can include a sensor 735, for example. The sensor 735 can monitor items, personnel, activity, etc., in an environment 500, 600 such as an operating room 500. see para. [0074] At block 910, the digital twin 130 is updated based on the monitored procedure execution. For example, the object position, time, state, condition, and/or other aspect captured by the sensor 735, optics 300, tablet 410, etc., is provided via the input 730 to be modeled by the digital twin 130. 
A new model can be created and/or an existing model can be updated using the information. For example, the digital twin 130 can include a plurality of models or twins focusing on particular aspects of the environment 500, 600 such as surgical instruments, disposables/implants, patient, surgeon, equipment, etc. Alternatively or in addition, the digital twin 130 can model the overall environment 500, 600.); a data processing system operatively coupled to the HMDU and the one or more controllers, the data processing system including a processor and memory operatively coupled to the processor with a plurality of executable algorithms stored therein (see para. [0053] In certain examples, the optical head-mounted display 300 can work alone and/or in conjunction with an instrument cart, such as a surgical cart 400 shown in the example of FIG. 4. The example surgical cart 400 can include a computing device 410, such as a tablet computer and/or other computing interface to receive input from a user and providing output regarding content of the cart 400, associated procedure/protocol, other user(s) (e.g., patient, healthcare practitioner(s), etc.), instrument usage for a procedure, etc. The computing device 410 can be used to house the surgical digital twin 130, update and/or otherwise communicate with the digital twin 130, store preference card(s), store procedure/protocol information, track protocol compliance, generate analytics, etc. see para. [0061] FIG. 7 illustrates an example operating room monitor 700 including a processor 710, a memory 720, an input 730, an output 740, and a surgical materials digital twin 130. The example input 730 can include a sensor 735, for example. The sensor 735 can monitor items, personnel, activity, etc., in an environment 500, 600 such as an operating room 500. see para. 
[0062] For example, the sensor 735 can detect items on the table(s) 504-508, status of the patient on the patient table 504, position of stand(s) 510-514, pole(s) 516-518, monitor 520, step 522, waste/linen 524, canisters 526, door(s) 528-530, storage 532, etc. As another example, the sensor 735 can detect cart(s) 602-608 and/or item(s) on/in the cart(s) 602-608. The sensor 735 can detect item(s) on/in the sterilizer(s) 612-640, on table(s) 622-632, in the pass-through 642, etc. Object(s) detected by the sensor 735 can be provided as input 730 to be stored in memory 720 and/or processed by the processor 710, for example. The processor 710 (and memory 720) can update the surgical materials digital twin 130 based on the object(s) detected by the sensor 735 and identified by the processor 710, for example.), the processor is configured by the executable algorithms to: determine coordinates of a position, an orientation, and a speed and a direction of movement of the one or more controllers in relation to the patient as the operator takes actions to perform the healthcare task based on the one or more signals output from the at least one HMDU sensor and the at least one controller sensor of each of the one or more controllers (see FIG. 9; see para. [0073] At block 906, the procedure is modeled for the patient using the digital twin 130. For example, based on the identified procedure, the digital twin 130 can model the procedure to facilitate practice for healthcare practitioners to be involved in the procedure, predict staffing and care team make-up associated with the procedure, improve team efficiency, improve patient preparedness, etc. At block 908, procedure execution is monitored. For example, the monitor 700 including the sensor 735, optics 300, tablet 410, etc., can be used to monitor procedure execution by detecting object position, time, state, condition, and/or other aspect to be modeled by the digital twin 130. See para. 
[0074] At block 910, the digital twin 130 is updated based on the monitored procedure execution. For example, the object position, time, state, condition, and/or other aspect captured by the sensor 735, optics 300, tablet 410, etc., is provided via the input 730 to be modeled by the digital twin 130. A new model can be created and/or an existing model can be updated using the information. For example, the digital twin 130 can include a plurality of models or twins focusing on particular aspects of the environment 500, 600 such as surgical instruments, disposables/implants, patient, surgeon, equipment, etc. Alternatively or in addition, the digital twin 130 can model the overall environment 500, 600.); model the actions taken by the operator to perform the healthcare tasks to determine use of healthcare equipment and supplies and changes in condition of the patient, reaction of the patient, and the used healthcare equipment and supplies in relation to the actions taken (see FIG. 9; see para. [0073] At block 906, the procedure is modeled for the patient using the digital twin 130. For example, based on the identified procedure, the digital twin 130 can model the procedure to facilitate practice for healthcare practitioners to be involved in the procedure, predict staffing and care team make-up associated with the procedure, improve team efficiency, improve patient preparedness, etc. At block 908, procedure execution is monitored. For example, the monitor 700 including the sensor 735, optics 300, tablet 410, etc., can be used to monitor procedure execution by detecting object position, time, state, condition, and/or other aspect to be modeled by the digital twin 130. see para. [0074] At block 910, the digital twin 130 is updated based on the monitored procedure execution. 
For example, the object position, time, state, condition, and/or other aspect captured by the sensor 735, optics 300, tablet 410, etc., is provided via the input 730 to be modeled by the digital twin 130. A new model can be created and/or an existing model can be updated using the information. For example, the digital twin 130 can include a plurality of models or twins focusing on particular aspects of the environment 500, 600 such as surgical instruments, disposables/implants, patient, surgeon, equipment, etc. Alternatively or in addition, the digital twin 130 can model the overall environment 500, 600.); render the patient, the used healthcare equipment and supplies, the condition of the patient, the reaction of the patient, changes to the condition of the patient, changes to the used healthcare equipment and supplies, and sensory guidance as to the performance of the healthcare tasks from the actions taken by the operator in a three-dimensional virtual training environment (see FIG. 9; see para. [0073] At block 906, the procedure is modeled for the patient using the digital twin 130. For example, based on the identified procedure, the digital twin 130 can model the procedure to facilitate practice for healthcare practitioners to be involved in the procedure, predict staffing and care team make-up associated with the procedure, improve team efficiency, improve patient preparedness, etc. At block 908, procedure execution is monitored. For example, the monitor 700 including the sensor 735, optics 300, tablet 410, etc., can be used to monitor procedure execution by detecting object position, time, state, condition, and/or other aspect to be modeled by the digital twin 130. see para. [0074] At block 910, the digital twin 130 is updated based on the monitored procedure execution. 
For example, the object position, time, state, condition, and/or other aspect captured by the sensor 735, optics 300, tablet 410, etc., is provided via the input 730 to be modeled by the digital twin 130. A new model can be created and/or an existing model can be updated using the information. For example, the digital twin 130 can include a plurality of models or twins focusing on particular aspects of the environment 500, 600 such as surgical instruments, disposables/implants, patient, surgeon, equipment, etc. Alternatively or in addition, the digital twin 130 can model the overall environment 500, 600. See para. [0075] At block 912, feedback is provided with respect to the procedure. For example, the digital twin 130 can work with the processor 710 and memory 720 to generate an output 740 for the surgeon, patient, hospital information system, etc., to impact conducting of the procedure, post-operative follow-up, rehabilitation plan, subsequent pre-operative care, patient care plan, etc. The output 740 can warn the surgeon, nurse, etc., that an item is in the wrong location, is running low/insufficient for the procedure, etc., for example. The output 740 can provide billing for inventory and/or service, for example, and/or update a central inventory based on item usage during a procedure, for example. See para. [0076] At block 914, periodic redeployment of the updated digital twin 130 is triggered. For example, feedback provided to and/or generated by the digital twin 130 can be used to update a model forming the digital twin 130. When a certain threshold of new data is reached, for example, the digital twin 130 can be retrained, retested, and redeployed to better mimic real-life surgical procedure information including items, instruments, personnel, protocol, etc. 
In certain examples, updated protocol/procedure information, new best practice, new instrument and/or personnel, etc., can be provided to the digital twin 130, resulting in an update and redeployment of the updated digital twin 130. Thus, the digital twin 130 and the monitor 700 can be used to dynamically model, monitor, train, and evolve to support surgery and/or other medical protocol, for example.); and simulate in real-time the three-dimensional virtual training environment depicting the rendered patient, the rendered reaction of the patient, the rendered used healthcare equipment and supplies, the rendered changes to the condition of the patient, the rendered changes to the used healthcare equipment and supplies, and the rendered sensory guidance as the operator performs the healthcare task in the training environment (see para. [0033] In certain examples, obtained images overlaid with sensor data, lab results, etc., can be used in augmented reality (AR) applications when a healthcare practitioner is examining, treating, and/or otherwise caring for the patent 110. Using AR, the digital twin 130 follows the patient's response to the interaction with the healthcare practitioner, for example. See para. [0034] Thus, rather than a generic model, the digital twin 130 is a collection of actual physics-based, anatomically-based, and/or biologically-based models reflecting the patient/protocol/item 110 and his or her associated norms, conditions, etc. In certain examples, three-dimensional (3D) modeling of the patient/protocol/item 110 creates the digital twin 130 for the patient/protocol/item 110. 
The digital twin 130 can be used to view a status of the patient/protocol/item 110 based on input data 120 dynamically provided from a source (e.g., from the patient 110, practitioner, health information system, sensor, etc.).; wherein the rendered patient, the rendered reaction of the patient, the rendered used healthcare equipment and supplies, the rendered changes to the condition of the patient, the rendered changes to the used healthcare equipment and supplies, and the rendered sensory guidance are exhibited in near real-time to the operator within the training environment on the at least one display device of the HMDU to provide in-process correction and reinforcement of preferred performance characteristics as the operator performs the healthcare task (see para. [0033] In certain examples, obtained images overlaid with sensor data, lab results, etc., can be used in augmented reality (AR) applications when a healthcare practitioner is examining, treating, and/or otherwise caring for the patent 110. Using AR, the digital twin 130 follows the patient's response to the interaction with the healthcare practitioner, for example. See para. [0034] Thus, rather than a generic model, the digital twin 130 is a collection of actual physics-based, anatomically-based, and/or biologically-based models reflecting the patient/protocol/item 110 and his or her associated norms, conditions, etc. In certain examples, three-dimensional (3D) modeling of the patient/protocol/item 110 creates the digital twin 130 for the patient/protocol/item 110. The digital twin 130 can be used to view a status of the patient/protocol/item 110 based on input data 120 dynamically provided from a source (e.g., from the patient 110, practitioner, health information system, sensor, etc.). See para. [0076] At block 914, periodic redeployment of the updated digital twin 130 is triggered. 
For example, feedback provided to and/or generated by the digital twin 130 can be used to update a model forming the digital twin 130. When a certain threshold of new data is reached, for example, the digital twin 130 can be retrained, retested, and redeployed to better mimic real-life surgical procedure information including items, instruments, personnel, protocol, etc. In certain examples, updated protocol/procedure information, new best practice, new instrument and/or personnel, etc., can be provided to the digital twin 130, resulting in an update and redeployment of the updated digital twin 130. Thus, the digital twin 130 and the monitor 700 can be used to dynamically model, monitor, train, and evolve to support surgery and/or other medical protocol, for example.); and wherein the rendered sensory guidance includes a plurality of visual, audio and tactile indications of performance by the operator as compared to optimal values for performance (see para. [0122] Example output 1520 can provide a display generated by processor 1530 for visual illustration on a monitor or the like. The display can be in the form of a network interface or graphic user interface (GUI) to exchange data, instructions, or illustrations on a computing device via communication interface 1550, for example. Example output 1520 may include a monitor (e.g., liquid crystal display (LCD), plasma display, cathode ray tube (CRT), etc.), light emitting diodes (LEDs), a touch-screen, a printer, a speaker, or other conventional display device or combination thereof.). Regarding claims 2 and 3, Peterson discloses an avatar or portion thereof, manipulated and directed by the operator with the one or more controllers to take the actions to perform the healthcare task in the three-dimensional virtual training environment (see para. 
[0049] In certain examples, the digital twin 130 can be used to model a preference card and/or other procedure/protocol information for a healthcare user, such as a surgeon, nurse, assistant, technician, administrator, etc. As shown in the example implementation 200 of FIG. 2, surgery materials and/or procedure/protocol information 210 in the real space 115 can be represented by the digital twin 130 in the virtual space 135. Information 220, such as information identifying case/procedure-specific materials, patient data, protocol, etc., can be provided from the surgery materials 210 in the real space 115 to the digital twin 130 in the virtual space 135. The digital twin 130 and/or its virtual space 135 provide information 240 back to the real space 115, for example. The digital twin 130 and/or virtual space 135 can also provide information to one or more virtual sub-spaces 150, 152, 154. As shown in the example of FIG. 2, the virtual space 135 can include and/or be associated with one or more virtual sub-spaces 150, 152, 154, which can be used to model one or more parts of the digital twin 130 and/or digital “sub-twins” modeling subsystems/subparts of the overall digital twin 130. For example, sub-spaces 150, 152, 154 can be used to separately model surgical protocol information, patient information, surgical instruments, pre-operative tasks, post-operative instructions, image information, laboratory information, prescription information, etc. Using the plurality of sources of information, the surgery/operation digital twin 130 can be configured, trained, populated, etc., with patient medical data, exam records, procedure/protocol information, lab test results, prescription information, care plan information, image data, clinical notes, sensor data, location data, healthcare practitioner and/or patient preferences, pre-operative and/or post-operative tasks/information, etc. See para. 
[0050] When a user (e.g., patient, patient family member (e.g., parent, spouse, sibling, child, etc.), healthcare practitioner (e.g., doctor, nurse, technician, administrator, etc.), other provider, payer, etc.) and/or program, device, system, etc., inputs data in a system such as a picture archiving and communication system (PACS), radiology information system (RIS), electronic medical record system (EMR), laboratory information system (LIS), cardiovascular information system (CVIS), hospital information system (HIS), population health management system (PHM) etc., that information can be reflected in the digital twin 130. Thus, the digital twin 130 can serve as an overall model or avatar of the surgery materials 210 and operating environment 115 in which the surgery materials 210 are to be used and can also model particular aspects of the surgery and/or other procedure, patient care, etc., corresponding to particular data source(s). Data can be added to and/or otherwise used to update the digital twin 130 via manual data entry and/or wired/wireless (e.g., WiFi™, Bluetooth™, Near Field Communication (NFC), radio frequency, etc.) data communication, etc., from a respective system/data source, for example. Data input to the digital twin 130 can be processed by an ingestion engine and/or other processor to normalize the information and provide governance and/or management rules, criteria, etc., to the information. In addition to building the digital twin 130, some or all information can also be aggregated to model user preference, health analytics, management, etc.). Regarding claim 4, Peterson discloses wherein the operator further includes a plurality of operators undertaking the skill-oriented training as a group cooperating to perform the healthcare task within the three-dimensional virtual training environment (see para. [0073] At block 906, the procedure is modeled for the patient using the digital twin 130. 
For example, based on the identified procedure, the digital twin 130 can model the procedure to facilitate practice for healthcare practitioners to be involved in the procedure, predict staffing and care team make-up associated with the procedure, improve team efficiency, improve patient preparedness, etc. At block 908, procedure execution is monitored. For example, the monitor 700 including the sensor 735, optics 300, tablet 410, etc., can be used to monitor procedure execution by detecting object position, time, state, condition, and/or other aspect to be modeled by the digital twin 130. See para. [0081] At block 1104, the update is processed to determine its impact on the modeled preference card of the digital twin 130. For example, a preference card can provide a logical set of instructions for item and personnel positioning for a surgical procedure, equipment and/or other supplies to be used in the surgical procedure, staffing, schedule, etc., for a particular surgeon, other healthcare practitioner, surgical team, etc. The digital twin 130 can model one or more preference cards including to update the preference card(s), simulate using the preference card(s), predict using the preference card(s), train using the preference card(s), analyze using the preference card(s), etc. FIG. 12 illustrates an example preference card 1200 for an arthroscopic orthopedic procedure.). Regarding claim 5, Peterson discloses wherein the operator is one of a medical professional and an individual providing home health aid (see para. [0043] In certain examples, instead of or in addition to the patient/protocol/item 110, the digital twin 130 can be used to model a robot, such as a robot to assist in healthcare monitoring, patient care, care plan execution, surgery, patient follow-up, etc. As with the patient/protocol/item 110, the digital twin 130 can be used to model behavior, programming, usage, etc., for a healthcare robot, for example. 
The robot can be a home healthcare robot to assist in patient monitoring and in-home patient care, for example. The robot can be programmed for a particular patient condition, care plan, protocol, etc., and the digital twin 130 can model execution of such a plan/protocol, simulate impact on the patient condition, predict next step(s) in patient care, suggest next action(s) to facilitate patient compliance, etc. See para. [0049] In certain examples, the digital twin 130 can be used to model a preference card and/or other procedure/protocol information for a healthcare user, such as a surgeon, nurse, assistant, technician, administrator, etc. As shown in the example implementation 200 of FIG. 2, surgery materials and/or procedure/protocol information 210 in the real space 115 can be represented by the digital twin 130 in the virtual space 135. Information 220, such as information identifying case/procedure-specific materials, patient data, protocol, etc., can be provided from the surgery materials 210 in the real space 115 to the digital twin 130 in the virtual space 135. The digital twin 130 and/or its virtual space 135 provide information 240 back to the real space 115, for example. The digital twin 130 and/or virtual space 135 can also provide information to one or more virtual sub-spaces 150, 152, 154. As shown in the example of FIG. 2, the virtual space 135 can include and/or be associated with one or more virtual sub-spaces 150, 152, 154, which can be used to model one or more parts of the digital twin 130 and/or digital “sub-twins” modeling subsystems/subparts of the overall digital twin 130. For example, sub-spaces 150, 152, 154 can be used to separately model surgical protocol information, patient information, surgical instruments, pre-operative tasks, post-operative instructions, image information, laboratory information, prescription information, etc.
Using the plurality of sources of information, the surgery/operation digital twin 130 can be configured, trained, populated, etc., with patient medical data, exam records, procedure/protocol information, lab test results, prescription information, care plan information, image data, clinical notes, sensor data, location data, healthcare practitioner and/or patient preferences, pre-operative and/or post-operative tasks/information, etc. See para. [0050] When a user (e.g., patient, patient family member (e.g., parent, spouse, sibling, child, etc.), healthcare practitioner (e.g., doctor, nurse, technician, administrator, etc.), other provider, payer, etc.) and/or program, device, system, etc., inputs data in a system such as a picture archiving and communication system (PACS), radiology information system (RIS), electronic medical record system (EMR), laboratory information system (LIS), cardiovascular information system (CVIS), hospital information system (HIS), population health management system (PHM) etc., that information can be reflected in the digital twin 130. Thus, the digital twin 130 can serve as an overall model or avatar of the surgery materials 210 and operating environment 115 in which the surgery materials 210 are to be used and can also model particular aspects of the surgery and/or other procedure, patient care, etc., corresponding to particular data source(s).). Regarding claim 6, Peterson discloses wherein the medical professional includes at least one of an emergency medical technician (EMT), a licensed practical nurse (LPN), and a certified nursing assistant, nurse's aid, or a patient care assistant referred to herein as a CNA (see para. [0050] When a user (e.g., patient, patient family member (e.g., parent, spouse, sibling, child, etc.), healthcare practitioner (e.g., doctor, nurse, technician, administrator, etc.), other provider, payer, etc.) and/or program, device, system, etc.).
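For readers less familiar with the digital-twin mechanics the examiner cites here, the multi-source ingestion described in Peterson's para. [0050] (records from systems such as a PACS, RIS, or EMR pass through an ingestion/normalization step before being reflected in the twin) can be sketched roughly as follows. This is an illustrative sketch only; `TwinModel`, `normalize`, and the field conventions are hypothetical names, not the reference's actual API.

```python
# Hedged sketch of multi-source ingestion per para. [0050]: each record is
# normalized (here: tagged with its source system, keys lowercased) before
# being reflected in the twin. All names are illustrative assumptions.
from typing import Any


def normalize(source: str, record: dict[str, Any]) -> dict[str, Any]:
    """Tag a record with its originating system and normalize its keys."""
    return {"source": source, **{k.lower(): v for k, v in record.items()}}


class TwinModel:
    """Minimal stand-in for the aggregate digital-twin data model."""

    def __init__(self) -> None:
        self.records: list[dict[str, Any]] = []

    def ingest(self, source: str, record: dict[str, Any]) -> None:
        # The "ingestion engine" step: normalize, then reflect in the twin.
        self.records.append(normalize(source, record))


twin = TwinModel()
twin.ingest("EMR", {"Allergies": ["penicillin"]})
twin.ingest("LIS", {"HbA1c": 6.1})
print(len(twin.records))  # 2
```

The point of the sketch is only that normalization and source tagging happen on the way in, so heterogeneous hospital systems can feed one aggregate model.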
Regarding claim 7, Peterson discloses wherein a path of travel of the operator performing the healthcare tasks is modeled, based on at least one of a position, orientation, speed and direction of movement of the HMDU and the one or more controllers (see para. [0073] At block 906, the procedure is modeled for the patient using the digital twin 130. For example, based on the identified procedure, the digital twin 130 can model the procedure to facilitate practice for healthcare practitioners to be involved in the procedure, predict staffing and care team make-up associated with the procedure, improve team efficiency, improve patient preparedness, etc. At block 908, procedure execution is monitored. For example, the monitor 700 including the sensor 735, optics 300, tablet 410, etc., can be used to monitor procedure execution by detecting object position, time, state, condition, and/or other aspect to be modeled by the digital twin 130. See para. [0074] At block 910, the digital twin 130 is updated based on the monitored procedure execution. For example, the object position, time, state, condition, and/or other aspect captured by the sensor 735, optics 300, tablet 410, etc., is provided via the input 730 to be modeled by the digital twin 130. A new model can be created and/or an existing model can be updated using the information. For example, the digital twin 130 can include a plurality of models or twins focusing on particular aspects of the environment 500, 600 such as surgical instruments, disposables/implants, patient, surgeon, equipment, etc. Alternatively or in addition, the digital twin 130 can model the overall environment 500, 600.). Regarding claim 8, Peterson discloses wherein the visual indications of performance include an indication, instruction, and/or guidance of the optimal values for preferred performance of the healthcare task currently being performed by the operator (See para.
[0075] At block 912, feedback is provided with respect to the procedure. For example, the digital twin 130 can work with the processor 710 and memory 720 to generate an output 740 for the surgeon, patient, hospital information system, etc., to impact conducting of the procedure, post-operative follow-up, rehabilitation plan, subsequent pre-operative care, patient care plan, etc. The output 740 can warn the surgeon, nurse, etc., that an item is in the wrong location, is running low/insufficient for the procedure, etc., for example. The output 740 can provide billing for inventory and/or service, for example, and/or update a central inventory based on item usage during a procedure, for example. See para. [0076] At block 914, periodic redeployment of the updated digital twin 130 is triggered. For example, feedback provided to and/or generated by the digital twin 130 can be used to update a model forming the digital twin 130. When a certain threshold of new data is reached, for example, the digital twin 130 can be retrained, retested, and redeployed to better mimic real-life surgical procedure information including items, instruments, personnel, protocol, etc. In certain examples, updated protocol/procedure information, new best practice, new instrument and/or personnel, etc., can be provided to the digital twin 130, resulting in an update and redeployment of the updated digital twin 130. Thus, the digital twin 130 and the monitor 700 can be used to dynamically model, monitor, train, and evolve to support surgery and/or other medical protocol, for example.). Regarding claims 9 and 10, Peterson discloses wherein the audio indications of performance include an audio tone output by the at least one speaker of the HMDU (see para. [0122] Example output 1520 can provide a display generated by processor 1530 for visual illustration on a monitor or the like. 
The display can be in the form of a network interface or graphic user interface (GUI) to exchange data, instructions, or illustrations on a computing device via communication interface 1550, for example. Example output 1520 may include a monitor (e.g., liquid crystal display (LCD), plasma display, cathode ray tube (CRT), etc.), light emitting diodes (LEDs), a touch-screen, a printer, a speaker, or other conventional display device or combination thereof).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT P. BULLINGTON whose telephone number is (313) 446-4841. The examiner can normally be reached on Monday through Friday from 8 A.M. to 4 P.M. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Peter Vasat, can be reached on (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free).

/Robert P Bullington, Esq./
Primary Examiner, Art Unit 3715
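One mechanism the Office Action leans on repeatedly (paras. [0076] and the claim 8 mapping) is the retrain-on-threshold cycle: feedback accumulates against the digital twin, and once a threshold of new data is reached the twin is retrained, retested, and redeployed. A minimal sketch of that loop, with `DigitalTwin` and `RETRAIN_THRESHOLD` as hypothetical names rather than anything from the reference:

```python
# Illustrative sketch (not from the application) of the "threshold of new
# data triggers retrain/redeploy" cycle described in para. [0076].
RETRAIN_THRESHOLD = 100  # assumed number of new observations per cycle


class DigitalTwin:
    def __init__(self) -> None:
        self.version = 1      # deployed model version
        self._pending = []    # feedback not yet trained on

    def add_observation(self, record: dict) -> None:
        """Accumulate feedback; retrain and redeploy once the threshold is hit."""
        self._pending.append(record)
        if len(self._pending) >= RETRAIN_THRESHOLD:
            self._retrain_and_redeploy()

    def _retrain_and_redeploy(self) -> None:
        # Stand-in for the retrain/retest/redeploy steps in the quoted text.
        self._pending.clear()
        self.version += 1


twin = DigitalTwin()
for i in range(250):
    twin.add_observation({"obs": i})
print(twin.version)  # threshold crossed twice over 250 observations -> 3
```

The design point is that redeployment is periodic and data-driven rather than continuous: the model only changes once enough new evidence has accumulated.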

Prosecution Timeline

Apr 28, 2023
Application Filed
Oct 07, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594463
METHOD, DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR ESTIMATING INFORMATION ON GOLF SWING
2y 5m to grant Granted Apr 07, 2026
Patent 12597367
Hysterectomy Model
2y 5m to grant Granted Apr 07, 2026
Patent 12553690
SHELL SIMULATED SHOOTING SIMULATION SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12527991
PHYSICAL TRAINING SYSTEM WITH MACHINE LEARNING-BASED TRAINING PROGRAMS
2y 5m to grant Granted Jan 20, 2026
Patent 12530988
MANNEQUIN FOR CARDIOPULMONARY RESUSCITATION TRAINING
2y 5m to grant Granted Jan 20, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
74%
With Interview (+30.8%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
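The headline projections above appear to follow directly from the examiner's career counts: 243 grants out of 557 resolved cases gives the 44% baseline, and the "With Interview" figure matches adding the 30.8-point interview lift. The additive-lift combination rule is an assumption about how the tool derives the figure, not a documented formula:

```python
# Back-of-the-envelope reproduction of the projection figures shown above.
# The counts (243 granted / 557 resolved, +30.8 pp lift) come from the page;
# treating the lift as additive percentage points is an assumption.

def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: granted / resolved cases."""
    return granted / resolved


def with_interview(base: float, lift_pp: float) -> float:
    """Apply an additive interview lift, capped at 100%."""
    return min(base + lift_pp, 1.0)


base = grant_probability(243, 557)      # baseline allow rate
boosted = with_interview(base, 0.308)   # +30.8 percentage points

print(f"{base:.0%}")     # 44%
print(f"{boosted:.0%}")  # 74%
```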
