DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to patent application 18/397,164, originally filed on December 27, 2023. Claims 1-6 are presented for examination. Claim 1 is independent.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 on February 2, 2024. This application claims foreign priority of KR10-2023-0187711 (Republic of Korea), filed December 20, 2023.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“an evaluator simulation data acquisition unit” in claim 1.
“an evaluation index table generation unit” in claims 1 and 3-5.
“a subject simulation data acquisition unit” in claim 1.
“a virtual surgery evaluation unit” in claim 1.
Instant application publication paragraph [0043] describes each of the above-listed units as included in the virtual surgery evaluation apparatus. The apparatus is described in paragraph [0019] as including first and second mixed reality (MR) devices that may be mainly implemented in the form of headsets or glasses. Paragraph [0045] states that the units can be software, hardware, or a combination of software and hardware. However, since the disclosure describes no instances where the units represent hardware, the units listed above will be interpreted as software units operating as part of the virtual surgery evaluation apparatus.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is directed to “an apparatus” (i.e., a machine); hence, the claims are directed to one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). In other words, Step 1 of the subject-matter eligibility analysis is “Yes.”
However, the claims are drawn to the abstract idea of “evaluating a virtual surgery,” reasonably in the form of “mental processes,” i.e., processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion) that are “performed on a computer” (per MPEP § 2106(III)(C), “A Claim That Requires a Computer May Still Recite a Mental Process”).
Specifically, the claims are reasonably understood as reciting “mental processes” in the following limitations:
“obtaining an evaluator's evaluator virtual surgery simulation data…
…extracting a preset evaluator-used surgical instrument parameter for each surgical instrument used by the evaluator based on the evaluator virtual surgery simulation data, and generating an evaluation index table based on the used surgical instrument parameter;
…obtaining a subject's subject virtual surgery simulation data…
…extracting a preset subject-used surgical instrument parameter for each surgical instrument used by the subject based on the subject virtual surgery simulation data, and comparing the evaluation index table with the subject-used surgical instrument parameter to evaluate a subject's skill level.”
These limitations simply describe a process of data gathering and manipulation, which is analogous to “collecting information, analyzing it, and displaying certain results of the collection and analysis” (see Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 119 U.S.P.Q.2d 1739 (Fed. Cir. 2016)). Hence, these limitations are akin to an abstract idea that has been identified among the non-limiting examples of abstract ideas. In other words, Step 2A, Prong 1 of the subject-matter eligibility analysis is “Yes.”
Furthermore, the claims do not include additional elements that, either alone or in combination, are sufficient to integrate the judicial exception into a practical application. To the extent that, e.g., “an apparatus,” “a first MR device,” “a second MR device,” “an evaluator simulation data acquisition unit,” “an evaluation index table generation unit,” “a subject simulation data acquisition unit,” and “a virtual surgery evaluation unit” are claimed, these merely add insignificant extra-solution activity to the judicial exception (e.g., data gathering) and/or do no more than generally link the use of a judicial exception to a particular technological environment or field of use. In other words, the claimed “evaluating a virtual surgery” does not provide a practical application; thus Step 2A, Prong 2 of the subject-matter eligibility analysis is “No.”
Likewise, the claims do not include additional elements that, either alone or in combination, are sufficient to amount to significantly more than the judicial exception because, to the extent that, e.g., “an apparatus,” “a first MR device,” “a second MR device,” “an evaluator simulation data acquisition unit,” “an evaluation index table generation unit,” “a subject simulation data acquisition unit,” and “a virtual surgery evaluation unit” are claimed, these are all generic, well-known, and conventional computing elements. As evidence that these are generic, well-known, and conventional computing elements, Applicant’s specification discloses them in a manner indicating that the additional elements are sufficiently well-known that the specification need not describe their particulars to satisfy 35 U.S.C. § 112(a), per MPEP § 2106.07(a), subsection III(A), which satisfies the Examiner’s evidentiary burden under the Berkheimer memo.
Specifically, the Applicant’s claimed “apparatus” along with the “an evaluator simulation data acquisition unit,” “an evaluation index table generation unit,” “a subject simulation data acquisition unit,” and “a virtual surgery evaluation unit” are described in instant application publication paragraph [0045] as follows: “Each component of the virtual surgery evaluation apparatus 130 shown in FIG. 2 may refer to a unit in which at least one function or operation is processed, and may be implemented as a software module, a hardware module, or a combination of software and hardware.”
The “a first MR device” and “a second MR device” are described in instant application publication paragraph [0019] as being “mainly implemented in the form of headsets or glasses.”
These elements are reasonably interpreted as being generic computers or generic computer components, which provide no details of anything beyond ubiquitous standard equipment. As such, the claimed limitations are reasonably understood as not providing anything significantly more than the judicial exception. Therefore, Step 2B of the subject-matter eligibility analysis is “No.”
In addition, dependent claims 2-6 neither provide a practical application nor amount to significantly more than the judicial exception. As such, dependent claims 2-6 are also rejected under 35 U.S.C. § 101 based on their dependency on independent claim 1.
Therefore, claims 1-6 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (hereinafter “Qiu,” US 2020/0367970) in view of Meess et al. (hereinafter “Meess,” US 2018/0130376).
Regarding claim 1, Qiu discloses an apparatus for evaluating a virtual surgery, the apparatus comprising:
…
an evaluation index table generation unit for extracting a preset evaluator-used surgical instrument parameter for each surgical instrument used by the evaluator based on the evaluator virtual surgery simulation data, and generating an evaluation index table based on the used surgical instrument parameter (Qiu [0153], “Surgical planning data may be parametrically defined, which includes at least one of entry and target points, line definitions, and segmentations (e.g., manual, semi-automatic, or automatic from contouring over patient images), which can lead to segmented surface models that are representations of the anatomy and an underlying pathology. Instrument representations may be generated through 3D modelling or CAD, or may be reversed-engineered through 3D modelling and CAD from a reference surface scan. The data models 152 used in AR can also be used to provide visual feedback where the spatial and temporal context is better conveyed by the alignment of the visual feedback to physical space. Furthermore, tracked instruments provide real-time qualitative (visual) and quantitative feedback (metrics) with respect to the surgical plan.”);
a subject simulation data acquisition unit for obtaining a subject's subject virtual surgery simulation data from a second MR device (Qiu [0099], “systems and methods for multi-client deployment of augmented reality (AR) instrument tracking which may be used in the context of at least one of surgical planning, intervention, guidance, and education. The example embodiments are not necessarily limited to multi-client deployment of AR, but may also be applicable to mixed reality (MR), virtual reality (VR), augmented virtuality (AV), and similar modes of technology.”; also Qiu [0202-0204], “the AR system 100 allows real-time guided AR intervention (e.g., surgery) using metrics. Referring now to FIG. 46, shown therein is a flow chart of an example embodiment of a method of guiding AR intervention 4600 in the AR system 100 of FIG. 1A. Method 4600 provides steps (which may or may not occur in an order, and some of which may be processed concurrently) that may be carried out in whole or in part to guide AR intervention using the server 110, the primary client device 170, and a replicate client device 170a. The primary client device 170 and the replicate client device 170a each have their own processors and input devices that can generate real-time input data… At 4610, the primary client device 170 receives model sets, an intervention plan having an intervention field, and session information about a session related to the AR intervention from the server 110… At 4615, the primary client device 170 receives real-time input data from the input device of the primary client device 170. The real-time input data may include tracked input device information such as pose and position, as well as video and sound if the first input device has that capability. The input device may include an instrument (e.g., an osteotome or scalpel) and a tracker camera tracking the instrument and providing the pose/orientation data for the instrument.”); and
a virtual surgery evaluation unit for extracting a preset subject-used surgical instrument parameter for each surgical instrument used by the subject based on the subject virtual surgery simulation data, and comparing the evaluation index table with the subject-used surgical instrument parameter to evaluate a subject's skill level (Qiu [0104], “the system provides a 3D comparison of a user's actual movements to a stored “surgical plan”. If a user makes the exact same movements, for example, as the pre-defined surgical plan, the system may provide the user with a very high score.”).
Qiu does not explicitly teach an evaluator simulation data acquisition unit for obtaining an evaluator's evaluator virtual surgery simulation data from a first MR device.
Qiu does disclose that there is a surgical plan that the subject is compared against (Qiu [0104], “the system provides a 3D comparison of a user's actual movements to a stored ‘surgical plan’”), but Qiu does not disclose how the surgical plan was created.
However, Meess discloses an evaluator simulation data acquisition unit for obtaining an evaluator's evaluator virtual surgery simulation data from a first MR device (Meess [0087], “The upper and lower thresholds or preferred variations can be based on the motions of an expert welder, based computer modeling, testing of similar prior welds, etc. For example, when a welder performs a weld (e.g., expert welder, instructor, a trainee, etc.), the position, orientation and movement of welding tool 460 of the welder and welding process parameters such as welding voltage, welding current, wire feed speed, etc. are recorded. After completing the weld, the welder can select an appropriate menu item that “clones” the procedure. The “cloned” procedure is then stored and can serve as a reference for future welding procedures.”).
Meess is analogous to Qiu, as both are drawn to the art of tool training. It would have been obvious to try, for one of ordinary skill in the art at the time of filing, to modify the apparatus taught by Qiu to include an evaluator simulation data acquisition unit for obtaining an evaluator's evaluator virtual surgery simulation data from a first MR device, as taught by Meess, in order to train people in the use of tools (Meess [0064]). Doing so is a predictable solution that one of ordinary skill in the art could have pursued with a reasonable expectation of success.
Regarding claim 4, Qiu in view of Meess discloses wherein the evaluation index table generation unit divides the evaluator virtual surgery simulation data into a plurality of steps, and extracts the evaluator-used surgical instrument parameter comprising at least one or more of a surgical instrument's entry position (center point), entry angle, entry direction, speed, delay time, and action for each surgical instrument used by the evaluator in each step (Qiu [0194], “The metrics module 188 contains software code for evaluators that may be used on static or dynamic data to generate quantitative output for feedback and guidance in virtual walkthroughs, simulations, or live cases. Real-time data may be live from connected devices or may be recorded from the devices. Examples of real-time data include position and orientation of tracked surgical instruments (e.g., needle tip position and orientation, plane of cutting saw or osteotome, drill tip position and orientation, orientation and position of surgical plates/screws, and depth of cut or movement)”).
Regarding claim 5, Qiu in view of Meess discloses wherein the evaluation index table generation unit generates the evaluation index table based on an average value for at least one or more of a surgical instrument's entry position (center point), entry angle, entry direction, speed, delay time, and action for each surgical instrument used by the evaluator in each step (Qiu [0329], “For needle or geometric resection procedures, evaluation of execution with respect to the plan includes time, distance, and angles (average and variance) to the planned trajectory or cut plane, undershoot/overshoot at target or depth, and jitter.”; also Qiu [0331], “Tracked tools, maneuvers, and tasks performed by an individual may be evaluated against averages across different skill groups and individual performance statistics”).
Regarding claim 6, Qiu in view of Meess discloses wherein the second MR device enters a skill level evaluation mode in a virtual environment by the subject's operation or command, outputs a target surgery name for evaluating a skill level and a virtual object corresponding to the surgical instrument, allows the surgical instrument to be gripped using a virtual hand gesture, and generates the subject virtual surgery simulation data when a corresponding surgery is completed after allowing each surgical procedure to be performed by hand gesture according to a simulation action of the surgical instrument while an anatomical structure of a patient corresponding to the surgery name is visualized (Qiu [0104], “the system provides a 3D comparison of a user's actual movements to a stored “surgical plan”. If a user makes the exact same movements, for example, as the pre-defined surgical plan, the system may provide the user with a very high score.”; also Qiu [0329], “Plans from the plans records 156 for procedures may also be used to score performances under tool tracking, either virtually or in combination with a physical model”; also Qiu [0330], “The client device 170 evaluates the individuals on how well they perform tasks. The application 172 produces and stores metrics that are generated by metrics module 188 using data from tracked instruments and/or tracked hands (i.e., the user's tracked hand movements) for score assessment. The application 172 displays the metrics so that the individuals can compare their own statistics across attempts as well as against the population.”).
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu in view of Meess, and in further view of Abe et al. (hereinafter “Abe,” US 2024/0282217).
Regarding claim 2, Qiu in view of Meess does not explicitly teach wherein the first MR device enters an evaluation index table generation mode in a virtual environment by the evaluator's operation or command, outputs a target surgery name for generating the evaluation index table and a virtual object corresponding to a surgical instrument, allows the surgical instrument to be gripped using a virtual hand gesture, and generates the evaluator virtual surgery simulation data when a corresponding surgery is completed after allowing each surgical procedure to be performed by hand gesture according to a simulation action of the surgical instrument while an anatomical structure of a patient corresponding to the surgery name is visualized.
However, Abe discloses wherein the first MR device enters an evaluation index table generation mode in a virtual environment by the evaluator's operation or command, outputs a target surgery name for generating the evaluation index table and a virtual object corresponding to a surgical instrument, allows the surgical instrument to be gripped using a virtual hand gesture, and generates the evaluator virtual surgery simulation data when a corresponding surgery is completed after allowing each surgical procedure to be performed by hand gesture according to a simulation action of the surgical instrument while an anatomical structure of a patient corresponding to the surgery name is visualized (see Abe Fig. 2; also Abe [0006], “One aspect of the present disclosure provides a work training support system. The work training support system includes: a detector detecting a movement of a tool during a first period, wherein the first period is a period in which an instructor performs work using the tool; a controller programmed to generate a first image using a detection result of the detector during the first period, wherein the first image represents the movement of the tool during the first period; and a head mounted display worn on a head of a trainee, wherein the head mounted display displays the first image in a field of view of the trainee during a second period, wherein the second period is a period in which the trainee performs the work using the tool.”).
Abe is analogous to Qiu in view of Meess, as both are drawn to the art of tool training. It would have been obvious to try, for one of ordinary skill in the art at the time of filing, to modify the apparatus taught by Qiu in view of Meess to include wherein the first MR device enters an evaluation index table generation mode in a virtual environment by the evaluator's operation or command, outputs a target surgery name for generating the evaluation index table and a virtual object corresponding to a surgical instrument, allows the surgical instrument to be gripped using a virtual hand gesture, and generates the evaluator virtual surgery simulation data when a corresponding surgery is completed after allowing each surgical procedure to be performed by hand gesture according to a simulation action of the surgical instrument while an anatomical structure of a patient corresponding to the surgery name is visualized, as taught by Abe, in order to efficiently increase the proficiency of the trainee (Abe [0005]). Doing so is a predictable solution that one of ordinary skill in the art could have pursued with a reasonable expectation of success.
Regarding claim 3, Qiu in view of Meess does not explicitly teach wherein the evaluation index table generation unit generates the evaluation index table based on the used surgical instrument parameter by recognizing that data for generating an evaluation table for a virtual surgery is accumulated when a corresponding surgery is repeated by a preset number of times (n times) after allowing each surgical procedure to be performed on a simulation action for the target surgery name which is identical to the operation or command of the evaluator from the first MR device.
However, Abe discloses wherein the evaluation index table generation unit generates the evaluation index table based on the used surgical instrument parameter by recognizing that data for generating an evaluation table for a virtual surgery is accumulated when a corresponding surgery is repeated by a preset number of times (n times) after allowing each surgical procedure to be performed on a simulation action for the target surgery name which is identical to the operation or command of the evaluator from the first MR device (see Abe Fig. 2; also Abe [0006], “One aspect of the present disclosure provides a work training support system. The work training support system includes: a detector detecting a movement of a tool during a first period, wherein the first period is a period in which an instructor performs work using the tool; a controller programmed to generate a first image using a detection result of the detector during the first period, wherein the first image represents the movement of the tool during the first period; and a head mounted display worn on a head of a trainee, wherein the head mounted display displays the first image in a field of view of the trainee during a second period, wherein the second period is a period in which the trainee performs the work using the tool,” wherein the preset number of times may be 1. In Abe, the procedure is performed once).
Abe is analogous to Qiu in view of Meess, as both are drawn to the art of tool training. It would have been obvious to try, for one of ordinary skill in the art at the time of filing, to modify the apparatus taught by Qiu in view of Meess to include wherein the evaluation index table generation unit generates the evaluation index table based on the used surgical instrument parameter by recognizing that data for generating an evaluation table for a virtual surgery is accumulated when a corresponding surgery is repeated by a preset number of times (n times) after allowing each surgical procedure to be performed on a simulation action for the target surgery name which is identical to the operation or command of the evaluator from the first MR device, as taught by Abe, in order to efficiently increase the proficiency of the trainee (Abe [0005]). Doing so is a predictable solution that one of ordinary skill in the art could have pursued with a reasonable expectation of success.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Mackenzie et al. (US 2018/0247560) Automated surgeon performance evaluation
Monson et al. (US 11,699,358) Dental hygiene and periodontal hand instrumentation tutor
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stephen Alvesteffer whose telephone number is (571)272-8680. The examiner can normally be reached M-F 8:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Vasat, can be reached at 571-270-7625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHEN ALVESTEFFER/Examiner, Art Unit 3715